
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.31.1 published on Thursday, Jul 20, 2023 by Pulumi

google-native.dataflow/v1b3.Job

    Creates a Cloud Dataflow job. To create a job, we recommend using projects.locations.jobs.create with a [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). Using projects.jobs.create is not recommended, as your job will always start in us-central1. Do not enter confidential information when you supply string values using the API. Note - this resource’s API doesn’t support deletion. When deleted, the resource will persist on Google Cloud even though it will be deleted from Pulumi state.

    Create Job Resource

    new Job(name: string, args?: JobArgs, opts?: CustomResourceOptions);
    @overload
    def Job(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            client_request_id: Optional[str] = None,
            create_time: Optional[str] = None,
            created_from_snapshot_id: Optional[str] = None,
            current_state: Optional[JobCurrentState] = None,
            current_state_time: Optional[str] = None,
            environment: Optional[EnvironmentArgs] = None,
            execution_info: Optional[JobExecutionInfoArgs] = None,
            id: Optional[str] = None,
            job_metadata: Optional[JobMetadataArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            location: Optional[str] = None,
            name: Optional[str] = None,
            pipeline_description: Optional[PipelineDescriptionArgs] = None,
            project: Optional[str] = None,
            replace_job_id: Optional[str] = None,
            replaced_by_job_id: Optional[str] = None,
            requested_state: Optional[JobRequestedState] = None,
            runtime_updatable_params: Optional[RuntimeUpdatableParamsArgs] = None,
            satisfies_pzs: Optional[bool] = None,
            stage_states: Optional[Sequence[ExecutionStageStateArgs]] = None,
            start_time: Optional[str] = None,
            steps: Optional[Sequence[StepArgs]] = None,
            steps_location: Optional[str] = None,
            temp_files: Optional[Sequence[str]] = None,
            transform_name_mapping: Optional[Mapping[str, str]] = None,
            type: Optional[JobType] = None,
            view: Optional[str] = None)
    @overload
    def Job(resource_name: str,
            args: Optional[JobArgs] = None,
            opts: Optional[ResourceOptions] = None)
    func NewJob(ctx *Context, name string, args *JobArgs, opts ...ResourceOption) (*Job, error)
    public Job(string name, JobArgs? args = null, CustomResourceOptions? opts = null)
    public Job(String name, JobArgs args)
    public Job(String name, JobArgs args, CustomResourceOptions options)
    
    type: google-native:dataflow/v1b3:Job
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
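    The signatures above only describe the shape of the constructor. As a minimal sketch, the TypeScript below creates a Job with a handful of the inputs documented in the next section; the import path follows the google-native SDK, while the project, location, name, and label values are illustrative placeholders. In practice, Dataflow jobs are usually launched from templates or by an Apache Beam pipeline rather than assembled field by field, so treat this as a shape reference only.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical job: every value below is an illustrative placeholder.
    const job = new google_native.dataflow.v1b3.Job("example-job", {
        project: "my-project",           // ID of the Cloud Platform project that owns the job
        location: "us-east1",            // regional endpoint that will contain the job
        name: "example-dataflow-job",    // must match [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
        labels: {
            team: "data-platform",       // keys and values must satisfy the label regexps below
        },
    });

    // The provider-assigned ID becomes available once the job has been created.
    export const jobId = job.id;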

    Job Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The Job resource accepts the following input properties:

    ClientRequestId string

    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

    CreateTime string

    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

    CreatedFromSnapshotId string

    If this is specified, the job's initial state is populated from the given snapshot.

    CurrentState Pulumi.GoogleNative.Dataflow.V1b3.JobCurrentState

    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    CurrentStateTime string

    The timestamp associated with the current state.

    Environment Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Environment

    The environment for the job.

    ExecutionInfo Pulumi.GoogleNative.Dataflow.V1b3.Inputs.JobExecutionInfo

    Deprecated.

    Id string

    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

    JobMetadata Pulumi.GoogleNative.Dataflow.V1b3.Inputs.JobMetadata

    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

    Labels Dictionary<string, string>

    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

    Location string

    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

    Name string

    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

    PipelineDescription Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PipelineDescription

    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

    Project string

    The ID of the Cloud Platform project that the job belongs to.

    ReplaceJobId string

    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

    ReplacedByJobId string

    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

    RequestedState Pulumi.GoogleNative.Dataflow.V1b3.JobRequestedState

    The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

    RuntimeUpdatableParams Pulumi.GoogleNative.Dataflow.V1b3.Inputs.RuntimeUpdatableParams

    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

    SatisfiesPzs bool

    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    StageStates List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageState>

    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    StartTime string

    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

    Steps List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Step>

    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

    StepsLocation string

    The Cloud Storage location where the steps are stored.

    TempFiles List<string>

    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    TransformNameMapping Dictionary<string, string>

    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

    Type Pulumi.GoogleNative.Dataflow.V1b3.JobType

    The type of Cloud Dataflow job.

    View string

    The level of information requested in response.

    ClientRequestId string

    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

    CreateTime string

    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

    CreatedFromSnapshotId string

    If this is specified, the job's initial state is populated from the given snapshot.

    CurrentState JobCurrentState

    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    CurrentStateTime string

    The timestamp associated with the current state.

    Environment EnvironmentArgs

    The environment for the job.

    ExecutionInfo JobExecutionInfoArgs

    Deprecated.

    Id string

    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

    JobMetadata JobMetadataArgs

    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

    Labels map[string]string

    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

    Location string

    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

    Name string

    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

    PipelineDescription PipelineDescriptionArgs

    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

    Project string

    The ID of the Cloud Platform project that the job belongs to.

    ReplaceJobId string

    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

    ReplacedByJobId string

    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

    RequestedState JobRequestedState

    The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

    RuntimeUpdatableParams RuntimeUpdatableParamsArgs

    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

    SatisfiesPzs bool

    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    StageStates []ExecutionStageStateArgs

    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    StartTime string

    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

    Steps []StepArgs

    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

    StepsLocation string

    The Cloud Storage location where the steps are stored.

    TempFiles []string

    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    TransformNameMapping map[string]string

    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

    Type JobType

    The type of Cloud Dataflow job.

    View string

    The level of information requested in response.

    clientRequestId String

    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

    createTime String

    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

    createdFromSnapshotId String

    If this is specified, the job's initial state is populated from the given snapshot.

    currentState JobCurrentState

    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    currentStateTime String

    The timestamp associated with the current state.

    environment Environment

    The environment for the job.

    executionInfo JobExecutionInfo

    Deprecated.

    id String

    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

    jobMetadata JobMetadata

    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

    labels Map<String,String>

    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

    location String

    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

    name String

    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

    pipelineDescription PipelineDescription

    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

    project String

    The ID of the Cloud Platform project that the job belongs to.

    replaceJobId String

    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

    replacedByJobId String

    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

    requestedState JobRequestedState

    The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

    runtimeUpdatableParams RuntimeUpdatableParams

    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

    satisfiesPzs Boolean

    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    stageStates List<ExecutionStageState>

    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    startTime String

    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

    steps List<Step>

    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

    stepsLocation String

    The Cloud Storage location where the steps are stored.

    tempFiles List<String>

    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    transformNameMapping Map<String,String>

    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

    type JobType

    The type of Cloud Dataflow job.

    view String

    The level of information requested in response.

    clientRequestId string

    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

    createTime string

    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

    createdFromSnapshotId string

    If this is specified, the job's initial state is populated from the given snapshot.

    currentState JobCurrentState

    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    currentStateTime string

    The timestamp associated with the current state.

    environment Environment

    The environment for the job.

    executionInfo JobExecutionInfo

    Deprecated.

    id string

    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

    jobMetadata JobMetadata

    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

    labels {[key: string]: string}

    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

    location string

    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

    name string

    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

    pipelineDescription PipelineDescription

    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

    project string

    The ID of the Cloud Platform project that the job belongs to.

    replaceJobId string

    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

    replacedByJobId string

    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

    requestedState JobRequestedState

    The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

    runtimeUpdatableParams RuntimeUpdatableParams

    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

    satisfiesPzs boolean

    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    stageStates ExecutionStageState[]

    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    startTime string

    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

    steps Step[]

    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

    stepsLocation string

    The Cloud Storage location where the steps are stored.

    tempFiles string[]

    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    transformNameMapping {[key: string]: string}

    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

    type JobType

    The type of Cloud Dataflow job.

    view string

    The level of information requested in response.

    client_request_id str

    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

    create_time str

    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

    created_from_snapshot_id str

    If this is specified, the job's initial state is populated from the given snapshot.

    current_state JobCurrentState

    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    current_state_time str

    The timestamp associated with the current state.

    environment EnvironmentArgs

    The environment for the job.

    execution_info JobExecutionInfoArgs

    Deprecated.

    id str

    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

    job_metadata JobMetadataArgs

    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

    labels Mapping[str, str]

    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

    location str

    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

    name str

    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

    pipeline_description PipelineDescriptionArgs

    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

    project str

    The ID of the Cloud Platform project that the job belongs to.

    replace_job_id str

    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

    replaced_by_job_id str

    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

    requested_state JobRequestedState

    The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

    runtime_updatable_params RuntimeUpdatableParamsArgs

    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

    satisfies_pzs bool

    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    stage_states Sequence[ExecutionStageStateArgs]

    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    start_time str

    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

    steps Sequence[StepArgs]

    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

    steps_location str

    The Cloud Storage location where the steps are stored.

    temp_files Sequence[str]

    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    transform_name_mapping Mapping[str, str]

    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

    type JobType

    The type of Cloud Dataflow job.

    view str

    The level of information requested in response.

    clientRequestId String

    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.

    createTime String

    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

    createdFromSnapshotId String

    If this is specified, the job's initial state is populated from the given snapshot.

    currentState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"

    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    currentStateTime String

    The timestamp associated with the current state.

    environment Property Map

    The environment for the job.

    executionInfo Property Map

    Deprecated.

    id String

    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.

    jobMetadata Property Map

    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

    labels Map<String>

    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.

    location String

    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

    name String

    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?

    pipelineDescription Property Map

    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

    project String

    The ID of the Cloud Platform project that the job belongs to.

    replaceJobId String

    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.

    replacedByJobId String

    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.

    requestedState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"

    The job's requested state. UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.

    runtimeUpdatableParams Property Map

    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

    satisfiesPzs Boolean

    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    stageStates List<Property Map>

    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

    startTime String

    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.

    steps List<Property Map>

    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

    stepsLocation String

    The Cloud Storage location where the steps are stored.

    tempFiles List<String>

    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    transformNameMapping Map<String>

    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.

    type "JOB_TYPE_UNKNOWN" | "JOB_TYPE_BATCH" | "JOB_TYPE_STREAMING"

    The type of Cloud Dataflow job.

    view String

    The level of information requested in response.
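    To show how the replacement-related inputs above fit together, here is a hedged TypeScript sketch that updates a hypothetical running job: the existing job is named in replaceJobId and a renamed transform prefix is carried over via transformNameMapping. The job ID, project, region, and transform names are placeholders, not values taken from this documentation.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical in-place replacement of a running job (all identifiers are placeholders).
    const updatedJob = new google_native.dataflow.v1b3.Job("updated-job", {
        project: "my-project",
        location: "us-east1",
        replaceJobId: "2023-07-20_00_00_00-0000000000000000000", // ID of the job being replaced
        transformNameMapping: {
            // old transform name prefix -> new transform name prefix
            "ReadInput": "ReadInputV2",
        },
    });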

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:

    Id string

    The provider-assigned unique ID for this managed resource.

    Id string

    The provider-assigned unique ID for this managed resource.

    id String

    The provider-assigned unique ID for this managed resource.

    id string

    The provider-assigned unique ID for this managed resource.

    id str

    The provider-assigned unique ID for this managed resource.

    id String

    The provider-assigned unique ID for this managed resource.

    Supporting Types

    AutoscalingSettings, AutoscalingSettingsArgs

    Algorithm Pulumi.GoogleNative.Dataflow.V1b3.AutoscalingSettingsAlgorithm

    The algorithm to use for autoscaling.

    MaxNumWorkers int

    The maximum number of workers to cap scaling at.

    Algorithm AutoscalingSettingsAlgorithm

    The algorithm to use for autoscaling.

    MaxNumWorkers int

    The maximum number of workers to cap scaling at.

    algorithm AutoscalingSettingsAlgorithm

    The algorithm to use for autoscaling.

    maxNumWorkers Integer

    The maximum number of workers to cap scaling at.

    algorithm AutoscalingSettingsAlgorithm

    The algorithm to use for autoscaling.

    maxNumWorkers number

    The maximum number of workers to cap scaling at.

    algorithm AutoscalingSettingsAlgorithm

    The algorithm to use for autoscaling.

    max_num_workers int

    The maximum number of workers to cap scaling at.

    algorithm "AUTOSCALING_ALGORITHM_UNKNOWN" | "AUTOSCALING_ALGORITHM_NONE" | "AUTOSCALING_ALGORITHM_BASIC"

    The algorithm to use for autoscaling.

    maxNumWorkers Number

    The maximum number of workers to cap scaling at.

    AutoscalingSettingsAlgorithm, AutoscalingSettingsAlgorithmArgs

    AutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN

    The algorithm is unknown, or unspecified.

    AutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE

    Disable autoscaling.

    AutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC

    Increase worker count over time to reduce job execution time.

    AutoscalingSettingsAlgorithmAutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN

    The algorithm is unknown, or unspecified.

    AutoscalingSettingsAlgorithmAutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE

    Disable autoscaling.

    AutoscalingSettingsAlgorithmAutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC

    Increase worker count over time to reduce job execution time.

    AutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN

    The algorithm is unknown, or unspecified.

    AutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE

    Disable autoscaling.

    AutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC

    Increase worker count over time to reduce job execution time.

    AutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN

    The algorithm is unknown, or unspecified.

    AutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE

    Disable autoscaling.

    AutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC

    Increase worker count over time to reduce job execution time.

    AUTOSCALING_ALGORITHM_UNKNOWN
    AUTOSCALING_ALGORITHM_UNKNOWN

    The algorithm is unknown, or unspecified.

    AUTOSCALING_ALGORITHM_NONE
    AUTOSCALING_ALGORITHM_NONE

    Disable autoscaling.

    AUTOSCALING_ALGORITHM_BASIC
    AUTOSCALING_ALGORITHM_BASIC

    Increase worker count over time to reduce job execution time.

    "AUTOSCALING_ALGORITHM_UNKNOWN"
    AUTOSCALING_ALGORITHM_UNKNOWN

    The algorithm is unknown, or unspecified.

    "AUTOSCALING_ALGORITHM_NONE"
    AUTOSCALING_ALGORITHM_NONE

    Disable autoscaling.

    "AUTOSCALING_ALGORITHM_BASIC"
    AUTOSCALING_ALGORITHM_BASIC

    Increase worker count over time to reduce job execution time.
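    As a small illustration of the two fields above, the TypeScript fragment below builds an autoscaling settings value that uses the basic algorithm capped at 10 workers. The enum member name follows the table above; the enum's export path and the place this value is nested (typically inside a worker pool of the job environment) are assumptions about the SDK and your configuration, and a plain string value from the last column would express the same setting.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical: basic autoscaling, capped at 10 workers.
    const autoscalingSettings = {
        algorithm: google_native.dataflow.v1b3.AutoscalingSettingsAlgorithm.AutoscalingAlgorithmBasic,
        maxNumWorkers: 10,
    };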

    AutoscalingSettingsResponse, AutoscalingSettingsResponseArgs

    Algorithm string

    The algorithm to use for autoscaling.

    MaxNumWorkers int

    The maximum number of workers to cap scaling at.

    Algorithm string

    The algorithm to use for autoscaling.

    MaxNumWorkers int

    The maximum number of workers to cap scaling at.

    algorithm String

    The algorithm to use for autoscaling.

    maxNumWorkers Integer

    The maximum number of workers to cap scaling at.

    algorithm string

    The algorithm to use for autoscaling.

    maxNumWorkers number

    The maximum number of workers to cap scaling at.

    algorithm str

    The algorithm to use for autoscaling.

    max_num_workers int

    The maximum number of workers to cap scaling at.

    algorithm String

    The algorithm to use for autoscaling.

    maxNumWorkers Number

    The maximum number of workers to cap scaling at.

    BigQueryIODetails, BigQueryIODetailsArgs

    Dataset string

    Dataset accessed in the connection.

    Project string

    Project accessed in the connection.

    Query string

    Query used to access data in the connection.

    Table string

    Table accessed in the connection.

    Dataset string

    Dataset accessed in the connection.

    Project string

    Project accessed in the connection.

    Query string

    Query used to access data in the connection.

    Table string

    Table accessed in the connection.

    dataset String

    Dataset accessed in the connection.

    project String

    Project accessed in the connection.

    query String

    Query used to access data in the connection.

    table String

    Table accessed in the connection.

    dataset string

    Dataset accessed in the connection.

    project string

    Project accessed in the connection.

    query string

    Query used to access data in the connection.

    table string

    Table accessed in the connection.

    dataset str

    Dataset accessed in the connection.

    project str

    Project accessed in the connection.

    query str

    Query used to access data in the connection.

    table str

    Table accessed in the connection.

    dataset String

    Dataset accessed in the connection.

    project String

    Project accessed in the connection.

    query String

    Query used to access data in the connection.

    table String

    Table accessed in the connection.

    BigQueryIODetailsResponse, BigQueryIODetailsResponseArgs

    Dataset string

    Dataset accessed in the connection.

    Project string

    Project accessed in the connection.

    Query string

    Query used to access data in the connection.

    Table string

    Table accessed in the connection.

    Dataset string

    Dataset accessed in the connection.

    Project string

    Project accessed in the connection.

    Query string

    Query used to access data in the connection.

    Table string

    Table accessed in the connection.

    dataset String

    Dataset accessed in the connection.

    project String

    Project accessed in the connection.

    query String

    Query used to access data in the connection.

    table String

    Table accessed in the connection.

    dataset string

    Dataset accessed in the connection.

    project string

    Project accessed in the connection.

    query string

    Query used to access data in the connection.

    table string

    Table accessed in the connection.

    dataset str

    Dataset accessed in the connection.

    project str

    Project accessed in the connection.

    query str

    Query used to access data in the connection.

    table str

    Table accessed in the connection.

    dataset String

    Dataset accessed in the connection.

    project String

    Project accessed in the connection.

    query String

    Query used to access data in the connection.

    table String

    Table accessed in the connection.

    BigTableIODetails, BigTableIODetailsArgs

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    TableId string

    TableId accessed in the connection.

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    TableId string

    TableId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    tableId String

    TableId accessed in the connection.

    instanceId string

    InstanceId accessed in the connection.

    project string

    ProjectId accessed in the connection.

    tableId string

    TableId accessed in the connection.

    instance_id str

    InstanceId accessed in the connection.

    project str

    ProjectId accessed in the connection.

    table_id str

    TableId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    tableId String

    TableId accessed in the connection.

    BigTableIODetailsResponse, BigTableIODetailsResponseArgs

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    TableId string

    TableId accessed in the connection.

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    TableId string

    TableId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    tableId String

    TableId accessed in the connection.

    instanceId string

    InstanceId accessed in the connection.

    project string

    ProjectId accessed in the connection.

    tableId string

    TableId accessed in the connection.

    instance_id str

    InstanceId accessed in the connection.

    project str

    ProjectId accessed in the connection.

    table_id str

    TableId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    tableId String

    TableId accessed in the connection.

    ComponentSource, ComponentSourceArgs

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    name string

    Dataflow service generated name for this source.

    originalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    userName string

    Human-readable name for this transform; may be user or system generated.

    name str

    Dataflow service generated name for this source.

    original_transform_or_collection str

    User name for the original user transform or collection with which this source is most closely associated.

    user_name str

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    ComponentSourceResponse, ComponentSourceResponseArgs

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    name string

    Dataflow service generated name for this source.

    originalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    userName string

    Human-readable name for this transform; may be user or system generated.

    name str

    Dataflow service generated name for this source.

    original_transform_or_collection str

    User name for the original user transform or collection with which this source is most closely associated.

    user_name str

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    ComponentTransform, ComponentTransformArgs

    Name string

    Dataflow service generated name for this source.

    OriginalTransform string

    User name for the original user transform with which this transform is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    Name string

    Dataflow service generated name for this source.

    OriginalTransform string

    User name for the original user transform with which this transform is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransform String

    User name for the original user transform with which this transform is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    name string

    Dataflow service generated name for this source.

    originalTransform string

    User name for the original user transform with which this transform is most closely associated.

    userName string

    Human-readable name for this transform; may be user or system generated.

    name str

    Dataflow service generated name for this source.

    original_transform str

    User name for the original user transform with which this transform is most closely associated.

    user_name str

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransform String

    User name for the original user transform with which this transform is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    ComponentTransformResponse, ComponentTransformResponseArgs

    Name string

    Dataflow service generated name for this source.

    OriginalTransform string

    User name for the original user transform with which this transform is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    Name string

    Dataflow service generated name for this source.

    OriginalTransform string

    User name for the original user transform with which this transform is most closely associated.

    UserName string

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransform String

    User name for the original user transform with which this transform is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    name string

    Dataflow service generated name for this source.

    originalTransform string

    User name for the original user transform with which this transform is most closely associated.

    userName string

    Human-readable name for this transform; may be user or system generated.

    name str

    Dataflow service generated name for this transform.

    original_transform str

    User name for the original user transform with which this transform is most closely associated.

    user_name str

    Human-readable name for this transform; may be user or system generated.

    name String

    Dataflow service generated name for this transform.

    originalTransform String

    User name for the original user transform with which this transform is most closely associated.

    userName String

    Human-readable name for this transform; may be user or system generated.

    DatastoreIODetails, DatastoreIODetailsArgs

    Namespace string

    Namespace used in the connection.

    Project string

    ProjectId accessed in the connection.

    Namespace string

    Namespace used in the connection.

    Project string

    ProjectId accessed in the connection.

    namespace String

    Namespace used in the connection.

    project String

    ProjectId accessed in the connection.

    namespace string

    Namespace used in the connection.

    project string

    ProjectId accessed in the connection.

    namespace str

    Namespace used in the connection.

    project str

    ProjectId accessed in the connection.

    namespace String

    Namespace used in the connection.

    project String

    ProjectId accessed in the connection.
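
    As a hedged illustration of where these fields are used, the TypeScript sketch below attaches a Datastore connection entry to a job through jobMetadata.datastoreDetails. In practice this metadata is usually reported by the Dataflow service rather than written by hand; the project and namespace values here are placeholders.

    import * as google_native from "@pulumi/google-native";

    // Illustrative only: records which Datastore namespace the job reads from.
    // "my-project" and "orders" are placeholder values.
    const job = new google_native.dataflow.v1b3.Job("datastore-metadata-example", {
        project: "my-project",
        location: "us-central1",
        jobMetadata: {
            datastoreDetails: [{
                project: "my-project",
                namespace: "orders",
            }],
        },
    });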

    DatastoreIODetailsResponse, DatastoreIODetailsResponseArgs

    Namespace string

    Namespace used in the connection.

    Project string

    ProjectId accessed in the connection.

    Namespace string

    Namespace used in the connection.

    Project string

    ProjectId accessed in the connection.

    namespace String

    Namespace used in the connection.

    project String

    ProjectId accessed in the connection.

    namespace string

    Namespace used in the connection.

    project string

    ProjectId accessed in the connection.

    namespace str

    Namespace used in the connection.

    project str

    ProjectId accessed in the connection.

    namespace String

    Namespace used in the connection.

    project String

    ProjectId accessed in the connection.

    DebugOptions, DebugOptionsArgs

    EnableHotKeyLogging bool

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    EnableHotKeyLogging bool

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enableHotKeyLogging Boolean

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enableHotKeyLogging boolean

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enable_hot_key_logging bool

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enableHotKeyLogging Boolean

    When true, enables the logging of the literal hot key to the user's Cloud Logging.
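
    A minimal TypeScript sketch of setting this option (project and location are placeholder values): hot-key logging is enabled through the job environment's debugOptions.

    import * as google_native from "@pulumi/google-native";

    // Sketch only: logs literal hot keys to Cloud Logging for this job.
    const job = new google_native.dataflow.v1b3.Job("hot-key-logging-example", {
        project: "my-project",
        location: "us-central1",
        environment: {
            debugOptions: {
                enableHotKeyLogging: true,
            },
        },
    });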

    DebugOptionsResponse, DebugOptionsResponseArgs

    EnableHotKeyLogging bool

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    EnableHotKeyLogging bool

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enableHotKeyLogging Boolean

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enableHotKeyLogging boolean

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enable_hot_key_logging bool

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    enableHotKeyLogging Boolean

    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    Disk, DiskArgs

    DiskType string

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    MountPoint string

    Directory in a VM where disk is mounted.

    SizeGb int

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DiskType string

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    MountPoint string

    Directory in a VM where disk is mounted.

    SizeGb int

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskType String

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mountPoint String

    Directory in a VM where disk is mounted.

    sizeGb Integer

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskType string

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mountPoint string

    Directory in a VM where disk is mounted.

    sizeGb number

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    disk_type str

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mount_point str

    Directory in a VM where disk is mounted.

    size_gb int

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskType String

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mountPoint String

    Directory in a VM where disk is mounted.

    sizeGb Number

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
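
    Disks are supplied through a worker pool's dataDisks list. The TypeScript sketch below is illustrative only: the disk type URL follows the pattern described above with a placeholder project and zone, and the pool settings are minimal assumptions rather than recommended values.

    import * as google_native from "@pulumi/google-native";

    // Sketch: one harness worker pool whose workers mount a 50 GB pd-standard
    // data disk. Project, zone, and mount point are placeholders.
    const job = new google_native.dataflow.v1b3.Job("disk-example", {
        project: "my-project",
        location: "us-central1",
        environment: {
            workerPools: [{
                kind: "harness",
                numWorkers: 1,
                dataDisks: [{
                    diskType: "compute.googleapis.com/projects/my-project/zones/us-central1-a/diskTypes/pd-standard",
                    sizeGb: 50,
                    mountPoint: "/mnt/dataflow",
                }],
            }],
        },
    });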

    DiskResponse, DiskResponseArgs

    DiskType string

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    MountPoint string

    Directory in a VM where disk is mounted.

    SizeGb int

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DiskType string

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    MountPoint string

    Directory in a VM where disk is mounted.

    SizeGb int

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskType String

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mountPoint String

    Directory in a VM where disk is mounted.

    sizeGb Integer

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskType string

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mountPoint string

    Directory in a VM where disk is mounted.

    sizeGb number

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    disk_type str

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mount_point str

    Directory in a VM where disk is mounted.

    size_gb int

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskType String

    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard

    mountPoint String

    Directory in a VM where disk is mounted.

    sizeGb Number

    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DisplayData, DisplayDataArgs

    BoolValue bool

    Contains value if the data is of a boolean type.

    DurationValue string

    Contains value if the data is of duration type.

    FloatValue double

    Contains value if the data is of float type.

    Int64Value string

    Contains value if the data is of int64 type.

    JavaClassValue string

    Contains value if the data is of java class type.

    Key string

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    Label string

    An optional label to display in a dax UI for the element.

    Namespace string

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    ShortStrValue string

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    StrValue string

    Contains value if the data is of string type.

    TimestampValue string

    Contains value if the data is of timestamp type.

    Url string

    An optional full URL.

    BoolValue bool

    Contains value if the data is of a boolean type.

    DurationValue string

    Contains value if the data is of duration type.

    FloatValue float64

    Contains value if the data is of float type.

    Int64Value string

    Contains value if the data is of int64 type.

    JavaClassValue string

    Contains value if the data is of java class type.

    Key string

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    Label string

    An optional label to display in a dax UI for the element.

    Namespace string

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    ShortStrValue string

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    StrValue string

    Contains value if the data is of string type.

    TimestampValue string

    Contains value if the data is of timestamp type.

    Url string

    An optional full URL.

    boolValue Boolean

    Contains value if the data is of a boolean type.

    durationValue String

    Contains value if the data is of duration type.

    floatValue Double

    Contains value if the data is of float type.

    int64Value String

    Contains value if the data is of int64 type.

    javaClassValue String

    Contains value if the data is of java class type.

    key String

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label String

    An optional label to display in a dax UI for the element.

    namespace String

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    shortStrValue String

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    strValue String

    Contains value if the data is of string type.

    timestampValue String

    Contains value if the data is of timestamp type.

    url String

    An optional full URL.

    boolValue boolean

    Contains value if the data is of a boolean type.

    durationValue string

    Contains value if the data is of duration type.

    floatValue number

    Contains value if the data is of float type.

    int64Value string

    Contains value if the data is of int64 type.

    javaClassValue string

    Contains value if the data is of java class type.

    key string

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label string

    An optional label to display in a dax UI for the element.

    namespace string

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    shortStrValue string

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    strValue string

    Contains value if the data is of string type.

    timestampValue string

    Contains value if the data is of timestamp type.

    url string

    An optional full URL.

    bool_value bool

    Contains value if the data is of a boolean type.

    duration_value str

    Contains value if the data is of duration type.

    float_value float

    Contains value if the data is of float type.

    int64_value str

    Contains value if the data is of int64 type.

    java_class_value str

    Contains value if the data is of java class type.

    key str

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label str

    An optional label to display in a dax UI for the element.

    namespace str

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    short_str_value str

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    str_value str

    Contains value if the data is of string type.

    timestamp_value str

    Contains value if the data is of timestamp type.

    url str

    An optional full URL.

    boolValue Boolean

    Contains value if the data is of a boolean type.

    durationValue String

    Contains value if the data is of duration type.

    floatValue Number

    Contains value if the data is of float type.

    int64Value String

    Contains value if the data is of int64 type.

    javaClassValue String

    Contains value if the data is of java class type.

    key String

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label String

    An optional label to display in a dax UI for the element.

    namespace String

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    shortStrValue String

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    strValue String

    Contains value if the data is of string type.

    timestampValue String

    Contains value if the data is of timestamp type.

    url String

    An optional full URL.
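
    Display data is normally generated by the SDK as part of the pipeline description rather than authored directly. Purely to illustrate the shape of a single entry (one value field set, plus key and namespace), here is a hedged TypeScript object literal with invented values:

    // Illustrative DisplayData entry: key/namespace identify the datum and
    // exactly one of the *Value fields carries its value.
    const displayDatum = {
        key: "outputTable",                  // label used by monitoring UIs
        namespace: "com.example.MyPipeline", // class or module that defined it
        strValue: "my_dataset.results",      // the value, since this datum is a string
        shortStrValue: "results",            // optional shorter form for display
    };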

    DisplayDataResponse, DisplayDataResponseArgs

    BoolValue bool

    Contains value if the data is of a boolean type.

    DurationValue string

    Contains value if the data is of duration type.

    FloatValue double

    Contains value if the data is of float type.

    Int64Value string

    Contains value if the data is of int64 type.

    JavaClassValue string

    Contains value if the data is of java class type.

    Key string

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    Label string

    An optional label to display in a dax UI for the element.

    Namespace string

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    ShortStrValue string

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    StrValue string

    Contains value if the data is of string type.

    TimestampValue string

    Contains value if the data is of timestamp type.

    Url string

    An optional full URL.

    BoolValue bool

    Contains value if the data is of a boolean type.

    DurationValue string

    Contains value if the data is of duration type.

    FloatValue float64

    Contains value if the data is of float type.

    Int64Value string

    Contains value if the data is of int64 type.

    JavaClassValue string

    Contains value if the data is of java class type.

    Key string

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    Label string

    An optional label to display in a dax UI for the element.

    Namespace string

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    ShortStrValue string

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    StrValue string

    Contains value if the data is of string type.

    TimestampValue string

    Contains value if the data is of timestamp type.

    Url string

    An optional full URL.

    boolValue Boolean

    Contains value if the data is of a boolean type.

    durationValue String

    Contains value if the data is of duration type.

    floatValue Double

    Contains value if the data is of float type.

    int64Value String

    Contains value if the data is of int64 type.

    javaClassValue String

    Contains value if the data is of java class type.

    key String

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label String

    An optional label to display in a dax UI for the element.

    namespace String

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    shortStrValue String

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    strValue String

    Contains value if the data is of string type.

    timestampValue String

    Contains value if the data is of timestamp type.

    url String

    An optional full URL.

    boolValue boolean

    Contains value if the data is of a boolean type.

    durationValue string

    Contains value if the data is of duration type.

    floatValue number

    Contains value if the data is of float type.

    int64Value string

    Contains value if the data is of int64 type.

    javaClassValue string

    Contains value if the data is of java class type.

    key string

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label string

    An optional label to display in a dax UI for the element.

    namespace string

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    shortStrValue string

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    strValue string

    Contains value if the data is of string type.

    timestampValue string

    Contains value if the data is of timestamp type.

    url string

    An optional full URL.

    bool_value bool

    Contains value if the data is of a boolean type.

    duration_value str

    Contains value if the data is of duration type.

    float_value float

    Contains value if the data is of float type.

    int64_value str

    Contains value if the data is of int64 type.

    java_class_value str

    Contains value if the data is of java class type.

    key str

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label str

    An optional label to display in a dax UI for the element.

    namespace str

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    short_str_value str

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    str_value str

    Contains value if the data is of string type.

    timestamp_value str

    Contains value if the data is of timestamp type.

    url str

    An optional full URL.

    boolValue Boolean

    Contains value if the data is of a boolean type.

    durationValue String

    Contains value if the data is of duration type.

    floatValue Number

    Contains value if the data is of float type.

    int64Value String

    Contains value if the data is of int64 type.

    javaClassValue String

    Contains value if the data is of java class type.

    key String

    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.

    label String

    An optional label to display in a dax UI for the element.

    namespace String

    The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.

    shortStrValue String

    A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.

    strValue String

    Contains value if the data is of string type.

    timestampValue String

    Contains value if the data is of timestamp type.

    url String

    An optional full URL.

    Environment, EnvironmentArgs

    ClusterManagerApiService string

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    Dataset string

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptions

    Any debugging options to be supplied to the job.

    Experiments List<string>

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    FlexResourceSchedulingGoal Pulumi.GoogleNative.Dataflow.V1b3.EnvironmentFlexResourceSchedulingGoal

    Which Flexible Resource Scheduling mode to run in.

    InternalExperiments Dictionary<string, string>

    Experimental settings.

    SdkPipelineOptions Dictionary<string, string>

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    ServiceAccountEmail string

    Identity to run virtual machines as. Defaults to the default account.

    ServiceKmsKeyName string

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    ServiceOptions List<string>

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    TempStoragePrefix string

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    UserAgent Dictionary<string, string>

    A description of the process that generated the request.

    Version Dictionary<string, string>

    A structure describing which components and their versions of the service are required in order to run the job.

    WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPool>

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    WorkerRegion string

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane's region.

    WorkerZone string

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    ClusterManagerApiService string

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    Dataset string

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    DebugOptions DebugOptions

    Any debugging options to be supplied to the job.

    Experiments []string

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    FlexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal

    Which Flexible Resource Scheduling mode to run in.

    InternalExperiments map[string]string

    Experimental settings.

    SdkPipelineOptions map[string]string

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    ServiceAccountEmail string

    Identity to run virtual machines as. Defaults to the default account.

    ServiceKmsKeyName string

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    ServiceOptions []string

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    TempStoragePrefix string

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    UserAgent map[string]string

    A description of the process that generated the request.

    Version map[string]string

    A structure describing which components and their versions of the service are required in order to run the job.

    WorkerPools []WorkerPool

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    WorkerRegion string

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane's region.

    WorkerZone string

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    clusterManagerApiService String

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset String

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debugOptions DebugOptions

    Any debugging options to be supplied to the job.

    experiments List<String>

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal

    Which Flexible Resource Scheduling mode to run in.

    internalExperiments Map<String,String>

    Experimental settings.

    sdkPipelineOptions Map<String,String>

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    serviceAccountEmail String

    Identity to run virtual machines as. Defaults to the default account.

    serviceKmsKeyName String

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    serviceOptions List<String>

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    tempStoragePrefix String

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    userAgent Map<String,String>

    A description of the process that generated the request.

    version Map<String,String>

    A structure describing which components and their versions of the service are required in order to run the job.

    workerPools List<WorkerPool>

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    workerRegion String

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane's region.

    workerZone String

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    clusterManagerApiService string

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset string

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debugOptions DebugOptions

    Any debugging options to be supplied to the job.

    experiments string[]

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal

    Which Flexible Resource Scheduling mode to run in.

    internalExperiments {[key: string]: string}

    Experimental settings.

    sdkPipelineOptions {[key: string]: string}

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    serviceAccountEmail string

    Identity to run virtual machines as. Defaults to the default account.

    serviceKmsKeyName string

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    serviceOptions string[]

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    tempStoragePrefix string

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    userAgent {[key: string]: string}

    A description of the process that generated the request.

    version {[key: string]: string}

    A structure describing which components and their versions of the service are required in order to run the job.

    workerPools WorkerPool[]

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    workerRegion string

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane's region.

    workerZone string

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    cluster_manager_api_service str

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset str

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debug_options DebugOptions

    Any debugging options to be supplied to the job.

    experiments Sequence[str]

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flex_resource_scheduling_goal EnvironmentFlexResourceSchedulingGoal

    Which Flexible Resource Scheduling mode to run in.

    internal_experiments Mapping[str, str]

    Experimental settings.

    sdk_pipeline_options Mapping[str, str]

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    service_account_email str

    Identity to run virtual machines as. Defaults to the default account.

    service_kms_key_name str

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    service_options Sequence[str]

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    temp_storage_prefix str

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    user_agent Mapping[str, str]

    A description of the process that generated the request.

    version Mapping[str, str]

    A structure describing which components and their versions of the service are required in order to run the job.

    worker_pools Sequence[WorkerPool]

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    worker_region str

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane's region.

    worker_zone str

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    clusterManagerApiService String

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset String

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debugOptions Property Map

    Any debugging options to be supplied to the job.

    experiments List<String>

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flexResourceSchedulingGoal "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED"

    Which Flexible Resource Scheduling mode to run in.

    internalExperiments Map<String>

    Experimental settings.

    sdkPipelineOptions Map<String>

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    serviceAccountEmail String

    Identity to run virtual machines as. Defaults to the default account.

    serviceKmsKeyName String

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    serviceOptions List<String>

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    tempStoragePrefix String

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    userAgent Map<String>

    A description of the process that generated the request.

    version Map<String>

    A structure describing which components and their versions of the service are required in order to run the job.

    workerPools List<Property Map>

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    workerRegion String

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane's region.

    workerZone String

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
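
    Pulling several of these fields together, the following TypeScript sketch creates a Job with a small explicit environment. Every value is a placeholder assumption (project, bucket, service account, machine type); real jobs are usually submitted by an SDK or template that fills in most of the environment automatically. The worker pool fields used here are documented under WorkerPool elsewhere on this page.

    import * as google_native from "@pulumi/google-native";

    // Hedged sketch: a job that pins its service account, temp storage prefix,
    // worker region, and a single two-worker harness pool.
    const job = new google_native.dataflow.v1b3.Job("environment-example", {
        project: "my-project",
        location: "us-central1",
        environment: {
            serviceAccountEmail: "dataflow-runner@my-project.iam.gserviceaccount.com",
            tempStoragePrefix: "storage.googleapis.com/my-temp-bucket/temp",
            workerRegion: "us-west1",
            workerPools: [{
                kind: "harness",
                numWorkers: 2,
                machineType: "n1-standard-2",
            }],
        },
    });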

    EnvironmentFlexResourceSchedulingGoal, EnvironmentFlexResourceSchedulingGoalArgs

    FlexrsUnspecified
    FLEXRS_UNSPECIFIED

    Run in the default mode.

    FlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED

    Optimize for lower execution time.

    FlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED

    Optimize for lower cost.

    EnvironmentFlexResourceSchedulingGoalFlexrsUnspecified
    FLEXRS_UNSPECIFIED

    Run in the default mode.

    EnvironmentFlexResourceSchedulingGoalFlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED

    Optimize for lower execution time.

    EnvironmentFlexResourceSchedulingGoalFlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED

    Optimize for lower cost.

    FlexrsUnspecified
    FLEXRS_UNSPECIFIED

    Run in the default mode.

    FlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED

    Optimize for lower execution time.

    FlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED

    Optimize for lower cost.

    FlexrsUnspecified
    FLEXRS_UNSPECIFIED

    Run in the default mode.

    FlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED

    Optimize for lower execution time.

    FlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED

    Optimize for lower cost.

    FLEXRS_UNSPECIFIED
    FLEXRS_UNSPECIFIED

    Run in the default mode.

    FLEXRS_SPEED_OPTIMIZED
    FLEXRS_SPEED_OPTIMIZED

    Optimize for lower execution time.

    FLEXRS_COST_OPTIMIZED
    FLEXRS_COST_OPTIMIZED

    Optimize for lower cost.

    "FLEXRS_UNSPECIFIED"
    FLEXRS_UNSPECIFIED

    Run in the default mode.

    "FLEXRS_SPEED_OPTIMIZED"
    FLEXRS_SPEED_OPTIMIZED

    Optimize for lower execution time.

    "FLEXRS_COST_OPTIMIZED"
    FLEXRS_COST_OPTIMIZED

    Optimize for lower cost.
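
    If the job should run under Flexible Resource Scheduling (FlexRS), the goal can be passed as one of the string values listed above. A minimal, hypothetical TypeScript sketch (the resource name and location are placeholders):

    import * as google_native from "@pulumi/google-native";

    // Hypothetical FlexRS batch job; a real job also needs steps or a template launch.
    const flexrsJob = new google_native.dataflow.v1b3.Job("flexrs-job", {
        location: "us-central1",
        environment: {
            // String-literal form of EnvironmentFlexResourceSchedulingGoal.
            flexResourceSchedulingGoal: "FLEXRS_COST_OPTIMIZED",
        },
    });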

    EnvironmentResponse, EnvironmentResponseArgs

    ClusterManagerApiService string

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    Dataset string

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsResponse

    Any debugging options to be supplied to the job.

    Experiments List<string>

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    FlexResourceSchedulingGoal string

    Which Flexible Resource Scheduling mode to run in.

    InternalExperiments Dictionary<string, string>

    Experimental settings.

    SdkPipelineOptions Dictionary<string, string>

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    ServiceAccountEmail string

    Identity to run virtual machines as. Defaults to the default account.

    ServiceKmsKeyName string

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    ServiceOptions List<string>

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    ShuffleMode string

    The shuffle mode used for the job.

    TempStoragePrefix string

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    UserAgent Dictionary<string, string>

    A description of the process that generated the request.

    Version Dictionary<string, string>

    A structure describing which components and their versions of the service are required in order to run the job.

    WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolResponse>

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    WorkerRegion string

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the job defaults to the control plane's region.

    WorkerZone string

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    ClusterManagerApiService string

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    Dataset string

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    DebugOptions DebugOptionsResponse

    Any debugging options to be supplied to the job.

    Experiments []string

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    FlexResourceSchedulingGoal string

    Which Flexible Resource Scheduling mode to run in.

    InternalExperiments map[string]string

    Experimental settings.

    SdkPipelineOptions map[string]string

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    ServiceAccountEmail string

    Identity to run virtual machines as. Defaults to the default account.

    ServiceKmsKeyName string

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    ServiceOptions []string

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    ShuffleMode string

    The shuffle mode used for the job.

    TempStoragePrefix string

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    UserAgent map[string]string

    A description of the process that generated the request.

    Version map[string]string

    A structure describing which components and their versions of the service are required in order to run the job.

    WorkerPools []WorkerPoolResponse

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    WorkerRegion string

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the job defaults to the control plane's region.

    WorkerZone string

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    clusterManagerApiService String

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset String

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debugOptions DebugOptionsResponse

    Any debugging options to be supplied to the job.

    experiments List<String>

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flexResourceSchedulingGoal String

    Which Flexible Resource Scheduling mode to run in.

    internalExperiments Map<String,String>

    Experimental settings.

    sdkPipelineOptions Map<String,String>

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    serviceAccountEmail String

    Identity to run virtual machines as. Defaults to the default account.

    serviceKmsKeyName String

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    serviceOptions List<String>

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    shuffleMode String

    The shuffle mode used for the job.

    tempStoragePrefix String

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    userAgent Map<String,String>

    A description of the process that generated the request.

    version Map<String,String>

    A structure describing which components and their versions of the service are required in order to run the job.

    workerPools List<WorkerPoolResponse>

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    workerRegion String

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the job defaults to the control plane's region.

    workerZone String

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    clusterManagerApiService string

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset string

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debugOptions DebugOptionsResponse

    Any debugging options to be supplied to the job.

    experiments string[]

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flexResourceSchedulingGoal string

    Which Flexible Resource Scheduling mode to run in.

    internalExperiments {[key: string]: string}

    Experimental settings.

    sdkPipelineOptions {[key: string]: string}

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    serviceAccountEmail string

    Identity to run virtual machines as. Defaults to the default account.

    serviceKmsKeyName string

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    serviceOptions string[]

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    shuffleMode string

    The shuffle mode used for the job.

    tempStoragePrefix string

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    userAgent {[key: string]: string}

    A description of the process that generated the request.

    version {[key: string]: string}

    A structure describing which components and their versions of the service are required in order to run the job.

    workerPools WorkerPoolResponse[]

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    workerRegion string

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the job defaults to the control plane's region.

    workerZone string

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    cluster_manager_api_service str

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset str

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debug_options DebugOptionsResponse

    Any debugging options to be supplied to the job.

    experiments Sequence[str]

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flex_resource_scheduling_goal str

    Which Flexible Resource Scheduling mode to run in.

    internal_experiments Mapping[str, str]

    Experimental settings.

    sdk_pipeline_options Mapping[str, str]

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    service_account_email str

    Identity to run virtual machines as. Defaults to the default account.

    service_kms_key_name str

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    service_options Sequence[str]

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    shuffle_mode str

    The shuffle mode used for the job.

    temp_storage_prefix str

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    user_agent Mapping[str, str]

    A description of the process that generated the request.

    version Mapping[str, str]

    A structure describing which components and their versions of the service are required in order to run the job.

    worker_pools Sequence[WorkerPoolResponse]

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    worker_region str

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the job defaults to the control plane's region.

    worker_zone str

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.

    clusterManagerApiService String

    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

    dataset String

    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}

    debugOptions Property Map

    Any debugging options to be supplied to the job.

    experiments List<String>

    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.

    flexResourceSchedulingGoal String

    Which Flexible Resource Scheduling mode to run in.

    internalExperiments Map<String>

    Experimental settings.

    sdkPipelineOptions Map<String>

    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.

    serviceAccountEmail String

    Identity to run virtual machines as. Defaults to the default account.

    serviceKmsKeyName String

    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    serviceOptions List<String>

    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

    shuffleMode String

    The shuffle mode used for the job.

    tempStoragePrefix String

    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

    userAgent Map<String>

    A description of the process that generated the request.

    version Map<String>

    A structure describing which components and their versions of the service are required in order to run the job.

    workerPools List<Property Map>

    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

    workerRegion String

    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the job defaults to the control plane's region.

    workerZone String

    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
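
    EnvironmentResponse is the resolved, output-only form of the environment, so it is read from a created job to see which values the service actually chose. A hedged TypeScript sketch (the job arguments are placeholders; a real job would also define steps or come from a template):

    import * as google_native from "@pulumi/google-native";

    // Hypothetical job used only to show how the environment output is read.
    const job = new google_native.dataflow.v1b3.Job("inspect-env-job", {
        location: "us-central1",
    });

    // `environment` on the resource is an output of type EnvironmentResponse.
    export const shuffleMode = job.environment.apply(env => env.shuffleMode);
    export const chosenWorkerRegion = job.environment.apply(env => env.workerRegion);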

    ExecutionStageState, ExecutionStageStateArgs

    CurrentStateTime string

    The time at which the stage transitioned to this state.

    ExecutionStageName string

    The name of the execution stage.

    ExecutionStageState Pulumi.GoogleNative.Dataflow.V1b3.ExecutionStageStateExecutionStageState

    Execution stage states allow the same set of values as JobState.

    CurrentStateTime string

    The time at which the stage transitioned to this state.

    ExecutionStageName string

    The name of the execution stage.

    ExecutionStageState ExecutionStageStateExecutionStageState

    Execution stage states allow the same set of values as JobState.

    currentStateTime String

    The time at which the stage transitioned to this state.

    executionStageName String

    The name of the execution stage.

    executionStageState ExecutionStageStateExecutionStageState

    Execution stage states allow the same set of values as JobState.

    currentStateTime string

    The time at which the stage transitioned to this state.

    executionStageName string

    The name of the execution stage.

    executionStageState ExecutionStageStateExecutionStageState

    Execution stage states allow the same set of values as JobState.

    current_state_time str

    The time at which the stage transitioned to this state.

    execution_stage_name str

    The name of the execution stage.

    execution_stage_state ExecutionStageStateExecutionStageState

    Execution stage states allow the same set of values as JobState.

    ExecutionStageStateExecutionStageState, ExecutionStageStateExecutionStageStateArgs

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job has stopped pulling from its input sources and has finished processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    ExecutionStageStateExecutionStageStateJobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    ExecutionStageStateExecutionStageStateJobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    ExecutionStageStateExecutionStageStateJobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    ExecutionStageStateExecutionStageStateJobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    ExecutionStageStateExecutionStageStateJobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    ExecutionStageStateExecutionStageStateJobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    ExecutionStageStateExecutionStageStateJobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    ExecutionStageStateExecutionStageStateJobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    ExecutionStageStateExecutionStageStateJobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job has stopped pulling from its input sources and has finished processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    ExecutionStageStateExecutionStageStateJobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    ExecutionStageStateExecutionStageStateJobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    ExecutionStageStateExecutionStageStateJobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    ExecutionStageStateExecutionStageStateJobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job has stopped pulling from its input sources and has finished processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job has stopped pulling from its input sources and has finished processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JOB_STATE_UNKNOWN
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JOB_STATE_STOPPED
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JOB_STATE_RUNNING
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JOB_STATE_DONE
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JOB_STATE_FAILED
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JOB_STATE_CANCELLED
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JOB_STATE_UPDATED
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JOB_STATE_DRAINING
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JOB_STATE_DRAINED
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job has stopped pulling from its input sources and has finished processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JOB_STATE_PENDING
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JOB_STATE_CANCELLING
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JOB_STATE_QUEUED
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JOB_STATE_RESOURCE_CLEANING_UP
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    "JOB_STATE_UNKNOWN"
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    "JOB_STATE_STOPPED"
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    "JOB_STATE_RUNNING"
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    "JOB_STATE_DONE"
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    "JOB_STATE_FAILED"
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    "JOB_STATE_CANCELLED"
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    "JOB_STATE_UPDATED"
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    "JOB_STATE_DRAINING"
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    "JOB_STATE_DRAINED"
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job has stopped pulling from its input sources and has finished processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    "JOB_STATE_PENDING"
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    "JOB_STATE_CANCELLING"
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    "JOB_STATE_QUEUED"
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    "JOB_STATE_RESOURCE_CLEANING_UP"
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
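
    The descriptions above mark JOB_STATE_DONE, JOB_STATE_FAILED, JOB_STATE_CANCELLED, JOB_STATE_UPDATED, and JOB_STATE_DRAINED as terminal. A small, hypothetical TypeScript helper derived only from those descriptions can make that distinction explicit when inspecting state strings:

    // Terminal states per the descriptions above; everything else is still in flight.
    const TERMINAL_STATES = new Set<string>([
        "JOB_STATE_DONE",
        "JOB_STATE_FAILED",
        "JOB_STATE_CANCELLED",
        "JOB_STATE_UPDATED",
        "JOB_STATE_DRAINED",
    ]);

    // Returns true when a job or execution-stage state string is terminal.
    export function isTerminalState(state: string): boolean {
        return TERMINAL_STATES.has(state);
    }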

    ExecutionStageStateResponse, ExecutionStageStateResponseArgs

    CurrentStateTime string

    The time at which the stage transitioned to this state.

    ExecutionStageName string

    The name of the execution stage.

    ExecutionStageState string

    Execution stage states allow the same set of values as JobState.

    CurrentStateTime string

    The time at which the stage transitioned to this state.

    ExecutionStageName string

    The name of the execution stage.

    ExecutionStageState string

    Execution stage states allow the same set of values as JobState.

    currentStateTime String

    The time at which the stage transitioned to this state.

    executionStageName String

    The name of the execution stage.

    executionStageState String

    Execution stage states allow the same set of values as JobState.

    currentStateTime string

    The time at which the stage transitioned to this state.

    executionStageName string

    The name of the execution stage.

    executionStageState string

    Execution stage states allow the same set of values as JobState.

    current_state_time str

    The time at which the stage transitioned to this state.

    execution_stage_name str

    The name of the execution stage.

    execution_stage_state str

    Execution stage states allow the same set of values as JobState.

    currentStateTime String

    The time at which the stage transitioned to this state.

    executionStageName String

    The name of the execution stage.

    executionStageState String

    Execution stage states allow the same set of values as JobState.
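
    ExecutionStageStateResponse values are populated by the service, so they are typically read from the job's stageStates output rather than supplied. A hedged TypeScript sketch (the job arguments are placeholders trimmed for brevity):

    import * as google_native from "@pulumi/google-native";

    // Hypothetical job used only to show how stage states are read.
    const job = new google_native.dataflow.v1b3.Job("stage-state-job", {
        location: "us-central1",
    });

    // Each entry carries executionStageName, executionStageState, and currentStateTime.
    export const stageStates = job.stageStates.apply(states =>
        states.map(s => `${s.executionStageName}: ${s.executionStageState}`));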

    ExecutionStageSummary, ExecutionStageSummaryArgs

    ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSource>

    Collections produced and consumed by component transforms of this stage.

    ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransform>

    Transforms that comprise this execution stage.

    Id string

    Dataflow service generated id for this stage.

    InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSource>

    Input sources for this stage.

    Kind Pulumi.GoogleNative.Dataflow.V1b3.ExecutionStageSummaryKind

    Type of transform this stage is executing.

    Name string

    Dataflow service generated name for this stage.

    OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSource>

    Output sources for this stage.

    PrerequisiteStage List<string>

    Other stages that must complete before this stage can run.

    ComponentSource []ComponentSource

    Collections produced and consumed by component transforms of this stage.

    ComponentTransform []ComponentTransform

    Transforms that comprise this execution stage.

    Id string

    Dataflow service generated id for this stage.

    InputSource []StageSource

    Input sources for this stage.

    Kind ExecutionStageSummaryKind

    Type of transform this stage is executing.

    Name string

    Dataflow service generated name for this stage.

    OutputSource []StageSource

    Output sources for this stage.

    PrerequisiteStage []string

    Other stages that must complete before this stage can run.

    componentSource List<ComponentSource>

    Collections produced and consumed by component transforms of this stage.

    componentTransform List<ComponentTransform>

    Transforms that comprise this execution stage.

    id String

    Dataflow service generated id for this stage.

    inputSource List<StageSource>

    Input sources for this stage.

    kind ExecutionStageSummaryKind

    Type of transform this stage is executing.

    name String

    Dataflow service generated name for this stage.

    outputSource List<StageSource>

    Output sources for this stage.

    prerequisiteStage List<String>

    Other stages that must complete before this stage can run.

    componentSource ComponentSource[]

    Collections produced and consumed by component transforms of this stage.

    componentTransform ComponentTransform[]

    Transforms that comprise this execution stage.

    id string

    Dataflow service generated id for this stage.

    inputSource StageSource[]

    Input sources for this stage.

    kind ExecutionStageSummaryKind

    Type of transform this stage is executing.

    name string

    Dataflow service generated name for this stage.

    outputSource StageSource[]

    Output sources for this stage.

    prerequisiteStage string[]

    Other stages that must complete before this stage can run.

    component_source Sequence[ComponentSource]

    Collections produced and consumed by component transforms of this stage.

    component_transform Sequence[ComponentTransform]

    Transforms that comprise this execution stage.

    id str

    Dataflow service generated id for this stage.

    input_source Sequence[StageSource]

    Input sources for this stage.

    kind ExecutionStageSummaryKind

    Type of transform this stage is executing.

    name str

    Dataflow service generated name for this stage.

    output_source Sequence[StageSource]

    Output sources for this stage.

    prerequisite_stage Sequence[str]

    Other stages that must complete before this stage can run.

    componentSource List<Property Map>

    Collections produced and consumed by component transforms of this stage.

    componentTransform List<Property Map>

    Transforms that comprise this execution stage.

    id String

    Dataflow service generated id for this stage.

    inputSource List<Property Map>

    Input sources for this stage.

    kind "UNKNOWN_KIND" | "PAR_DO_KIND" | "GROUP_BY_KEY_KIND" | "FLATTEN_KIND" | "READ_KIND" | "WRITE_KIND" | "CONSTANT_KIND" | "SINGLETON_KIND" | "SHUFFLE_KIND"

    Type of transform this stage is executing.

    name String

    Dataflow service generated name for this stage.

    outputSource List<Property Map>

    Output sources for this stage.

    prerequisiteStage List<String>

    Other stages that must complete before this stage can run.

    ExecutionStageSummaryKind, ExecutionStageSummaryKindArgs

    UnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ParDoKind
    PAR_DO_KIND

    ParDo transform.

    GroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FlattenKind
    FLATTEN_KIND

    Flatten transform.

    ReadKind
    READ_KIND

    Read transform.

    WriteKind
    WRITE_KIND

    Write transform.

    ConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    ExecutionStageSummaryKindUnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ExecutionStageSummaryKindParDoKind
    PAR_DO_KIND

    ParDo transform.

    ExecutionStageSummaryKindGroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    ExecutionStageSummaryKindFlattenKind
    FLATTEN_KIND

    Flatten transform.

    ExecutionStageSummaryKindReadKind
    READ_KIND

    Read transform.

    ExecutionStageSummaryKindWriteKind
    WRITE_KIND

    Write transform.

    ExecutionStageSummaryKindConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    ExecutionStageSummaryKindSingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ExecutionStageSummaryKindShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    UnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ParDoKind
    PAR_DO_KIND

    ParDo transform.

    GroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FlattenKind
    FLATTEN_KIND

    Flatten transform.

    ReadKind
    READ_KIND

    Read transform.

    WriteKind
    WRITE_KIND

    Write transform.

    ConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    UnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ParDoKind
    PAR_DO_KIND

    ParDo transform.

    GroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FlattenKind
    FLATTEN_KIND

    Flatten transform.

    ReadKind
    READ_KIND

    Read transform.

    WriteKind
    WRITE_KIND

    Write transform.

    ConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    UNKNOWN_KIND
    UNKNOWN_KIND

    Unrecognized transform type.

    PAR_DO_KIND
    PAR_DO_KIND

    ParDo transform.

    GROUP_BY_KEY_KIND
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FLATTEN_KIND
    FLATTEN_KIND

    Flatten transform.

    READ_KIND
    READ_KIND

    Read transform.

    WRITE_KIND
    WRITE_KIND

    Write transform.

    CONSTANT_KIND
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SINGLETON_KIND
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    SHUFFLE_KIND
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    "UNKNOWN_KIND"
    UNKNOWN_KIND

    Unrecognized transform type.

    "PAR_DO_KIND"
    PAR_DO_KIND

    ParDo transform.

    "GROUP_BY_KEY_KIND"
    GROUP_BY_KEY_KIND

    Group By Key transform.

    "FLATTEN_KIND"
    FLATTEN_KIND

    Flatten transform.

    "READ_KIND"
    READ_KIND

    Read transform.

    "WRITE_KIND"
    WRITE_KIND

    Write transform.

    "CONSTANT_KIND"
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    "SINGLETON_KIND"
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    "SHUFFLE_KIND"
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    ExecutionStageSummaryResponse, ExecutionStageSummaryResponseArgs

    ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceResponse>

    Collections produced and consumed by component transforms of this stage.

    ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformResponse>

    Transforms that comprise this execution stage.

    InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>

    Input sources for this stage.

    Kind string

    Type of transform this stage is executing.

    Name string

    Dataflow service generated name for this stage.

    OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>

    Output sources for this stage.

    PrerequisiteStage List<string>

    Other stages that must complete before this stage can run.

    ComponentSource []ComponentSourceResponse

    Collections produced and consumed by component transforms of this stage.

    ComponentTransform []ComponentTransformResponse

    Transforms that comprise this execution stage.

    InputSource []StageSourceResponse

    Input sources for this stage.

    Kind string

    Type of transform this stage is executing.

    Name string

    Dataflow service generated name for this stage.

    OutputSource []StageSourceResponse

    Output sources for this stage.

    PrerequisiteStage []string

    Other stages that must complete before this stage can run.

    componentSource List<ComponentSourceResponse>

    Collections produced and consumed by component transforms of this stage.

    componentTransform List<ComponentTransformResponse>

    Transforms that comprise this execution stage.

    inputSource List<StageSourceResponse>

    Input sources for this stage.

    kind String

    Type of transform this stage is executing.

    name String

    Dataflow service generated name for this stage.

    outputSource List<StageSourceResponse>

    Output sources for this stage.

    prerequisiteStage List<String>

    Other stages that must complete before this stage can run.

    componentSource ComponentSourceResponse[]

    Collections produced and consumed by component transforms of this stage.

    componentTransform ComponentTransformResponse[]

    Transforms that comprise this execution stage.

    inputSource StageSourceResponse[]

    Input sources for this stage.

    kind string

    Type of transform this stage is executing.

    name string

    Dataflow service generated name for this stage.

    outputSource StageSourceResponse[]

    Output sources for this stage.

    prerequisiteStage string[]

    Other stages that must complete before this stage can run.

    component_source Sequence[ComponentSourceResponse]

    Collections produced and consumed by component transforms of this stage.

    component_transform Sequence[ComponentTransformResponse]

    Transforms that comprise this execution stage.

    input_source Sequence[StageSourceResponse]

    Input sources for this stage.

    kind str

    Type of transform this stage is executing.

    name str

    Dataflow service generated name for this stage.

    output_source Sequence[StageSourceResponse]

    Output sources for this stage.

    prerequisite_stage Sequence[str]

    Other stages that must complete before this stage can run.

    componentSource List<Property Map>

    Collections produced and consumed by component transforms of this stage.

    componentTransform List<Property Map>

    Transforms that comprise this execution stage.

    inputSource List<Property Map>

    Input sources for this stage.

    kind String

    Type of transform this stage is executing.

    name String

    Dataflow service generated name for this stage.

    outputSource List<Property Map>

    Output sources for this stage.

    prerequisiteStage List<String>

    Other stages that must complete before this stage can run.
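
    The execution-stage summaries above are populated by the Dataflow service once it has optimized the pipeline; each stage reports the kind of transform it executes along with its inputs, outputs, and prerequisites. The TypeScript sketch below shows one way to surface that information from the pipelineDescription output. The job arguments are placeholders (a real job also needs steps, an environment, and so on), and the export name is arbitrary.

    import * as google_native from "@pulumi/google-native";

    // Placeholder job; a real deployment needs steps, environment, etc.
    const job = new google_native.dataflow.v1b3.Job("example-job", {
        location: "us-central1",
        type: "JOB_TYPE_BATCH",
    });

    // Map each optimized execution stage to its kind (e.g. GROUP_BY_KEY_KIND)
    // and the stages that must complete before it can run.
    export const stageKinds = job.pipelineDescription.apply(desc =>
        (desc?.executionPipelineStage ?? []).map(stage => ({
            name: stage.name,
            kind: stage.kind,
            prerequisites: stage.prerequisiteStage,
        })));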

    FileIODetails, FileIODetailsArgs

    FilePattern string

    File Pattern used to access files by the connector.

    FilePattern string

    File Pattern used to access files by the connector.

    filePattern String

    File Pattern used to access files by the connector.

    filePattern string

    File Pattern used to access files by the connector.

    file_pattern str

    File Pattern used to access files by the connector.

    filePattern String

    File Pattern used to access files by the connector.
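
    For illustration, a FileIODetails value carries only the file pattern a connector used; it appears in the fileDetails list on JobMetadata further down this page. A minimal TypeScript shape, with a purely hypothetical bucket and glob:

    // Hypothetical FileIODetails value; the bucket and glob are illustrative only.
    const fileDetail = {
        filePattern: "gs://example-bucket/input/*.csv",
    };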

    FileIODetailsResponse, FileIODetailsResponseArgs

    FilePattern string

    File Pattern used to access files by the connector.

    FilePattern string

    File Pattern used to access files by the connector.

    filePattern String

    File Pattern used to access files by the connector.

    filePattern string

    File Pattern used to access files by the connector.

    file_pattern str

    File Pattern used to access files by the connector.

    filePattern String

    File Pattern used to access files by the connector.

    JobCurrentState, JobCurrentStateArgs

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobCurrentStateJobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobCurrentStateJobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobCurrentStateJobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobCurrentStateJobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobCurrentStateJobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobCurrentStateJobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobCurrentStateJobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobCurrentStateJobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobCurrentStateJobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobCurrentStateJobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobCurrentStateJobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobCurrentStateJobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobCurrentStateJobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JOB_STATE_UNKNOWN
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JOB_STATE_STOPPED
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JOB_STATE_RUNNING
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JOB_STATE_DONE
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JOB_STATE_FAILED
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JOB_STATE_CANCELLED
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JOB_STATE_UPDATED
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JOB_STATE_DRAINING
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JOB_STATE_DRAINED
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JOB_STATE_PENDING
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JOB_STATE_CANCELLING
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JOB_STATE_QUEUED
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JOB_STATE_RESOURCE_CLEANING_UP
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    "JOB_STATE_UNKNOWN"
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    "JOB_STATE_STOPPED"
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    "JOB_STATE_RUNNING"
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    "JOB_STATE_DONE"
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    "JOB_STATE_FAILED"
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    "JOB_STATE_CANCELLED"
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    "JOB_STATE_UPDATED"
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    "JOB_STATE_DRAINING"
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    "JOB_STATE_DRAINED"
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    "JOB_STATE_PENDING"
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    "JOB_STATE_CANCELLING"
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    "JOB_STATE_QUEUED"
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    "JOB_STATE_RESOURCE_CLEANING_UP"
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
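
    currentState is output-only: the service updates it as the job moves through the lifecycle described above, and the quoted string values are what the Node.js SDK reports. Below is a small sketch of watching for a terminal state from a Pulumi program; the job arguments are placeholders.

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataflow.v1b3.Job("example-job", {
        location: "us-central1",
        type: "JOB_TYPE_BATCH",
    });

    // States from which the job can no longer transition.
    const terminalStates = new Set([
        "JOB_STATE_DONE",
        "JOB_STATE_FAILED",
        "JOB_STATE_CANCELLED",
        "JOB_STATE_UPDATED",
        "JOB_STATE_DRAINED",
    ]);

    // True once the service reports a terminal state for the job.
    export const isTerminal = job.currentState.apply(s => terminalStates.has(s));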

    JobExecutionInfo, JobExecutionInfoArgs

    Stages Dictionary<string, string>

    A mapping from each stage to the information about that stage.

    Stages map[string]string

    A mapping from each stage to the information about that stage.

    stages Map<String,String>

    A mapping from each stage to the information about that stage.

    stages {[key: string]: string}

    A mapping from each stage to the information about that stage.

    stages Mapping[str, str]

    A mapping from each stage to the information about that stage.

    stages Map<String>

    A mapping from each stage to the information about that stage.

    JobExecutionInfoResponse, JobExecutionInfoResponseArgs

    Stages Dictionary<string, string>

    A mapping from each stage to the information about that stage.

    Stages map[string]string

    A mapping from each stage to the information about that stage.

    stages Map<String,String>

    A mapping from each stage to the information about that stage.

    stages {[key: string]: string}

    A mapping from each stage to the information about that stage.

    stages Mapping[str, str]

    A mapping from each stage to the information about that stage.

    stages Map<String>

    A mapping from each stage to the information about that stage.

    JobMetadata, JobMetadataArgs

    BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetails>

    Identification of a Cloud Bigtable source used in the Dataflow job.

    BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetails>

    Identification of a BigQuery source used in the Dataflow job.

    DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetails>

    Identification of a Datastore source used in the Dataflow job.

    FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetails>

    Identification of a File source used in the Dataflow job.

    PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetails>

    Identification of a Pub/Sub source used in the Dataflow job.

    SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersion

    The SDK version used to run the job.

    SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetails>

    Identification of a Spanner source used in the Dataflow job.

    UserDisplayProperties Dictionary<string, string>

    List of display properties to help UI filter jobs.

    BigTableDetails []BigTableIODetails

    Identification of a Cloud Bigtable source used in the Dataflow job.

    BigqueryDetails []BigQueryIODetails

    Identification of a BigQuery source used in the Dataflow job.

    DatastoreDetails []DatastoreIODetails

    Identification of a Datastore source used in the Dataflow job.

    FileDetails []FileIODetails

    Identification of a File source used in the Dataflow job.

    PubsubDetails []PubSubIODetails

    Identification of a Pub/Sub source used in the Dataflow job.

    SdkVersion SdkVersion

    The SDK version used to run the job.

    SpannerDetails []SpannerIODetails

    Identification of a Spanner source used in the Dataflow job.

    UserDisplayProperties map[string]string

    List of display properties to help UI filter jobs.

    bigTableDetails List<BigTableIODetails>

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigqueryDetails List<BigQueryIODetails>

    Identification of a BigQuery source used in the Dataflow job.

    datastoreDetails List<DatastoreIODetails>

    Identification of a Datastore source used in the Dataflow job.

    fileDetails List<FileIODetails>

    Identification of a File source used in the Dataflow job.

    pubsubDetails List<PubSubIODetails>

    Identification of a Pub/Sub source used in the Dataflow job.

    sdkVersion SdkVersion

    The SDK version used to run the job.

    spannerDetails List<SpannerIODetails>

    Identification of a Spanner source used in the Dataflow job.

    userDisplayProperties Map<String,String>

    List of display properties to help UI filter jobs.

    bigTableDetails BigTableIODetails[]

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigqueryDetails BigQueryIODetails[]

    Identification of a BigQuery source used in the Dataflow job.

    datastoreDetails DatastoreIODetails[]

    Identification of a Datastore source used in the Dataflow job.

    fileDetails FileIODetails[]

    Identification of a File source used in the Dataflow job.

    pubsubDetails PubSubIODetails[]

    Identification of a Pub/Sub source used in the Dataflow job.

    sdkVersion SdkVersion

    The SDK version used to run the job.

    spannerDetails SpannerIODetails[]

    Identification of a Spanner source used in the Dataflow job.

    userDisplayProperties {[key: string]: string}

    List of display properties to help UI filter jobs.

    big_table_details Sequence[BigTableIODetails]

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigquery_details Sequence[BigQueryIODetails]

    Identification of a BigQuery source used in the Dataflow job.

    datastore_details Sequence[DatastoreIODetails]

    Identification of a Datastore source used in the Dataflow job.

    file_details Sequence[FileIODetails]

    Identification of a File source used in the Dataflow job.

    pubsub_details Sequence[PubSubIODetails]

    Identification of a Pub/Sub source used in the Dataflow job.

    sdk_version SdkVersion

    The SDK version used to run the job.

    spanner_details Sequence[SpannerIODetails]

    Identification of a Spanner source used in the Dataflow job.

    user_display_properties Mapping[str, str]

    List of display properties to help UI filter jobs.

    bigTableDetails List<Property Map>

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigqueryDetails List<Property Map>

    Identification of a BigQuery source used in the Dataflow job.

    datastoreDetails List<Property Map>

    Identification of a Datastore source used in the Dataflow job.

    fileDetails List<Property Map>

    Identification of a File source used in the Dataflow job.

    pubsubDetails List<Property Map>

    Identification of a Pub/Sub source used in the Dataflow job.

    sdkVersion Property Map

    The SDK version used to run the job.

    spannerDetails List<Property Map>

    Identification of a Spanner source used in the Dataflow job.

    userDisplayProperties Map<String>

    List of display properties to help UI filter jobs.
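
    Job metadata is filled in by the service to describe the sources and sinks a job touches, plus the SDK that launched it. Once the job resource exists, you can surface it from the jobMetadata output. A hedged TypeScript sketch, again with placeholder job arguments and assuming the BigQueryIODetails fields (projectId, dataset, table) and SdkVersion field (versionDisplayName) documented elsewhere on this page:

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataflow.v1b3.Job("example-job", {
        location: "us-central1",
        type: "JOB_TYPE_BATCH",
    });

    // Surface the SDK version string and any BigQuery tables the job reads or writes.
    export const sdkVersion = job.jobMetadata.apply(m => m?.sdkVersion?.versionDisplayName);
    export const bigqueryTables = job.jobMetadata.apply(m =>
        (m?.bigqueryDetails ?? []).map(d => `${d.projectId}:${d.dataset}.${d.table}`));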

    JobMetadataResponse, JobMetadataResponseArgs

    BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsResponse>

    Identification of a Cloud Bigtable source used in the Dataflow job.

    BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsResponse>

    Identification of a BigQuery source used in the Dataflow job.

    DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsResponse>

    Identification of a Datastore source used in the Dataflow job.

    FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsResponse>

    Identification of a File source used in the Dataflow job.

    PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsResponse>

    Identification of a Pub/Sub source used in the Dataflow job.

    SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersionResponse

    The SDK version used to run the job.

    SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsResponse>

    Identification of a Spanner source used in the Dataflow job.

    UserDisplayProperties Dictionary<string, string>

    List of display properties to help UI filter jobs.

    BigTableDetails []BigTableIODetailsResponse

    Identification of a Cloud Bigtable source used in the Dataflow job.

    BigqueryDetails []BigQueryIODetailsResponse

    Identification of a BigQuery source used in the Dataflow job.

    DatastoreDetails []DatastoreIODetailsResponse

    Identification of a Datastore source used in the Dataflow job.

    FileDetails []FileIODetailsResponse

    Identification of a File source used in the Dataflow job.

    PubsubDetails []PubSubIODetailsResponse

    Identification of a Pub/Sub source used in the Dataflow job.

    SdkVersion SdkVersionResponse

    The SDK version used to run the job.

    SpannerDetails []SpannerIODetailsResponse

    Identification of a Spanner source used in the Dataflow job.

    UserDisplayProperties map[string]string

    List of display properties to help UI filter jobs.

    bigTableDetails List<BigTableIODetailsResponse>

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigqueryDetails List<BigQueryIODetailsResponse>

    Identification of a BigQuery source used in the Dataflow job.

    datastoreDetails List<DatastoreIODetailsResponse>

    Identification of a Datastore source used in the Dataflow job.

    fileDetails List<FileIODetailsResponse>

    Identification of a File source used in the Dataflow job.

    pubsubDetails List<PubSubIODetailsResponse>

    Identification of a Pub/Sub source used in the Dataflow job.

    sdkVersion SdkVersionResponse

    The SDK version used to run the job.

    spannerDetails List<SpannerIODetailsResponse>

    Identification of a Spanner source used in the Dataflow job.

    userDisplayProperties Map<String,String>

    List of display properties to help UI filter jobs.

    bigTableDetails BigTableIODetailsResponse[]

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigqueryDetails BigQueryIODetailsResponse[]

    Identification of a BigQuery source used in the Dataflow job.

    datastoreDetails DatastoreIODetailsResponse[]

    Identification of a Datastore source used in the Dataflow job.

    fileDetails FileIODetailsResponse[]

    Identification of a File source used in the Dataflow job.

    pubsubDetails PubSubIODetailsResponse[]

    Identification of a Pub/Sub source used in the Dataflow job.

    sdkVersion SdkVersionResponse

    The SDK version used to run the job.

    spannerDetails SpannerIODetailsResponse[]

    Identification of a Spanner source used in the Dataflow job.

    userDisplayProperties {[key: string]: string}

    List of display properties to help UI filter jobs.

    big_table_details Sequence[BigTableIODetailsResponse]

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigquery_details Sequence[BigQueryIODetailsResponse]

    Identification of a BigQuery source used in the Dataflow job.

    datastore_details Sequence[DatastoreIODetailsResponse]

    Identification of a Datastore source used in the Dataflow job.

    file_details Sequence[FileIODetailsResponse]

    Identification of a File source used in the Dataflow job.

    pubsub_details Sequence[PubSubIODetailsResponse]

    Identification of a Pub/Sub source used in the Dataflow job.

    sdk_version SdkVersionResponse

    The SDK version used to run the job.

    spanner_details Sequence[SpannerIODetailsResponse]

    Identification of a Spanner source used in the Dataflow job.

    user_display_properties Mapping[str, str]

    List of display properties to help UI filter jobs.

    bigTableDetails List<Property Map>

    Identification of a Cloud Bigtable source used in the Dataflow job.

    bigqueryDetails List<Property Map>

    Identification of a BigQuery source used in the Dataflow job.

    datastoreDetails List<Property Map>

    Identification of a Datastore source used in the Dataflow job.

    fileDetails List<Property Map>

    Identification of a File source used in the Dataflow job.

    pubsubDetails List<Property Map>

    Identification of a Pub/Sub source used in the Dataflow job.

    sdkVersion Property Map

    The SDK version used to run the job.

    spannerDetails List<Property Map>

    Identification of a Spanner source used in the Dataflow job.

    userDisplayProperties Map<String>

    List of display properties to help UI filter jobs.

    JobRequestedState, JobRequestedStateArgs

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobRequestedStateJobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobRequestedStateJobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobRequestedStateJobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobRequestedStateJobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobRequestedStateJobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobRequestedStateJobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobRequestedStateJobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobRequestedStateJobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobRequestedStateJobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobRequestedStateJobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobRequestedStateJobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobRequestedStateJobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobRequestedStateJobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JobStateUnknown
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JobStateStopped
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JobStateRunning
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JobStateDone
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JobStateFailed
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateCancelled
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JobStateUpdated
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JobStateDraining
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JobStateDrained
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JobStatePending
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JobStateCancelling
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JobStateQueued
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    JOB_STATE_UNKNOWN
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    JOB_STATE_STOPPED
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    JOB_STATE_RUNNING
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    JOB_STATE_DONE
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    JOB_STATE_FAILED
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JOB_STATE_CANCELLED
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    JOB_STATE_UPDATED
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    JOB_STATE_DRAINING
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    JOB_STATE_DRAINED
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    JOB_STATE_PENDING
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    JOB_STATE_CANCELLING
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    JOB_STATE_QUEUED
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    JOB_STATE_RESOURCE_CLEANING_UP
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

    "JOB_STATE_UNKNOWN"
    JOB_STATE_UNKNOWN

    The job's run state isn't specified.

    "JOB_STATE_STOPPED"
    JOB_STATE_STOPPED

    JOB_STATE_STOPPED indicates that the job has not yet started to run.

    "JOB_STATE_RUNNING"
    JOB_STATE_RUNNING

    JOB_STATE_RUNNING indicates that the job is currently running.

    "JOB_STATE_DONE"
    JOB_STATE_DONE

    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.

    "JOB_STATE_FAILED"
    JOB_STATE_FAILED

    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    "JOB_STATE_CANCELLED"
    JOB_STATE_CANCELLED

    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.

    "JOB_STATE_UPDATED"
    JOB_STATE_UPDATED

    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.

    "JOB_STATE_DRAINING"
    JOB_STATE_DRAINING

    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.

    "JOB_STATE_DRAINED"
    JOB_STATE_DRAINED

    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and by processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.

    "JOB_STATE_PENDING"
    JOB_STATE_PENDING

    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.

    "JOB_STATE_CANCELLING"
    JOB_STATE_CANCELLING

    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.

    "JOB_STATE_QUEUED"
    JOB_STATE_QUEUED

    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.

    "JOB_STATE_RESOURCE_CLEANING_UP"
    JOB_STATE_RESOURCE_CLEANING_UP

    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
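
    Unlike currentState, requestedState is an input: setting it asks the service to move the job toward that state on the next update, subject to the transition rules listed above. A sketch of requesting a drain on a streaming job; everything besides requestedState is a placeholder, and a real streaming job also needs steps, an environment, and so on.

    import * as google_native from "@pulumi/google-native";

    // Ask the service to drain this streaming job on the next update.
    const job = new google_native.dataflow.v1b3.Job("streaming-job", {
        location: "us-central1",
        type: "JOB_TYPE_STREAMING",
        requestedState: "JOB_STATE_DRAINING",
    });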

    JobType, JobTypeArgs

    JobTypeUnknown
    JOB_TYPE_UNKNOWN

    The type of the job is unspecified, or unknown.

    JobTypeBatch
    JOB_TYPE_BATCH

    A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.

    JobTypeStreaming
    JOB_TYPE_STREAMING

    A continuously streaming job with no end: data is read, processed, and written continuously.

    JobTypeJobTypeUnknown
    JOB_TYPE_UNKNOWN

    The type of the job is unspecified, or unknown.

    JobTypeJobTypeBatch
    JOB_TYPE_BATCH

    A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.

    JobTypeJobTypeStreaming
    JOB_TYPE_STREAMING

    A continuously streaming job with no end: data is read, processed, and written continuously.

    JobTypeUnknown
    JOB_TYPE_UNKNOWN

    The type of the job is unspecified, or unknown.

    JobTypeBatch
    JOB_TYPE_BATCH

    A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.

    JobTypeStreaming
    JOB_TYPE_STREAMING

    A continuously streaming job with no end: data is read, processed, and written continuously.

    JobTypeUnknown
    JOB_TYPE_UNKNOWN

    The type of the job is unspecified, or unknown.

    JobTypeBatch
    JOB_TYPE_BATCH

    A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.

    JobTypeStreaming
    JOB_TYPE_STREAMING

    A continuously streaming job with no end: data is read, processed, and written continuously.

    JOB_TYPE_UNKNOWN
    JOB_TYPE_UNKNOWN

    The type of the job is unspecified, or unknown.

    JOB_TYPE_BATCH
    JOB_TYPE_BATCH

    A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.

    JOB_TYPE_STREAMING
    JOB_TYPE_STREAMING

    A continuously streaming job with no end: data is read, processed, and written continuously.

    "JOB_TYPE_UNKNOWN"
    JOB_TYPE_UNKNOWN

    The type of the job is unspecified, or unknown.

    "JOB_TYPE_BATCH"
    JOB_TYPE_BATCH

    A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.

    "JOB_TYPE_STREAMING"
    JOB_TYPE_STREAMING

    A continuously streaming job with no end: data is read, processed, and written continuously.
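
    For illustration, the following minimal Python sketch sets the job type with this enum when declaring a Job. The import path, project, and names are placeholder assumptions, and a real job would carry additional arguments.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # Declare a streaming job; JobType.JOB_TYPE_BATCH would be used for a bounded pipeline.
    streaming_job = dataflow.Job(
        "streaming-job",
        project="my-project",      # placeholder project ID
        location="us-central1",
        type=dataflow.JobType.JOB_TYPE_STREAMING,
    )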

    Package, PackageArgs

    Location string

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    Name string

    The name of the package.

    Location string

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    Name string

    The name of the package.

    location String

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name String

    The name of the package.

    location string

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name string

    The name of the package.

    location str

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name str

    The name of the package.

    location String

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name String

    The name of the package.
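
    As a hedged sketch, a Package value can be built with the generated PackageArgs input type and handed to the worker pool configuration that stages it. The bucket, object, and import path below are assumptions.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # A package staged from Cloud Storage; bucket and object names are placeholders.
    pipeline_deps = dataflow.PackageArgs(
        name="pipeline-deps.tar.gz",
        location="storage.googleapis.com/my-bucket/pipeline-deps.tar.gz",
    )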

    PackageResponse, PackageResponseArgs

    Location string

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    Name string

    The name of the package.

    Location string

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    Name string

    The name of the package.

    location String

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name String

    The name of the package.

    location string

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name string

    The name of the package.

    location str

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name str

    The name of the package.

    location String

    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/

    name String

    The name of the package.

    PipelineDescription, PipelineDescriptionArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayData>

    Pipeline level display data.

    ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummary>

    Description of each stage of execution of the pipeline.

    OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummary>

    Description of each transform in the pipeline and collections between them.

    StepNamesHash string

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    DisplayData []DisplayData

    Pipeline level display data.

    ExecutionPipelineStage []ExecutionStageSummary

    Description of each stage of execution of the pipeline.

    OriginalPipelineTransform []TransformSummary

    Description of each transform in the pipeline and collections between them.

    StepNamesHash string

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    displayData List<DisplayData>

    Pipeline level display data.

    executionPipelineStage List<ExecutionStageSummary>

    Description of each stage of execution of the pipeline.

    originalPipelineTransform List<TransformSummary>

    Description of each transform in the pipeline and collections between them.

    stepNamesHash String

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    displayData DisplayData[]

    Pipeline level display data.

    executionPipelineStage ExecutionStageSummary[]

    Description of each stage of execution of the pipeline.

    originalPipelineTransform TransformSummary[]

    Description of each transform in the pipeline and collections between them.

    stepNamesHash string

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    display_data Sequence[DisplayData]

    Pipeline level display data.

    execution_pipeline_stage Sequence[ExecutionStageSummary]

    Description of each stage of execution of the pipeline.

    original_pipeline_transform Sequence[TransformSummary]

    Description of each transform in the pipeline and collections between them.

    step_names_hash str

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    displayData List<Property Map>

    Pipeline level display data.

    executionPipelineStage List<Property Map>

    Description of each stage of execution of the pipeline.

    originalPipelineTransform List<Property Map>

    Description of each transform in the pipeline and collections between them.

    stepNamesHash String

    A hash value of the submitted pipeline's portable graph step names, if it exists.
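
    PipelineDescription is populated by the Dataflow service rather than supplied by hand, so the more common pattern is to read it back as an output. The sketch below assumes the import path, a minimal job declaration, and that the nested field can be pulled out with apply(); a real job needs more arguments.

    import pulumi
    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    job = dataflow.Job("described-job", project="my-project", location="us-central1")

    # pipeline_description becomes available once the service has summarized the pipeline.
    pulumi.export(
        "stepNamesHash",
        job.pipeline_description.apply(lambda d: d.step_names_hash if d else None),
    )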

    PipelineDescriptionResponse, PipelineDescriptionResponseArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>

    Pipeline level display data.

    ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryResponse>

    Description of each stage of execution of the pipeline.

    OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryResponse>

    Description of each transform in the pipeline and collections between them.

    StepNamesHash string

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    DisplayData []DisplayDataResponse

    Pipeline level display data.

    ExecutionPipelineStage []ExecutionStageSummaryResponse

    Description of each stage of execution of the pipeline.

    OriginalPipelineTransform []TransformSummaryResponse

    Description of each transform in the pipeline and collections between them.

    StepNamesHash string

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    displayData List<DisplayDataResponse>

    Pipeline level display data.

    executionPipelineStage List<ExecutionStageSummaryResponse>

    Description of each stage of execution of the pipeline.

    originalPipelineTransform List<TransformSummaryResponse>

    Description of each transform in the pipeline and collections between them.

    stepNamesHash String

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    displayData DisplayDataResponse[]

    Pipeline level display data.

    executionPipelineStage ExecutionStageSummaryResponse[]

    Description of each stage of execution of the pipeline.

    originalPipelineTransform TransformSummaryResponse[]

    Description of each transform in the pipeline and collections between them.

    stepNamesHash string

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    display_data Sequence[DisplayDataResponse]

    Pipeline level display data.

    execution_pipeline_stage Sequence[ExecutionStageSummaryResponse]

    Description of each stage of execution of the pipeline.

    original_pipeline_transform Sequence[TransformSummaryResponse]

    Description of each transform in the pipeline and collections between them.

    step_names_hash str

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    displayData List<Property Map>

    Pipeline level display data.

    executionPipelineStage List<Property Map>

    Description of each stage of execution of the pipeline.

    originalPipelineTransform List<Property Map>

    Description of each transform in the pipeline and collections between them.

    stepNamesHash String

    A hash value of the submitted pipeline's portable graph step names, if it exists.

    PubSubIODetails, PubSubIODetailsArgs

    Subscription string

    Subscription used in the connection.

    Topic string

    Topic accessed in the connection.

    Subscription string

    Subscription used in the connection.

    Topic string

    Topic accessed in the connection.

    subscription String

    Subscription used in the connection.

    topic String

    Topic accessed in the connection.

    subscription string

    Subscription used in the connection.

    topic string

    Topic accessed in the connection.

    subscription str

    Subscription used in the connection.

    topic str

    Topic accessed in the connection.

    subscription String

    Subscription used in the connection.

    topic String

    Topic accessed in the connection.
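
    For illustration, a PubSubIODetails value could be assembled with the generated input type as shown below; these details are typically carried in the job's metadata. The topic, subscription, and import path are placeholder assumptions.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # Pub/Sub connection metadata recorded for the job; resource names are placeholders.
    pubsub_details = dataflow.PubSubIODetailsArgs(
        topic="projects/my-project/topics/events",
        subscription="projects/my-project/subscriptions/events-sub",
    )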

    PubSubIODetailsResponse, PubSubIODetailsResponseArgs

    Subscription string

    Subscription used in the connection.

    Topic string

    Topic accessed in the connection.

    Subscription string

    Subscription used in the connection.

    Topic string

    Topic accessed in the connection.

    subscription String

    Subscription used in the connection.

    topic String

    Topic accessed in the connection.

    subscription string

    Subscription used in the connection.

    topic string

    Topic accessed in the connection.

    subscription str

    Subscription used in the connection.

    topic str

    Topic accessed in the connection.

    subscription String

    Subscription used in the connection.

    topic String

    Topic accessed in the connection.

    RuntimeUpdatableParams, RuntimeUpdatableParamsArgs

    MaxNumWorkers int

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    MinNumWorkers int

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    MaxNumWorkers int

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    MinNumWorkers int

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    maxNumWorkers Integer

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    minNumWorkers Integer

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    maxNumWorkers number

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    minNumWorkers number

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    max_num_workers int

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    min_num_workers int

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    maxNumWorkers Number

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    minNumWorkers Number

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
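
    A minimal Python sketch of pinning the autoscaling range on a Streaming Engine job follows. The project, location, and bounds are placeholders, and a real job would also define its pipeline.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # Cap Streaming Engine autoscaling between 1 and 10 workers.
    autoscaled_job = dataflow.Job(
        "autoscaled-job",
        project="my-project",
        location="us-central1",
        type=dataflow.JobType.JOB_TYPE_STREAMING,
        runtime_updatable_params=dataflow.RuntimeUpdatableParamsArgs(
            min_num_workers=1,
            max_num_workers=10,
        ),
    )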

    RuntimeUpdatableParamsResponse, RuntimeUpdatableParamsResponseArgs

    MaxNumWorkers int

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    MinNumWorkers int

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    MaxNumWorkers int

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    MinNumWorkers int

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    maxNumWorkers Integer

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    minNumWorkers Integer

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    maxNumWorkers number

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    minNumWorkers number

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    max_num_workers int

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    min_num_workers int

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    maxNumWorkers Number

    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.

    minNumWorkers Number

    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    SdkHarnessContainerImage, SdkHarnessContainerImageArgs

    Capabilities List<string>

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    ContainerImage string

    A docker container image that resides in Google Container Registry.

    EnvironmentId string

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    UseSingleCorePerContainer bool

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    Capabilities []string

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    ContainerImage string

    A docker container image that resides in Google Container Registry.

    EnvironmentId string

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    UseSingleCorePerContainer bool

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities List<String>

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    containerImage String

    A docker container image that resides in Google Container Registry.

    environmentId String

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    useSingleCorePerContainer Boolean

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities string[]

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    containerImage string

    A docker container image that resides in Google Container Registry.

    environmentId string

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    useSingleCorePerContainer boolean

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities Sequence[str]

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    container_image str

    A docker container image that resides in Google Container Registry.

    environment_id str

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    use_single_core_per_container bool

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities List<String>

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    containerImage String

    A docker container image that resides in Google Container Registry.

    environmentId String

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    useSingleCorePerContainer Boolean

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
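
    The sketch below builds an SdkHarnessContainerImage input with the generated args type; the image URI, environment ID, and capability URN are illustrative assumptions, and in practice the value is attached to a worker pool in the job environment.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # A custom SDK harness container for a worker pool; all values are placeholders.
    harness_image = dataflow.SdkHarnessContainerImageArgs(
        container_image="gcr.io/my-project/my-beam-sdk:latest",
        environment_id="beam:env:docker:v1",
        use_single_core_per_container=False,
        capabilities=["beam:coder:bytes:v1"],
    )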

    SdkHarnessContainerImageResponse, SdkHarnessContainerImageResponseArgs

    Capabilities List<string>

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    ContainerImage string

    A docker container image that resides in Google Container Registry.

    EnvironmentId string

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    UseSingleCorePerContainer bool

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    Capabilities []string

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    ContainerImage string

    A docker container image that resides in Google Container Registry.

    EnvironmentId string

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    UseSingleCorePerContainer bool

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities List<String>

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    containerImage String

    A docker container image that resides in Google Container Registry.

    environmentId String

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    useSingleCorePerContainer Boolean

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities string[]

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    containerImage string

    A docker container image that resides in Google Container Registry.

    environmentId string

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    useSingleCorePerContainer boolean

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities Sequence[str]

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    container_image str

    A docker container image that resides in Google Container Registry.

    environment_id str

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    use_single_core_per_container bool

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    capabilities List<String>

    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto

    containerImage String

    A docker container image that resides in Google Container Registry.

    environmentId String

    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.

    useSingleCorePerContainer Boolean

    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    SdkVersion, SdkVersionArgs

    SdkSupportStatus Pulumi.GoogleNative.Dataflow.V1b3.SdkVersionSdkSupportStatus

    The support status for this SDK version.

    Version string

    The version of the SDK used to run the job.

    VersionDisplayName string

    A readable string describing the version of the SDK.

    SdkSupportStatus SdkVersionSdkSupportStatus

    The support status for this SDK version.

    Version string

    The version of the SDK used to run the job.

    VersionDisplayName string

    A readable string describing the version of the SDK.

    sdkSupportStatus SdkVersionSdkSupportStatus

    The support status for this SDK version.

    version String

    The version of the SDK used to run the job.

    versionDisplayName String

    A readable string describing the version of the SDK.

    sdkSupportStatus SdkVersionSdkSupportStatus

    The support status for this SDK version.

    version string

    The version of the SDK used to run the job.

    versionDisplayName string

    A readable string describing the version of the SDK.

    sdk_support_status SdkVersionSdkSupportStatus

    The support status for this SDK version.

    version str

    The version of the SDK used to run the job.

    version_display_name str

    A readable string describing the version of the SDK.

    sdkSupportStatus "UNKNOWN" | "SUPPORTED" | "STALE" | "DEPRECATED" | "UNSUPPORTED"

    The support status for this SDK version.

    version String

    The version of the SDK used to run the job.

    versionDisplayName String

    A readable string describing the version of the SDK.
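
    As an illustration, an SdkVersion value can be expressed with the generated args type and the support-status enum documented below; the version strings are placeholders.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # SDK version metadata; the values shown are illustrative only.
    sdk_version = dataflow.SdkVersionArgs(
        version="2.48.0",
        version_display_name="Apache Beam SDK for Python 2.48.0",
        sdk_support_status=dataflow.SdkVersionSdkSupportStatus.SUPPORTED,
    )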

    SdkVersionResponse, SdkVersionResponseArgs

    SdkSupportStatus string

    The support status for this SDK version.

    Version string

    The version of the SDK used to run the job.

    VersionDisplayName string

    A readable string describing the version of the SDK.

    SdkSupportStatus string

    The support status for this SDK version.

    Version string

    The version of the SDK used to run the job.

    VersionDisplayName string

    A readable string describing the version of the SDK.

    sdkSupportStatus String

    The support status for this SDK version.

    version String

    The version of the SDK used to run the job.

    versionDisplayName String

    A readable string describing the version of the SDK.

    sdkSupportStatus string

    The support status for this SDK version.

    version string

    The version of the SDK used to run the job.

    versionDisplayName string

    A readable string describing the version of the SDK.

    sdk_support_status str

    The support status for this SDK version.

    version str

    The version of the SDK used to run the job.

    version_display_name str

    A readable string describing the version of the SDK.

    sdkSupportStatus String

    The support status for this SDK version.

    version String

    The version of the SDK used to run the job.

    versionDisplayName String

    A readable string describing the version of the SDK.

    SdkVersionSdkSupportStatus, SdkVersionSdkSupportStatusArgs

    Unknown
    UNKNOWN

    Cloud Dataflow is unaware of this version.

    Supported
    SUPPORTED

    This is a known version of an SDK, and is supported.

    Stale
    STALE

    A newer version of the SDK family exists, and an update is recommended.

    Deprecated
    DEPRECATED

    This version of the SDK is deprecated and will eventually be unsupported.

    Unsupported
    UNSUPPORTED

    Support for this SDK version has ended and it should no longer be used.

    SdkVersionSdkSupportStatusUnknown
    UNKNOWN

    Cloud Dataflow is unaware of this version.

    SdkVersionSdkSupportStatusSupported
    SUPPORTED

    This is a known version of an SDK, and is supported.

    SdkVersionSdkSupportStatusStale
    STALE

    A newer version of the SDK family exists, and an update is recommended.

    SdkVersionSdkSupportStatusDeprecated
    DEPRECATED

    This version of the SDK is deprecated and will eventually be unsupported.

    SdkVersionSdkSupportStatusUnsupported
    UNSUPPORTED

    Support for this SDK version has ended and it should no longer be used.

    Unknown
    UNKNOWN

    Cloud Dataflow is unaware of this version.

    Supported
    SUPPORTED

    This is a known version of an SDK, and is supported.

    Stale
    STALE

    A newer version of the SDK family exists, and an update is recommended.

    Deprecated
    DEPRECATED

    This version of the SDK is deprecated and will eventually be unsupported.

    Unsupported
    UNSUPPORTED

    Support for this SDK version has ended and it should no longer be used.

    Unknown
    UNKNOWN

    Cloud Dataflow is unaware of this version.

    Supported
    SUPPORTED

    This is a known version of an SDK, and is supported.

    Stale
    STALE

    A newer version of the SDK family exists, and an update is recommended.

    Deprecated
    DEPRECATED

    This version of the SDK is deprecated and will eventually be unsupported.

    Unsupported
    UNSUPPORTED

    Support for this SDK version has ended and it should no longer be used.

    UNKNOWN
    UNKNOWN

    Cloud Dataflow is unaware of this version.

    SUPPORTED
    SUPPORTED

    This is a known version of an SDK, and is supported.

    STALE
    STALE

    A newer version of the SDK family exists, and an update is recommended.

    DEPRECATED
    DEPRECATED

    This version of the SDK is deprecated and will eventually be unsupported.

    UNSUPPORTED
    UNSUPPORTED

    Support for this SDK version has ended and it should no longer be used.

    "UNKNOWN"
    UNKNOWN

    Cloud Dataflow is unaware of this version.

    "SUPPORTED"
    SUPPORTED

    This is a known version of an SDK, and is supported.

    "STALE"
    STALE

    A newer version of the SDK family exists, and an update is recommended.

    "DEPRECATED"
    DEPRECATED

    This version of the SDK is deprecated and will eventually be unsupported.

    "UNSUPPORTED"
    UNSUPPORTED

    Support for this SDK version has ended and it should no longer be used.

    SpannerIODetails, SpannerIODetailsArgs

    DatabaseId string

    DatabaseId accessed in the connection.

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    DatabaseId string

    DatabaseId accessed in the connection.

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    databaseId String

    DatabaseId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    databaseId string

    DatabaseId accessed in the connection.

    instanceId string

    InstanceId accessed in the connection.

    project string

    ProjectId accessed in the connection.

    database_id str

    DatabaseId accessed in the connection.

    instance_id str

    InstanceId accessed in the connection.

    project str

    ProjectId accessed in the connection.

    databaseId String

    DatabaseId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.
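
    A minimal sketch of a SpannerIODetails value follows; the project, instance, and database identifiers are placeholders, and the value is typically carried in the job's metadata.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # Spanner connection metadata for the job; identifiers are placeholders.
    spanner_details = dataflow.SpannerIODetailsArgs(
        project="my-project",
        instance_id="my-instance",
        database_id="my-database",
    )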

    SpannerIODetailsResponse, SpannerIODetailsResponseArgs

    DatabaseId string

    DatabaseId accessed in the connection.

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    DatabaseId string

    DatabaseId accessed in the connection.

    InstanceId string

    InstanceId accessed in the connection.

    Project string

    ProjectId accessed in the connection.

    databaseId String

    DatabaseId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    databaseId string

    DatabaseId accessed in the connection.

    instanceId string

    InstanceId accessed in the connection.

    project string

    ProjectId accessed in the connection.

    database_id str

    DatabaseId accessed in the connection.

    instance_id str

    InstanceId accessed in the connection.

    project str

    ProjectId accessed in the connection.

    databaseId String

    DatabaseId accessed in the connection.

    instanceId String

    InstanceId accessed in the connection.

    project String

    ProjectId accessed in the connection.

    StageSource, StageSourceArgs

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    SizeBytes string

    Size of the source, if measurable.

    UserName string

    Human-readable name for this source; may be user or system generated.

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    SizeBytes string

    Size of the source, if measurable.

    UserName string

    Human-readable name for this source; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    sizeBytes String

    Size of the source, if measurable.

    userName String

    Human-readable name for this source; may be user or system generated.

    name string

    Dataflow service generated name for this source.

    originalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    sizeBytes string

    Size of the source, if measurable.

    userName string

    Human-readable name for this source; may be user or system generated.

    name str

    Dataflow service generated name for this source.

    original_transform_or_collection str

    User name for the original user transform or collection with which this source is most closely associated.

    size_bytes str

    Size of the source, if measurable.

    user_name str

    Human-readable name for this source; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    sizeBytes String

    Size of the source, if measurable.

    userName String

    Human-readable name for this source; may be user or system generated.

    StageSourceResponse, StageSourceResponseArgs

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    SizeBytes string

    Size of the source, if measurable.

    UserName string

    Human-readable name for this source; may be user or system generated.

    Name string

    Dataflow service generated name for this source.

    OriginalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    SizeBytes string

    Size of the source, if measurable.

    UserName string

    Human-readable name for this source; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    sizeBytes String

    Size of the source, if measurable.

    userName String

    Human-readable name for this source; may be user or system generated.

    name string

    Dataflow service generated name for this source.

    originalTransformOrCollection string

    User name for the original user transform or collection with which this source is most closely associated.

    sizeBytes string

    Size of the source, if measurable.

    userName string

    Human-readable name for this source; may be user or system generated.

    name str

    Dataflow service generated name for this source.

    original_transform_or_collection str

    User name for the original user transform or collection with which this source is most closely associated.

    size_bytes str

    Size of the source, if measurable.

    user_name str

    Human-readable name for this source; may be user or system generated.

    name String

    Dataflow service generated name for this source.

    originalTransformOrCollection String

    User name for the original user transform or collection with which this source is most closely associated.

    sizeBytes String

    Size of the source, if measurable.

    userName String

    Human-readable name for this source; may be user or system generated.

    Step, StepArgs

    Kind string

    The kind of step in the Cloud Dataflow job.

    Name string

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    Properties Dictionary<string, string>

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    Kind string

    The kind of step in the Cloud Dataflow job.

    Name string

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    Properties map[string]string

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind String

    The kind of step in the Cloud Dataflow job.

    name String

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties Map<String,String>

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind string

    The kind of step in the Cloud Dataflow job.

    name string

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties {[key: string]: string}

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind str

    The kind of step in the Cloud Dataflow job.

    name str

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties Mapping[str, str]

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind String

    The kind of step in the Cloud Dataflow job.

    name String

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties Map<String>

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
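
    For illustration, a single step could be declared with the generated StepArgs type as sketched below. The kind string and properties are placeholders only, since each predefined step kind defines its own required property set.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # One step of a job; the kind and properties shown are illustrative, not a real schema.
    read_step = dataflow.StepArgs(
        kind="ParallelRead",
        name="ReadFromStorage",
        properties={
            "format": "text",
            "filepattern": "gs://my-bucket/input/*.txt",
        },
    )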

    StepResponse, StepResponseArgs

    Kind string

    The kind of step in the Cloud Dataflow job.

    Name string

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    Properties Dictionary<string, string>

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    Kind string

    The kind of step in the Cloud Dataflow job.

    Name string

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    Properties map[string]string

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind String

    The kind of step in the Cloud Dataflow job.

    name String

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties Map<String,String>

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind string

    The kind of step in the Cloud Dataflow job.

    name string

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties {[key: string]: string}

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind str

    The kind of step in the Cloud Dataflow job.

    name str

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties Mapping[str, str]

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    kind String

    The kind of step in the Cloud Dataflow job.

    name String

    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

    properties Map<String>

    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    TaskRunnerSettings, TaskRunnerSettingsArgs

    Alsologtostderr bool

    Whether to also send taskrunner log info to stderr.

    BaseTaskDir string

    The location on the worker for task-specific subdirectories.

    BaseUrl string

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    CommandlinesFileName string

    The file to store preprocessing commands in.

    ContinueOnException bool

    Whether to continue taskrunner if an exception is hit.

    DataflowApiVersion string

    The API version of endpoint, e.g. "v1b3"

    HarnessCommand string

    The command to launch the worker harness.

    LanguageHint string

    The suggested backend language.

    LogDir string

    The directory on the VM to store logs.

    LogToSerialconsole bool

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    LogUploadLocation string

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    OauthScopes List<string>

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettings

    The settings to pass to the parallel worker harness.

    StreamingWorkerMainClass string

    The streaming worker main class name.

    TaskGroup string

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    TaskUser string

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    TempStoragePrefix string

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    VmId string

    The ID string of the VM.

    WorkflowFileName string

    The file to store the workflow in.

    Alsologtostderr bool

    Whether to also send taskrunner log info to stderr.

    BaseTaskDir string

    The location on the worker for task-specific subdirectories.

    BaseUrl string

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    CommandlinesFileName string

    The file to store preprocessing commands in.

    ContinueOnException bool

    Whether to continue taskrunner if an exception is hit.

    DataflowApiVersion string

    The API version of endpoint, e.g. "v1b3"

    HarnessCommand string

    The command to launch the worker harness.

    LanguageHint string

    The suggested backend language.

    LogDir string

    The directory on the VM to store logs.

    LogToSerialconsole bool

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    LogUploadLocation string

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    OauthScopes []string

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    ParallelWorkerSettings WorkerSettings

    The settings to pass to the parallel worker harness.

    StreamingWorkerMainClass string

    The streaming worker main class name.

    TaskGroup string

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    TaskUser string

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    TempStoragePrefix string

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    VmId string

    The ID string of the VM.

    WorkflowFileName string

    The file to store the workflow in.

    alsologtostderr Boolean

    Whether to also send taskrunner log info to stderr.

    baseTaskDir String

    The location on the worker for task-specific subdirectories.

    baseUrl String

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlinesFileName String

    The file to store preprocessing commands in.

    continueOnException Boolean

    Whether to continue taskrunner if an exception is hit.

    dataflowApiVersion String

    The API version of endpoint, e.g. "v1b3"

    harnessCommand String

    The command to launch the worker harness.

    languageHint String

    The suggested backend language.

    logDir String

    The directory on the VM to store logs.

    logToSerialconsole Boolean

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    logUploadLocation String

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    oauthScopes List<String>

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallelWorkerSettings WorkerSettings

    The settings to pass to the parallel worker harness.

    streamingWorkerMainClass String

    The streaming worker main class name.

    taskGroup String

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    taskUser String

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    tempStoragePrefix String

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    vmId String

    The ID string of the VM.

    workflowFileName String

    The file to store the workflow in.

    alsologtostderr boolean

    Whether to also send taskrunner log info to stderr.

    baseTaskDir string

    The location on the worker for task-specific subdirectories.

    baseUrl string

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlinesFileName string

    The file to store preprocessing commands in.

    continueOnException boolean

    Whether to continue taskrunner if an exception is hit.

    dataflowApiVersion string

    The API version of endpoint, e.g. "v1b3"

    harnessCommand string

    The command to launch the worker harness.

    languageHint string

    The suggested backend language.

    logDir string

    The directory on the VM to store logs.

    logToSerialconsole boolean

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    logUploadLocation string

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    oauthScopes string[]

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallelWorkerSettings WorkerSettings

    The settings to pass to the parallel worker harness.

    streamingWorkerMainClass string

    The streaming worker main class name.

    taskGroup string

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    taskUser string

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    tempStoragePrefix string

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    vmId string

    The ID string of the VM.

    workflowFileName string

    The file to store the workflow in.

    alsologtostderr bool

    Whether to also send taskrunner log info to stderr.

    base_task_dir str

    The location on the worker for task-specific subdirectories.

    base_url str

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlines_file_name str

    The file to store preprocessing commands in.

    continue_on_exception bool

    Whether to continue taskrunner if an exception is hit.

    dataflow_api_version str

    The API version of endpoint, e.g. "v1b3"

    harness_command str

    The command to launch the worker harness.

    language_hint str

    The suggested backend language.

    log_dir str

    The directory on the VM to store logs.

    log_to_serialconsole bool

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    log_upload_location str

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    oauth_scopes Sequence[str]

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallel_worker_settings WorkerSettings

    The settings to pass to the parallel worker harness.

    streaming_worker_main_class str

    The streaming worker main class name.

    task_group str

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    task_user str

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    temp_storage_prefix str

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    vm_id str

    The ID string of the VM.

    workflow_file_name str

    The file to store the workflow in.

    alsologtostderr Boolean

    Whether to also send taskrunner log info to stderr.

    baseTaskDir String

    The location on the worker for task-specific subdirectories.

    baseUrl String

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlinesFileName String

    The file to store preprocessing commands in.

    continueOnException Boolean

    Whether to continue taskrunner if an exception is hit.

    dataflowApiVersion String

    The API version of endpoint, e.g. "v1b3"

    harnessCommand String

    The command to launch the worker harness.

    languageHint String

    The suggested backend language.

    logDir String

    The directory on the VM to store logs.

    logToSerialconsole Boolean

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    logUploadLocation String

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    oauthScopes List<String>

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallelWorkerSettings Property Map

    The settings to pass to the parallel worker harness.

    streamingWorkerMainClass String

    The streaming worker main class name.

    taskGroup String

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    taskUser String

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    tempStoragePrefix String

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    vmId String

    The ID string of the VM.

    workflowFileName String

    The file to store the workflow in.
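
    The following hedged sketch shows a TaskRunnerSettings value built with the generated args type; every value is a placeholder, and in practice the settings are attached to a worker pool in the job environment.

    import pulumi_google_native.dataflow.v1b3 as dataflow  # assumed import path

    # Taskrunner settings for a worker pool; all values below are placeholders.
    task_runner = dataflow.TaskRunnerSettingsArgs(
        task_user="root",
        task_group="wheel",
        base_task_dir="/var/opt/google/dataflow",
        log_upload_location="storage.googleapis.com/my-bucket/logs",
        alsologtostderr=True,
    )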

    TaskRunnerSettingsResponse, TaskRunnerSettingsResponseArgs

    Alsologtostderr bool

    Whether to also send taskrunner log info to stderr.

    BaseTaskDir string

    The location on the worker for task-specific subdirectories.

    BaseUrl string

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    CommandlinesFileName string

    The file to store preprocessing commands in.

    ContinueOnException bool

    Whether to continue taskrunner if an exception is hit.

    DataflowApiVersion string

    The API version of endpoint, e.g. "v1b3"

    HarnessCommand string

    The command to launch the worker harness.

    LanguageHint string

    The suggested backend language.

    LogDir string

    The directory on the VM to store logs.

    LogToSerialconsole bool

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    LogUploadLocation string

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

    OauthScopes List<string>

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsResponse

    The settings to pass to the parallel worker harness.

    StreamingWorkerMainClass string

    The streaming worker main class name.

    TaskGroup string

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    TaskUser string

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    TempStoragePrefix string

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    VmId string

    The ID string of the VM.

    WorkflowFileName string

    The file to store the workflow in.

    Alsologtostderr bool

    Whether to also send taskrunner log info to stderr.

    BaseTaskDir string

    The location on the worker for task-specific subdirectories.

    BaseUrl string

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    CommandlinesFileName string

    The file to store preprocessing commands in.

    ContinueOnException bool

    Whether the taskrunner should continue running if an exception is hit.

    DataflowApiVersion string

    The API version of the endpoint, e.g. "v1b3".

    HarnessCommand string

    The command to launch the worker harness.

    LanguageHint string

    The suggested backend language.

    LogDir string

    The directory on the VM to store logs.

    LogToSerialconsole bool

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    LogUploadLocation string

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    OauthScopes []string

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    ParallelWorkerSettings WorkerSettingsResponse

    The settings to pass to the parallel worker harness.

    StreamingWorkerMainClass string

    The streaming worker main class name.

    TaskGroup string

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    TaskUser string

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    TempStoragePrefix string

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    VmId string

    The ID string of the VM.

    WorkflowFileName string

    The file to store the workflow in.

    alsologtostderr Boolean

    Whether to also send taskrunner log info to stderr.

    baseTaskDir String

    The location on the worker for task-specific subdirectories.

    baseUrl String

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlinesFileName String

    The file to store preprocessing commands in.

    continueOnException Boolean

    Whether the taskrunner should continue running if an exception is hit.

    dataflowApiVersion String

    The API version of the endpoint, e.g. "v1b3".

    harnessCommand String

    The command to launch the worker harness.

    languageHint String

    The suggested backend language.

    logDir String

    The directory on the VM to store logs.

    logToSerialconsole Boolean

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    logUploadLocation String

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    oauthScopes List<String>

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallelWorkerSettings WorkerSettingsResponse

    The settings to pass to the parallel worker harness.

    streamingWorkerMainClass String

    The streaming worker main class name.

    taskGroup String

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    taskUser String

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    tempStoragePrefix String

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    vmId String

    The ID string of the VM.

    workflowFileName String

    The file to store the workflow in.

    alsologtostderr boolean

    Whether to also send taskrunner log info to stderr.

    baseTaskDir string

    The location on the worker for task-specific subdirectories.

    baseUrl string

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlinesFileName string

    The file to store preprocessing commands in.

    continueOnException boolean

    Whether the taskrunner should continue running if an exception is hit.

    dataflowApiVersion string

    The API version of the endpoint, e.g. "v1b3".

    harnessCommand string

    The command to launch the worker harness.

    languageHint string

    The suggested backend language.

    logDir string

    The directory on the VM to store logs.

    logToSerialconsole boolean

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    logUploadLocation string

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    oauthScopes string[]

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallelWorkerSettings WorkerSettingsResponse

    The settings to pass to the parallel worker harness.

    streamingWorkerMainClass string

    The streaming worker main class name.

    taskGroup string

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    taskUser string

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    tempStoragePrefix string

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    vmId string

    The ID string of the VM.

    workflowFileName string

    The file to store the workflow in.

    alsologtostderr bool

    Whether to also send taskrunner log info to stderr.

    base_task_dir str

    The location on the worker for task-specific subdirectories.

    base_url str

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlines_file_name str

    The file to store preprocessing commands in.

    continue_on_exception bool

    Whether the taskrunner should continue running if an exception is hit.

    dataflow_api_version str

    The API version of the endpoint, e.g. "v1b3".

    harness_command str

    The command to launch the worker harness.

    language_hint str

    The suggested backend language.

    log_dir str

    The directory on the VM to store logs.

    log_to_serialconsole bool

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    log_upload_location str

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    oauth_scopes Sequence[str]

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallel_worker_settings WorkerSettingsResponse

    The settings to pass to the parallel worker harness.

    streaming_worker_main_class str

    The streaming worker main class name.

    task_group str

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    task_user str

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    temp_storage_prefix str

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    vm_id str

    The ID string of the VM.

    workflow_file_name str

    The file to store the workflow in.

    alsologtostderr Boolean

    Whether to also send taskrunner log info to stderr.

    baseTaskDir String

    The location on the worker for task-specific subdirectories.

    baseUrl String

    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"

    commandlinesFileName String

    The file to store preprocessing commands in.

    continueOnException Boolean

    Whether the taskrunner should continue running if an exception is hit.

    dataflowApiVersion String

    The API version of the endpoint, e.g. "v1b3".

    harnessCommand String

    The command to launch the worker harness.

    languageHint String

    The suggested backend language.

    logDir String

    The directory on the VM to store logs.

    logToSerialconsole Boolean

    Whether to send taskrunner log info to Google Compute Engine VM serial console.

    logUploadLocation String

    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    oauthScopes List<String>

    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.

    parallelWorkerSettings Property Map

    The settings to pass to the parallel worker harness.

    streamingWorkerMainClass String

    The streaming worker main class name.

    taskGroup String

    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".

    taskUser String

    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".

    tempStoragePrefix String

    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.

    vmId String

    The ID string of the VM.

    workflowFileName String

    The file to store the workflow in.

    TransformSummary, TransformSummaryArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayData>

    Transform-specific display data.

    Id string

    SDK generated id of this transform instance.

    InputCollectionName List<string>

    User names for all collection inputs to this transform.

    Kind Pulumi.GoogleNative.Dataflow.V1b3.TransformSummaryKind

    Type of transform.

    Name string

    User provided name for this transform instance.

    OutputCollectionName List<string>

    User names for all collection outputs to this transform.

    DisplayData []DisplayData

    Transform-specific display data.

    Id string

    SDK generated id of this transform instance.

    InputCollectionName []string

    User names for all collection inputs to this transform.

    Kind TransformSummaryKind

    Type of transform.

    Name string

    User provided name for this transform instance.

    OutputCollectionName []string

    User names for all collection outputs to this transform.

    displayData List<DisplayData>

    Transform-specific display data.

    id String

    SDK generated id of this transform instance.

    inputCollectionName List<String>

    User names for all collection inputs to this transform.

    kind TransformSummaryKind

    Type of transform.

    name String

    User provided name for this transform instance.

    outputCollectionName List<String>

    User names for all collection outputs to this transform.

    displayData DisplayData[]

    Transform-specific display data.

    id string

    SDK generated id of this transform instance.

    inputCollectionName string[]

    User names for all collection inputs to this transform.

    kind TransformSummaryKind

    Type of transform.

    name string

    User provided name for this transform instance.

    outputCollectionName string[]

    User names for all collection outputs to this transform.

    display_data Sequence[DisplayData]

    Transform-specific display data.

    id str

    SDK generated id of this transform instance.

    input_collection_name Sequence[str]

    User names for all collection inputs to this transform.

    kind TransformSummaryKind

    Type of transform.

    name str

    User provided name for this transform instance.

    output_collection_name Sequence[str]

    User names for all collection outputs to this transform.

    displayData List<Property Map>

    Transform-specific display data.

    id String

    SDK generated id of this transform instance.

    inputCollectionName List<String>

    User names for all collection inputs to this transform.

    kind "UNKNOWN_KIND" | "PAR_DO_KIND" | "GROUP_BY_KEY_KIND" | "FLATTEN_KIND" | "READ_KIND" | "WRITE_KIND" | "CONSTANT_KIND" | "SINGLETON_KIND" | "SHUFFLE_KIND"

    Type of transform.

    name String

    User provided name for this transform instance.

    outputCollectionName List<String>

    User names for all collection outputs to this transform.

    TransformSummaryKind, TransformSummaryKindArgs

    UnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ParDoKind
    PAR_DO_KIND

    ParDo transform.

    GroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FlattenKind
    FLATTEN_KIND

    Flatten transform.

    ReadKind
    READ_KIND

    Read transform.

    WriteKind
    WRITE_KIND

    Write transform.

    ConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    TransformSummaryKindUnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    TransformSummaryKindParDoKind
    PAR_DO_KIND

    ParDo transform.

    TransformSummaryKindGroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    TransformSummaryKindFlattenKind
    FLATTEN_KIND

    Flatten transform.

    TransformSummaryKindReadKind
    READ_KIND

    Read transform.

    TransformSummaryKindWriteKind
    WRITE_KIND

    Write transform.

    TransformSummaryKindConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    TransformSummaryKindSingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    TransformSummaryKindShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    UnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ParDoKind
    PAR_DO_KIND

    ParDo transform.

    GroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FlattenKind
    FLATTEN_KIND

    Flatten transform.

    ReadKind
    READ_KIND

    Read transform.

    WriteKind
    WRITE_KIND

    Write transform.

    ConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    UnknownKind
    UNKNOWN_KIND

    Unrecognized transform type.

    ParDoKind
    PAR_DO_KIND

    ParDo transform.

    GroupByKeyKind
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FlattenKind
    FLATTEN_KIND

    Flatten transform.

    ReadKind
    READ_KIND

    Read transform.

    WriteKind
    WRITE_KIND

    Write transform.

    ConstantKind
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SingletonKind
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    ShuffleKind
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    UNKNOWN_KIND
    UNKNOWN_KIND

    Unrecognized transform type.

    PAR_DO_KIND
    PAR_DO_KIND

    ParDo transform.

    GROUP_BY_KEY_KIND
    GROUP_BY_KEY_KIND

    Group By Key transform.

    FLATTEN_KIND
    FLATTEN_KIND

    Flatten transform.

    READ_KIND
    READ_KIND

    Read transform.

    WRITE_KIND
    WRITE_KIND

    Write transform.

    CONSTANT_KIND
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    SINGLETON_KIND
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    SHUFFLE_KIND
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    "UNKNOWN_KIND"
    UNKNOWN_KIND

    Unrecognized transform type.

    "PAR_DO_KIND"
    PAR_DO_KIND

    ParDo transform.

    "GROUP_BY_KEY_KIND"
    GROUP_BY_KEY_KIND

    Group By Key transform.

    "FLATTEN_KIND"
    FLATTEN_KIND

    Flatten transform.

    "READ_KIND"
    READ_KIND

    Read transform.

    "WRITE_KIND"
    WRITE_KIND

    Write transform.

    "CONSTANT_KIND"
    CONSTANT_KIND

    Constructs from a constant value, such as with Create.of.

    "SINGLETON_KIND"
    SINGLETON_KIND

    Creates a Singleton view of a collection.

    "SHUFFLE_KIND"
    SHUFFLE_KIND

    Opening or closing a shuffle session, often as part of a GroupByKey.

    TransformSummaryResponse, TransformSummaryResponseArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>

    Transform-specific display data.

    InputCollectionName List<string>

    User names for all collection inputs to this transform.

    Kind string

    Type of transform.

    Name string

    User provided name for this transform instance.

    OutputCollectionName List<string>

    User names for all collection outputs to this transform.

    DisplayData []DisplayDataResponse

    Transform-specific display data.

    InputCollectionName []string

    User names for all collection inputs to this transform.

    Kind string

    Type of transform.

    Name string

    User provided name for this transform instance.

    OutputCollectionName []string

    User names for all collection outputs to this transform.

    displayData List<DisplayDataResponse>

    Transform-specific display data.

    inputCollectionName List<String>

    User names for all collection inputs to this transform.

    kind String

    Type of transform.

    name String

    User provided name for this transform instance.

    outputCollectionName List<String>

    User names for all collection outputs to this transform.

    displayData DisplayDataResponse[]

    Transform-specific display data.

    inputCollectionName string[]

    User names for all collection inputs to this transform.

    kind string

    Type of transform.

    name string

    User provided name for this transform instance.

    outputCollectionName string[]

    User names for all collection outputs to this transform.

    display_data Sequence[DisplayDataResponse]

    Transform-specific display data.

    input_collection_name Sequence[str]

    User names for all collection inputs to this transform.

    kind str

    Type of transform.

    name str

    User provided name for this transform instance.

    output_collection_name Sequence[str]

    User names for all collection outputs to this transform.

    displayData List<Property Map>

    Transform-specific display data.

    inputCollectionName List<String>

    User names for all collection inputs to this transform.

    kind String

    Type of transform.

    name String

    User provided name for this transform instance.

    outputCollectionName List<String>

    User names for all collection outputs to this transform.
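
    These response fields are not something you set; the service reports them back as part of the job's pipelineDescription output once the pipeline has been analyzed. A minimal TypeScript sketch of reading them, assuming an existing Job resource named job, and assuming that pipelineDescription exposes the originalPipelineTransform list of these summaries (it may be empty for jobs retrieved with a limited view):

    import * as google_native from "@pulumi/google-native";

    // Assumed to exist elsewhere in the program (see the worker pool sketch below).
    declare const job: google_native.dataflow.v1b3.Job;

    // Export the user-provided names of the ParDo transforms the service reported.
    export const parDoTransformNames = job.pipelineDescription.apply(desc =>
        (desc.originalPipelineTransform ?? [])
            .filter(t => t.kind === "PAR_DO_KIND")
            .map(t => t.name));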

    WorkerPool, WorkerPoolArgs

    WorkerHarnessContainerImage string

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    AutoscalingSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettings

    Settings for autoscaling of this WorkerPool.

    DataDisks List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Disk>

    Data disks that are used by a VM in this workflow.

    DefaultPackageSet Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolDefaultPackageSet

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    DiskSizeGb int

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DiskSourceImage string

    Fully qualified source image for disks.

    DiskType string

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    IpConfiguration Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolIpConfiguration

    Configuration for VM IPs.

    Kind string

    The kind of the worker pool; currently only harness and shuffle are supported.

    MachineType string

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    Metadata Dictionary<string, string>

    Metadata to set on the Google Compute Engine VMs.

    Network string

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    NumThreadsPerWorker int

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    NumWorkers int

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    OnHostMaintenance string

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    Packages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Package>

    Packages to be installed on workers.

    PoolArgs Dictionary<string, string>

    Extra arguments for this worker pool.

    SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImage>

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    Subnetwork string

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TaskrunnerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettings

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    TeardownPolicy Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolTeardownPolicy

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    Zone string

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    WorkerHarnessContainerImage string

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    AutoscalingSettings AutoscalingSettings

    Settings for autoscaling of this WorkerPool.

    DataDisks []Disk

    Data disks that are used by a VM in this workflow.

    DefaultPackageSet WorkerPoolDefaultPackageSet

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    DiskSizeGb int

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DiskSourceImage string

    Fully qualified source image for disks.

    DiskType string

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    IpConfiguration WorkerPoolIpConfiguration

    Configuration for VM IPs.

    Kind string

    The kind of the worker pool; currently only harness and shuffle are supported.

    MachineType string

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    Metadata map[string]string

    Metadata to set on the Google Compute Engine VMs.

    Network string

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    NumThreadsPerWorker int

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    NumWorkers int

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    OnHostMaintenance string

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    Packages []Package

    Packages to be installed on workers.

    PoolArgs map[string]string

    Extra arguments for this worker pool.

    SdkHarnessContainerImages []SdkHarnessContainerImage

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    Subnetwork string

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TaskrunnerSettings TaskRunnerSettings

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    TeardownPolicy WorkerPoolTeardownPolicy

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    Zone string

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    workerHarnessContainerImage String

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscalingSettings AutoscalingSettings

    Settings for autoscaling of this WorkerPool.

    dataDisks List<Disk>

    Data disks that are used by a VM in this workflow.

    defaultPackageSet WorkerPoolDefaultPackageSet

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    diskSizeGb Integer

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskSourceImage String

    Fully qualified source image for disks.

    diskType String

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ipConfiguration WorkerPoolIpConfiguration

    Configuration for VM IPs.

    kind String

    The kind of the worker pool; currently only harness and shuffle are supported.

    machineType String

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata Map<String,String>

    Metadata to set on the Google Compute Engine VMs.

    network String

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    numThreadsPerWorker Integer

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    numWorkers Integer

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    onHostMaintenance String

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages List<Package>

    Packages to be installed on workers.

    poolArgs Map<String,String>

    Extra arguments for this worker pool.

    sdkHarnessContainerImages List<SdkHarnessContainerImage>

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork String

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunnerSettings TaskRunnerSettings

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardownPolicy WorkerPoolTeardownPolicy

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    zone String

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    workerHarnessContainerImage string

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscalingSettings AutoscalingSettings

    Settings for autoscaling of this WorkerPool.

    dataDisks Disk[]

    Data disks that are used by a VM in this workflow.

    defaultPackageSet WorkerPoolDefaultPackageSet

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    diskSizeGb number

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskSourceImage string

    Fully qualified source image for disks.

    diskType string

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ipConfiguration WorkerPoolIpConfiguration

    Configuration for VM IPs.

    kind string

    The kind of the worker pool; currently only harness and shuffle are supported.

    machineType string

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata {[key: string]: string}

    Metadata to set on the Google Compute Engine VMs.

    network string

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    numThreadsPerWorker number

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    numWorkers number

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    onHostMaintenance string

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages Package[]

    Packages to be installed on workers.

    poolArgs {[key: string]: string}

    Extra arguments for this worker pool.

    sdkHarnessContainerImages SdkHarnessContainerImage[]

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork string

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunnerSettings TaskRunnerSettings

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardownPolicy WorkerPoolTeardownPolicy

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    zone string

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    worker_harness_container_image str

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscaling_settings AutoscalingSettings

    Settings for autoscaling of this WorkerPool.

    data_disks Sequence[Disk]

    Data disks that are used by a VM in this workflow.

    default_package_set WorkerPoolDefaultPackageSet

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    disk_size_gb int

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    disk_source_image str

    Fully qualified source image for disks.

    disk_type str

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ip_configuration WorkerPoolIpConfiguration

    Configuration for VM IPs.

    kind str

    The kind of the worker pool; currently only harness and shuffle are supported.

    machine_type str

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata Mapping[str, str]

    Metadata to set on the Google Compute Engine VMs.

    network str

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    num_threads_per_worker int

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    num_workers int

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    on_host_maintenance str

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages Sequence[Package]

    Packages to be installed on workers.

    pool_args Mapping[str, str]

    Extra arguments for this worker pool.

    sdk_harness_container_images Sequence[SdkHarnessContainerImage]

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork str

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunner_settings TaskRunnerSettings

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardown_policy WorkerPoolTeardownPolicy

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    zone str

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    workerHarnessContainerImage String

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscalingSettings Property Map

    Settings for autoscaling of this WorkerPool.

    dataDisks List<Property Map>

    Data disks that are used by a VM in this workflow.

    defaultPackageSet "DEFAULT_PACKAGE_SET_UNKNOWN" | "DEFAULT_PACKAGE_SET_NONE" | "DEFAULT_PACKAGE_SET_JAVA" | "DEFAULT_PACKAGE_SET_PYTHON"

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    diskSizeGb Number

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskSourceImage String

    Fully qualified source image for disks.

    diskType String

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"

    Configuration for VM IPs.

    kind String

    The kind of the worker pool; currently only harness and shuffle are supported.

    machineType String

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata Map<String>

    Metadata to set on the Google Compute Engine VMs.

    network String

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    numThreadsPerWorker Number

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    numWorkers Number

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    onHostMaintenance String

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages List<Property Map>

    Packages to be installed on workers.

    poolArgs Map<String>

    Extra arguments for this worker pool.

    sdkHarnessContainerImages List<Property Map>

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork String

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunnerSettings Property Map

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardownPolicy "TEARDOWN_POLICY_UNKNOWN" | "TEARDOWN_ALWAYS" | "TEARDOWN_ON_SUCCESS" | "TEARDOWN_NEVER"

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    zone String

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
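
    The worker pool shape documented above is what you would place in the job's environment.workerPools list when you need to override the service-chosen defaults. A minimal TypeScript sketch, assuming placeholder project, bucket, and zone names and assuming the environment input accepts a workerPools list of these objects; it only illustrates the worker pool fields and omits the steps/pipeline definition a runnable job would also need:

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataflow.v1b3.Job("example-job", {
        project: "my-project",
        location: "us-central1",
        environment: {
            tempStoragePrefix: "storage.googleapis.com/my-dataflow-bucket/tmp",
            workerPools: [{
                kind: "harness",                    // only "harness" and "shuffle" are supported
                machineType: "n1-standard-2",
                numWorkers: 3,
                diskSizeGb: 50,
                defaultPackageSet: "DEFAULT_PACKAGE_SET_JAVA",
                teardownPolicy: "TEARDOWN_ALWAYS",  // recommended outside supervised test jobs
                zone: "us-central1-f",
            }],
        },
    });

    Fields left out (diskType, network, numThreadsPerWorker, and so on) fall back to the reasonable defaults described above.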

    WorkerPoolDefaultPackageSet, WorkerPoolDefaultPackageSetArgs

    DefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN

    The default set of packages to stage is unknown, or unspecified.

    DefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE

    Indicates that no packages should be staged at the worker unless explicitly specified by the job.

    DefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA

    Stage packages typically useful to workers written in Java.

    DefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON

    Stage packages typically useful to workers written in Python.

    WorkerPoolDefaultPackageSetDefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN

    The default set of packages to stage is unknown, or unspecified.

    WorkerPoolDefaultPackageSetDefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE

    Indicates that no packages should be staged at the worker unless explicitly specified by the job.

    WorkerPoolDefaultPackageSetDefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA

    Stage packages typically useful to workers written in Java.

    WorkerPoolDefaultPackageSetDefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON

    Stage packages typically useful to workers written in Python.

    DefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN

    The default set of packages to stage is unknown, or unspecified.

    DefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE

    Indicates that no packages should be staged at the worker unless explicitly specified by the job.

    DefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA

    Stage packages typically useful to workers written in Java.

    DefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON

    Stage packages typically useful to workers written in Python.

    DefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN

    The default set of packages to stage is unknown, or unspecified.

    DefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE

    Indicates that no packages should be staged at the worker unless explicitly specified by the job.

    DefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA

    Stage packages typically useful to workers written in Java.

    DefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON

    Stage packages typically useful to workers written in Python.

    DEFAULT_PACKAGE_SET_UNKNOWN
    DEFAULT_PACKAGE_SET_UNKNOWN

    The default set of packages to stage is unknown, or unspecified.

    DEFAULT_PACKAGE_SET_NONE
    DEFAULT_PACKAGE_SET_NONE

    Indicates that no packages should be staged at the worker unless explicitly specified by the job.

    DEFAULT_PACKAGE_SET_JAVA
    DEFAULT_PACKAGE_SET_JAVA

    Stage packages typically useful to workers written in Java.

    DEFAULT_PACKAGE_SET_PYTHON
    DEFAULT_PACKAGE_SET_PYTHON

    Stage packages typically useful to workers written in Python.

    "DEFAULT_PACKAGE_SET_UNKNOWN"
    DEFAULT_PACKAGE_SET_UNKNOWN

    The default set of packages to stage is unknown, or unspecified.

    "DEFAULT_PACKAGE_SET_NONE"
    DEFAULT_PACKAGE_SET_NONE

    Indicates that no packages should be staged at the worker unless explicitly specified by the job.

    "DEFAULT_PACKAGE_SET_JAVA"
    DEFAULT_PACKAGE_SET_JAVA

    Stage packages typically useful to workers written in Java.

    "DEFAULT_PACKAGE_SET_PYTHON"
    DEFAULT_PACKAGE_SET_PYTHON

    Stage packages typically useful to workers written in Python.

    WorkerPoolIpConfiguration, WorkerPoolIpConfigurationArgs

    WorkerIpUnspecified
    WORKER_IP_UNSPECIFIED

    The configuration is unknown, or unspecified.

    WorkerIpPublic
    WORKER_IP_PUBLIC

    Workers should have public IP addresses.

    WorkerIpPrivate
    WORKER_IP_PRIVATE

    Workers should have private IP addresses.

    WorkerPoolIpConfigurationWorkerIpUnspecified
    WORKER_IP_UNSPECIFIED

    The configuration is unknown, or unspecified.

    WorkerPoolIpConfigurationWorkerIpPublic
    WORKER_IP_PUBLIC

    Workers should have public IP addresses.

    WorkerPoolIpConfigurationWorkerIpPrivate
    WORKER_IP_PRIVATE

    Workers should have private IP addresses.

    WorkerIpUnspecified
    WORKER_IP_UNSPECIFIED

    The configuration is unknown, or unspecified.

    WorkerIpPublic
    WORKER_IP_PUBLIC

    Workers should have public IP addresses.

    WorkerIpPrivate
    WORKER_IP_PRIVATE

    Workers should have private IP addresses.

    WorkerIpUnspecified
    WORKER_IP_UNSPECIFIED

    The configuration is unknown, or unspecified.

    WorkerIpPublic
    WORKER_IP_PUBLIC

    Workers should have public IP addresses.

    WorkerIpPrivate
    WORKER_IP_PRIVATE

    Workers should have private IP addresses.

    WORKER_IP_UNSPECIFIED
    WORKER_IP_UNSPECIFIED

    The configuration is unknown, or unspecified.

    WORKER_IP_PUBLIC
    WORKER_IP_PUBLIC

    Workers should have public IP addresses.

    WORKER_IP_PRIVATE
    WORKER_IP_PRIVATE

    Workers should have private IP addresses.

    "WORKER_IP_UNSPECIFIED"
    WORKER_IP_UNSPECIFIED

    The configuration is unknown, or unspecified.

    "WORKER_IP_PUBLIC"
    WORKER_IP_PUBLIC

    Workers should have public IP addresses.

    "WORKER_IP_PRIVATE"
    WORKER_IP_PRIVATE

    Workers should have private IP addresses.
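
    WORKER_IP_PRIVATE gives the worker VMs no external IP addresses, so the network or subnetwork you select generally needs Private Google Access (or equivalent routing) for the workers to reach Google APIs and services. A small TypeScript fragment showing the two fields that usually go together; the subnetwork path is a placeholder in the documented "regions/REGION/subnetworks/SUBNETWORK" form:

    // Fragment of a WorkerPool input: private-IP workers on an explicit subnetwork.
    const privateWorkerPool = {
        kind: "harness",
        ipConfiguration: "WORKER_IP_PRIVATE",
        subnetwork: "regions/us-central1/subnetworks/my-private-subnet",
    };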

    WorkerPoolResponse, WorkerPoolResponseArgs

    AutoscalingSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettingsResponse

    Settings for autoscaling of this WorkerPool.

    DataDisks List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DiskResponse>

    Data disks that are used by a VM in this workflow.

    DefaultPackageSet string

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    DiskSizeGb int

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DiskSourceImage string

    Fully qualified source image for disks.

    DiskType string

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    IpConfiguration string

    Configuration for VM IPs.

    Kind string

    The kind of the worker pool; currently only harness and shuffle are supported.

    MachineType string

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    Metadata Dictionary<string, string>

    Metadata to set on the Google Compute Engine VMs.

    Network string

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    NumThreadsPerWorker int

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    NumWorkers int

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    OnHostMaintenance string

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    Packages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PackageResponse>

    Packages to be installed on workers.

    PoolArgs Dictionary<string, string>

    Extra arguments for this worker pool.

    SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageResponse>

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    Subnetwork string

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TaskrunnerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettingsResponse

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    TeardownPolicy string

    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    WorkerHarnessContainerImage string

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Zone string

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    AutoscalingSettings AutoscalingSettingsResponse

    Settings for autoscaling of this WorkerPool.

    DataDisks []DiskResponse

    Data disks that are used by a VM in this workflow.

    DefaultPackageSet string

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    DiskSizeGb int

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DiskSourceImage string

    Fully qualified source image for disks.

    DiskType string

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    IpConfiguration string

    Configuration for VM IPs.

    Kind string

    The kind of the worker pool; currently only harness and shuffle are supported.

    MachineType string

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    Metadata map[string]string

    Metadata to set on the Google Compute Engine VMs.

    Network string

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    NumThreadsPerWorker int

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    NumWorkers int

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    OnHostMaintenance string

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    Packages []PackageResponse

    Packages to be installed on workers.

    PoolArgs map[string]string

    Extra arguments for this worker pool.

    SdkHarnessContainerImages []SdkHarnessContainerImageResponse

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    Subnetwork string

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TaskrunnerSettings TaskRunnerSettingsResponse

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    TeardownPolicy string

    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    WorkerHarnessContainerImage string

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Zone string

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    autoscalingSettings AutoscalingSettingsResponse

    Settings for autoscaling of this WorkerPool.

    dataDisks List<DiskResponse>

    Data disks that are used by a VM in this workflow.

    defaultPackageSet String

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    diskSizeGb Integer

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskSourceImage String

    Fully qualified source image for disks.

    diskType String

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ipConfiguration String

    Configuration for VM IPs.

    kind String

    The kind of the worker pool; currently only harness and shuffle are supported.

    machineType String

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata Map<String,String>

    Metadata to set on the Google Compute Engine VMs.

    network String

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    numThreadsPerWorker Integer

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    numWorkers Integer

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    onHostMaintenance String

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages List<PackageResponse>

    Packages to be installed on workers.

    poolArgs Map<String,String>

    Extra arguments for this worker pool.

    sdkHarnessContainerImages List<SdkHarnessContainerImageResponse>

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork String

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunnerSettings TaskRunnerSettingsResponse

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardownPolicy String

    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    workerHarnessContainerImage String

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone String

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    autoscalingSettings AutoscalingSettingsResponse

    Settings for autoscaling of this WorkerPool.

    dataDisks DiskResponse[]

    Data disks that are used by a VM in this workflow.

    defaultPackageSet string

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    diskSizeGb number

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskSourceImage string

    Fully qualified source image for disks.

    diskType string

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ipConfiguration string

    Configuration for VM IPs.

    kind string

    The kind of the worker pool; currently only harness and shuffle are supported.

    machineType string

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata {[key: string]: string}

    Metadata to set on the Google Compute Engine VMs.

    network string

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    numThreadsPerWorker number

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    numWorkers number

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    onHostMaintenance string

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages PackageResponse[]

    Packages to be installed on workers.

    poolArgs {[key: string]: string}

    Extra arguments for this worker pool.

    sdkHarnessContainerImages SdkHarnessContainerImageResponse[]

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork string

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunnerSettings TaskRunnerSettingsResponse

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardownPolicy string

    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    workerHarnessContainerImage string

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone string

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    autoscaling_settings AutoscalingSettingsResponse

    Settings for autoscaling of this WorkerPool.

    data_disks Sequence[DiskResponse]

    Data disks that are used by a VM in this workflow.

    default_package_set str

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    disk_size_gb int

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    disk_source_image str

    Fully qualified source image for disks.

    disk_type str

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ip_configuration str

    Configuration for VM IPs.

    kind str

    The kind of the worker pool; currently only harness and shuffle are supported.

    machine_type str

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata Mapping[str, str]

    Metadata to set on the Google Compute Engine VMs.

    network str

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    num_threads_per_worker int

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    num_workers int

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    on_host_maintenance str

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages Sequence[PackageResponse]

    Packages to be installed on workers.

    pool_args Mapping[str, str]

    Extra arguments for this worker pool.

    sdk_harness_container_images Sequence[SdkHarnessContainerImageResponse]

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork str

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunner_settings TaskRunnerSettingsResponse

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardown_policy str

    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.

    worker_harness_container_image str

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone str

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.

    autoscalingSettings Property Map

    Settings for autoscaling of this WorkerPool.

    dataDisks List<Property Map>

    Data disks that are used by a VM in this workflow.

    defaultPackageSet String

    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

    diskSizeGb Number

    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    diskSourceImage String

    Fully qualified source image for disks.

    diskType String

    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

    ipConfiguration String

    Configuration for VM IPs.

    kind String

    The kind of the worker pool; currently only harness and shuffle are supported.

    machineType String

    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

    metadata Map<String>

    Metadata to set on the Google Compute Engine VMs.

    network String

    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

    numThreadsPerWorker Number

    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

    numWorkers Number

    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

    onHostMaintenance String

    The action to take on host maintenance, as defined by the Google Compute Engine API.

    packages List<Property Map>

    Packages to be installed on workers.

    poolArgs Map<String>

    Extra arguments for this worker pool.

    sdkHarnessContainerImages List<Property Map>

    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

    subnetwork String

    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

    taskrunnerSettings Property Map

    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

    teardownPolicy String

    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default. A minimal usage sketch follows this listing.

    workerHarnessContainerImage String

    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated:

    Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone String

    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
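
    To see how these WorkerPool fields fit together, the following is a minimal TypeScript sketch, not a definitive implementation, of a Job whose environment configures a single harness pool with an explicit machine type, worker count, subnetwork, and teardown policy. It assumes the @pulumi/google-native SDK exposes the Dataflow v1b3 workerPools list on the job's environment argument; the project ID, subnetwork, and job name are illustrative placeholders.

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: a Dataflow job whose environment pins one worker pool.
    // The project, subnetwork, and job name below are placeholders.
    const job = new google_native.dataflow.v1b3.Job("my-batch-job", {
        project: "my-project",
        location: "us-central1",
        environment: {
            workerPools: [{
                kind: "harness",              // only "harness" and "shuffle" are supported
                machineType: "n1-standard-1", // empty/unspecified lets the service choose a default
                numWorkers: 3,                // zero/unspecified lets the service choose a default
                subnetwork: "regions/us-central1/subnetworks/my-subnet",
                // String form of WorkerPoolTeardownPolicy.TeardownAlways (see the enum below).
                teardownPolicy: "TEARDOWN_ALWAYS",
            }],
        },
    });
    export const jobName = job.name;

    Because workers that are not torn down keep consuming Google Compute Engine resources in the user's project, pinning TEARDOWN_ALWAYS as above is the policy Google recommends for anything other than small, manually supervised test jobs.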

    WorkerPoolTeardownPolicy, WorkerPoolTeardownPolicyArgs

    TeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN

    The teardown policy isn't specified, or is unknown.

    TeardownAlways
    TEARDOWN_ALWAYS

    Always tear down the resource.

    TeardownOnSuccess
    TEARDOWN_ON_SUCCESS

    Tear down the resource on success. This is useful for debugging failures.

    TeardownNever
    TEARDOWN_NEVER

    Never tear down the resource. This is useful for debugging and development.

    WorkerPoolTeardownPolicyTeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN

    The teardown policy isn't specified, or is unknown.

    WorkerPoolTeardownPolicyTeardownAlways
    TEARDOWN_ALWAYS

    Always tear down the resource.

    WorkerPoolTeardownPolicyTeardownOnSuccess
    TEARDOWN_ON_SUCCESS

    Tear down the resource on success. This is useful for debugging failures.

    WorkerPoolTeardownPolicyTeardownNever
    TEARDOWN_NEVER

    Never tear down the resource. This is useful for debugging and development.

    TeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN

    The teardown policy isn't specified, or is unknown.

    TeardownAlways
    TEARDOWN_ALWAYS

    Always tear down the resource.

    TeardownOnSuccess