
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataflow/v1b3.getJob

    Gets the state of the specified Cloud Dataflow job. To get the state of a job, we recommend using projects.locations.jobs.get with a [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). Using projects.jobs.get is not recommended, as you can only get the state of jobs that are running in us-central1.

    Using getJob

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getJob(args: GetJobArgs, opts?: InvokeOptions): Promise<GetJobResult>
    function getJobOutput(args: GetJobOutputArgs, opts?: InvokeOptions): Output<GetJobResult>
    def get_job(job_id: Optional[str] = None,
                location: Optional[str] = None,
                project: Optional[str] = None,
                view: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetJobResult
    def get_job_output(job_id: Optional[pulumi.Input[str]] = None,
                       location: Optional[pulumi.Input[str]] = None,
                       project: Optional[pulumi.Input[str]] = None,
                       view: Optional[pulumi.Input[str]] = None,
                       opts: Optional[InvokeOptions] = None) -> Output[GetJobResult]
    func LookupJob(ctx *Context, args *LookupJobArgs, opts ...InvokeOption) (*LookupJobResult, error)
    func LookupJobOutput(ctx *Context, args *LookupJobOutputArgs, opts ...InvokeOption) LookupJobResultOutput

    > Note: This function is named LookupJob in the Go SDK.

    public static class GetJob 
    {
        public static Task<GetJobResult> InvokeAsync(GetJobArgs args, InvokeOptions? opts = null)
        public static Output<GetJobResult> Invoke(GetJobInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetJobResult> getJob(GetJobArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: google-native:dataflow/v1b3:getJob
      arguments:
        # arguments dictionary
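
    As a concrete illustration, here is a minimal sketch of the direct form in TypeScript, assuming the standard module layout of @pulumi/google-native; the project, location, and job ID values are hypothetical placeholders:

    import * as google_native from "@pulumi/google-native";

    // Direct form: plain arguments, Promise-wrapped result.
    const job = google_native.dataflow.v1b3.getJob({
        project: "my-project",                          // hypothetical project ID
        location: "us-central1",                        // region the job runs in
        jobId: "2023-11-29_12_00_00-1234567890123456",  // hypothetical job ID
        view: "JOB_VIEW_SUMMARY",                       // one of the JOB_VIEW_* names
    });

    // The promise resolves to a GetJobResult.
    export const jobState = job.then(j => j.currentState);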

    The following arguments are supported:

    JobId string
    Location string
    Project string
    View string
    JobId string
    Location string
    Project string
    View string
    jobId String
    location String
    project String
    view String
    jobId string
    location string
    project string
    view string
    jobId String
    location String
    project String
    view String

    getJob Result

    The following output properties are available:

    ClientRequestId string
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    CreateTime string
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    CreatedFromSnapshotId string
    If this is specified, the job's initial state is populated from the given snapshot.
    CurrentState string
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    CurrentStateTime string
    The timestamp associated with the current state.
    Environment Pulumi.GoogleNative.Dataflow.V1b3.Outputs.EnvironmentResponse
    The environment for the job.
    ExecutionInfo Pulumi.GoogleNative.Dataflow.V1b3.Outputs.JobExecutionInfoResponse
    Deprecated.
    JobMetadata Pulumi.GoogleNative.Dataflow.V1b3.Outputs.JobMetadataResponse
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    Labels Dictionary<string, string>
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    Location string
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    Name string
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    PipelineDescription Pulumi.GoogleNative.Dataflow.V1b3.Outputs.PipelineDescriptionResponse
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    Project string
    The ID of the Cloud Platform project that the job belongs to.
    ReplaceJobId string
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    ReplacedByJobId string
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    RequestedState string
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    RuntimeUpdatableParams Pulumi.GoogleNative.Dataflow.V1b3.Outputs.RuntimeUpdatableParamsResponse
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    SatisfiesPzi bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    SatisfiesPzs bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    StageStates List<Pulumi.GoogleNative.Dataflow.V1b3.Outputs.ExecutionStageStateResponse>
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    StartTime string
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    Steps List<Pulumi.GoogleNative.Dataflow.V1b3.Outputs.StepResponse>
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    StepsLocation string
    The Cloud Storage location where the steps are stored.
    TempFiles List<string>
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    TransformNameMapping Dictionary<string, string>
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    Type string
    The type of Cloud Dataflow job.
    ClientRequestId string
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    CreateTime string
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    CreatedFromSnapshotId string
    If this is specified, the job's initial state is populated from the given snapshot.
    CurrentState string
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    CurrentStateTime string
    The timestamp associated with the current state.
    Environment EnvironmentResponse
    The environment for the job.
    ExecutionInfo JobExecutionInfoResponse
    Deprecated.
    JobMetadata JobMetadataResponse
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    Labels map[string]string
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    Location string
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    Name string
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    PipelineDescription PipelineDescriptionResponse
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    Project string
    The ID of the Cloud Platform project that the job belongs to.
    ReplaceJobId string
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    ReplacedByJobId string
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    RequestedState string
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    RuntimeUpdatableParams RuntimeUpdatableParamsResponse
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    SatisfiesPzi bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    SatisfiesPzs bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    StageStates []ExecutionStageStateResponse
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    StartTime string
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    Steps []StepResponse
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    StepsLocation string
    The Cloud Storage location where the steps are stored.
    TempFiles []string
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    TransformNameMapping map[string]string
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    Type string
    The type of Cloud Dataflow job.
    clientRequestId String
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    createTime String
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    createdFromSnapshotId String
    If this is specified, the job's initial state is populated from the given snapshot.
    currentState String
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    currentStateTime String
    The timestamp associated with the current state.
    environment EnvironmentResponse
    The environment for the job.
    executionInfo JobExecutionInfoResponse
    Deprecated.
    jobMetadata JobMetadataResponse
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels Map<String,String>
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location String
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name String
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipelineDescription PipelineDescriptionResponse
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project String
    The ID of the Cloud Platform project that the job belongs to.
    replaceJobId String
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replacedByJobId String
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requestedState String
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtimeUpdatableParams RuntimeUpdatableParamsResponse
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfiesPzi Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    satisfiesPzs Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stageStates List<ExecutionStageStateResponse>
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    startTime String
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps List<StepResponse>
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    stepsLocation String
    The Cloud Storage location where the steps are stored.
    tempFiles List<String>
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transformNameMapping Map<String,String>
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type String
    The type of Cloud Dataflow job.
    clientRequestId string
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    createTime string
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    createdFromSnapshotId string
    If this is specified, the job's initial state is populated from the given snapshot.
    currentState string
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    currentStateTime string
    The timestamp associated with the current state.
    environment EnvironmentResponse
    The environment for the job.
    executionInfo JobExecutionInfoResponse
    Deprecated.
    jobMetadata JobMetadataResponse
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels {[key: string]: string}
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location string
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name string
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipelineDescription PipelineDescriptionResponse
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project string
    The ID of the Cloud Platform project that the job belongs to.
    replaceJobId string
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replacedByJobId string
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requestedState string
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtimeUpdatableParams RuntimeUpdatableParamsResponse
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfiesPzi boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    satisfiesPzs boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stageStates ExecutionStageStateResponse[]
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    startTime string
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps StepResponse[]
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    stepsLocation string
    The Cloud Storage location where the steps are stored.
    tempFiles string[]
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transformNameMapping {[key: string]: string}
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type string
    The type of Cloud Dataflow job.
    client_request_id str
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    create_time str
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    created_from_snapshot_id str
    If this is specified, the job's initial state is populated from the given snapshot.
    current_state str
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    current_state_time str
    The timestamp associated with the current state.
    environment EnvironmentResponse
    The environment for the job.
    execution_info JobExecutionInfoResponse
    Deprecated.
    job_metadata JobMetadataResponse
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels Mapping[str, str]
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location str
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name str
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipeline_description PipelineDescriptionResponse
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project str
    The ID of the Cloud Platform project that the job belongs to.
    replace_job_id str
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replaced_by_job_id str
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requested_state str
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtime_updatable_params RuntimeUpdatableParamsResponse
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfies_pzi bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    satisfies_pzs bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stage_states Sequence[ExecutionStageStateResponse]
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    start_time str
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps Sequence[StepResponse]
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    steps_location str
    The Cloud Storage location where the steps are stored.
    temp_files Sequence[str]
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transform_name_mapping Mapping[str, str]
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type str
    The type of Cloud Dataflow job.
    clientRequestId String
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    createTime String
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    createdFromSnapshotId String
    If this is specified, the job's initial state is populated from the given snapshot.
    currentState String
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    currentStateTime String
    The timestamp associated with the current state.
    environment Property Map
    The environment for the job.
    executionInfo Property Map
    Deprecated.
    jobMetadata Property Map
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels Map<String>
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location String
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name String
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipelineDescription Property Map
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project String
    The ID of the Cloud Platform project that the job belongs to.
    replaceJobId String
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replacedByJobId String
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requestedState String
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtimeUpdatableParams Property Map
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfiesPzi Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    satisfiesPzs Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stageStates List<Property Map>
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    startTime String
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps List<Property Map>
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    stepsLocation String
    The Cloud Storage location where the steps are stored.
    tempFiles List<String>
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transformNameMapping Map<String>
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type String
    The type of Cloud Dataflow job.

    Supporting Types

    AutoscalingSettingsResponse

    Algorithm string
    The algorithm to use for autoscaling.
    MaxNumWorkers int
    The maximum number of workers to cap scaling at.
    Algorithm string
    The algorithm to use for autoscaling.
    MaxNumWorkers int
    The maximum number of workers to cap scaling at.
    algorithm String
    The algorithm to use for autoscaling.
    maxNumWorkers Integer
    The maximum number of workers to cap scaling at.
    algorithm string
    The algorithm to use for autoscaling.
    maxNumWorkers number
    The maximum number of workers to cap scaling at.
    algorithm str
    The algorithm to use for autoscaling.
    max_num_workers int
    The maximum number of workers to cap scaling at.
    algorithm String
    The algorithm to use for autoscaling.
    maxNumWorkers Number
    The maximum number of workers to cap scaling at.

    BigQueryIODetailsResponse

    Dataset string
    Dataset accessed in the connection.
    Project string
    Project accessed in the connection.
    Query string
    Query used to access data in the connection.
    Table string
    Table accessed in the connection.
    Dataset string
    Dataset accessed in the connection.
    Project string
    Project accessed in the connection.
    Query string
    Query used to access data in the connection.
    Table string
    Table accessed in the connection.
    dataset String
    Dataset accessed in the connection.
    project String
    Project accessed in the connection.
    query String
    Query used to access data in the connection.
    table String
    Table accessed in the connection.
    dataset string
    Dataset accessed in the connection.
    project string
    Project accessed in the connection.
    query string
    Query used to access data in the connection.
    table string
    Table accessed in the connection.
    dataset str
    Dataset accessed in the connection.
    project str
    Project accessed in the connection.
    query str
    Query used to access data in the connection.
    table str
    Table accessed in the connection.
    dataset String
    Dataset accessed in the connection.
    project String
    Project accessed in the connection.
    query String
    Query used to access data in the connection.
    table String
    Table accessed in the connection.

    BigTableIODetailsResponse

    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    TableId string
    TableId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    TableId string
    TableId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    tableId String
    TableId accessed in the connection.
    instanceId string
    InstanceId accessed in the connection.
    project string
    ProjectId accessed in the connection.
    tableId string
    TableId accessed in the connection.
    instance_id str
    InstanceId accessed in the connection.
    project str
    ProjectId accessed in the connection.
    table_id str
    TableId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    tableId String
    TableId accessed in the connection.

    ComponentSourceResponse

    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    userName string
    Human-readable name for this transform; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform_or_collection str
    User name for the original user transform or collection with which this source is most closely associated.
    user_name str
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.

    ComponentTransformResponse

    Name string
    Dataflow service generated name for this source.
    OriginalTransform string
    User name for the original user transform with which this transform is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransform string
    User name for the original user transform with which this transform is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransform String
    User name for the original user transform with which this transform is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransform string
    User name for the original user transform with which this transform is most closely associated.
    userName string
    Human-readable name for this transform; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform str
    User name for the original user transform with which this transform is most closely associated.
    user_name str
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransform String
    User name for the original user transform with which this transform is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.

    DataSamplingConfigResponse

    Behaviors List<string>
    List of sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, such as behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    Behaviors []string
    List of sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, such as behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors List<String>
    List of sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, such as behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors string[]
    List of sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, such as behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors Sequence[str]
    List of sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, such as behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors List<String>
    List of sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, such as behaviors = [ALWAYS_ON, EXCEPTIONS] to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.

    DatastoreIODetailsResponse

    Namespace string
    Namespace used in the connection.
    Project string
    ProjectId accessed in the connection.
    Namespace string
    Namespace used in the connection.
    Project string
    ProjectId accessed in the connection.
    namespace String
    Namespace used in the connection.
    project String
    ProjectId accessed in the connection.
    namespace string
    Namespace used in the connection.
    project string
    ProjectId accessed in the connection.
    namespace str
    Namespace used in the connection.
    project str
    ProjectId accessed in the connection.
    namespace String
    Namespace used in the connection.
    project String
    ProjectId accessed in the connection.

    DebugOptionsResponse

    DataSampling Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    EnableHotKeyLogging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    DataSampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    EnableHotKeyLogging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging Boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    data_sampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    enable_hot_key_logging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling Property Map
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging Boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    DiskResponse

    DiskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    MountPoint string
    Directory in a VM where disk is mounted.
    SizeGb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    MountPoint string
    Directory in a VM where disk is mounted.
    SizeGb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType String
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint String
    Directory in a VM where disk is mounted.
    sizeGb Integer
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint string
    Directory in a VM where disk is mounted.
    sizeGb number
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    disk_type str
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mount_point str
    Directory in a VM where disk is mounted.
    size_gb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType String
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint String
    Directory in a VM where disk is mounted.
    sizeGb Number
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DisplayDataResponse

    BoolValue bool
    Contains value if the data is of a boolean type.
    DurationValue string
    Contains value if the data is of duration type.
    FloatValue double
    Contains value if the data is of float type.
    Int64Value string
    Contains value if the data is of int64 type.
    JavaClassValue string
    Contains value if the data is of java class type.
    Key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    Label string
    An optional label to display in a dax UI for the element.
    Namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    ShortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    StrValue string
    Contains value if the data is of string type.
    TimestampValue string
    Contains value if the data is of timestamp type.
    Url string
    An optional full URL.
    BoolValue bool
    Contains value if the data is of a boolean type.
    DurationValue string
    Contains value if the data is of duration type.
    FloatValue float64
    Contains value if the data is of float type.
    Int64Value string
    Contains value if the data is of int64 type.
    JavaClassValue string
    Contains value if the data is of java class type.
    Key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    Label string
    An optional label to display in a dax UI for the element.
    Namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    ShortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    StrValue string
    Contains value if the data is of string type.
    TimestampValue string
    Contains value if the data is of timestamp type.
    Url string
    An optional full URL.
    boolValue Boolean
    Contains value if the data is of a boolean type.
    durationValue String
    Contains value if the data is of duration type.
    floatValue Double
    Contains value if the data is of float type.
    int64Value String
    Contains value if the data is of int64 type.
    javaClassValue String
    Contains value if the data is of java class type.
    key String
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label String
    An optional label to display in a dax UI for the element.
    namespace String
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue String
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue String
    Contains value if the data is of string type.
    timestampValue String
    Contains value if the data is of timestamp type.
    url String
    An optional full URL.
    boolValue boolean
    Contains value if the data is of a boolean type.
    durationValue string
    Contains value if the data is of duration type.
    floatValue number
    Contains value if the data is of float type.
    int64Value string
    Contains value if the data is of int64 type.
    javaClassValue string
    Contains value if the data is of java class type.
    key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label string
    An optional label to display in a dax UI for the element.
    namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue string
    Contains value if the data is of string type.
    timestampValue string
    Contains value if the data is of timestamp type.
    url string
    An optional full URL.
    bool_value bool
    Contains value if the data is of a boolean type.
    duration_value str
    Contains value if the data is of duration type.
    float_value float
    Contains value if the data is of float type.
    int64_value str
    Contains value if the data is of int64 type.
    java_class_value str
    Contains value if the data is of java class type.
    key str
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label str
    An optional label to display in a dax UI for the element.
    namespace str
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    short_str_value str
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    str_value str
    Contains value if the data is of string type.
    timestamp_value str
    Contains value if the data is of timestamp type.
    url str
    An optional full URL.
    boolValue Boolean
    Contains value if the data is of a boolean type.
    durationValue String
    Contains value if the data is of duration type.
    floatValue Number
    Contains value if the data is of float type.
    int64Value String
    Contains value if the data is of int64 type.
    javaClassValue String
    Contains value if the data is of java class type.
    key String
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label String
    An optional label to display in a dax UI for the element.
    namespace String
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue String
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue String
    Contains value if the data is of string type.
    timestampValue String
    Contains value if the data is of timestamp type.
    url String
    An optional full URL.
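
    Only one of the typed value fields above is populated per entry. Below is a minimal TypeScript sketch of coalescing whichever field is set; the DisplayDataValue interface is an illustrative subset written for this example, not the SDK's generated type.

    // Illustrative subset of the display-data value fields.
    interface DisplayDataValue {
        boolValue?: boolean;
        durationValue?: string;
        floatValue?: number;
        int64Value?: string;
        javaClassValue?: string;
        strValue?: string;
        timestampValue?: string;
    }

    // Return whichever typed value field is populated, rendered as a string.
    function displayValue(d: DisplayDataValue): string {
        const v = d.strValue ?? d.int64Value ?? d.floatValue ?? d.boolValue
            ?? d.durationValue ?? d.timestampValue ?? d.javaClassValue;
        return v === undefined ? "" : String(v);
    }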

    EnvironmentResponse

    ClusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    Dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsResponse
    Any debugging options to be supplied to the job.
    Experiments List<string>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    FlexResourceSchedulingGoal string
    Which Flexible Resource Scheduling mode to run in.
    InternalExperiments Dictionary<string, string>
    Experimental settings.
    SdkPipelineOptions Dictionary<string, string>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    ServiceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    ServiceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    ServiceOptions List<string>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    ShuffleMode string
    The shuffle mode used for the job.
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    UseStreamingEngineResourceBasedBilling bool
    Whether the job uses the new streaming engine billing model based on resource usage.
    UserAgent Dictionary<string, string>
    A description of the process that generated the request.
    Version Dictionary<string, string>
    A structure describing which components and their versions of the service are required in order to run the job.
    WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolResponse>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    WorkerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    WorkerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    ClusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    Dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    DebugOptions DebugOptionsResponse
    Any debugging options to be supplied to the job.
    Experiments []string
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    FlexResourceSchedulingGoal string
    Which Flexible Resource Scheduling mode to run in.
    InternalExperiments map[string]string
    Experimental settings.
    SdkPipelineOptions map[string]string
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    ServiceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    ServiceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    ServiceOptions []string
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    ShuffleMode string
    The shuffle mode used for the job.
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    UseStreamingEngineResourceBasedBilling bool
    Whether the job uses the new streaming engine billing model based on resource usage.
    UserAgent map[string]string
    A description of the process that generated the request.
    Version map[string]string
    A structure describing which components and their versions of the service are required in order to run the job.
    WorkerPools []WorkerPoolResponse
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    WorkerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    WorkerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService String
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset String
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions DebugOptionsResponse
    Any debugging options to be supplied to the job.
    experiments List<String>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal String
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments Map<String,String>
    Experimental settings.
    sdkPipelineOptions Map<String,String>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail String
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName String
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions List<String>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffleMode String
    The shuffle mode used for the job.
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    useStreamingEngineResourceBasedBilling Boolean
    Whether the job uses the new streaming engine billing model based on resource usage.
    userAgent Map<String,String>
    A description of the process that generated the request.
    version Map<String,String>
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools List<WorkerPoolResponse>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion String
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    workerZone String
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions DebugOptionsResponse
    Any debugging options to be supplied to the job.
    experiments string[]
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal string
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments {[key: string]: string}
    Experimental settings.
    sdkPipelineOptions {[key: string]: string}
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions string[]
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffleMode string
    The shuffle mode used for the job.
    tempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    useStreamingEngineResourceBasedBilling boolean
    Whether the job uses the new streaming engine billing model based on resource usage.
    userAgent {[key: string]: string}
    A description of the process that generated the request.
    version {[key: string]: string}
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools WorkerPoolResponse[]
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    workerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    cluster_manager_api_service str
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset str
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debug_options DebugOptionsResponse
    Any debugging options to be supplied to the job.
    experiments Sequence[str]
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flex_resource_scheduling_goal str
    Which Flexible Resource Scheduling mode to run in.
    internal_experiments Mapping[str, str]
    Experimental settings.
    sdk_pipeline_options Mapping[str, str]
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    service_account_email str
    Identity to run virtual machines as. Defaults to the default account.
    service_kms_key_name str
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    service_options Sequence[str]
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffle_mode str
    The shuffle mode used for the job.
    temp_storage_prefix str
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    use_streaming_engine_resource_based_billing bool
    Whether the job uses the new streaming engine billing model based on resource usage.
    user_agent Mapping[str, str]
    A description of the process that generated the request.
    version Mapping[str, str]
    A structure describing which components and their versions of the service are required in order to run the job.
    worker_pools Sequence[WorkerPoolResponse]
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    worker_region str
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    worker_zone str
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService String
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset String
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions Property Map
    Any debugging options to be supplied to the job.
    experiments List<String>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal String
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments Map<String>
    Experimental settings.
    sdkPipelineOptions Map<String>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail String
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName String
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions List<String>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffleMode String
    The shuffle mode used for the job.
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    useStreamingEngineResourceBasedBilling Boolean
    Whether the job uses the new streaming engine billing model based on resource usage.
    userAgent Map<String>
    A description of the process that generated the request.
    version Map<String>
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools List<Property Map>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion String
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    workerZone String
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
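
    As a quick illustration, the worker placement and shuffle mode can be read off the environment of a fetched job like this (a TypeScript sketch; project, location, and job ID are placeholders):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
    });

    // worker_zone is more specific than worker_region, so prefer it when both are set.
    export const workerPlacement = job.environment.apply(env =>
        env.workerZone || env.workerRegion || "(service-chosen default)");
    export const shuffleMode = job.environment.apply(env => env.shuffleMode);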

    ExecutionStageStateResponse

    CurrentStateTime string
    The time at which the stage transitioned to this state.
    ExecutionStageName string
    The name of the execution stage.
    ExecutionStageState string
    Execution stage states allow the same set of values as JobState.
    CurrentStateTime string
    The time at which the stage transitioned to this state.
    ExecutionStageName string
    The name of the execution stage.
    ExecutionStageState string
    Execution stage states allow the same set of values as JobState.
    currentStateTime String
    The time at which the stage transitioned to this state.
    executionStageName String
    The name of the execution stage.
    executionStageState String
    Execution stage states allow the same set of values as JobState.
    currentStateTime string
    The time at which the stage transitioned to this state.
    executionStageName string
    The name of the execution stage.
    executionStageState string
    Execution stage states allow the same set of values as JobState.
    current_state_time str
    The time at which the stage transitioned to this state.
    execution_stage_name str
    The name of the execution stage.
    execution_stage_state str
    Execution stage states allow the same set of values as JobState.
    currentStateTime String
    The time at which the stage transitioned to this state.
    executionStageName String
    The name of the execution stage.
    executionStageState String
    Execution stage states allow the same set of values as JobState.
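
    These per-stage states are reported on the job result's stageStates list. A TypeScript sketch of formatting them, assuming that field name (placeholders as before):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
    });

    // e.g. "F12: JOB_STATE_DONE (2023-11-29T10:00:00Z)"
    export const stageStates = job.stageStates.apply(states =>
        (states ?? []).map(s =>
            `${s.executionStageName}: ${s.executionStageState} (${s.currentStateTime})`));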

    ExecutionStageSummaryResponse

    ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceResponse>
    Collections produced and consumed by component transforms of this stage.
    ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformResponse>
    Transforms that comprise this execution stage.
    InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>
    Input sources for this stage.
    Kind string
    Type of transform this stage is executing.
    Name string
    Dataflow service generated name for this stage.
    OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>
    Output sources for this stage.
    PrerequisiteStage List<string>
    Other stages that must complete before this stage can run.
    ComponentSource []ComponentSourceResponse
    Collections produced and consumed by component transforms of this stage.
    ComponentTransform []ComponentTransformResponse
    Transforms that comprise this execution stage.
    InputSource []StageSourceResponse
    Input sources for this stage.
    Kind string
    Type of transform this stage is executing.
    Name string
    Dataflow service generated name for this stage.
    OutputSource []StageSourceResponse
    Output sources for this stage.
    PrerequisiteStage []string
    Other stages that must complete before this stage can run.
    componentSource List<ComponentSourceResponse>
    Collections produced and consumed by component transforms of this stage.
    componentTransform List<ComponentTransformResponse>
    Transforms that comprise this execution stage.
    inputSource List<StageSourceResponse>
    Input sources for this stage.
    kind String
    Type of transform this stage is executing.
    name String
    Dataflow service generated name for this stage.
    outputSource List<StageSourceResponse>
    Output sources for this stage.
    prerequisiteStage List<String>
    Other stages that must complete before this stage can run.
    componentSource ComponentSourceResponse[]
    Collections produced and consumed by component transforms of this stage.
    componentTransform ComponentTransformResponse[]
    Transforms that comprise this execution stage.
    inputSource StageSourceResponse[]
    Input sources for this stage.
    kind string
    Type of transform this stage is executing.
    name string
    Dataflow service generated name for this stage.
    outputSource StageSourceResponse[]
    Output sources for this stage.
    prerequisiteStage string[]
    Other stages that must complete before this stage can run.
    component_source Sequence[ComponentSourceResponse]
    Collections produced and consumed by component transforms of this stage.
    component_transform Sequence[ComponentTransformResponse]
    Transforms that comprise this execution stage.
    input_source Sequence[StageSourceResponse]
    Input sources for this stage.
    kind str
    Type of transform this stage is executing.
    name str
    Dataflow service generated name for this stage.
    output_source Sequence[StageSourceResponse]
    Output sources for this stage.
    prerequisite_stage Sequence[str]
    Other stages that must complete before this stage can run.
    componentSource List<Property Map>
    Collections produced and consumed by component transforms of this stage.
    componentTransform List<Property Map>
    Transforms that comprise this execution stage.
    inputSource List<Property Map>
    Input sources for this stage.
    kind String
    Type of transform this stage is executing.
    name String
    Dataflow service generated name for this stage.
    outputSource List<Property Map>
    Output sources for this stage.
    prerequisiteStage List<String>
    Other stages that must complete before this stage can run.
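
    Because every stage names its prerequisites, the stage summaries form a dependency graph. Below is a TypeScript sketch of listing the root stages (those with no prerequisites) from the pipeline description; the full JOB_VIEW_ALL view is assumed, since summary views omit the pipeline description (placeholders as before):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
        view: "JOB_VIEW_ALL",       // pipeline description requires the full view
    });

    // Stages with no prerequisites can start as soon as the job does.
    export const rootStages = job.pipelineDescription.apply(pd =>
        (pd.executionPipelineStage ?? [])
            .filter(s => (s.prerequisiteStage ?? []).length === 0)
            .map(s => `${s.name} (${s.kind})`));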

    FileIODetailsResponse

    FilePattern string
    File pattern used by the connector to access files.
    FilePattern string
    File pattern used by the connector to access files.
    filePattern String
    File pattern used by the connector to access files.
    filePattern string
    File pattern used by the connector to access files.
    file_pattern str
    File pattern used by the connector to access files.
    filePattern String
    File pattern used by the connector to access files.

    JobExecutionInfoResponse

    Stages Dictionary<string, string>
    A mapping from each stage to the information about that stage.
    Stages map[string]string
    A mapping from each stage to the information about that stage.
    stages Map<String,String>
    A mapping from each stage to the information about that stage.
    stages {[key: string]: string}
    A mapping from each stage to the information about that stage.
    stages Mapping[str, str]
    A mapping from each stage to the information about that stage.
    stages Map<String>
    A mapping from each stage to the information about that stage.
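
    A TypeScript sketch of listing the stage names recorded in this mapping, assuming the job result exposes it as executionInfo (placeholders as before):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
        view: "JOB_VIEW_ALL",
    });

    // The keys of the stages map are the stage names.
    export const stageNames = job.executionInfo.apply(info =>
        Object.keys(info.stages ?? {}));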

    JobMetadataResponse

    BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsResponse>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsResponse>
    Identification of a BigQuery source used in the Dataflow job.
    DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsResponse>
    Identification of a Datastore source used in the Dataflow job.
    FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsResponse>
    Identification of a File source used in the Dataflow job.
    PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsResponse>
    Identification of a Pub/Sub source used in the Dataflow job.
    SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersionResponse
    The SDK version used to run the job.
    SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsResponse>
    Identification of a Spanner source used in the Dataflow job.
    UserDisplayProperties Dictionary<string, string>
    List of display properties to help UI filter jobs.
    BigTableDetails []BigTableIODetailsResponse
    Identification of a Cloud Bigtable source used in the Dataflow job.
    BigqueryDetails []BigQueryIODetailsResponse
    Identification of a BigQuery source used in the Dataflow job.
    DatastoreDetails []DatastoreIODetailsResponse
    Identification of a Datastore source used in the Dataflow job.
    FileDetails []FileIODetailsResponse
    Identification of a File source used in the Dataflow job.
    PubsubDetails []PubSubIODetailsResponse
    Identification of a Pub/Sub source used in the Dataflow job.
    SdkVersion SdkVersionResponse
    The SDK version used to run the job.
    SpannerDetails []SpannerIODetailsResponse
    Identification of a Spanner source used in the Dataflow job.
    UserDisplayProperties map[string]string
    List of display properties to help UI filter jobs.
    bigTableDetails List<BigTableIODetailsResponse>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails List<BigQueryIODetailsResponse>
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails List<DatastoreIODetailsResponse>
    Identification of a Datastore source used in the Dataflow job.
    fileDetails List<FileIODetailsResponse>
    Identification of a File source used in the Dataflow job.
    pubsubDetails List<PubSubIODetailsResponse>
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion SdkVersionResponse
    The SDK version used to run the job.
    spannerDetails List<SpannerIODetailsResponse>
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties Map<String,String>
    List of display properties to help UI filter jobs.
    bigTableDetails BigTableIODetailsResponse[]
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails BigQueryIODetailsResponse[]
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails DatastoreIODetailsResponse[]
    Identification of a Datastore source used in the Dataflow job.
    fileDetails FileIODetailsResponse[]
    Identification of a File source used in the Dataflow job.
    pubsubDetails PubSubIODetailsResponse[]
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion SdkVersionResponse
    The SDK version used to run the job.
    spannerDetails SpannerIODetailsResponse[]
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties {[key: string]: string}
    List of display properties to help UI filter jobs.
    big_table_details Sequence[BigTableIODetailsResponse]
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigquery_details Sequence[BigQueryIODetailsResponse]
    Identification of a BigQuery source used in the Dataflow job.
    datastore_details Sequence[DatastoreIODetailsResponse]
    Identification of a Datastore source used in the Dataflow job.
    file_details Sequence[FileIODetailsResponse]
    Identification of a File source used in the Dataflow job.
    pubsub_details Sequence[PubSubIODetailsResponse]
    Identification of a Pub/Sub source used in the Dataflow job.
    sdk_version SdkVersionResponse
    The SDK version used to run the job.
    spanner_details Sequence[SpannerIODetailsResponse]
    Identification of a Spanner source used in the Dataflow job.
    user_display_properties Mapping[str, str]
    List of display properties to help UI filter jobs.
    bigTableDetails List<Property Map>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails List<Property Map>
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails List<Property Map>
    Identification of a Datastore source used in the Dataflow job.
    fileDetails List<Property Map>
    Identification of a File source used in the Dataflow job.
    pubsubDetails List<Property Map>
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion Property Map
    The SDK version used to run the job.
    spannerDetails List<Property Map>
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties Map<String>
    List of display properties to help UI filter jobs.
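
    A TypeScript sketch of auditing which external sources and sinks a job touches, using the Pub/Sub and Spanner detail lists documented on this page (placeholders as before; jobMetadata is assumed to be populated for the job):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
    });

    export const pubsubTopics = job.jobMetadata.apply(md =>
        (md.pubsubDetails ?? []).map(p => p.topic));
    export const spannerDatabases = job.jobMetadata.apply(md =>
        (md.spannerDetails ?? []).map(s => `${s.project}/${s.instanceId}/${s.databaseId}`));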

    PackageResponse

    Location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    Name string
    The name of the package.
    Location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    Name string
    The name of the package.
    location String
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name String
    The name of the package.
    location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name string
    The name of the package.
    location str
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name str
    The name of the package.
    location String
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name String
    The name of the package.

    PipelineDescriptionResponse

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
    Pipeline level display data.
    ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryResponse>
    Description of each stage of execution of the pipeline.
    OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryResponse>
    Description of each transform in the pipeline and collections between them.
    StepNamesHash string
    A hash value of the submitted pipeline's portable graph step names, if one exists.
    DisplayData []DisplayDataResponse
    Pipeline level display data.
    ExecutionPipelineStage []ExecutionStageSummaryResponse
    Description of each stage of execution of the pipeline.
    OriginalPipelineTransform []TransformSummaryResponse
    Description of each transform in the pipeline and collections between them.
    StepNamesHash string
    A hash value of the submitted pipeline's portable graph step names, if one exists.
    displayData List<DisplayDataResponse>
    Pipeline level display data.
    executionPipelineStage List<ExecutionStageSummaryResponse>
    Description of each stage of execution of the pipeline.
    originalPipelineTransform List<TransformSummaryResponse>
    Description of each transform in the pipeline and collections between them.
    stepNamesHash String
    A hash value of the submitted pipeline's portable graph step names, if one exists.
    displayData DisplayDataResponse[]
    Pipeline level display data.
    executionPipelineStage ExecutionStageSummaryResponse[]
    Description of each stage of execution of the pipeline.
    originalPipelineTransform TransformSummaryResponse[]
    Description of each transform in the pipeline and collections between them.
    stepNamesHash string
    A hash value of the submitted pipeline's portable graph step names, if one exists.
    display_data Sequence[DisplayDataResponse]
    Pipeline level display data.
    execution_pipeline_stage Sequence[ExecutionStageSummaryResponse]
    Description of each stage of execution of the pipeline.
    original_pipeline_transform Sequence[TransformSummaryResponse]
    Description of each transform in the pipeline and collections between them.
    step_names_hash str
    A hash value of the submitted pipeline's portable graph step names, if one exists.
    displayData List<Property Map>
    Pipeline level display data.
    executionPipelineStage List<Property Map>
    Description of each stage of execution of the pipeline.
    originalPipelineTransform List<Property Map>
    Description of each transform in the pipeline and collections between them.
    stepNamesHash String
    A hash value of the submitted pipeline's portable graph step names, if one exists.
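
    Pipeline-level display data pairs naturally with the displayValue helper sketched under DisplayDataResponse above; for example, a TypeScript sketch emitting one namespace/key label per datum (placeholders as before):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
        view: "JOB_VIEW_ALL",
    });

    // One "namespace/key" label per pipeline-level display datum.
    export const pipelineDisplayKeys = job.pipelineDescription.apply(pd =>
        (pd.displayData ?? []).map(d => `${d.namespace}/${d.key}`));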

    PubSubIODetailsResponse

    Subscription string
    Subscription used in the connection.
    Topic string
    Topic accessed in the connection.
    Subscription string
    Subscription used in the connection.
    Topic string
    Topic accessed in the connection.
    subscription String
    Subscription used in the connection.
    topic String
    Topic accessed in the connection.
    subscription string
    Subscription used in the connection.
    topic string
    Topic accessed in the connection.
    subscription str
    Subscription used in the connection.
    topic str
    Topic accessed in the connection.
    subscription String
    Subscription used in the connection.
    topic String
    Topic accessed in the connection.

    RuntimeUpdatableParamsResponse

    MaxNumWorkers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    MinNumWorkers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    MaxNumWorkers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    MinNumWorkers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers Integer
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers Integer
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers number
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers number
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    max_num_workers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    min_num_workers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers Number
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers Number
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
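
    A TypeScript sketch of reading the current autoscaling bounds back from a Streaming Engine job, assuming the result surfaces them as runtimeUpdatableParams (placeholders as before):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
    });

    // e.g. "autoscaling bounds: 1..50 workers"
    export const autoscalingBounds = job.runtimeUpdatableParams.apply(p =>
        `autoscaling bounds: ${p.minNumWorkers}..${p.maxNumWorkers} workers`);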

    SdkBugResponse

    Severity string
    How severe the SDK bug is.
    Type string
    Describes the impact of this SDK bug.
    Uri string
    Link to more information on the bug.
    Severity string
    How severe the SDK bug is.
    Type string
    Describes the impact of this SDK bug.
    Uri string
    Link to more information on the bug.
    severity String
    How severe the SDK bug is.
    type String
    Describes the impact of this SDK bug.
    uri String
    Link to more information on the bug.
    severity string
    How severe the SDK bug is.
    type string
    Describes the impact of this SDK bug.
    uri string
    Link to more information on the bug.
    severity str
    How severe the SDK bug is.
    type str
    Describes the impact of this SDK bug.
    uri str
    Link to more information on the bug.
    severity String
    How severe the SDK bug is.
    type String
    Describes the impact of this SDK bug.
    uri String
    Link to more information on the bug.

    SdkHarnessContainerImageResponse

    Capabilities List<string>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    ContainerImage string
    A docker container image that resides in Google Container Registry.
    EnvironmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    UseSingleCorePerContainer bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    Capabilities []string
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    ContainerImage string
    A docker container image that resides in Google Container Registry.
    EnvironmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    UseSingleCorePerContainer bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities List<String>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage String
    A docker container image that resides in Google Container Registry.
    environmentId String
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer Boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities string[]
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage string
    A docker container image that resides in Google Container Registry.
    environmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities Sequence[str]
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    container_image str
    A docker container image that resides in Google Container Registry.
    environment_id str
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    use_single_core_per_container bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities List<String>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage String
    A docker container image that resides in Google Container Registry.
    environmentId String
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer Boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.

    SdkVersionResponse

    Bugs List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkBugResponse>
    Known bugs found in this SDK version.
    SdkSupportStatus string
    The support status for this SDK version.
    Version string
    The version of the SDK used to run the job.
    VersionDisplayName string
    A readable string describing the version of the SDK.
    Bugs []SdkBugResponse
    Known bugs found in this SDK version.
    SdkSupportStatus string
    The support status for this SDK version.
    Version string
    The version of the SDK used to run the job.
    VersionDisplayName string
    A readable string describing the version of the SDK.
    bugs List<SdkBugResponse>
    Known bugs found in this SDK version.
    sdkSupportStatus String
    The support status for this SDK version.
    version String
    The version of the SDK used to run the job.
    versionDisplayName String
    A readable string describing the version of the SDK.
    bugs SdkBugResponse[]
    Known bugs found in this SDK version.
    sdkSupportStatus string
    The support status for this SDK version.
    version string
    The version of the SDK used to run the job.
    versionDisplayName string
    A readable string describing the version of the SDK.
    bugs Sequence[SdkBugResponse]
    Known bugs found in this SDK version.
    sdk_support_status str
    The support status for this SDK version.
    version str
    The version of the SDK used to run the job.
    version_display_name str
    A readable string describing the version of the SDK.
    bugs List<Property Map>
    Known bugs found in this SDK version.
    sdkSupportStatus String
    The support status for this SDK version.
    version String
    The version of the SDK used to run the job.
    versionDisplayName String
    A readable string describing the version of the SDK.
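
    The SDK version, together with any bugs flagged for it, hangs off the job metadata; a TypeScript sketch of summarizing it (placeholders as before):

    import * as google_native from "@pulumi/google-native";

    const job = google_native.dataflow.v1b3.getJobOutput({
        project: "my-project",      // placeholder
        location: "us-central1",    // placeholder
        jobId: "2023-11-29_00_00_00-1234567890123456789",  // placeholder
    });

    // e.g. "Apache Beam SDK for Java 2.50.0 (SUPPORTED); 0 known bug(s)"
    export const sdkSummary = job.jobMetadata.apply(md => {
        const v = md.sdkVersion;
        return `${v.versionDisplayName} (${v.sdkSupportStatus}); ${(v.bugs ?? []).length} known bug(s)`;
    });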

    SpannerIODetailsResponse

    DatabaseId string
    DatabaseId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    DatabaseId string
    DatabaseId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    databaseId String
    DatabaseId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    databaseId string
    DatabaseId accessed in the connection.
    instanceId string
    InstanceId accessed in the connection.
    project string
    ProjectId accessed in the connection.
    database_id str
    DatabaseId accessed in the connection.
    instance_id str
    InstanceId accessed in the connection.
    project str
    ProjectId accessed in the connection.
    databaseId String
    DatabaseId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.

    StageSourceResponse

    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    SizeBytes string
    Size of the source, if measurable.
    UserName string
    Human-readable name for this source; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    SizeBytes string
    Size of the source, if measurable.
    UserName string
    Human-readable name for this source; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes String
    Size of the source, if measurable.
    userName String
    Human-readable name for this source; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes string
    Size of the source, if measurable.
    userName string
    Human-readable name for this source; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform_or_collection str
    User name for the original user transform or collection with which this source is most closely associated.
    size_bytes str
    Size of the source, if measurable.
    user_name str
    Human-readable name for this source; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes String
    Size of the source, if measurable.
    userName String
    Human-readable name for this source; may be user or system generated.
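
    Stage sources hang off the execution stages in the pipeline description, which is only populated at elevated view levels. A sketch, assuming pipelineDescription.executionPipelineStage carries inputSource lists as in the Dataflow API's ExecutionStageSummary (placeholder identifiers):

    import * as google_native from "@pulumi/google-native";

    export async function listStageSources(): Promise<void> {
        const job = await google_native.dataflow.v1b3.getJob({
            project: "my-project",
            location: "us-central1",
            jobId: "my-job-id",
            view: "JOB_VIEW_ALL", // pipelineDescription is omitted at lower views
        });
        for (const stage of job.pipelineDescription?.executionPipelineStage ?? []) {
            for (const src of stage.inputSource ?? []) {
                console.log(`${stage.name} reads ${src.userName} (${src.sizeBytes ?? "?"} bytes)`);
            }
        }
    }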

    StepResponse

    Kind string
    The kind of step in the Cloud Dataflow job.
    Name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    Properties Dictionary<string, string>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    Kind string
    The kind of step in the Cloud Dataflow job.
    Name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    Properties map[string]string
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind String
    The kind of step in the Cloud Dataflow job.
    name String
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Map<String,String>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind string
    The kind of step in the Cloud Dataflow job.
    name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties {[key: string]: string}
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind str
    The kind of step in the Cloud Dataflow job.
    name str
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Mapping[str, str]
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind String
    The kind of step in the Cloud Dataflow job.
    name String
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Map<String>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
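
    Because step properties are only retrieved with JOB_VIEW_ALL, a sketch that lists a job's steps has to request that view explicitly (placeholder identifiers again):

    import * as google_native from "@pulumi/google-native";

    export async function listSteps(): Promise<void> {
        const job = await google_native.dataflow.v1b3.getJob({
            project: "my-project",
            location: "us-central1",
            jobId: "my-job-id",
            view: "JOB_VIEW_ALL", // step properties are omitted at lower views
        });
        for (const step of job.steps ?? []) {
            console.log(`${step.kind}: ${step.name}`);
        }
    }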

    TaskRunnerSettingsResponse

    Alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    BaseTaskDir string
    The location on the worker for task-specific subdirectories.
    BaseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    CommandlinesFileName string
    The file to store preprocessing commands in.
    ContinueOnException bool
    Whether to continue taskrunner if an exception is hit.
    DataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    HarnessCommand string
    The command to launch the worker harness.
    LanguageHint string
    The suggested backend language.
    LogDir string
    The directory on the VM to store logs.
    LogToSerialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    LogUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    OauthScopes List<string>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    StreamingWorkerMainClass string
    The streaming worker main class name.
    TaskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    TaskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    TempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    VmId string
    The ID string of the VM.
    WorkflowFileName string
    The file to store the workflow in.
    Alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    BaseTaskDir string
    The location on the worker for task-specific subdirectories.
    BaseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    CommandlinesFileName string
    The file to store preprocessing commands in.
    ContinueOnException bool
    Whether to continue taskrunner if an exception is hit.
    DataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    HarnessCommand string
    The command to launch the worker harness.
    LanguageHint string
    The suggested backend language.
    LogDir string
    The directory on the VM to store logs.
    LogToSerialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    LogUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    OauthScopes []string
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    ParallelWorkerSettings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    StreamingWorkerMainClass string
    The streaming worker main class name.
    TaskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    TaskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    TempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    VmId string
    The ID string of the VM.
    WorkflowFileName string
    The file to store the workflow in.
    alsologtostderr Boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir String
    The location on the worker for task-specific subdirectories.
    baseUrl String
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    commandlinesFileName String
    The file to store preprocessing commands in.
    continueOnException Boolean
    Whether to continue taskrunner if an exception is hit.
    dataflowApiVersion String
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand String
    The command to launch the worker harness.
    languageHint String
    The suggested backend language.
    logDir String
    The directory on the VM to store logs.
    logToSerialconsole Boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation String
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauthScopes List<String>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass String
    The streaming worker main class name.
    taskGroup String
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser String
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix String
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vmId String
    The ID string of the VM.
    workflowFileName String
    The file to store the workflow in.
    alsologtostderr boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir string
    The location on the worker for task-specific subdirectories.
    baseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    commandlinesFileName string
    The file to store preprocessing commands in.
    continueOnException boolean
    Whether to continue taskrunner if an exception is hit.
    dataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand string
    The command to launch the worker harness.
    languageHint string
    The suggested backend language.
    logDir string
    The directory on the VM to store logs.
    logToSerialconsole boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauthScopes string[]
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass string
    The streaming worker main class name.
    taskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vmId string
    The ID string of the VM.
    workflowFileName string
    The file to store the workflow in.
    alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    base_task_dir str
    The location on the worker for task-specific subdirectories.
    base_url str
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    commandlines_file_name str
    The file to store preprocessing commands in.
    continue_on_exception bool
    Whether to continue taskrunner if an exception is hit.
    dataflow_api_version str
    The API version of the endpoint, e.g. "v1b3".
    harness_command str
    The command to launch the worker harness.
    language_hint str
    The suggested backend language.
    log_dir str
    The directory on the VM to store logs.
    log_to_serialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    log_upload_location str
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauth_scopes Sequence[str]
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallel_worker_settings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    streaming_worker_main_class str
    The streaming worker main class name.
    task_group str
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    task_user str
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    temp_storage_prefix str
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vm_id str
    The ID string of the VM.
    workflow_file_name str
    The file to store the workflow in.
    alsologtostderr Boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir String
    The location on the worker for task-specific subdirectories.
    baseUrl String
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    commandlinesFileName String
    The file to store preprocessing commands in.
    continueOnException Boolean
    Whether to continue taskrunner if an exception is hit.
    dataflowApiVersion String
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand String
    The command to launch the worker harness.
    languageHint String
    The suggested backend language.
    logDir String
    The directory on the VM to store logs.
    logToSerialconsole Boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation String
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauthScopes List<String>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings Property Map
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass String
    The streaming worker main class name.
    taskGroup String
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser String
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix String
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vmId String
    The ID string of the VM.
    workflowFileName String
    The file to store the workflow in.
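
    These settings are reported under each worker pool in the job's environment; since the field is service-managed, reading it is mostly useful for debugging. A sketch under the assumption that the result exposes environment.workerPools[].taskrunnerSettings (placeholder identifiers):

    import * as google_native from "@pulumi/google-native";

    export async function dumpTaskrunnerSettings(): Promise<void> {
        const job = await google_native.dataflow.v1b3.getJob({
            project: "my-project",   // placeholder
            location: "us-central1", // placeholder
            jobId: "my-job-id",      // placeholder
        });
        for (const pool of job.environment?.workerPools ?? []) {
            const trs = pool.taskrunnerSettings;
            if (trs) {
                console.log(`logs -> ${trs.logUploadLocation}, temp -> ${trs.tempStoragePrefix}`);
            }
        }
    }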

    TransformSummaryResponse

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
    Transform-specific display data.
    InputCollectionName List<string>
    User names for all collection inputs to this transform.
    Kind string
    Type of transform.
    Name string
    User provided name for this transform instance.
    OutputCollectionName List<string>
    User names for all collection outputs to this transform.
    DisplayData []DisplayDataResponse
    Transform-specific display data.
    InputCollectionName []string
    User names for all collection inputs to this transform.
    Kind string
    Type of transform.
    Name string
    User provided name for this transform instance.
    OutputCollectionName []string
    User names for all collection outputs to this transform.
    displayData List<DisplayDataResponse>
    Transform-specific display data.
    inputCollectionName List<String>
    User names for all collection inputs to this transform.
    kind String
    Type of transform.
    name String
    User provided name for this transform instance.
    outputCollectionName List<String>
    User names for all collection outputs to this transform.
    displayData DisplayDataResponse[]
    Transform-specific display data.
    inputCollectionName string[]
    User names for all collection inputs to this transform.
    kind string
    Type of transform.
    name string
    User provided name for this transform instance.
    outputCollectionName string[]
    User names for all collection outputs to this transform.
    display_data Sequence[DisplayDataResponse]
    Transform-specific display data.
    input_collection_name Sequence[str]
    User names for all collection inputs to this transform.
    kind str
    Type of transform.
    name str
    User provided name for this transform instance.
    output_collection_name Sequence[str]
    User names for all collection outputs to this transform.
    displayData List<Property Map>
    Transform-specific display data.
    inputCollectionName List<String>
    User names for all collection inputs to this transform.
    kind String
    Type of transform.
    name String
    User provided name for this transform instance.
    outputCollectionName List<String>
    User names for all collection outputs to this transform.
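
    The original (pre-optimization) transforms live on the pipeline description. A sketch that counts inputs and outputs per transform, assuming the result exposes pipelineDescription.originalPipelineTransform as in the Dataflow API (placeholder identifiers):

    import * as google_native from "@pulumi/google-native";

    export async function summarizeTransforms(): Promise<void> {
        const job = await google_native.dataflow.v1b3.getJob({
            project: "my-project",
            location: "us-central1",
            jobId: "my-job-id",
            view: "JOB_VIEW_ALL", // pipelineDescription is omitted at lower views
        });
        for (const t of job.pipelineDescription?.originalPipelineTransform ?? []) {
            const inputs = t.inputCollectionName?.length ?? 0;
            const outputs = t.outputCollectionName?.length ?? 0;
            console.log(`${t.kind} "${t.name}": ${inputs} input(s), ${outputs} output(s)`);
        }
    }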

    WorkerPoolResponse

    AutoscalingSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    DataDisks List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DiskResponse>
    Data disks that are used by a VM in this workflow.
    DefaultPackageSet string
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    DiskSizeGb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskSourceImage string
    Fully qualified source image for disks.
    DiskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    IpConfiguration string
    Configuration for VM IPs.
    Kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    MachineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    Metadata Dictionary<string, string>
    Metadata to set on the Google Compute Engine VMs.
    Network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    NumThreadsPerWorker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    NumWorkers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    OnHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    Packages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PackageResponse>
    Packages to be installed on workers.
    PoolArgs Dictionary<string, string>
    Extra arguments for this worker pool.
    SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageResponse>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    Subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    TaskrunnerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    TeardownPolicy string
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    WorkerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    AutoscalingSettings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    DataDisks []DiskResponse
    Data disks that are used by a VM in this workflow.
    DefaultPackageSet string
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    DiskSizeGb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskSourceImage string
    Fully qualified source image for disks.
    DiskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    IpConfiguration string
    Configuration for VM IPs.
    Kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    MachineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    Metadata map[string]string
    Metadata to set on the Google Compute Engine VMs.
    Network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    NumThreadsPerWorker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    NumWorkers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    OnHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    Packages []PackageResponse
    Packages to be installed on workers.
    PoolArgs map[string]string
    Extra arguments for this worker pool.
    SdkHarnessContainerImages []SdkHarnessContainerImageResponse
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    Subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    TaskrunnerSettings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    TeardownPolicy string
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    WorkerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscalingSettings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    dataDisks List<DiskResponse>
    Data disks that are used by a VM in this workflow.
    defaultPackageSet String
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb Integer
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage String
    Fully qualified source image for disks.
    diskType String
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration String
    Configuration for VM IPs.
    kind String
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType String
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Map<String,String>
    Metadata to set on the Google Compute Engine VMs.
    network String
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker Integer
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers Integer
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance String
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages List<PackageResponse>
    Packages to be installed on workers.
    poolArgs Map<String,String>
    Extra arguments for this worker pool.
    sdkHarnessContainerImages List<SdkHarnessContainerImageResponse>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork String
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy String
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage String
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone String
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscalingSettings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    dataDisks DiskResponse[]
    Data disks that are used by a VM in this workflow.
    defaultPackageSet string
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb number
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage string
    Fully qualified source image for disks.
    diskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration string
    Configuration for VM IPs.
    kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata {[key: string]: string}
    Metadata to set on the Google Compute Engine VMs.
    network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker number
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers number
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages PackageResponse[]
    Packages to be installed on workers.
    poolArgs {[key: string]: string}
    Extra arguments for this worker pool.
    sdkHarnessContainerImages SdkHarnessContainerImageResponse[]
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy string
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscaling_settings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    data_disks Sequence[DiskResponse]
    Data disks that are used by a VM in this workflow.
    default_package_set str
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    disk_size_gb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    disk_source_image str
    Fully qualified source image for disks.
    disk_type str
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ip_configuration str
    Configuration for VM IPs.
    kind str
    The kind of the worker pool; currently only harness and shuffle are supported.
    machine_type str
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Mapping[str, str]
    Metadata to set on the Google Compute Engine VMs.
    network str
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    num_threads_per_worker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    num_workers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    on_host_maintenance str
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages Sequence[PackageResponse]
    Packages to be installed on workers.
    pool_args Mapping[str, str]
    Extra arguments for this worker pool.
    sdk_harness_container_images Sequence[SdkHarnessContainerImageResponse]
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork str
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunner_settings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardown_policy str
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    worker_harness_container_image str
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone str
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscalingSettings Property Map
    Settings for autoscaling of this WorkerPool.
    dataDisks List<Property Map>
    Data disks that are used by a VM in this workflow.
    defaultPackageSet String
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb Number
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage String
    Fully qualified source image for disks.
    diskType String
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration String
    Configuration for VM IPs.
    kind String
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType String
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Map<String>
    Metadata to set on the Google Compute Engine VMs.
    network String
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker Number
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers Number
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance String
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages List<Property Map>
    Packages to be installed on workers.
    poolArgs Map<String>
    Extra arguments for this worker pool.
    sdkHarnessContainerImages List<Property Map>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork String
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings Property Map
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy String
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage String
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone String
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
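
    Worker pools are reported under the job's environment. A sketch that prints the shape of each pool, assuming the result exposes environment.workerPools as in the Dataflow API (placeholder identifiers):

    import * as google_native from "@pulumi/google-native";

    export async function describeWorkerPools(): Promise<void> {
        const job = await google_native.dataflow.v1b3.getJob({
            project: "my-project",
            location: "us-central1",
            jobId: "my-job-id",
        });
        for (const pool of job.environment?.workerPools ?? []) {
            console.log(
                `${pool.kind}: ${pool.numWorkers} x ${pool.machineType} ` +
                `(${pool.diskSizeGb} GB ${pool.diskType}), teardown=${pool.teardownPolicy}`
            );
        }
    }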

    WorkerSettingsResponse

    BaseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    ReportingEnabled bool
    Whether to send work progress updates to the service.
    ServicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    ShuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    WorkerId string
    The ID of the worker running this pipeline.
    BaseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    ReportingEnabled bool
    Whether to send work progress updates to the service.
    ServicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    ShuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    WorkerId string
    The ID of the worker running this pipeline.
    baseUrl String
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    reportingEnabled Boolean
    Whether to send work progress updates to the service.
    servicePath String
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath String
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    workerId String
    The ID of the worker running this pipeline.
    baseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    reportingEnabled boolean
    Whether to send work progress updates to the service.
    servicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    workerId string
    The ID of the worker running this pipeline.
    base_url str
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    reporting_enabled bool
    Whether to send work progress updates to the service.
    service_path str
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffle_service_path str
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    temp_storage_prefix str
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    worker_id str
    The ID of the worker running this pipeline.
    baseUrl String
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    reportingEnabled Boolean
    Whether to send work progress updates to the service.
    servicePath String
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath String
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    workerId String
    The ID of the worker running this pipeline.
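
    WorkerSettingsResponse appears nested as parallelWorkerSettings inside each pool's taskrunner settings (see TaskRunnerSettingsResponse above). A final sketch under the same placeholder assumptions:

    import * as google_native from "@pulumi/google-native";

    export async function dumpWorkerSettings(): Promise<void> {
        const job = await google_native.dataflow.v1b3.getJob({
            project: "my-project",   // placeholder
            location: "us-central1", // placeholder
            jobId: "my-job-id",      // placeholder
        });
        for (const pool of job.environment?.workerPools ?? []) {
            const ws = pool.taskrunnerSettings?.parallelWorkerSettings;
            if (ws) {
                console.log(`reporting=${ws.reportingEnabled}, servicePath=${ws.servicePath}`);
            }
        }
    }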

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0