gcp.dataflow.Job
Import
Dataflow jobs can be imported using the job ID, e.g.
$ pulumi import gcp:dataflow/job:Job example 2022-07-31_06_25_42-11926927532632678660
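Once imported, the job must also be declared in the program so that subsequent updates manage it. A minimal TypeScript sketch of a matching declaration (the bucket paths are placeholders for the imported job's real template and temp location):

import * as gcp from "@pulumi/gcp";

// Placeholder paths: these must match the configuration of the imported job.
const example = new gcp.dataflow.Job("example", {
    templateGcsPath: "gs://my-bucket/templates/template_file",
    tempGcsLocation: "gs://my-bucket/tmp_dir",
});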
Create Job Resource
new Job(name: string, args: JobArgs, opts?: CustomResourceOptions);
@overload
def Job(resource_name: str,
        opts: Optional[ResourceOptions] = None,
        additional_experiments: Optional[Sequence[str]] = None,
        enable_streaming_engine: Optional[bool] = None,
        ip_configuration: Optional[str] = None,
        kms_key_name: Optional[str] = None,
        labels: Optional[Mapping[str, Any]] = None,
        machine_type: Optional[str] = None,
        max_workers: Optional[int] = None,
        name: Optional[str] = None,
        network: Optional[str] = None,
        on_delete: Optional[str] = None,
        parameters: Optional[Mapping[str, Any]] = None,
        project: Optional[str] = None,
        region: Optional[str] = None,
        service_account_email: Optional[str] = None,
        skip_wait_on_job_termination: Optional[bool] = None,
        subnetwork: Optional[str] = None,
        temp_gcs_location: Optional[str] = None,
        template_gcs_path: Optional[str] = None,
        transform_name_mapping: Optional[Mapping[str, Any]] = None,
        zone: Optional[str] = None)
@overload
def Job(resource_name: str,
        args: JobArgs,
        opts: Optional[ResourceOptions] = None)
func NewJob(ctx *Context, name string, args JobArgs, opts ...ResourceOption) (*Job, error)
public Job(string name, JobArgs args, CustomResourceOptions? opts = null)
type: gcp:dataflow:Job
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
The constructor arguments are the same across languages:

- name string (resource_name str in Python)
The unique name of the resource.
- args JobArgs
The arguments to resource properties.
- opts CustomResourceOptions (ResourceOptions in Python, ResourceOption in Go)
Bag of options to control resource's behavior.
- ctx Context (Go only)
Context object for the current deployment.
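For example, a TypeScript sketch that exercises several of the arguments documented below (the bucket paths and parameter values are placeholders):

import * as gcp from "@pulumi/gcp";

// Sketch only: template, bucket, and parameter values are placeholders.
const bigDataJob = new gcp.dataflow.Job("big-data-job", {
    templateGcsPath: "gs://my-bucket/templates/template_file",
    tempGcsLocation: "gs://my-bucket/tmp_dir",
    parameters: {
        foo: "bar",
    },
    maxWorkers: 5,
    onDelete: "drain", // drain in-flight work on pulumi destroy
    labels: {
        env: "dev",
    },
});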
Job Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Job resource accepts the following input properties:
Property names below use TypeScript casing and type notation; the other SDKs follow their own conventions (PascalCase in C# and Go, snake_case in Python) and native collection types (e.g. Dictionary<string, object>, map[string]interface{}, Map<String,Object>, Mapping[str, Any], or Map<Any> for the map-valued properties).

- tempGcsLocation string
A writeable location on GCS for the Dataflow job to dump its temporary data.
- templateGcsPath string
The GCS path to the Dataflow job template.
- additionalExperiments string[]
List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].
- enableStreamingEngine boolean
Enable/disable the use of Streaming Engine for the job. Note that Streaming Engine is enabled by default for pipelines developed against the Beam SDK for Python v2.21.0 or later when using Python 3.
- ipConfiguration string
The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".
- kmsKeyName string
The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- labels {[key: string]: any}
User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.
- machineType string
The machine type to use for the job.
- maxWorkers number
The number of workers permitted to work on the job. More workers may improve processing speed at additional cost.
- name string
A unique name for the resource, required by Dataflow.
- network string
The network to which VMs will be assigned. If it is not provided, "default" will be used.
- onDelete string
One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.
- parameters {[key: string]: any}
Key/Value pairs to be passed to the Dataflow job (as used in the template).
- project string
The project in which the resource belongs. If it is not provided, the provider project is used.
- region string
The region in which the created job should run.
- serviceAccountEmail string
The Service Account email used to create the job.
- skipWaitOnJobTermination boolean
If set to true, Pulumi will treat DRAINING and CANCELLING as terminal states when deleting the resource, and will remove the resource from Pulumi state and move on. See above note.
- subnetwork string
The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL. For example: "googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME"
- transformNameMapping {[key: string]: any}
Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job. This field is not used outside of update.
- zone string
The zone in which the created job should run. If it is not provided, the provider zone is used.
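As a further sketch, a streaming-style job in TypeScript that combines several of the inputs above; the template path and parameter names are placeholders, not a real template contract:

import * as gcp from "@pulumi/gcp";

const streamingJob = new gcp.dataflow.Job("streaming-job", {
    templateGcsPath: "gs://my-bucket/templates/streaming_template", // placeholder
    tempGcsLocation: "gs://my-bucket/tmp_dir",
    enableStreamingEngine: true,
    ipConfiguration: "WORKER_IP_PRIVATE", // workers get private IPs only
    skipWaitOnJobTermination: true, // treat DRAINING/CANCELLING as terminal on delete
    parameters: {
        inputTopic: "projects/my-project/topics/my-topic", // placeholder parameter
    },
});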
Outputs
All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:
Names below use TypeScript casing, as above.

- id string
The provider-assigned unique ID for this managed resource.
- jobId string
The unique ID of this job.
- state string
The current state of the resource, selected from the JobState enum.
- type string
The type of this job, selected from the JobType enum.
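These outputs are available on the resource object once the job is created; for example, continuing the bigDataJob sketch above:

// Export provider-reported outputs for other stacks or tooling.
export const jobId = bigDataJob.jobId;    // the Dataflow job's unique ID
export const jobState = bigDataJob.state; // a JobState value, e.g. "JOB_STATE_RUNNING"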
Look up Existing Job Resource
Get an existing Job resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: JobState, opts?: CustomResourceOptions): Job
@staticmethod
def get(resource_name: str,
        id: str,
        opts: Optional[ResourceOptions] = None,
        additional_experiments: Optional[Sequence[str]] = None,
        enable_streaming_engine: Optional[bool] = None,
        ip_configuration: Optional[str] = None,
        job_id: Optional[str] = None,
        kms_key_name: Optional[str] = None,
        labels: Optional[Mapping[str, Any]] = None,
        machine_type: Optional[str] = None,
        max_workers: Optional[int] = None,
        name: Optional[str] = None,
        network: Optional[str] = None,
        on_delete: Optional[str] = None,
        parameters: Optional[Mapping[str, Any]] = None,
        project: Optional[str] = None,
        region: Optional[str] = None,
        service_account_email: Optional[str] = None,
        skip_wait_on_job_termination: Optional[bool] = None,
        state: Optional[str] = None,
        subnetwork: Optional[str] = None,
        temp_gcs_location: Optional[str] = None,
        template_gcs_path: Optional[str] = None,
        transform_name_mapping: Optional[Mapping[str, Any]] = None,
        type: Optional[str] = None,
        zone: Optional[str] = None) -> Job
func GetJob(ctx *Context, name string, id IDInput, state *JobState, opts ...ResourceOption) (*Job, error)
public static Job Get(string name, Input<string> id, JobState? state, CustomResourceOptions? opts = null)
public static Job get(String name, Output<String> id, JobState state, CustomResourceOptions options)
Resource lookup is not supported in YAML
The lookup arguments are the same across languages:

- name (resource_name in Python)
The unique name of the resulting resource.
- id
The unique provider ID of the resource to look up.
- state
Any extra arguments used during the lookup.
- opts
A bag of options that control this resource's behavior.
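A TypeScript sketch of a lookup, reusing the job ID from the import example above (a placeholder for a real job ID):

import * as gcp from "@pulumi/gcp";

// Adopt a read-only view of an existing job; its properties resolve as outputs.
const existing = gcp.dataflow.Job.get(
    "existing-job",
    "2022-07-31_06_25_42-11926927532632678660",
);
export const existingState = existing.state;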
The following state properties are supported; names use TypeScript casing, as above.

- additionalExperiments string[]
List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].
- enableStreamingEngine boolean
Enable/disable the use of Streaming Engine for the job. Note that Streaming Engine is enabled by default for pipelines developed against the Beam SDK for Python v2.21.0 or later when using Python 3.
- ipConfiguration string
The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".
- jobId string
The unique ID of this job.
- kmsKeyName string
The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- labels {[key: string]: any}
User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.
- machineType string
The machine type to use for the job.
- maxWorkers number
The number of workers permitted to work on the job. More workers may improve processing speed at additional cost.
- name string
A unique name for the resource, required by Dataflow.
- network string
The network to which VMs will be assigned. If it is not provided, "default" will be used.
- onDelete string
One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.
- parameters {[key: string]: any}
Key/Value pairs to be passed to the Dataflow job (as used in the template).
- project string
The project in which the resource belongs. If it is not provided, the provider project is used.
- region string
The region in which the created job should run.
- serviceAccountEmail string
The Service Account email used to create the job.
- skipWaitOnJobTermination boolean
If set to true, Pulumi will treat DRAINING and CANCELLING as terminal states when deleting the resource, and will remove the resource from Pulumi state and move on. See above note.
- state string
The current state of the resource, selected from the JobState enum.
- subnetwork string
The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL. For example: "googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME"
- tempGcsLocation string
A writeable location on GCS for the Dataflow job to dump its temporary data.
- templateGcsPath string
The GCS path to the Dataflow job template.
- transformNameMapping {[key: string]: any}
Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job. This field is not used outside of update.
- type string
The type of this job, selected from the JobType enum.
- zone string
The zone in which the created job should run. If it is not provided, the provider zone is used.
Package Details
- Repository
- Google Cloud (GCP) Classic pulumi/pulumi-gcp
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the google-beta Terraform Provider.