google-native.datapipelines/v1.Pipeline
Creates a pipeline. For a batch pipeline, you can pass scheduler information. Data Pipelines uses the scheduler information to create an internal scheduler that runs jobs periodically. If the internal scheduler is not configured, you can use RunPipeline to run jobs.
Create Pipeline Resource
new Pipeline(name: string, args: PipelineArgs, opts?: CustomResourceOptions);
@overload
def Pipeline(resource_name: str,
             opts: Optional[ResourceOptions] = None,
             display_name: Optional[str] = None,
             location: Optional[str] = None,
             name: Optional[str] = None,
             pipeline_sources: Optional[Mapping[str, str]] = None,
             project: Optional[str] = None,
             schedule_info: Optional[GoogleCloudDatapipelinesV1ScheduleSpecArgs] = None,
             scheduler_service_account_email: Optional[str] = None,
             state: Optional[PipelineState] = None,
             type: Optional[PipelineType] = None,
             workload: Optional[GoogleCloudDatapipelinesV1WorkloadArgs] = None)
@overload
def Pipeline(resource_name: str,
             args: PipelineArgs,
             opts: Optional[ResourceOptions] = None)
func NewPipeline(ctx *Context, name string, args PipelineArgs, opts ...ResourceOption) (*Pipeline, error)
public Pipeline(string name, PipelineArgs args, CustomResourceOptions? opts = null)
public Pipeline(String name, PipelineArgs args)
public Pipeline(String name, PipelineArgs args, CustomResourceOptions options)
type: google-native:datapipelines/v1:Pipeline
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
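Example

A minimal sketch, in TypeScript, of creating a batch pipeline whose internal scheduler launches a Dataflow Flex Template job once a day. The project, bucket, job name, and schedule are placeholder values, and the exact nesting of the workload fields is an assumption to verify against the Workload and LaunchFlexTemplateParameter types documented below.

import * as google_native from "@pulumi/google-native";

// A daily-scheduled batch pipeline. All identifiers below are placeholders.
const pipeline = new google_native.datapipelines.v1.Pipeline("example-pipeline", {
    name: "projects/my-project/locations/us-central1/pipelines/example-pipeline",
    location: "us-central1",
    displayName: "example_pipeline",
    type: "PIPELINE_TYPE_BATCH",          // accepted values are listed under `type` below
    scheduleInfo: {
        schedule: "0 3 * * *",            // unix-cron: every day at 03:00
        timeZone: "America/Los_Angeles",
    },
    workload: {
        // Assumed field name for the Flex Template launch request.
        dataflowFlexTemplateRequest: {
            projectId: "my-project",
            location: "us-central1",
            launchParameter: {
                jobName: "example-job",
                containerSpecGcsPath: "gs://my-bucket/templates/template.json",
                parameters: { num_workers: "5" },
                environment: { tempLocation: "gs://my-bucket/temp" },
            },
        },
    },
});

// Export the full resource name so it appears in stack outputs.
export const pipelineName = pipeline.name;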
Pipeline Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Pipeline resource accepts the following input properties:
- DisplayName string
The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- State Pulumi.GoogleNative.Datapipelines.V1.PipelineState
The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- Type Pulumi.GoogleNative.Datapipelines.V1.PipelineType
The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- Location string
- Name string
The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- PipelineSources Dictionary<string, string>
Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- Project string
- ScheduleInfo Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1ScheduleSpecArgs
Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- SchedulerServiceAccountEmail string
Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- Workload Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1WorkloadArgs
Workload information for creating new jobs.
- DisplayName string
The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- State PipelineStateEnum
The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- Type PipelineType
The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- Location string
- Name string
The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- PipelineSources map[string]string
Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- Project string
- ScheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecArgs
Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- SchedulerServiceAccountEmail string
Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- Workload GoogleCloudDatapipelinesV1WorkloadArgs
Workload information for creating new jobs.
- displayName String
The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state PipelineState
The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type PipelineType
The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location String
- name String
The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources Map<String,String>
Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project String
- scheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecArgs
Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail String
Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload GoogleCloudDatapipelinesV1WorkloadArgs
Workload information for creating new jobs.
- displayName string
The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state PipelineState
The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type PipelineType
The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location string
- name string
The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources {[key: string]: string}
Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project string
- scheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecArgs
Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail string
Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload GoogleCloudDatapipelinesV1WorkloadArgs
Workload information for creating new jobs.
- display_name str
The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state PipelineState
The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type PipelineType
The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location str
- name str
The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipeline_sources Mapping[str, str]
Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project str
- schedule_info GoogleCloudDatapipelinesV1ScheduleSpecArgs
Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- scheduler_service_account_email str
Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload GoogleCloudDatapipelinesV1WorkloadArgs
Workload information for creating new jobs.
- displayName String
The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state "STATE_UNSPECIFIED" | "STATE_RESUMING" | "STATE_ACTIVE" | "STATE_STOPPING" | "STATE_ARCHIVED" | "STATE_PAUSED"
The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type "PIPELINE_TYPE_UNSPECIFIED" | "PIPELINE_TYPE_BATCH" | "PIPELINE_TYPE_STREAMING"
The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location String
- name String
The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources Map<String>
Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project String
- scheduleInfo Property Map
Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail String
Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload Property Map
Workload information for creating new jobs.
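The name input must follow the format described above. A small helper makes that format explicit; this is an illustrative function, not part of the provider API.

// Builds the required pipeline name:
// projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID
function pipelineResourceName(project: string, location: string, id: string): string {
    return `projects/${project}/locations/${location}/pipelines/${id}`;
}

// e.g. pipelineResourceName("my-project", "us-central1", "example-pipeline")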
Outputs
All input properties are implicitly available as output properties. Additionally, the Pipeline resource produces the following output properties:
- CreateTime string
Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- Id string
The provider-assigned unique ID for this managed resource.
- JobCount int
Number of jobs.
- LastUpdateTime string
Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- CreateTime string
Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- Id string
The provider-assigned unique ID for this managed resource.
- JobCount int
Number of jobs.
- LastUpdateTime string
Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- createTime String
Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id String
The provider-assigned unique ID for this managed resource.
- jobCount Integer
Number of jobs.
- lastUpdateTime String
Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- createTime string
Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id string
The provider-assigned unique ID for this managed resource.
- jobCount number
Number of jobs.
- lastUpdateTime string
Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- create_time str
Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id str
The provider-assigned unique ID for this managed resource.
- job_count int
Number of jobs.
- last_update_time str
Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- createTime String
Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id String
The provider-assigned unique ID for this managed resource.
- jobCount Number
Number of jobs.
- lastUpdateTime String
Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
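Assuming a pipeline resource like the example near the top of this page, the read-only outputs can be exported like any other Pulumi outputs:

// Read-only properties set by the Data Pipelines service.
export const createTime = pipeline.createTime;         // creation timestamp
export const jobCount = pipeline.jobCount;             // number of jobs
export const lastUpdateTime = pipeline.lastUpdateTime; // last modification timestamp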
Supporting Types
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
- AdditionalExperiments List<string>
Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
Whether to enable Streaming Engine for the job.
- FlexrsGoal Pulumi.GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration Pulumi.GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- KmsKeyName string
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
Whether to enable Streaming Engine for the job.
- FlexrsGoal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- KmsKeyName string
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- flexrsGoal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine boolean
Whether to enable Streaming Engine for the job.
- flexrsGoal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- kmsKeyName string
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
The email address of the service account to run the job as.
- subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enable_streaming_engine bool
Whether to enable Streaming Engine for the job.
- flexrs_goal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ip_configuration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- kms_key_name str
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machine_type str
The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
The initial number of Compute Engine instances for the job.
- service_account_email str
The email address of the service account to run the job as.
- subnetwork str
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- worker_zone str
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String>
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- flexrsGoal "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED"
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
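A sketch of a runtime environment value that exercises several of these fields; it would be passed as the environment of a Flex Template launch parameter (see LaunchFlexTemplateParameter below). The subnetwork, bucket, and region are placeholders, and the enum values are the string forms listed in the following sections.

// FlexRS cost-optimized environment with private worker IPs and capped autoscaling.
const flexEnvironment = {
    flexrsGoal: "FLEXRS_COST_OPTIMIZED",   // see the FlexrsGoal values below
    ipConfiguration: "WORKER_IP_PRIVATE",  // workers get private IP addresses only
    maxWorkers: 10,                        // must be between 1 and 1000
    subnetwork: "regions/us-central1/subnetworks/dataflow-workers",
    tempLocation: "gs://my-bucket/temp",   // must begin with gs://
    workerRegion: "us-central1",           // mutually exclusive with workerZone
};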
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
- FlexrsUnspecified - FLEXRS_UNSPECIFIED
Run in the default mode.
- FlexrsSpeedOptimized - FLEXRS_SPEED_OPTIMIZED
Optimize for lower execution time.
- FlexrsCostOptimized - FLEXRS_COST_OPTIMIZED
Optimize for lower cost.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsUnspecified - FLEXRS_UNSPECIFIED
Run in the default mode.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsSpeedOptimized - FLEXRS_SPEED_OPTIMIZED
Optimize for lower execution time.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsCostOptimized - FLEXRS_COST_OPTIMIZED
Optimize for lower cost.
- FlexrsUnspecified - FLEXRS_UNSPECIFIED
Run in the default mode.
- FlexrsSpeedOptimized - FLEXRS_SPEED_OPTIMIZED
Optimize for lower execution time.
- FlexrsCostOptimized - FLEXRS_COST_OPTIMIZED
Optimize for lower cost.
- FlexrsUnspecified - FLEXRS_UNSPECIFIED
Run in the default mode.
- FlexrsSpeedOptimized - FLEXRS_SPEED_OPTIMIZED
Optimize for lower execution time.
- FlexrsCostOptimized - FLEXRS_COST_OPTIMIZED
Optimize for lower cost.
- FLEXRS_UNSPECIFIED - FLEXRS_UNSPECIFIED
Run in the default mode.
- FLEXRS_SPEED_OPTIMIZED - FLEXRS_SPEED_OPTIMIZED
Optimize for lower execution time.
- FLEXRS_COST_OPTIMIZED - FLEXRS_COST_OPTIMIZED
Optimize for lower cost.
- "FLEXRS_UNSPECIFIED" - FLEXRS_UNSPECIFIED
Run in the default mode.
- "FLEXRS_SPEED_OPTIMIZED" - FLEXRS_SPEED_OPTIMIZED
Optimize for lower execution time.
- "FLEXRS_COST_OPTIMIZED" - FLEXRS_COST_OPTIMIZED
Optimize for lower cost.
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
- WorkerIpUnspecified - WORKER_IP_UNSPECIFIED
The configuration is unknown, or unspecified.
- WorkerIpPublic - WORKER_IP_PUBLIC
Workers should have public IP addresses.
- WorkerIpPrivate - WORKER_IP_PRIVATE
Workers should have private IP addresses.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpUnspecified - WORKER_IP_UNSPECIFIED
The configuration is unknown, or unspecified.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpPublic - WORKER_IP_PUBLIC
Workers should have public IP addresses.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpPrivate - WORKER_IP_PRIVATE
Workers should have private IP addresses.
- WorkerIpUnspecified - WORKER_IP_UNSPECIFIED
The configuration is unknown, or unspecified.
- WorkerIpPublic - WORKER_IP_PUBLIC
Workers should have public IP addresses.
- WorkerIpPrivate - WORKER_IP_PRIVATE
Workers should have private IP addresses.
- WorkerIpUnspecified - WORKER_IP_UNSPECIFIED
The configuration is unknown, or unspecified.
- WorkerIpPublic - WORKER_IP_PUBLIC
Workers should have public IP addresses.
- WorkerIpPrivate - WORKER_IP_PRIVATE
Workers should have private IP addresses.
- WORKER_IP_UNSPECIFIED - WORKER_IP_UNSPECIFIED
The configuration is unknown, or unspecified.
- WORKER_IP_PUBLIC - WORKER_IP_PUBLIC
Workers should have public IP addresses.
- WORKER_IP_PRIVATE - WORKER_IP_PRIVATE
Workers should have private IP addresses.
- "WORKER_IP_UNSPECIFIED" - WORKER_IP_UNSPECIFIED
The configuration is unknown, or unspecified.
- "WORKER_IP_PUBLIC" - WORKER_IP_PUBLIC
Workers should have public IP addresses.
- "WORKER_IP_PRIVATE" - WORKER_IP_PRIVATE
Workers should have private IP addresses.
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
- AdditionalExperiments List<string>
Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
Whether to enable Streaming Engine for the job.
- FlexrsGoal string
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration string
Configuration for VM IPs.
- KmsKeyName string
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
Whether to enable Streaming Engine for the job.
- FlexrsGoal string
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration string
Configuration for VM IPs.
- KmsKeyName string
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- flexrsGoal String
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration String
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine boolean
Whether to enable Streaming Engine for the job.
- flexrsGoal string
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration string
Configuration for VM IPs.
- kmsKeyName string
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
The email address of the service account to run the job as.
- subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enable_streaming_engine bool
Whether to enable Streaming Engine for the job.
- flexrs_goal str
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ip_configuration str
Configuration for VM IPs.
- kms_key_name str
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machine_type str
The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
The initial number of Compute Engine instances for the job.
- service_account_email str
The email address of the service account to run the job as.
- subnetwork str
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- worker_zone str
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String>
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- flexrsGoal String
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration String
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
- JobName string
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- ContainerSpecGcsPath string
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
The runtime environment for the Flex Template job.
- LaunchOptions Dictionary<string, string>
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters Dictionary<string, string>
The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings Dictionary<string, string>
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- JobName string
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- ContainerSpecGcsPath string
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
The runtime environment for the Flex Template job.
- LaunchOptions map[string]string
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters map[string]string
The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings map[string]string
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- jobName String
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- containerSpecGcsPath String
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
The runtime environment for the Flex Template job.
- launchOptions Map<String,String>
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String,String>
The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String,String>
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- job
Name string The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- container
Spec stringGcs Path Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment
Google
Cloud Datapipelines V1Flex Template Runtime Environment The runtime environment for the Flex Template job.
- launch
Options {[key: string]: string} Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters {[key: string]: string}
The parameters for the Flex Template. Example:
{"num_workers":"5"}
- transform
Name {[key: string]: string}Mappings Use this to pass transform name mappings for streaming update jobs. Example:
{"oldTransformName":"newTransformName",...}
- update boolean
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- job_
name str The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- container_
spec_ strgcs_ path Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment
Google
Cloud Datapipelines V1Flex Template Runtime Environment The runtime environment for the Flex Template job.
- launch_
options Mapping[str, str] Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Mapping[str, str]
The parameters for the Flex Template. Example:
{"num_workers":"5"}
- transform_
name_ Mapping[str, str]mappings Use this to pass transform name mappings for streaming update jobs. Example:
{"oldTransformName":"newTransformName",...}
- update bool
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- job
Name String The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- container
Spec StringGcs Path Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment Property Map
The runtime environment for the Flex Template job.
- launch
Options Map<String> Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String>
The parameters for the Flex Template. Example:
{"num_workers":"5"}
- transform
Name Map<String>Mappings Use this to pass transform name mappings for streaming update jobs. Example:
{"oldTransformName":"newTransformName",...}
- update Boolean
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
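A minimal Python sketch of a launch parameter, assuming the input type is exposed as GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterArgs; the job name, template path, and parameter values are placeholders:

import pulumi_google_native as google_native

launch_param = google_native.datapipelines.v1.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterArgs(
    job_name="my-flex-job",  # for an update request, must match the running job's name
    container_spec_gcs_path="gs://my-bucket/templates/spec.json",  # JSON-serialized ContainerSpec
    parameters={"num_workers": "5"},  # job parameters; launch_options is for launch-time flags only
    update=False,  # True only when updating a running streaming job
)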
GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
- ContainerSpecGcsPath string
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
The runtime environment for the Flex Template job.
- JobName string
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- LaunchOptions Dictionary<string, string>
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters Dictionary<string, string>
The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings Dictionary<string, string>
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- ContainerSpecGcsPath string
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
The runtime environment for the Flex Template job.
- JobName string
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- LaunchOptions map[string]string
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters map[string]string
The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings map[string]string
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- containerSpecGcsPath String
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
The runtime environment for the Flex Template job.
- jobName String
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions Map<String,String>
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String,String>
The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String,String>
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- containerSpecGcsPath string
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
The runtime environment for the Flex Template job.
- jobName string
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions {[key: string]: string}
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters {[key: string]: string}
The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings {[key: string]: string}
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update boolean
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- container_spec_gcs_path str
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
The runtime environment for the Flex Template job.
- job_name str
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launch_options Mapping[str, str]
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Mapping[str, str]
The parameters for the Flex Template. Example: {"num_workers":"5"}
- transform_name_mappings Mapping[str, str]
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update bool
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- containerSpecGcsPath String
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment Property Map
The runtime environment for the Flex Template job.
- jobName String
The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions Map<String>
Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String>
The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String>
Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
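The Response variants above describe output shapes rather than inputs: once the pipeline has been created, the resolved values surface on the resource's outputs. A hedged sketch, assuming a Pipeline resource named pipeline already exists in the program:

import pulumi

# 'pipeline' is a datapipelines.v1.Pipeline resource created elsewhere in the program.
pulumi.export("workload", pipeline.workload)  # includes the launch parameter as resolved by the service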
GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
- LaunchParameter Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
Parameter to launch a job from a Flex Template.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- LaunchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
Parameter to launch a job from a Flex Template.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
Parameter to launch a job from a Flex Template.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
Parameter to launch a job from a Flex Template.
- location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project string
The ID of the Cloud Platform project that the job belongs to.
- validateOnly boolean
If true, the request is validated but not actually executed. Defaults to false.
- launch_parameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
Parameter to launch a job from a Flex Template.
- location str
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project str
The ID of the Cloud Platform project that the job belongs to.
- validate_only bool
If true, the request is validated but not actually executed. Defaults to false.
- launchParameter Property Map
Parameter to launch a job from a Flex Template.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
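Assembled in Python, the request wraps the launch parameter from the earlier sketch. This assumes the input types GoogleCloudDatapipelinesV1WorkloadArgs and GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestArgs, and that the workload references the request through a dataflow_flex_template_request field, as in the underlying Data Pipelines API; project and region are placeholders:

import pulumi_google_native as google_native

workload = google_native.datapipelines.v1.GoogleCloudDatapipelinesV1WorkloadArgs(
    dataflow_flex_template_request=google_native.datapipelines.v1.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestArgs(
        project="my-project",           # project that will own the Dataflow job
        location="us-central1",         # regional endpoint for the request
        launch_parameter=launch_param,  # see the launch parameter sketch above
        validate_only=False,            # True validates the request without running it
    ),
)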
GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
- LaunchParameter Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
Parameter to launch a job from a Flex Template.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- LaunchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
Parameter to launch a job from a Flex Template.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
Parameter to launch a job from a Flex Template.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
Parameter to launch a job from a Flex Template.
- location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project string
The ID of the Cloud Platform project that the job belongs to.
- validateOnly boolean
If true, the request is validated but not actually executed. Defaults to false.
- launch_parameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
Parameter to launch a job from a Flex Template.
- location str
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project str
The ID of the Cloud Platform project that the job belongs to.
- validate_only bool
If true, the request is validated but not actually executed. Defaults to false.
- launchParameter Property Map
Parameter to launch a job from a Flex Template.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
GoogleCloudDatapipelinesV1LaunchTemplateParameters
- JobName string
The job name to use for the created job.
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1RuntimeEnvironment
The runtime environment for the job.
- Parameters Dictionary<string, string>
The runtime parameters to pass to the job.
- TransformNameMapping Dictionary<string, string>
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- JobName string
The job name to use for the created job.
- Environment GoogleCloudDatapipelinesV1RuntimeEnvironment
The runtime environment for the job.
- Parameters map[string]string
The runtime parameters to pass to the job.
- TransformNameMapping map[string]string
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- jobName String
The job name to use for the created job.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironment
The runtime environment for the job.
- parameters Map<String,String>
The runtime parameters to pass to the job.
- transformNameMapping Map<String,String>
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- jobName string
The job name to use for the created job.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironment
The runtime environment for the job.
- parameters {[key: string]: string}
The runtime parameters to pass to the job.
- transformNameMapping {[key: string]: string}
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update boolean
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- job_name str
The job name to use for the created job.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironment
The runtime environment for the job.
- parameters Mapping[str, str]
The runtime parameters to pass to the job.
- transform_name_mapping Mapping[str, str]
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update bool
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- jobName String
The job name to use for the created job.
- environment Property Map
The runtime environment for the job.
- parameters Map<String>
The runtime parameters to pass to the job.
- transformNameMapping Map<String>
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
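For classic (non-Flex) templates, the parameters look similar. A sketch assuming the input types GoogleCloudDatapipelinesV1LaunchTemplateParametersArgs and GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs, with placeholder values:

import pulumi_google_native as google_native

template_params = google_native.datapipelines.v1.GoogleCloudDatapipelinesV1LaunchTemplateParametersArgs(
    job_name="my-classic-job",
    parameters={"inputFile": "gs://my-bucket/input.txt"},  # hypothetical runtime parameter
    environment=google_native.datapipelines.v1.GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs(
        max_workers=10,  # see the RuntimeEnvironment fields below
    ),
)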
GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
The runtime environment for the job.
- JobName string
The job name to use for the created job.
- Parameters Dictionary<string, string>
The runtime parameters to pass to the job.
- TransformNameMapping Dictionary<string, string>
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- Environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
The runtime environment for the job.
- JobName string
The job name to use for the created job.
- Parameters map[string]string
The runtime parameters to pass to the job.
- TransformNameMapping map[string]string
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
The runtime environment for the job.
- jobName String
The job name to use for the created job.
- parameters Map<String,String>
The runtime parameters to pass to the job.
- transformNameMapping Map<String,String>
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
The runtime environment for the job.
- jobName string
The job name to use for the created job.
- parameters {[key: string]: string}
The runtime parameters to pass to the job.
- transformNameMapping {[key: string]: string}
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update boolean
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
The runtime environment for the job.
- job_name str
The job name to use for the created job.
- parameters Mapping[str, str]
The runtime parameters to pass to the job.
- transform_name_mapping Mapping[str, str]
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update bool
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment Property Map
The runtime environment for the job.
- jobName String
The job name to use for the created job.
- parameters Map<String>
The runtime parameters to pass to the job.
- transformNameMapping Map<String>
Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
GoogleCloudDatapipelinesV1LaunchTemplateRequest
- Project string
The ID of the Cloud Platform project that the job belongs to.
- GcsPath string
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateParameters
The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- GcsPath string
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- project String
The ID of the Cloud Platform project that the job belongs to.
- gcsPath String
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
The parameters of the template to launch. This should be part of the body of the POST request.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
- project string
The ID of the Cloud Platform project that the job belongs to.
- gcsPath string
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
The parameters of the template to launch. This should be part of the body of the POST request.
- location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validateOnly boolean
If true, the request is validated but not actually executed. Defaults to false.
- project str
The ID of the Cloud Platform project that the job belongs to.
- gcs_path str
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launch_parameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
The parameters of the template to launch. This should be part of the body of the POST request.
- location str
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validate_only bool
If true, the request is validated but not actually executed. Defaults to false.
- project String
The ID of the Cloud Platform project that the job belongs to.
- gcsPath String
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters Property Map
The parameters of the template to launch. This should be part of the body of the POST request.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
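A sketch of the full classic-template request, assuming the input type GoogleCloudDatapipelinesV1LaunchTemplateRequestArgs and reusing the parameters sketch above; the template path shown is Google's public word-count template, used here purely as an illustration:

import pulumi_google_native as google_native

launch_request = google_native.datapipelines.v1.GoogleCloudDatapipelinesV1LaunchTemplateRequestArgs(
    project="my-project",
    gcs_path="gs://dataflow-templates/latest/Word_Count",  # must be a gs:// URL
    launch_parameters=template_params,  # see the parameters sketch above
    location="us-central1",
    validate_only=False,  # True validates the request without executing it
)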
GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
- GcsPath string
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- GcsPath string
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Project string
The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
If true, the request is validated but not actually executed. Defaults to false.
- gcsPath String
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
The parameters of the template to launch. This should be part of the body of the POST request.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project String
The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
- gcsPath string
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
The parameters of the template to launch. This should be part of the body of the POST request.
- location string
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project string
The ID of the Cloud Platform project that the job belongs to.
- validateOnly boolean
If true, the request is validated but not actually executed. Defaults to false.
- gcs_path str
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launch_parameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
The parameters of the template to launch. This should be part of the body of the POST request.
- location str
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project str
The ID of the Cloud Platform project that the job belongs to.
- validate_only bool
If true, the request is validated but not actually executed. Defaults to false.
- gcsPath String
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters Property Map
The parameters of the template to launch. This should be part of the body of the POST request.
- location String
The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project String
The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
If true, the request is validated but not actually executed. Defaults to false.
GoogleCloudDatapipelinesV1RuntimeEnvironment
- AdditionalExperiments List<string>
Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation bool
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine bool
Whether to enable Streaming Engine for the job.
- IpConfiguration Pulumi.GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- KmsKeyName string
Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID.
- MachineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation bool
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine bool
Whether to enable Streaming Engine for the job.
- IpConfiguration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- KmsKeyName string
Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID.
- MachineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- ipConfiguration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID.
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation boolean
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine boolean
Whether to enable Streaming Engine for the job.
- ipConfiguration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- kmsKeyName string
Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID.
- machineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
The email address of the service account to run the job as.
- subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypass_temp_dir_validation bool
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enable_streaming_engine bool
Whether to enable Streaming Engine for the job.
- ip_configuration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
Configuration for VM IPs.
- kms_key_name str
Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID.
- machine_type str
The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
The initial number of Compute Engine instances for the job.
- service_account_email str
The email address of the service account to run the job as.
- subnetwork str
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- worker_zone str
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String>
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID.
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
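A Python sketch of a runtime environment for a Shared VPC setup, assuming the input type GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs; project, bucket, and subnetwork names are placeholders:

import pulumi_google_native as google_native

runtime_env = google_native.datapipelines.v1.GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs(
    machine_type="n1-standard-2",
    num_workers=2,                       # initial instance count
    max_workers=10,                      # must fall within the documented 1 to 1000 range
    temp_location="gs://my-bucket/tmp",  # must begin with gs://
    # Shared VPC subnetworks require the complete URL form:
    subnetwork="https://www.googleapis.com/compute/v1/projects/host-proj/regions/us-central1/subnetworks/my-subnet",
)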
GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
- WorkerIpUnspecified (WORKER_IP_UNSPECIFIED)
The configuration is unknown, or unspecified.
- WorkerIpPublic (WORKER_IP_PUBLIC)
Workers should have public IP addresses.
- WorkerIpPrivate (WORKER_IP_PRIVATE)
Workers should have private IP addresses.
- GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpUnspecified (WORKER_IP_UNSPECIFIED)
The configuration is unknown, or unspecified.
- GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpPublic (WORKER_IP_PUBLIC)
Workers should have public IP addresses.
- GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpPrivate (WORKER_IP_PRIVATE)
Workers should have private IP addresses.
- WorkerIpUnspecified (WORKER_IP_UNSPECIFIED)
The configuration is unknown, or unspecified.
- WorkerIpPublic (WORKER_IP_PUBLIC)
Workers should have public IP addresses.
- WorkerIpPrivate (WORKER_IP_PRIVATE)
Workers should have private IP addresses.
- WorkerIpUnspecified (WORKER_IP_UNSPECIFIED)
The configuration is unknown, or unspecified.
- WorkerIpPublic (WORKER_IP_PUBLIC)
Workers should have public IP addresses.
- WorkerIpPrivate (WORKER_IP_PRIVATE)
Workers should have private IP addresses.
- WORKER_IP_UNSPECIFIED (WORKER_IP_UNSPECIFIED)
The configuration is unknown, or unspecified.
- WORKER_IP_PUBLIC (WORKER_IP_PUBLIC)
Workers should have public IP addresses.
- WORKER_IP_PRIVATE (WORKER_IP_PRIVATE)
Workers should have private IP addresses.
- "WORKER_IP_UNSPECIFIED" (WORKER_IP_UNSPECIFIED)
The configuration is unknown, or unspecified.
- "WORKER_IP_PUBLIC" (WORKER_IP_PUBLIC)
Workers should have public IP addresses.
- "WORKER_IP_PRIVATE" (WORKER_IP_PRIVATE)
Workers should have private IP addresses.
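In Python the enum would be referenced roughly as below, assuming the SDK exposes the members under their API values (e.g. WORKER_IP_PRIVATE), matching the Python column above:

import pulumi_google_native as google_native

# Workers receive only private IP addresses under this configuration.
runtime_env = google_native.datapipelines.v1.GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs(
    ip_configuration=google_native.datapipelines.v1.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration.WORKER_IP_PRIVATE,
)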
GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- Additional
Experiments List<string> Additional experiment flags for the job.
- Additional
User Dictionary<string, string>Labels Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- Bypass
Temp boolDir Validation Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- Enable
Streaming boolEngine Whether to enable Streaming Engine for the job.
- Ip
Configuration string Configuration for VM IPs.
- Kms
Key stringName Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- Machine
Type string The machine type to use for the job. Defaults to the value from the template if not specified.
- Max
Workers int The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- Num
Workers int The initial number of Compute Engine instances for the job.
- Service
Account stringEmail The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- Temp
Location string The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with
gs://
.- Worker
Region string The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- Worker
Zone string The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both
worker_zone
andzone
are set,worker_zone
takes precedence.- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- Additional
Experiments []string Additional experiment flags for the job.
- Additional
User map[string]stringLabels Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- Bypass
Temp boolDir Validation Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- Enable
Streaming boolEngine Whether to enable Streaming Engine for the job.
- Ip
Configuration string Configuration for VM IPs.
- Kms
Key stringName Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- Machine
Type string The machine type to use for the job. Defaults to the value from the template if not specified.
- Max
Workers int The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- Num
Workers int The initial number of Compute Engine instances for the job.
- Service
Account stringEmail The email address of the service account to run the job as.
- Subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- Temp
Location string The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with
gs://
.- Worker
Region string The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- Worker
Zone string The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both
worker_zone
andzone
are set,worker_zone
takes precedence.- Zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional
Experiments List<String> Additional experiment flags for the job.
- additional
User Map<String,String>Labels Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypass
Temp BooleanDir Validation Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enable
Streaming BooleanEngine Whether to enable Streaming Engine for the job.
- ip
Configuration String Configuration for VM IPs.
- kms
Key StringName Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machine
Type String The machine type to use for the job. Defaults to the value from the template if not specified.
- max
Workers Integer The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num
Workers Integer The initial number of Compute Engine instances for the job.
- service
Account StringEmail The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp
Location String The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with
gs://
.- worker
Region String The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- worker
Zone String The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both
worker_zone
andzone
are set,worker_zone
takes precedence.- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation boolean
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine boolean
Whether to enable Streaming Engine for the job.
- ipConfiguration string
Configuration for VM IPs.
- kmsKeyName string
Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType string
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
The email address of the service account to run the job as.
- subnetwork string
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone string
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypass_temp_dir_validation bool
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enable_streaming_engine bool
Whether to enable Streaming Engine for the job.
- ip_configuration str
Configuration for VM IPs.
- kms_key_name str
Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machine_type str
The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
The initial number of Compute Engine instances for the job.
- service_account_email str
The email address of the service account to run the job as.
- subnetwork str
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- worker_zone str
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
Additional experiment flags for the job.
- additionalUserLabels Map<String>
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
Whether to enable Streaming Engine for the job.
- ipConfiguration String
Configuration for VM IPs.
- kmsKeyName String
Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
The email address of the service account to run the job as.
- subnetwork String
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
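To show how these environment fields fit together in practice, here is a minimal sketch using the Node.js SDK (@pulumi/google-native). The project, bucket, and template paths are illustrative placeholders, not values from this reference:

import * as google_native from "@pulumi/google-native";

// Minimal sketch: a batch pipeline whose flex-template launch request sets
// the worker environment fields documented above. All names are placeholders.
const pipeline = new google_native.datapipelines.v1.Pipeline("example-pipeline", {
    displayName: "example_pipeline",
    type: "PIPELINE_TYPE_BATCH",
    state: "STATE_ACTIVE",
    workload: {
        dataflowFlexTemplateRequest: {
            projectId: "my-project",        // placeholder project
            location: "us-central1",
            launchParameter: {
                jobName: "example-job",
                containerSpecGcsPath: "gs://my-bucket/templates/spec.json", // placeholder
                environment: {
                    maxWorkers: 10,                     // allowed range is 1 to 1000
                    tempLocation: "gs://my-bucket/tmp", // must begin with gs://
                    workerZone: "us-west1-a",           // takes precedence if zone is also set
                },
            },
        },
    },
});

Because workerZone and zone overlap, setting only workerZone (as here) sidesteps the precedence rule entirely.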
GoogleCloudDatapipelinesV1ScheduleSpec
GoogleCloudDatapipelinesV1ScheduleSpecResponse
- NextJobTime string
When the next Scheduler job is going to run.
- Schedule string
Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone string
Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- NextJobTime string
When the next Scheduler job is going to run.
- Schedule string
Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone string
Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime String
When the next Scheduler job is going to run.
- schedule String
Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone String
Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime string
When the next Scheduler job is going to run.
- schedule string
Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone string
Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- next_job_time str
When the next Scheduler job is going to run.
- schedule str
Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- time_zone str
Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime String
When the next Scheduler job is going to run.
- schedule String
Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone String
Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
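On the input side, the corresponding scheduleInfo argument of the Pipeline resource takes the same schedule and timeZone fields (nextJobTime is output-only). A minimal sketch with illustrative values:

// Minimal sketch of a schedule input: run daily at 02:00 New York time.
const scheduleInfo = {
    schedule: "0 2 * * *",        // Unix-cron format
    timeZone: "America/New_York", // Cloud Scheduler timezone ID; UTC if empty
};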
GoogleCloudDatapipelinesV1Workload
- DataflowFlexTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- DataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflow_flex_template_request GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflow_launch_template_request GoogleCloudDatapipelinesV1LaunchTemplateRequest
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest Property Map
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest Property Map
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
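For contrast with the flex-template sketch earlier, the following hedged example fills in the other field of the workload, the standard (classic) launch API; the two request fields are treated here as alternatives, and every name and path is a placeholder:

// Minimal sketch: a workload using the standard launch API instead of the
// flex launch API. gcsPath points at a classic Dataflow template.
const workload = {
    dataflowLaunchTemplateRequest: {
        projectId: "my-project",                              // placeholder
        location: "us-central1",
        gcsPath: "gs://my-bucket/templates/classic-template", // placeholder
        launchParameters: {
            jobName: "example-classic-job",
            parameters: { inputFile: "gs://my-bucket/input.txt" }, // template-defined parameters
        },
    },
};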
GoogleCloudDatapipelinesV1WorkloadResponse
- DataflowFlexTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- DataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflow_flex_template_request GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflow_launch_template_request GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest Property Map
Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest Property Map
Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
PipelineState
- StateUnspecified
- STATE_UNSPECIFIED
The pipeline state isn't specified.
- StateResuming
- STATE_RESUMING
The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- StateActive
- STATE_ACTIVE
The pipeline is actively running.
- StateStopping
- STATE_STOPPING
The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- StateArchived
- STATE_ARCHIVED
The pipeline has been stopped. This is a terminal state and cannot be undone.
- StatePaused
- STATE_PAUSED
The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- PipelineStateStateUnspecified
- STATE_UNSPECIFIED
The pipeline state isn't specified.
- PipelineStateStateResuming
- STATE_RESUMING
The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- PipelineStateStateActive
- STATE_ACTIVE
The pipeline is actively running.
- PipelineStateStateStopping
- STATE_STOPPING
The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- PipelineStateStateArchived
- STATE_ARCHIVED
The pipeline has been stopped. This is a terminal state and cannot be undone.
- PipelineStateStatePaused
- STATE_PAUSED
The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- StateUnspecified
- STATE_UNSPECIFIED
The pipeline state isn't specified.
- StateResuming
- STATE_RESUMING
The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- StateActive
- STATE_ACTIVE
The pipeline is actively running.
- StateStopping
- STATE_STOPPING
The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- StateArchived
- STATE_ARCHIVED
The pipeline has been stopped. This is a terminal state and cannot be undone.
- StatePaused
- STATE_PAUSED
The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- StateUnspecified
- STATE_UNSPECIFIED
The pipeline state isn't specified.
- StateResuming
- STATE_RESUMING
The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- StateActive
- STATE_ACTIVE
The pipeline is actively running.
- StateStopping
- STATE_STOPPING
The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- StateArchived
- STATE_ARCHIVED
The pipeline has been stopped. This is a terminal state and cannot be undone.
- StatePaused
- STATE_PAUSED
The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- STATE_UNSPECIFIED
- STATE_UNSPECIFIED
The pipeline state isn't specified.
- STATE_RESUMING
- STATE_RESUMING
The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- STATE_ACTIVE
- STATE_ACTIVE
The pipeline is actively running.
- STATE_STOPPING
- STATE_STOPPING
The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- STATE_ARCHIVED
- STATE_ARCHIVED
The pipeline has been stopped. This is a terminal state and cannot be undone.
- STATE_PAUSED
- STATE_PAUSED
The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- "STATE_UNSPECIFIED"
- STATE_UNSPECIFIED
The pipeline state isn't specified.
- "STATE_RESUMING"
- STATE_RESUMING
The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- "STATE_ACTIVE"
- STATE_ACTIVE
The pipeline is actively running.
- "STATE_STOPPING"
- STATE_STOPPING
The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- "STATE_ARCHIVED"
- STATE_ARCHIVED
The pipeline has been stopped. This is a terminal state and cannot be undone.
- "STATE_PAUSED"
- STATE_PAUSED
The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
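As noted in the Inputs section, state transitions are requested by setting this field rather than through a separate call, and only stopping, paused, and resuming can be requested. A minimal sketch of pausing a batch pipeline (placeholder names throughout):

import * as google_native from "@pulumi/google-native";

// Minimal sketch: request a pause by setting state to STATE_PAUSED.
// STATE_ARCHIVED is terminal and is reached via stopping, not set directly.
const paused = new google_native.datapipelines.v1.Pipeline("paused-pipeline", {
    displayName: "paused_pipeline",
    type: "PIPELINE_TYPE_BATCH",
    state: "STATE_PAUSED",
    workload: {
        dataflowFlexTemplateRequest: {
            projectId: "my-project", // placeholder
            location: "us-central1",
            launchParameter: { jobName: "example-job" },
        },
    },
});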
PipelineType
- PipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED
The pipeline type isn't specified.
- PipelineTypeBatch
- PIPELINE_TYPE_BATCH
A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypeStreaming
- PIPELINE_TYPE_STREAMING
A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PipelineTypePipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED
The pipeline type isn't specified.
- PipelineTypePipelineTypeBatch
- PIPELINE_TYPE_BATCH
A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypePipelineTypeStreaming
- PIPELINE_TYPE_STREAMING
A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED
The pipeline type isn't specified.
- PipelineTypeBatch
- PIPELINE_TYPE_BATCH
A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypeStreaming
- PIPELINE_TYPE_STREAMING
A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED
The pipeline type isn't specified.
- PipelineTypeBatch
- PIPELINE_TYPE_BATCH
A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypeStreaming
- PIPELINE_TYPE_STREAMING
A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PIPELINE_TYPE_UNSPECIFIED
- PIPELINE_TYPE_UNSPECIFIED
The pipeline type isn't specified.
- PIPELINE_TYPE_BATCH
- PIPELINE_TYPE_BATCH
A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PIPELINE_TYPE_STREAMING
- PIPELINE_TYPE_STREAMING
A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- "PIPELINE_TYPE_UNSPECIFIED"
- PIPELINE_TYPE_UNSPECIFIED
The pipeline type isn't specified.
- "PIPELINE_TYPE_BATCH"
- PIPELINE_TYPE_BATCH
A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- "PIPELINE_TYPE_STREAMING"
- PIPELINE_TYPE_STREAMING
A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
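To illustrate the distinction (same placeholder assumptions as the sketches above), a streaming pipeline omits scheduling entirely, since its job is created together with the pipeline:

import * as google_native from "@pulumi/google-native";

// Minimal sketch: a streaming pipeline. No scheduleInfo applies; the linked
// job starts when the pipeline is created and runs until manually terminated.
const streaming = new google_native.datapipelines.v1.Pipeline("streaming-pipeline", {
    displayName: "streaming_pipeline",
    type: "PIPELINE_TYPE_STREAMING",
    state: "STATE_ACTIVE",
    workload: {
        dataflowFlexTemplateRequest: {
            projectId: "my-project",                                          // placeholder
            location: "us-central1",
            launchParameter: {
                jobName: "example-streaming-job",
                containerSpecGcsPath: "gs://my-bucket/templates/streaming.json", // placeholder
            },
        },
    },
});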
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0