Google Native

Pulumi Official
Package maintained by Pulumi
v0.20.0 published on Monday, Jun 6, 2022 by Pulumi

getPipeline

Looks up a single pipeline. Returns a “NOT_FOUND” error if no such pipeline exists. Returns a “FORBIDDEN” error if the caller doesn’t have permission to access it.

Using getPipeline

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getPipeline(args: GetPipelineArgs, opts?: InvokeOptions): Promise<GetPipelineResult>
function getPipelineOutput(args: GetPipelineOutputArgs, opts?: InvokeOptions): Output<GetPipelineResult>
def get_pipeline(location: Optional[str] = None,
                 pipeline_id: Optional[str] = None,
                 project: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetPipelineResult
def get_pipeline_output(location: Optional[pulumi.Input[str]] = None,
                        pipeline_id: Optional[pulumi.Input[str]] = None,
                        project: Optional[pulumi.Input[str]] = None,
                        opts: Optional[InvokeOptions] = None) -> Output[GetPipelineResult]
func LookupPipeline(ctx *Context, args *LookupPipelineArgs, opts ...InvokeOption) (*LookupPipelineResult, error)
func LookupPipelineOutput(ctx *Context, args *LookupPipelineOutputArgs, opts ...InvokeOption) LookupPipelineResultOutput

> Note: This function is named LookupPipeline in the Go SDK.

public static class GetPipeline 
{
    public static Task<GetPipelineResult> InvokeAsync(GetPipelineArgs args, InvokeOptions? opts = null)
    public static Output<GetPipelineResult> Invoke(GetPipelineInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetPipelineResult> getPipeline(GetPipelineArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
Fn::Invoke:
  Function: google-native:datapipelines/v1:getPipeline
  Arguments:
    # Arguments dictionary
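For example, a filled-in YAML invocation might look like the following (the project, location, and pipeline values are placeholders):

```yaml
variables:
  pipeline:
    Fn::Invoke:
      Function: google-native:datapipelines/v1:getPipeline
      Arguments:
        project: my-project        # placeholder
        location: us-central1     # placeholder
        pipelineId: my-pipeline   # placeholder
```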

The following arguments are supported:

C#
Location string
PipelineId string
Project string

Go
Location string
PipelineId string
Project string

Java
location String
pipelineId String
project String

TypeScript
location string
pipelineId string
project string

YAML
location String
pipelineId String
project String
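Together, the three arguments identify a single pipeline resource. As an illustration, the fully-qualified name the service uses (see the Name output property below) can be assembled from them; this is a plain-Python sketch, and the helper name is ours, not part of any SDK:

```python
def pipeline_resource_name(project: str, location: str, pipeline_id: str) -> str:
    """Assemble the documented fully-qualified pipeline name:
    projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID."""
    return f"projects/{project}/locations/{location}/pipelines/{pipeline_id}"

print(pipeline_resource_name("my-project", "us-central1", "my-pipeline"))
# → projects/my-project/locations/us-central1/pipelines/my-pipeline
```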

getPipeline Result

The following output properties are available:

C#

CreateTime string

Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.

DisplayName string

The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).

JobCount int

Number of jobs.

LastUpdateTime string

Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.

Name string

The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.

PipelineSources Dictionary<string, string>

Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.

ScheduleInfo Pulumi.GoogleNative.Datapipelines.V1.Outputs.GoogleCloudDatapipelinesV1ScheduleSpecResponse

Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.

SchedulerServiceAccountEmail string

Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.

State string

The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.

Type string

The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.

Workload Pulumi.GoogleNative.Datapipelines.V1.Outputs.GoogleCloudDatapipelinesV1WorkloadResponse

Workload information for creating new jobs.

Go

CreateTime string

Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.

DisplayName string

The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).

JobCount int

Number of jobs.

LastUpdateTime string

Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.

Name string

The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.

PipelineSources map[string]string

Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.

ScheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecResponse

Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.

SchedulerServiceAccountEmail string

Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.

State string

The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.

Type string

The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.

Workload GoogleCloudDatapipelinesV1WorkloadResponse

Workload information for creating new jobs.

Java

createTime String

Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.

displayName String

The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).

jobCount Integer

Number of jobs.

lastUpdateTime String

Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.

name String

The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.

pipelineSources Map<String,String>

Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.

scheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecResponse

Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.

schedulerServiceAccountEmail String

Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.

state String

The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.

type String

The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.

workload GoogleCloudDatapipelinesV1WorkloadResponse

Workload information for creating new jobs.

TypeScript

createTime string

Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.

displayName string

The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).

jobCount number

Number of jobs.

lastUpdateTime string

Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.

name string

The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.

pipelineSources {[key: string]: string}

Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.

scheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecResponse

Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.

schedulerServiceAccountEmail string

Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.

state string

The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.

type string

The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.

workload GoogleCloudDatapipelinesV1WorkloadResponse

Workload information for creating new jobs.

Python

create_time str

Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.

display_name str

The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).

job_count int

Number of jobs.

last_update_time str

Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.

name str

The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.

pipeline_sources Mapping[str, str]

Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.

schedule_info GoogleCloudDatapipelinesV1ScheduleSpecResponse

Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.

scheduler_service_account_email str

Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.

state str

The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.

type str

The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.

workload GoogleCloudDatapipelinesV1WorkloadResponse

Workload information for creating new jobs.

YAML

createTime String

Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.

displayName String

The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).

jobCount Number

Number of jobs.

lastUpdateTime String

Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.

name String

The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.

pipelineSources Map<String>

Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.

scheduleInfo Property Map

Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.

schedulerServiceAccountEmail String

Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.

state String

The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.

type String

The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.

workload Property Map

Workload information for creating new jobs.
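The DisplayName constraint above (letters, numbers, hyphens, and underscores only) can be checked client-side before relying on it. A minimal sketch; the function is illustrative and not part of any SDK:

```python
import re

# Only letters, numbers, hyphens, and underscores are allowed, per the
# DisplayName description above.
_DISPLAY_NAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_display_name(name: str) -> bool:
    """Return True if `name` satisfies the documented character set."""
    return bool(_DISPLAY_NAME_RE.fullmatch(name))
```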

Supporting Types

GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse

C#

AdditionalExperiments List<string>

Additional experiment flags for the job.

AdditionalUserLabels Dictionary<string, string>

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

EnableStreamingEngine bool

Whether to enable Streaming Engine for the job.

FlexrsGoal string

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

IpConfiguration string

Configuration for VM IPs.

KmsKeyName string

Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID

MachineType string

The machine type to use for the job. Defaults to the value from the template if not specified.

MaxWorkers int

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

Network string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

NumWorkers int

The initial number of Compute Engine instances for the job.

ServiceAccountEmail string

The email address of the service account to run the job as.

Subnetwork string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

TempLocation string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

WorkerRegion string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.

WorkerZone string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

Zone string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

Go

AdditionalExperiments []string

Additional experiment flags for the job.

AdditionalUserLabels map[string]string

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

EnableStreamingEngine bool

Whether to enable Streaming Engine for the job.

FlexrsGoal string

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

IpConfiguration string

Configuration for VM IPs.

KmsKeyName string

Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID

MachineType string

The machine type to use for the job. Defaults to the value from the template if not specified.

MaxWorkers int

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

Network string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

NumWorkers int

The initial number of Compute Engine instances for the job.

ServiceAccountEmail string

The email address of the service account to run the job as.

Subnetwork string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

TempLocation string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

WorkerRegion string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.

WorkerZone string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

Zone string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

Java

additionalExperiments List<String>

Additional experiment flags for the job.

additionalUserLabels Map<String,String>

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

enableStreamingEngine Boolean

Whether to enable Streaming Engine for the job.

flexrsGoal String

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

ipConfiguration String

Configuration for VM IPs.

kmsKeyName String

Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID

machineType String

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers Integer

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network String

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers Integer

The initial number of Compute Engine instances for the job.

serviceAccountEmail String

The email address of the service account to run the job as.

subnetwork String

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation String

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion String

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.

workerZone String

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone String

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

TypeScript

additionalExperiments string[]

Additional experiment flags for the job.

additionalUserLabels {[key: string]: string}

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

enableStreamingEngine boolean

Whether to enable Streaming Engine for the job.

flexrsGoal string

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

ipConfiguration string

Configuration for VM IPs.

kmsKeyName string

Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID

machineType string

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers number

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers number

The initial number of Compute Engine instances for the job.

serviceAccountEmail string

The email address of the service account to run the job as.

subnetwork string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.

workerZone string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

Python

additional_experiments Sequence[str]

Additional experiment flags for the job.

additional_user_labels Mapping[str, str]

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

enable_streaming_engine bool

Whether to enable Streaming Engine for the job.

flexrs_goal str

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

ip_configuration str

Configuration for VM IPs.

kms_key_name str

Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID

machine_type str

The machine type to use for the job. Defaults to the value from the template if not specified.

max_workers int

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network str

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

num_workers int

The initial number of Compute Engine instances for the job.

service_account_email str

The email address of the service account to run the job as.

subnetwork str

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

temp_location str

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

worker_region str

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.

worker_zone str

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone str

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

YAML

additionalExperiments List<String>

Additional experiment flags for the job.

additionalUserLabels Map<String>

Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

enableStreamingEngine Boolean

Whether to enable Streaming Engine for the job.

flexrsGoal String

Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs

ipConfiguration String

Configuration for VM IPs.

kmsKeyName String

Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION_ID/keyRings/KEY_RING_ID/cryptoKeys/KEY_ID

machineType String

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers Number

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network String

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers Number

The initial number of Compute Engine instances for the job.

serviceAccountEmail String

The email address of the service account to run the job as.

subnetwork String

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation String

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion String

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.

workerZone String

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone String

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
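The worker_region and worker_zone fields above are documented as mutually exclusive. A client-side check for that rule can be sketched in plain Python (the helper is illustrative, not part of any SDK):

```python
from typing import Optional

def check_worker_placement(worker_region: Optional[str],
                           worker_zone: Optional[str]) -> None:
    """Enforce the documented rule that worker_region and worker_zone
    are mutually exclusive; raise ValueError if both are set."""
    if worker_region and worker_zone:
        raise ValueError("worker_region and worker_zone are mutually exclusive")
```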

GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse

ContainerSpecGcsPath string

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse

The runtime environment for the Flex Template job.

JobName string

The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

LaunchOptions Dictionary<string, string>

Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

Parameters Dictionary<string, string>

The parameters for the Flex Template. Example: {"num_workers":"5"}

TransformNameMappings Dictionary<string, string>

Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}

Update bool

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.

ContainerSpecGcsPath string

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

Environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse

The runtime environment for the Flex Template job.

JobName string

The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

LaunchOptions map[string]string

Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

Parameters map[string]string

The parameters for the Flex Template. Example: {"num_workers":"5"}

TransformNameMappings map[string]string

Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}

Update bool

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.

containerSpecGcsPath String

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse

The runtime environment for the Flex Template job.

jobName String

The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

launchOptions Map<String,String>

Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

parameters Map<String,String>

The parameters for the Flex Template. Example: {"num_workers":"5"}

transformNameMappings Map<String,String>

Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}

update Boolean

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.

containerSpecGcsPath string

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse

The runtime environment for the Flex Template job.

jobName string

The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

launchOptions {[key: string]: string}

Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

parameters {[key: string]: string}

The parameters for the Flex Template. Example: {"num_workers":"5"}

transformNameMappings {[key: string]: string}

Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}

update boolean

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.

container_spec_gcs_path str

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse

The runtime environment for the Flex Template job.

job_name str

The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

launch_options Mapping[str, str]

Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

parameters Mapping[str, str]

The parameters for the Flex Template. Example: {"num_workers":"5"}

transform_name_mappings Mapping[str, str]

Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}

update bool

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.

containerSpecGcsPath String

Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.

environment Property Map

The runtime environment for the Flex Template job.

jobName String

The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.

launchOptions Map<String>

Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.

parameters Map<String>

The parameters for the Flex Template. Example: {"num_workers":"5"}

transformNameMappings Map<String>

Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}

update Boolean

Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
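Taken together, these fields make up the Flex Template launch parameter carried inside a pipeline's workload. A minimal sketch of what that payload might look like as a plain JSON body (field names come from the listing above; all concrete values such as the bucket and job name are hypothetical):

```python
import json

# Hypothetical Flex Template launch parameter. Field names follow
# GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter; values are made up.
launch_parameter = {
    "containerSpecGcsPath": "gs://my-bucket/templates/spec.json",
    "jobName": "nightly-word-count",
    "parameters": {"num_workers": "5"},   # job parameters, not launch options
    "launchOptions": {},                  # common cross-template options only
    "update": False,                      # True only when updating a running streaming job
    "environment": {"maxWorkers": 10, "tempLocation": "gs://my-bucket/tmp"},
}

body = json.dumps(launch_parameter, sort_keys=True)
```

Note that job parameters go in `parameters`, while `launchOptions` is reserved for the common cross-language launch settings, as the descriptions above state.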

GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse

LaunchParameter Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse

Parameter to launch a job from a Flex Template.

Location string

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.

Project string

The ID of the Cloud Platform project that the job belongs to.

ValidateOnly bool

If true, the request is validated but not actually executed. Defaults to false.

LaunchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse

Parameter to launch a job from a Flex Template.

Location string

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.

Project string

The ID of the Cloud Platform project that the job belongs to.

ValidateOnly bool

If true, the request is validated but not actually executed. Defaults to false.

launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse

Parameter to launch a job from a Flex Template.

location String

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.

project String

The ID of the Cloud Platform project that the job belongs to.

validateOnly Boolean

If true, the request is validated but not actually executed. Defaults to false.

launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse

Parameter to launch a job from a Flex Template.

location string

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.

project string

The ID of the Cloud Platform project that the job belongs to.

validateOnly boolean

If true, the request is validated but not actually executed. Defaults to false.

launch_parameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse

Parameter to launch a job from a Flex Template.

location str

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.

project str

The ID of the Cloud Platform project that the job belongs to.

validate_only bool

If true, the request is validated but not actually executed. Defaults to false.

launchParameter Property Map

Parameter to launch a job from a Flex Template.

location String

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.

project String

The ID of the Cloud Platform project that the job belongs to.

validateOnly Boolean

If true, the request is validated but not actually executed. Defaults to false.

GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse

Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse

The runtime environment for the job.

JobName string

The job name to use for the created job.

Parameters Dictionary<string, string>

The runtime parameters to pass to the job.

TransformNameMapping Dictionary<string, string>

Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.

Update bool

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

Environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse

The runtime environment for the job.

JobName string

The job name to use for the created job.

Parameters map[string]string

The runtime parameters to pass to the job.

TransformNameMapping map[string]string

Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.

Update bool

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse

The runtime environment for the job.

jobName String

The job name to use for the created job.

parameters Map<String,String>

The runtime parameters to pass to the job.

transformNameMapping Map<String,String>

Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.

update Boolean

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse

The runtime environment for the job.

jobName string

The job name to use for the created job.

parameters {[key: string]: string}

The runtime parameters to pass to the job.

transformNameMapping {[key: string]: string}

Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.

update boolean

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse

The runtime environment for the job.

job_name str

The job name to use for the created job.

parameters Mapping[str, str]

The runtime parameters to pass to the job.

transform_name_mapping Mapping[str, str]

Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.

update bool

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

environment Property Map

The runtime environment for the job.

jobName String

The job name to use for the created job.

parameters Map<String>

The runtime parameters to pass to the job.

transformNameMapping Map<String>

Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.

update Boolean

If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.

GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse

GcsPath string

A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.

LaunchParameters Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse

The parameters of the template to launch. This should be part of the body of the POST request.

Location string

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.

Project string

The ID of the Cloud Platform project that the job belongs to.

ValidateOnly bool

If true, the request is validated but not actually executed. Defaults to false.

GcsPath string

A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.

LaunchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse

The parameters of the template to launch. This should be part of the body of the POST request.

Location string

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.

Project string

The ID of the Cloud Platform project that the job belongs to.

ValidateOnly bool

If true, the request is validated but not actually executed. Defaults to false.

gcsPath String

A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.

launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse

The parameters of the template to launch. This should be part of the body of the POST request.

location String

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.

project String

The ID of the Cloud Platform project that the job belongs to.

validateOnly Boolean

If true, the request is validated but not actually executed. Defaults to false.

gcsPath string

A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.

launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse

The parameters of the template to launch. This should be part of the body of the POST request.

location string

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.

project string

The ID of the Cloud Platform project that the job belongs to.

validateOnly boolean

If true, the request is validated but not actually executed. Defaults to false.

gcs_path str

A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.

launch_parameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse

The parameters of the template to launch. This should be part of the body of the POST request.

location str

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.

project str

The ID of the Cloud Platform project that the job belongs to.

validate_only bool

If true, the request is validated but not actually executed. Defaults to false.

gcsPath String

A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.

launchParameters Property Map

The parameters of the template to launch. This should be part of the body of the POST request.

location String

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.

project String

The ID of the Cloud Platform project that the job belongs to.

validateOnly Boolean

If true, the request is validated but not actually executed. Defaults to false.
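A launch request for a classic template wraps the launch parameters together with the routing fields above. A minimal sketch of such a request as a plain dictionary (field names from the listing above; the project, bucket, and job name are illustrative only):

```python
# Hypothetical classic-template launch request. Field names follow
# GoogleCloudDatapipelinesV1LaunchTemplateRequest; values are made up.
request = {
    "gcsPath": "gs://dataflow-templates/latest/Word_Count",
    "location": "us-central1",   # regional endpoint to direct the request to
    "project": "my-project",
    "validateOnly": True,        # validate the request without launching a job
    "launchParameters": {        # sent in the body of the POST request
        "jobName": "word-count-test",
        "parameters": {"inputFile": "gs://my-bucket/input.txt"},
        "update": False,
    },
}
```

Setting `validateOnly` to true, as here, checks the request without actually creating a job.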

GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse

AdditionalExperiments List<string>

Additional experiment flags for the job.

AdditionalUserLabels Dictionary<string, string>

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

BypassTempDirValidation bool

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

EnableStreamingEngine bool

Whether to enable Streaming Engine for the job.

IpConfiguration string

Configuration for VM IPs.

KmsKeyName string

Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

MachineType string

The machine type to use for the job. Defaults to the value from the template if not specified.

MaxWorkers int

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

Network string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

NumWorkers int

The initial number of Compute Engine instances for the job.

ServiceAccountEmail string

The email address of the service account to run the job as.

Subnetwork string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

TempLocation string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

WorkerRegion string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

WorkerZone string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

Zone string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

AdditionalExperiments []string

Additional experiment flags for the job.

AdditionalUserLabels map[string]string

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

BypassTempDirValidation bool

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

EnableStreamingEngine bool

Whether to enable Streaming Engine for the job.

IpConfiguration string

Configuration for VM IPs.

KmsKeyName string

Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

MachineType string

The machine type to use for the job. Defaults to the value from the template if not specified.

MaxWorkers int

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

Network string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

NumWorkers int

The initial number of Compute Engine instances for the job.

ServiceAccountEmail string

The email address of the service account to run the job as.

Subnetwork string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

TempLocation string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

WorkerRegion string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

WorkerZone string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

Zone string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

additionalExperiments List<String>

Additional experiment flags for the job.

additionalUserLabels Map<String,String>

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

bypassTempDirValidation Boolean

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

enableStreamingEngine Boolean

Whether to enable Streaming Engine for the job.

ipConfiguration String

Configuration for VM IPs.

kmsKeyName String

Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

machineType String

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers Integer

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network String

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers Integer

The initial number of Compute Engine instances for the job.

serviceAccountEmail String

The email address of the service account to run the job as.

subnetwork String

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation String

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion String

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

workerZone String

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone String

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

additionalExperiments string[]

Additional experiment flags for the job.

additionalUserLabels {[key: string]: string}

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

bypassTempDirValidation boolean

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

enableStreamingEngine boolean

Whether to enable Streaming Engine for the job.

ipConfiguration string

Configuration for VM IPs.

kmsKeyName string

Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

machineType string

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers number

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network string

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers number

The initial number of Compute Engine instances for the job.

serviceAccountEmail string

The email address of the service account to run the job as.

subnetwork string

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation string

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion string

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

workerZone string

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone string

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

additional_experiments Sequence[str]

Additional experiment flags for the job.

additional_user_labels Mapping[str, str]

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

bypass_temp_dir_validation bool

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

enable_streaming_engine bool

Whether to enable Streaming Engine for the job.

ip_configuration str

Configuration for VM IPs.

kms_key_name str

Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

machine_type str

The machine type to use for the job. Defaults to the value from the template if not specified.

max_workers int

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network str

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

num_workers int

The initial number of Compute Engine instances for the job.

service_account_email str

The email address of the service account to run the job as.

subnetwork str

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

temp_location str

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

worker_region str

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

worker_zone str

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone str

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.

additionalExperiments List<String>

Additional experiment flags for the job.

additionalUserLabels Map<String>

Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.

bypassTempDirValidation Boolean

Whether to bypass the safety checks for the job's temporary directory. Use with caution.

enableStreamingEngine Boolean

Whether to enable Streaming Engine for the job.

ipConfiguration String

Configuration for VM IPs.

kmsKeyName String

Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

machineType String

The machine type to use for the job. Defaults to the value from the template if not specified.

maxWorkers Number

The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

network String

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

numWorkers Number

The initial number of Compute Engine instances for the job.

serviceAccountEmail String

The email address of the service account to run the job as.

subnetwork String

Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.

tempLocation String

The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

workerRegion String

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

workerZone String

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.

zone String

The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
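Several of the runtime-environment constraints above (the 1–1000 bound on maxWorkers, the gs:// requirement on tempLocation, the mutual exclusion of workerRegion and workerZone) can be checked before launching a job. The sketch below is illustrative only: the `validate_runtime_environment` helper is hypothetical, not part of pulumi-google-native, and the keys mirror the snake_case names used by the Python SDK.

```python
# Hypothetical pre-flight check for the runtime-environment constraints
# documented above; not part of the pulumi-google-native SDK.

def validate_runtime_environment(env: dict) -> list:
    """Return a list of constraint violations for a runtime-environment dict."""
    errors = []

    # maxWorkers must be between 1 and 1000.
    max_workers = env.get("max_workers")
    if max_workers is not None and not (1 <= max_workers <= 1000):
        errors.append("max_workers must be between 1 and 1000")

    # tempLocation must be a Cloud Storage URL beginning with gs://.
    temp_location = env.get("temp_location")
    if temp_location is not None and not temp_location.startswith("gs://"):
        errors.append("temp_location must be a Cloud Storage URL beginning with gs://")

    # workerRegion and workerZone are mutually exclusive.
    if env.get("worker_region") and env.get("worker_zone"):
        errors.append("worker_region and worker_zone are mutually exclusive")

    return errors
```

For example, `validate_runtime_environment({"max_workers": 2000, "temp_location": "tmp"})` reports both the worker-count and the temp-location violations.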

GoogleCloudDatapipelinesV1ScheduleSpecResponse

NextJobTime string

When the next Scheduler job is going to run.

Schedule string

Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.

TimeZone string

Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.

NextJobTime string

When the next Scheduler job is going to run.

Schedule string

Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.

TimeZone string

Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.

nextJobTime String

When the next Scheduler job is going to run.

schedule String

Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.

timeZone String

Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.

nextJobTime string

When the next Scheduler job is going to run.

schedule string

Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.

timeZone string

Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.

next_job_time str

When the next Scheduler job is going to run.

schedule str

Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.

time_zone str

Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.

nextJobTime String

When the next Scheduler job is going to run.

schedule String

Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.

timeZone String

Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
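The `schedule` field above uses the unix-cron format that Cloud Scheduler accepts: five whitespace-separated fields (minute, hour, day of month, month, day of week). A minimal shape check, assuming only the common field forms (`*`, numbers, ranges, lists, and steps) and not validating numeric ranges, might look like this; the helper is illustrative and not part of the SDK.

```python
import re

# Each cron field may be "*", a number, a range (a-b), a comma list of those,
# optionally followed by a step (/n). This covers the common forms only.
_FIELD = re.compile(r"^(\*|\d+(-\d+)?)(,(\*|\d+(-\d+)?))*(/\d+)?$")

def is_unix_cron(schedule: str) -> bool:
    """Shape-check a unix-cron string: exactly five syntactically valid fields."""
    fields = schedule.split()
    return len(fields) == 5 and all(_FIELD.match(f) for f in fields)
```

For example, `"0 */6 * * *"` (every six hours) and `"0 9 * * 1-5"` (09:00 on weekdays) pass, while a prose description like `"every 6 hours"` does not.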

GoogleCloudDatapipelinesV1WorkloadResponse

DataflowFlexTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the flex launch API.

DataflowLaunchTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the standard launch API.

DataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the flex launch API.

DataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the standard launch API.

dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the flex launch API.

dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the standard launch API.

dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the flex launch API.

dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the standard launch API.

dataflow_flex_template_request GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the flex launch API.

dataflow_launch_template_request GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse

Template information and additional parameters needed to launch a Dataflow job using the standard launch API.

dataflowFlexTemplateRequest Property Map

Template information and additional parameters needed to launch a Dataflow job using the flex launch API.

dataflowLaunchTemplateRequest Property Map

Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
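A workload carries one of the two launch requests above: a flex-template request or a standard (classic) template request. The hypothetical helper below, assuming exactly one of the two fields is populated on a snake_case dict representation, reports which Dataflow launch API applies; it is a sketch, not SDK code.

```python
# Hypothetical helper: decide which Dataflow launch API a workload dict uses,
# assuming exactly one of the two request fields is set.

def workload_launch_api(workload: dict) -> str:
    flex = workload.get("dataflow_flex_template_request")
    standard = workload.get("dataflow_launch_template_request")
    if flex is not None and standard is not None:
        raise ValueError("workload sets both flex and standard template requests")
    if flex is not None:
        return "flex"
    if standard is not None:
        return "standard"
    raise ValueError("workload sets neither template request")
```

For example, a workload dict containing only `dataflow_flex_template_request` resolves to `"flex"`.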

Package Details

Repository
https://github.com/pulumi/pulumi-google-native
License
Apache-2.0