
gcp.dataflow.FlexTemplateJob

Google Cloud Classic v6.66.0 published on Monday, Sep 18, 2023 by Pulumi

    Note on “destroy” / “apply”

    There are many types of Dataflow jobs. Some Dataflow jobs run constantly, reading new data from, for example, a GCS bucket and outputting data continuously. Some jobs process a set amount of data and then terminate. All jobs can fail while running due to programming errors or other issues. In this way, Dataflow jobs are different from most other Google Cloud resources managed by this provider.

    The Dataflow resource is considered ‘existing’ while it is in a nonterminal state. If it reaches a terminal state (e.g. ‘FAILED’, ‘COMPLETE’, ‘CANCELLED’), it will be recreated on the next ‘apply’. This is as expected for jobs which run continuously, but may surprise users who use this resource for other kinds of Dataflow jobs.

    A Dataflow job which is ‘destroyed’ may be “cancelled” or “drained”. If “cancelled”, the job terminates - any data written remains where it is, but no new data will be processed. If “drained”, no new data will enter the pipeline, but any data currently in the pipeline will finish being processed. The default is “cancelled”, but if on_delete is set to "drain" in the configuration, you may experience a long wait for your pulumi destroy to complete.

    You can potentially short-circuit the wait by setting skip_wait_on_job_termination to true, but beware that unless you take active steps to ensure that the job name parameter changes between instances, the name will conflict and the launch of the new job will fail. One way to do this is with a random_id resource, for example:

    import * as pulumi from "@pulumi/pulumi";
    import * as gcp from "@pulumi/gcp";
    import * as random from "@pulumi/random";
    
    const config = new pulumi.Config();
    const region = config.require("region");
    const bigDataJobSubscriptionId = config.get("bigDataJobSubscriptionId") || "projects/myproject/subscriptions/messages";
    const bigDataJobNameSuffix = new random.RandomId("bigDataJobNameSuffix", {
        byteLength: 4,
        keepers: {
            region: region,
            subscription_id: bigDataJobSubscriptionId,
        },
    });
    const bigDataJob = new gcp.dataflow.FlexTemplateJob("bigDataJob", {
        region: region,
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        skipWaitOnJobTermination: true,
        parameters: {
            inputSubscription: bigDataJobSubscriptionId,
        },
    });
    
    import pulumi
    import pulumi_gcp as gcp
    import pulumi_random as random
    
    config = pulumi.Config()
    region = config.require("region")
    big_data_job_subscription_id = config.get("bigDataJobSubscriptionId")
    if big_data_job_subscription_id is None:
        big_data_job_subscription_id = "projects/myproject/subscriptions/messages"
    big_data_job_name_suffix = random.RandomId("bigDataJobNameSuffix",
        byte_length=4,
        keepers={
            "region": var["region"],
            "subscription_id": big_data_job_subscription_id,
        })
    big_data_job = gcp.dataflow.FlexTemplateJob("bigDataJob",
        region=var["region"],
        container_spec_gcs_path="gs://my-bucket/templates/template.json",
        skip_wait_on_job_termination=True,
        parameters={
            "inputSubscription": big_data_job_subscription_id,
        })
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Gcp = Pulumi.Gcp;
    using Random = Pulumi.Random;
    
    return await Deployment.RunAsync(() => 
    {
        var config = new Config();
        var region = config.Require("region");
        var bigDataJobSubscriptionId = config.Get("bigDataJobSubscriptionId") ?? "projects/myproject/subscriptions/messages";
        var bigDataJobNameSuffix = new Random.RandomId("bigDataJobNameSuffix", new()
        {
            ByteLength = 4,
            Keepers = 
            {
                { "region", @var.Region },
                { "subscription_id", bigDataJobSubscriptionId },
            },
        });
    
        var bigDataJob = new Gcp.Dataflow.FlexTemplateJob("bigDataJob", new()
        {
            Region = region,
            ContainerSpecGcsPath = "gs://my-bucket/templates/template.json",
            SkipWaitOnJobTermination = true,
            Parameters = 
            {
                { "inputSubscription", bigDataJobSubscriptionId },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-gcp/sdk/v6/go/gcp/dataflow"
    	"github.com/pulumi/pulumi-random/sdk/v4/go/random"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		cfg := config.New(ctx, "")
    		bigDataJobSubscriptionId := "projects/myproject/subscriptions/messages"
    		if param := cfg.Get("bigDataJobSubscriptionId"); param != "" {
    			bigDataJobSubscriptionId = param
    		}
    		_, err := random.NewRandomId(ctx, "bigDataJobNameSuffix", &random.RandomIdArgs{
    			ByteLength: pulumi.Int(4),
    			Keepers: pulumi.AnyMap{
    				"region":          pulumi.Any(_var.Region),
    				"subscription_id": pulumi.String(bigDataJobSubscriptionId),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		_, err = dataflow.NewFlexTemplateJob(ctx, "bigDataJob", &dataflow.FlexTemplateJobArgs{
    			Region:                   pulumi.String(region),
    			ContainerSpecGcsPath:     pulumi.String("gs://my-bucket/templates/template.json"),
    			SkipWaitOnJobTermination: pulumi.Bool(true),
    			Parameters: pulumi.AnyMap{
    				"inputSubscription": pulumi.String(bigDataJobSubscriptionId),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.random.RandomId;
    import com.pulumi.random.RandomIdArgs;
    import com.pulumi.gcp.dataflow.FlexTemplateJob;
    import com.pulumi.gcp.dataflow.FlexTemplateJobArgs;
    import com.pulumi.resources.CustomResourceOptions;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            final var config = ctx.config();
            final var region = config.require("region");
            final var bigDataJobSubscriptionId = config.get("bigDataJobSubscriptionId").orElse("projects/myproject/subscriptions/messages");
            var bigDataJobNameSuffix = new RandomId("bigDataJobNameSuffix", RandomIdArgs.builder()        
                .byteLength(4)
                .keepers(Map.ofEntries(
                    Map.entry("region", var_.region()),
                    Map.entry("subscription_id", bigDataJobSubscriptionId)
                ))
                .build());
    
            var bigDataJob = new FlexTemplateJob("bigDataJob", FlexTemplateJobArgs.builder()        
                .region(region)
                .containerSpecGcsPath("gs://my-bucket/templates/template.json")
                .skipWaitOnJobTermination(true)
                .parameters(Map.of("inputSubscription", bigDataJobSubscriptionId))
                .build());
    
        }
    }
    
    configuration:
      region:
        type: string
      bigDataJobSubscriptionId:
        type: string
        default: projects/myproject/subscriptions/messages
    resources:
      bigDataJobNameSuffix:
        type: random:RandomId
        properties:
          byteLength: 4
          keepers:
            region: ${region}
            subscription_id: ${bigDataJobSubscriptionId}
      bigDataJob:
        type: gcp:dataflow:FlexTemplateJob
        properties:
          region: ${region}
          containerSpecGcsPath: gs://my-bucket/templates/template.json
          skipWaitOnJobTermination: true
          parameters:
            inputSubscription: ${bigDataJobSubscriptionId}
    
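    The examples above create the random suffix but do not show it being applied to the job name. As a minimal TypeScript sketch, continuing the TypeScript example above (the "dataflow-flextemplates-job-" name prefix is only an illustrative assumption), the suffix can be interpolated into the name argument so that replacement jobs never collide:

    const bigDataJobWithSuffix = new gcp.dataflow.FlexTemplateJob("bigDataJobWithSuffix", {
        // Interpolate the random suffix into the Dataflow job name; the suffix is
        // regenerated whenever one of the RandomId keepers changes.
        name: pulumi.interpolate`dataflow-flextemplates-job-${bigDataJobNameSuffix.dec}`,
        region: region,
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        skipWaitOnJobTermination: true,
        parameters: {
            inputSubscription: bigDataJobSubscriptionId,
        },
    });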

    Example Usage

    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Gcp = Pulumi.Gcp;
    
    return await Deployment.RunAsync(() => 
    {
        var bigDataJob = new Gcp.Dataflow.FlexTemplateJob("bigDataJob", new()
        {
            ContainerSpecGcsPath = "gs://my-bucket/templates/template.json",
            Parameters = 
            {
                { "inputSubscription", "messages" },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-gcp/sdk/v6/go/gcp/dataflow"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := dataflow.NewFlexTemplateJob(ctx, "bigDataJob", &dataflow.FlexTemplateJobArgs{
    			ContainerSpecGcsPath: pulumi.String("gs://my-bucket/templates/template.json"),
    			Parameters: pulumi.AnyMap{
    				"inputSubscription": pulumi.Any("messages"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.gcp.dataflow.FlexTemplateJob;
    import com.pulumi.gcp.dataflow.FlexTemplateJobArgs;
    import com.pulumi.resources.CustomResourceOptions;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var bigDataJob = new FlexTemplateJob("bigDataJob", FlexTemplateJobArgs.builder()        
                .containerSpecGcsPath("gs://my-bucket/templates/template.json")
                .parameters(Map.of("inputSubscription", "messages"))
                .build());
    
        }
    }
    
    import pulumi
    import pulumi_gcp as gcp
    
    big_data_job = gcp.dataflow.FlexTemplateJob("bigDataJob",
        container_spec_gcs_path="gs://my-bucket/templates/template.json",
        parameters={
            "inputSubscription": "messages",
        })
    
    import * as pulumi from "@pulumi/pulumi";
    import * as gcp from "@pulumi/gcp";
    
    const bigDataJob = new gcp.dataflow.FlexTemplateJob("bigDataJob", {
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        parameters: {
            inputSubscription: "messages",
        },
    });
    
    resources:
      bigDataJob:
        type: gcp:dataflow:FlexTemplateJob
        properties:
          containerSpecGcsPath: gs://my-bucket/templates/template.json
          parameters:
            inputSubscription: messages
    


    Create FlexTemplateJob Resource

    new FlexTemplateJob(name: string, args: FlexTemplateJobArgs, opts?: CustomResourceOptions);
    @overload
    def FlexTemplateJob(resource_name: str,
                        opts: Optional[ResourceOptions] = None,
                        additional_experiments: Optional[Sequence[str]] = None,
                        autoscaling_algorithm: Optional[str] = None,
                        container_spec_gcs_path: Optional[str] = None,
                        enable_streaming_engine: Optional[bool] = None,
                        ip_configuration: Optional[str] = None,
                        kms_key_name: Optional[str] = None,
                        labels: Optional[Mapping[str, Any]] = None,
                        launcher_machine_type: Optional[str] = None,
                        machine_type: Optional[str] = None,
                        max_workers: Optional[int] = None,
                        name: Optional[str] = None,
                        network: Optional[str] = None,
                        num_workers: Optional[int] = None,
                        on_delete: Optional[str] = None,
                        parameters: Optional[Mapping[str, Any]] = None,
                        project: Optional[str] = None,
                        region: Optional[str] = None,
                        sdk_container_image: Optional[str] = None,
                        service_account_email: Optional[str] = None,
                        skip_wait_on_job_termination: Optional[bool] = None,
                        staging_location: Optional[str] = None,
                        subnetwork: Optional[str] = None,
                        temp_location: Optional[str] = None,
                        transform_name_mapping: Optional[Mapping[str, Any]] = None)
    @overload
    def FlexTemplateJob(resource_name: str,
                        args: FlexTemplateJobArgs,
                        opts: Optional[ResourceOptions] = None)
    func NewFlexTemplateJob(ctx *Context, name string, args FlexTemplateJobArgs, opts ...ResourceOption) (*FlexTemplateJob, error)
    public FlexTemplateJob(string name, FlexTemplateJobArgs args, CustomResourceOptions? opts = null)
    public FlexTemplateJob(String name, FlexTemplateJobArgs args)
    public FlexTemplateJob(String name, FlexTemplateJobArgs args, CustomResourceOptions options)
    
    type: gcp:dataflow:FlexTemplateJob
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    
    name string
    The unique name of the resource.
    args FlexTemplateJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args FlexTemplateJobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args FlexTemplateJobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args FlexTemplateJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args FlexTemplateJobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
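
    For illustration, a minimal TypeScript sketch of passing both arguments and an options bag to the constructor (the protect option shown is just one example of a resource option):

    import * as gcp from "@pulumi/gcp";

    const job = new gcp.dataflow.FlexTemplateJob("bigDataJob", {
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        parameters: {
            inputSubscription: "messages",
        },
    }, {
        protect: true, // resource option: block deletion of the job until protection is removed
    });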

    FlexTemplateJob Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The FlexTemplateJob resource accepts the following input properties:

    ContainerSpecGcsPath string

    The GCS path to the Dataflow job Flex Template.


    AdditionalExperiments List<string>

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    AutoscalingAlgorithm string

    The algorithm to use for autoscaling

    EnableStreamingEngine bool

    Indicates if the job should use the streaming engine feature.

    IpConfiguration string

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    KmsKeyName string

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    Labels Dictionary<string, object>

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    LauncherMachineType string

    The machine type to use for launching the job. The default is n1-standard-1.

    MachineType string

    The machine type to use for the job.

    MaxWorkers int

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    Name string

    A unique name for the resource, required by Dataflow.

    Network string

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    NumWorkers int

    The initial number of Google Compute Engine instances for the job.

    OnDelete string

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    Parameters Dictionary<string, object>

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    Project string

    The project in which the resource belongs. If it is not provided, the provider project is used.

    Region string

    The region in which the created job should run.

    SdkContainerImage string

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    ServiceAccountEmail string

    The Service Account email used to create the job.

    SkipWaitOnJobTermination bool

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    StagingLocation string

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    Subnetwork string

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TempLocation string

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    TransformNameMapping Dictionary<string, object>

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    ContainerSpecGcsPath string

    The GCS path to the Dataflow job Flex Template.


    AdditionalExperiments []string

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    AutoscalingAlgorithm string

    The algorithm to use for autoscaling

    EnableStreamingEngine bool

    Indicates if the job should use the streaming engine feature.

    IpConfiguration string

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    KmsKeyName string

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    Labels map[string]interface{}

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    LauncherMachineType string

    The machine type to use for launching the job. The default is n1-standard-1.

    MachineType string

    The machine type to use for the job.

    MaxWorkers int

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    Name string

    A unique name for the resource, required by Dataflow.

    Network string

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    NumWorkers int

    The initial number of Google Compute Engine instances for the job.

    OnDelete string

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    Parameters map[string]interface{}

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    Project string

    The project in which the resource belongs. If it is not provided, the provider project is used.

    Region string

    The region in which the created job should run.

    SdkContainerImage string

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    ServiceAccountEmail string

    The Service Account email used to create the job.

    SkipWaitOnJobTermination bool

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    StagingLocation string

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    Subnetwork string

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TempLocation string

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    TransformNameMapping map[string]interface{}

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    containerSpecGcsPath String

    The GCS path to the Dataflow job Flex Template.


    additionalExperiments List<String>

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscalingAlgorithm String

    The algorithm to use for autoscaling

    enableStreamingEngine Boolean

    Indicates if the job should use the streaming engine feature.

    ipConfiguration String

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    kmsKeyName String

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels Map<String,Object>

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    launcherMachineType String

    The machine type to use for launching the job. The default is n1-standard-1.

    machineType String

    The machine type to use for the job.

    maxWorkers Integer

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name String

    A unique name for the resource, required by Dataflow.

    network String

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    numWorkers Integer

    The initial number of Google Compute Engine instances for the job.

    onDelete String

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters Map<String,Object>

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project String

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region String

    The region in which the created job should run.

    sdkContainerImage String

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    serviceAccountEmail String

    The Service Account email used to create the job.

    skipWaitOnJobTermination Boolean

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    stagingLocation String

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    subnetwork String

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    tempLocation String

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transformNameMapping Map<String,Object>

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    containerSpecGcsPath string

    The GCS path to the Dataflow job Flex Template.


    additionalExperiments string[]

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscalingAlgorithm string

    The algorithm to use for autoscaling

    enableStreamingEngine boolean

    Indicates if the job should use the streaming engine feature.

    ipConfiguration string

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    kmsKeyName string

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels {[key: string]: any}

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    launcherMachineType string

    The machine type to use for launching the job. The default is n1-standard-1.

    machineType string

    The machine type to use for the job.

    maxWorkers number

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name string

    A unique name for the resource, required by Dataflow.

    network string

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    numWorkers number

    The initial number of Google Compute Engine instances for the job.

    onDelete string

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters {[key: string]: any}

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project string

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region string

    The region in which the created job should run.

    sdkContainerImage string

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    serviceAccountEmail string

    The Service Account email used to create the job.

    skipWaitOnJobTermination boolean

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    stagingLocation string

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    subnetwork string

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    tempLocation string

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transformNameMapping {[key: string]: any}

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    container_spec_gcs_path str

    The GCS path to the Dataflow job Flex Template.


    additional_experiments Sequence[str]

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscaling_algorithm str

    The algorithm to use for autoscaling

    enable_streaming_engine bool

    Indicates if the job should use the streaming engine feature.

    ip_configuration str

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    kms_key_name str

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels Mapping[str, Any]

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    launcher_machine_type str

    The machine type to use for launching the job. The default is n1-standard-1.

    machine_type str

    The machine type to use for the job.

    max_workers int

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name str

    A unique name for the resource, required by Dataflow.

    network str

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    num_workers int

    The initial number of Google Compute Engine instances for the job.

    on_delete str

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters Mapping[str, Any]

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project str

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region str

    The region in which the created job should run.

    sdk_container_image str

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    service_account_email str

    The Service Account email used to create the job.

    skip_wait_on_job_termination bool

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    staging_location str

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    subnetwork str

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    temp_location str

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transform_name_mapping Mapping[str, Any]

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    containerSpecGcsPath String

    The GCS path to the Dataflow job Flex Template.


    additionalExperiments List<String>

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscalingAlgorithm String

    The algorithm to use for autoscaling

    enableStreamingEngine Boolean

    Indicates if the job should use the streaming engine feature.

    ipConfiguration String

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    kmsKeyName String

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels Map<Any>

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    launcherMachineType String

    The machine type to use for launching the job. The default is n1-standard-1.

    machineType String

    The machine type to use for the job.

    maxWorkers Number

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name String

    A unique name for the resource, required by Dataflow.

    network String

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    numWorkers Number

    The initial number of Google Compute Engine instances for the job.

    onDelete String

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters Map<Any>

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project String

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region String

    The region in which the created job should run.

    sdkContainerImage String

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    serviceAccountEmail String

    The Service Account email used to create the job.

    skipWaitOnJobTermination Boolean

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    stagingLocation String

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    subnetwork String

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    tempLocation String

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transformNameMapping Map<Any>

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.
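
    As a rough TypeScript sketch of how several of these optional inputs fit together (the bucket, subnetwork, label, and subscription values shown are placeholders, not defaults):

    import * as gcp from "@pulumi/gcp";

    const job = new gcp.dataflow.FlexTemplateJob("bigDataJob", {
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        region: "us-central1",
        maxWorkers: 10,
        machineType: "n1-standard-2",
        onDelete: "drain", // drain in-flight data instead of cancelling on destroy
        labels: {
            team: "data-eng",
        },
        subnetwork: "regions/us-central1/subnetworks/default",
        tempLocation: "gs://my-bucket/tmp",
        parameters: {
            inputSubscription: "projects/myproject/subscriptions/messages",
        },
    });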

    Outputs

    All input properties are implicitly available as output properties. Additionally, the FlexTemplateJob resource produces the following output properties:

    Id string

    The provider-assigned unique ID for this managed resource.

    JobId string

    The unique ID of this job.

    State string

    The current state of the resource, selected from the JobState enum

    Type string

    The type of this job, selected from the JobType enum.

    Id string

    The provider-assigned unique ID for this managed resource.

    JobId string

    The unique ID of this job.

    State string

    The current state of the resource, selected from the JobState enum

    Type string

    The type of this job, selected from the JobType enum.

    id String

    The provider-assigned unique ID for this managed resource.

    jobId String

    The unique ID of this job.

    state String

    The current state of the resource, selected from the JobState enum

    type String

    The type of this job, selected from the JobType enum.

    id string

    The provider-assigned unique ID for this managed resource.

    jobId string

    The unique ID of this job.

    state string

    The current state of the resource, selected from the JobState enum

    type string

    The type of this job, selected from the JobType enum.

    id str

    The provider-assigned unique ID for this managed resource.

    job_id str

    The unique ID of this job.

    state str

    The current state of the resource, selected from the JobState enum

    type str

    The type of this job, selected from the JobType enum.

    id String

    The provider-assigned unique ID for this managed resource.

    jobId String

    The unique ID of this job.

    state String

    The current state of the resource, selected from the JobState enum

    type String

    The type of this job, selected from the JobType enum.
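
    For example, a short TypeScript sketch exporting two of these output properties as stack outputs:

    import * as gcp from "@pulumi/gcp";

    const job = new gcp.dataflow.FlexTemplateJob("bigDataJob", {
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        parameters: {
            inputSubscription: "messages",
        },
    });

    // Output properties resolve once the job has been launched.
    export const dataflowJobId = job.jobId;
    export const dataflowJobState = job.state;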

    Look up Existing FlexTemplateJob Resource

    Get an existing FlexTemplateJob resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

    public static get(name: string, id: Input<ID>, state?: FlexTemplateJobState, opts?: CustomResourceOptions): FlexTemplateJob
    @staticmethod
    def get(resource_name: str,
            id: str,
            opts: Optional[ResourceOptions] = None,
            additional_experiments: Optional[Sequence[str]] = None,
            autoscaling_algorithm: Optional[str] = None,
            container_spec_gcs_path: Optional[str] = None,
            enable_streaming_engine: Optional[bool] = None,
            ip_configuration: Optional[str] = None,
            job_id: Optional[str] = None,
            kms_key_name: Optional[str] = None,
            labels: Optional[Mapping[str, Any]] = None,
            launcher_machine_type: Optional[str] = None,
            machine_type: Optional[str] = None,
            max_workers: Optional[int] = None,
            name: Optional[str] = None,
            network: Optional[str] = None,
            num_workers: Optional[int] = None,
            on_delete: Optional[str] = None,
            parameters: Optional[Mapping[str, Any]] = None,
            project: Optional[str] = None,
            region: Optional[str] = None,
            sdk_container_image: Optional[str] = None,
            service_account_email: Optional[str] = None,
            skip_wait_on_job_termination: Optional[bool] = None,
            staging_location: Optional[str] = None,
            state: Optional[str] = None,
            subnetwork: Optional[str] = None,
            temp_location: Optional[str] = None,
            transform_name_mapping: Optional[Mapping[str, Any]] = None,
            type: Optional[str] = None) -> FlexTemplateJob
    func GetFlexTemplateJob(ctx *Context, name string, id IDInput, state *FlexTemplateJobState, opts ...ResourceOption) (*FlexTemplateJob, error)
    public static FlexTemplateJob Get(string name, Input<string> id, FlexTemplateJobState? state, CustomResourceOptions? opts = null)
    public static FlexTemplateJob get(String name, Output<String> id, FlexTemplateJobState state, CustomResourceOptions options)
    Resource lookup is not supported in YAML
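
    For instance, a minimal TypeScript sketch of looking up an existing job by its provider-assigned ID (the ID shown is a placeholder; real job IDs come from the Cloud console or another stack's outputs):

    import * as gcp from "@pulumi/gcp";

    const existingJob = gcp.dataflow.FlexTemplateJob.get("existingBigDataJob", "2023-09-18_12_00_00-1234567890123456789");

    export const existingJobState = existingJob.state;
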
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    resource_name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    The following state arguments are supported:
    AdditionalExperiments List<string>

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    AutoscalingAlgorithm string

    The algorithm to use for autoscaling

    ContainerSpecGcsPath string

    The GCS path to the Dataflow job Flex Template.


    EnableStreamingEngine bool

    Indicates if the job should use the streaming engine feature.

    IpConfiguration string

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    JobId string

    The unique ID of this job.

    KmsKeyName string

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    Labels Dictionary<string, object>

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    LauncherMachineType string

    The machine type to use for launching the job. The default is n1-standard-1.

    MachineType string

    The machine type to use for the job.

    MaxWorkers int

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    Name string

    A unique name for the resource, required by Dataflow.

    Network string

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    NumWorkers int

    The initial number of Google Compute Engine instances for the job.

    OnDelete string

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    Parameters Dictionary<string, object>

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    Project string

    The project in which the resource belongs. If it is not provided, the provider project is used.

    Region string

    The region in which the created job should run.

    SdkContainerImage string

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    ServiceAccountEmail string

    The Service Account email used to create the job.

    SkipWaitOnJobTermination bool

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    StagingLocation string

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    State string

    The current state of the resource, selected from the JobState enum

    Subnetwork string

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TempLocation string

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    TransformNameMapping Dictionary<string, object>

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    Type string

    The type of this job, selected from the JobType enum.

    AdditionalExperiments []string

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    AutoscalingAlgorithm string

    The algorithm to use for autoscaling

    ContainerSpecGcsPath string

    The GCS path to the Dataflow job Flex Template.


    EnableStreamingEngine bool

    Indicates if the job should use the streaming engine feature.

    IpConfiguration string

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    JobId string

    The unique ID of this job.

    KmsKeyName string

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    Labels map[string]interface{}

    User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. Note: This field is marked as deprecated as the API does not currently support adding labels. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.

    LauncherMachineType string

    The machine type to use for launching the job. The default is n1-standard-1.

    MachineType string

    The machine type to use for the job.

    MaxWorkers int

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    Name string

    A unique name for the resource, required by Dataflow.

    Network string

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    NumWorkers int

    The initial number of Google Compute Engine instances for the job.

    OnDelete string

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    Parameters map[string]interface{}

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    Project string

    The project in which the resource belongs. If it is not provided, the provider project is used.

    Region string

    The region in which the created job should run.

    SdkContainerImage string

    Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.

    ServiceAccountEmail string

    The Service Account email used to create the job.

    SkipWaitOnJobTermination bool

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    StagingLocation string

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    State string

    The current state of the resource, selected from the JobState enum

    Subnetwork string

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    TempLocation string

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    TransformNameMapping map[string]interface{}

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    Type string

    The type of this job, selected from the JobType enum.
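
    A minimal sketch (TypeScript) of how the worker sizing and networking inputs above fit together; the bucket, network, subnetwork, and subscription values are placeholders, not defaults.

    import * as gcp from "@pulumi/gcp";

    // Sketch only: every name below is a placeholder.
    // Worker sizing and networking are set directly on the resource, while
    // template-specific options go into `parameters`.
    const privateJob = new gcp.dataflow.FlexTemplateJob("privateJob", {
        region: "us-central1",
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        machineType: "n1-standard-2",
        numWorkers: 2,
        maxWorkers: 10,
        network: "my-network",
        subnetwork: "regions/us-central1/subnetworks/my-subnetwork",
        ipConfiguration: "WORKER_IP_PRIVATE",
        parameters: {
            inputSubscription: "projects/my-project/subscriptions/messages",
        },
    });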

    additionalExperiments List<String>

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscalingAlgorithm String

    The algorithm to use for autoscaling.

    containerSpecGcsPath String

    The GCS path to the Dataflow job Flex Template.


    enableStreamingEngine Boolean

    Indicates if the job should use the streaming engine feature.

    ipConfiguration String

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    jobId String

    The unique ID of this job.

    kmsKeyName String

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels Map<String,Object>

    User labels to be specified for the job. Keys and values should follow the restrictions specified on the labeling restrictions page. Note: this field is marked as deprecated because the API does not currently support adding labels. Google-provided Dataflow templates often set default labels that begin with goog-dataflow-provided; unless explicitly set in the configuration, these labels are ignored to prevent diffs on re-apply.

    launcherMachineType String

    The machine type to use for launching the job. The default is n1-standard-1.

    machineType String

    The machine type to use for the job.

    maxWorkers Integer

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name String

    A unique name for the resource, required by Dataflow.

    network String

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    numWorkers Integer

    The initial number of Google Compute Engine instances for the job.

    onDelete String

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters Map<String,Object>

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project String

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region String

    The region in which the created job should run.

    sdkContainerImage String

    Docker registry location of the container image to use for the worker harness. The default is the container image for the version of the SDK in use. Note that this field is only valid for portable pipelines.

    serviceAccountEmail String

    The Service Account email used to create the job.

    skipWaitOnJobTermination Boolean

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing the job from the Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    stagingLocation String

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    state String

    The current state of the resource, selected from the JobState enum.

    subnetwork String

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    tempLocation String

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transformNameMapping Map<String,Object>

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    type String

    The type of this job, selected from the JobType enum.
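
    A hedged sketch (TypeScript) combining the encryption, identity, and storage inputs above; the project, key ring, key, service account, and bucket names are placeholders.

    import * as gcp from "@pulumi/gcp";

    // Sketch only: all identifiers below are placeholders.
    // kmsKeyName must follow the documented key format, and the staging and
    // temporary locations must be gs:// URLs.
    const encryptedJob = new gcp.dataflow.FlexTemplateJob("encryptedJob", {
        region: "us-central1",
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        kmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
        serviceAccountEmail: "dataflow-worker@my-project.iam.gserviceaccount.com",
        stagingLocation: "gs://my-bucket/staging",
        tempLocation: "gs://my-bucket/temp",
        parameters: {
            inputSubscription: "projects/my-project/subscriptions/messages",
        },
    });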

    additionalExperiments string[]

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscalingAlgorithm string

    The algorithm to use for autoscaling.

    containerSpecGcsPath string

    The GCS path to the Dataflow job Flex Template.


    enableStreamingEngine boolean

    Indicates if the job should use the streaming engine feature.

    ipConfiguration string

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    jobId string

    The unique ID of this job.

    kmsKeyName string

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels {[key: string]: any}

    User labels to be specified for the job. Keys and values should follow the restrictions specified on the labeling restrictions page. Note: this field is marked as deprecated because the API does not currently support adding labels. Google-provided Dataflow templates often set default labels that begin with goog-dataflow-provided; unless explicitly set in the configuration, these labels are ignored to prevent diffs on re-apply.

    launcherMachineType string

    The machine type to use for launching the job. The default is n1-standard-1.

    machineType string

    The machine type to use for the job.

    maxWorkers number

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name string

    A unique name for the resource, required by Dataflow.

    network string

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    numWorkers number

    The initial number of Google Compute Engine instances for the job.

    onDelete string

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters {[key: string]: any}

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project string

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region string

    The region in which the created job should run.

    sdkContainerImage string

    Docker registry location of the container image to use for the worker harness. The default is the container image for the version of the SDK in use. Note that this field is only valid for portable pipelines.

    serviceAccountEmail string

    The Service Account email used to create the job.

    skipWaitOnJobTermination boolean

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing the job from the Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    stagingLocation string

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    state string

    The current state of the resource, selected from the JobState enum.

    subnetwork string

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    tempLocation string

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transformNameMapping {[key: string]: any}

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    type string

    The type of this job, selected from the JobType enum.
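
    A hedged sketch (TypeScript) of transformNameMapping when updating a running streaming pipeline in place; the transform names are hypothetical and depend on how the pipeline itself names its transforms.

    import * as gcp from "@pulumi/gcp";

    // Sketch only: transformNameMapping is consulted only when an existing
    // streaming pipeline is updated; the names below are hypothetical.
    const updatedJob = new gcp.dataflow.FlexTemplateJob("updatedJob", {
        region: "us-central1",
        containerSpecGcsPath: "gs://my-bucket/templates/template-v2.json",
        enableStreamingEngine: true,
        transformNameMapping: {
            oldTransformName: "newTransformName",
        },
        parameters: {
            inputSubscription: "projects/my-project/subscriptions/messages",
        },
    });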

    additional_experiments Sequence[str]

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscaling_algorithm str

    The algorithm to use for autoscaling.

    container_spec_gcs_path str

    The GCS path to the Dataflow job Flex Template.


    enable_streaming_engine bool

    Indicates if the job should use the streaming engine feature.

    ip_configuration str

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    job_id str

    The unique ID of this job.

    kms_key_name str

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels Mapping[str, Any]

    User labels to be specified for the job. Keys and values should follow the restrictions specified on the labeling restrictions page. Note: this field is marked as deprecated because the API does not currently support adding labels. Google-provided Dataflow templates often set default labels that begin with goog-dataflow-provided; unless explicitly set in the configuration, these labels are ignored to prevent diffs on re-apply.

    launcher_machine_type str

    The machine type to use for launching the job. The default is n1-standard-1.

    machine_type str

    The machine type to use for the job.

    max_workers int

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name str

    A unique name for the resource, required by Dataflow.

    network str

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    num_workers int

    The initial number of Google Compute Engine instances for the job.

    on_delete str

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters Mapping[str, Any]

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project str

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region str

    The region in which the created job should run.

    sdk_container_image str

    Docker registry location of the container image to use for the worker harness. The default is the container image for the version of the SDK in use. Note that this field is only valid for portable pipelines.

    service_account_email str

    The Service Account email used to create the job.

    skip_wait_on_job_termination bool

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing the job from the Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    staging_location str

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    state str

    The current state of the resource, selected from the JobState enum.

    subnetwork str

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    temp_location str

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transform_name_mapping Mapping[str, Any]

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    type str

    The type of this job, selected from the JobType enum.
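
    As noted for parameters above, values not defined by the Flex Template itself are passed through as additional pipeline options. A hedged sketch (TypeScript for brevity); the option names shown come from the description above and every value is a placeholder.

    import * as gcp from "@pulumi/gcp";

    // Sketch only: parameters carries both template parameters and extra
    // pipeline options such as serviceAccount and workerMachineType.
    const tunedJob = new gcp.dataflow.FlexTemplateJob("tunedJob", {
        region: "us-central1",
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        parameters: {
            inputSubscription: "projects/my-project/subscriptions/messages",
            serviceAccount: "dataflow-worker@my-project.iam.gserviceaccount.com",
            workerMachineType: "n1-standard-2",
        },
    });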

    additionalExperiments List<String>

    List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].

    autoscalingAlgorithm String

    The algorithm to use for autoscaling.

    containerSpecGcsPath String

    The GCS path to the Dataflow job Flex Template.


    enableStreamingEngine Boolean

    Indicates if the job should use the streaming engine feature.

    ipConfiguration String

    The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".

    jobId String

    The unique ID of this job.

    kmsKeyName String

    The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

    labels Map<Any>

    User labels to be specified for the job. Keys and values should follow the restrictions specified on the labeling restrictions page. Note: this field is marked as deprecated because the API does not currently support adding labels. Google-provided Dataflow templates often set default labels that begin with goog-dataflow-provided; unless explicitly set in the configuration, these labels are ignored to prevent diffs on re-apply.

    launcherMachineType String

    The machine type to use for launching the job. The default is n1-standard-1.

    machineType String

    The machine type to use for the job.

    maxWorkers Number

    The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.

    name String

    A unique name for the resource, required by Dataflow.

    network String

    The network to which VMs will be assigned. If it is not provided, "default" will be used.

    numWorkers Number

    The initial number of Google Compute Engine instances for the job.

    onDelete String

    One of "drain" or "cancel". Specifies behavior of deletion during pulumi destroy. See above note.

    parameters Map<Any>

    Key/Value pairs to be passed to the Dataflow job (as used in the template). Additional pipeline options such as serviceAccount, workerMachineType, etc can be specified here.

    project String

    The project in which the resource belongs. If it is not provided, the provider project is used.

    region String

    The region in which the created job should run.

    sdkContainerImage String

    Docker registry location of the container image to use for the worker harness. The default is the container image for the version of the SDK in use. Note that this field is only valid for portable pipelines.

    serviceAccountEmail String

    The Service Account email used to create the job.

    skipWaitOnJobTermination Boolean

    If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing the job from the Pulumi state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. by embedding a release ID or by using a random_id.

    stagingLocation String

    The Cloud Storage path to use for staging files. Must be a valid Cloud Storage URL, beginning with gs://.

    state String

    The current state of the resource, selected from the JobState enum.

    subnetwork String

    The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".

    tempLocation String

    The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.

    transformNameMapping Map<Any>

    Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.

    type String

    The type of this job, selected from the JobType enum.
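
    A final hedged sketch (TypeScript) of the onDelete behavior described above: with "drain", pulumi destroy waits for in-flight data to finish processing instead of cancelling the job outright. All values are placeholders.

    import * as gcp from "@pulumi/gcp";

    // Sketch only: onDelete controls what happens to the running job on destroy.
    const drainedJob = new gcp.dataflow.FlexTemplateJob("drainedJob", {
        region: "us-central1",
        containerSpecGcsPath: "gs://my-bucket/templates/template.json",
        onDelete: "drain",
        parameters: {
            inputSubscription: "projects/my-project/subscriptions/messages",
        },
    });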

    Import

    This resource does not support import.

    Package Details

    Repository
    Google Cloud (GCP) Classic pulumi/pulumi-gcp
    License
    Apache-2.0
    Notes

    This Pulumi package is based on the google-beta Terraform Provider.
