
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1.SessionTemplate


    Create a session template synchronously.

    Create SessionTemplate Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new SessionTemplate(name: string, args?: SessionTemplateArgs, opts?: CustomResourceOptions);
    @overload
    def SessionTemplate(resource_name: str,
                        args: Optional[SessionTemplateArgs] = None,
                        opts: Optional[ResourceOptions] = None)
    
    @overload
    def SessionTemplate(resource_name: str,
                        opts: Optional[ResourceOptions] = None,
                        description: Optional[str] = None,
                        environment_config: Optional[EnvironmentConfigArgs] = None,
                        jupyter_session: Optional[JupyterConfigArgs] = None,
                        labels: Optional[Mapping[str, str]] = None,
                        location: Optional[str] = None,
                        name: Optional[str] = None,
                        project: Optional[str] = None,
                        runtime_config: Optional[RuntimeConfigArgs] = None)
    func NewSessionTemplate(ctx *Context, name string, args *SessionTemplateArgs, opts ...ResourceOption) (*SessionTemplate, error)
    public SessionTemplate(string name, SessionTemplateArgs? args = null, CustomResourceOptions? opts = null)
    public SessionTemplate(String name, SessionTemplateArgs args)
    public SessionTemplate(String name, SessionTemplateArgs args, CustomResourceOptions options)
    
    type: google-native:dataproc/v1:SessionTemplate
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args SessionTemplateArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args SessionTemplateArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args SessionTemplateArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args SessionTemplateArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args SessionTemplateArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Example

    The following reference example uses placeholder values for all input properties.

    var sessionTemplateResource = new GoogleNative.Dataproc.V1.SessionTemplate("sessionTemplateResource", new()
    {
        Description = "string",
        EnvironmentConfig = new GoogleNative.Dataproc.V1.Inputs.EnvironmentConfigArgs
        {
            ExecutionConfig = new GoogleNative.Dataproc.V1.Inputs.ExecutionConfigArgs
            {
                IdleTtl = "string",
                KmsKey = "string",
                NetworkTags = new[]
                {
                    "string",
                },
                NetworkUri = "string",
                ServiceAccount = "string",
                StagingBucket = "string",
                SubnetworkUri = "string",
                Ttl = "string",
            },
            PeripheralsConfig = new GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigArgs
            {
                MetastoreService = "string",
                SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
                {
                    DataprocCluster = "string",
                },
            },
        },
        JupyterSession = new GoogleNative.Dataproc.V1.Inputs.JupyterConfigArgs
        {
            DisplayName = "string",
            Kernel = GoogleNative.Dataproc.V1.JupyterConfigKernel.KernelUnspecified,
        },
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Name = "string",
        Project = "string",
        RuntimeConfig = new GoogleNative.Dataproc.V1.Inputs.RuntimeConfigArgs
        {
            ContainerImage = "string",
            Properties = 
            {
                { "string", "string" },
            },
            RepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.RepositoryConfigArgs
            {
                PypiRepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigArgs
                {
                    PypiRepository = "string",
                },
            },
            Version = "string",
        },
    });
    
    example, err := dataproc.NewSessionTemplate(ctx, "sessionTemplateResource", &dataproc.SessionTemplateArgs{
        Description: pulumi.String("string"),
        EnvironmentConfig: &dataproc.EnvironmentConfigArgs{
            ExecutionConfig: &dataproc.ExecutionConfigArgs{
                IdleTtl: pulumi.String("string"),
                KmsKey:  pulumi.String("string"),
                NetworkTags: pulumi.StringArray{
                    pulumi.String("string"),
                },
                NetworkUri:     pulumi.String("string"),
                ServiceAccount: pulumi.String("string"),
                StagingBucket:  pulumi.String("string"),
                SubnetworkUri:  pulumi.String("string"),
                Ttl:            pulumi.String("string"),
            },
            PeripheralsConfig: &dataproc.PeripheralsConfigArgs{
                MetastoreService: pulumi.String("string"),
                SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
                    DataprocCluster: pulumi.String("string"),
                },
            },
        },
        JupyterSession: &dataproc.JupyterConfigArgs{
            DisplayName: pulumi.String("string"),
            Kernel:      dataproc.JupyterConfigKernelKernelUnspecified,
        },
        Labels: pulumi.StringMap{
            "string": pulumi.String("string"),
        },
        Location: pulumi.String("string"),
        Name:     pulumi.String("string"),
        Project:  pulumi.String("string"),
        RuntimeConfig: &dataproc.RuntimeConfigArgs{
            ContainerImage: pulumi.String("string"),
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            RepositoryConfig: &dataproc.RepositoryConfigArgs{
                PypiRepositoryConfig: &dataproc.PyPiRepositoryConfigArgs{
                    PypiRepository: pulumi.String("string"),
                },
            },
            Version: pulumi.String("string"),
        },
    })
    
    var sessionTemplateResource = new SessionTemplate("sessionTemplateResource", SessionTemplateArgs.builder()        
        .description("string")
        .environmentConfig(EnvironmentConfigArgs.builder()
            .executionConfig(ExecutionConfigArgs.builder()
                .idleTtl("string")
                .kmsKey("string")
                .networkTags("string")
                .networkUri("string")
                .serviceAccount("string")
                .stagingBucket("string")
                .subnetworkUri("string")
                .ttl("string")
                .build())
            .peripheralsConfig(PeripheralsConfigArgs.builder()
                .metastoreService("string")
                .sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
                    .dataprocCluster("string")
                    .build())
                .build())
            .build())
        .jupyterSession(JupyterConfigArgs.builder()
            .displayName("string")
            .kernel("KERNEL_UNSPECIFIED")
            .build())
        .labels(Map.of("string", "string"))
        .location("string")
        .name("string")
        .project("string")
        .runtimeConfig(RuntimeConfigArgs.builder()
            .containerImage("string")
            .properties(Map.of("string", "string"))
            .repositoryConfig(RepositoryConfigArgs.builder()
                .pypiRepositoryConfig(PyPiRepositoryConfigArgs.builder()
                    .pypiRepository("string")
                    .build())
                .build())
            .version("string")
            .build())
        .build());
    
    session_template_resource = google_native.dataproc.v1.SessionTemplate("sessionTemplateResource",
        description="string",
        environment_config=google_native.dataproc.v1.EnvironmentConfigArgs(
            execution_config=google_native.dataproc.v1.ExecutionConfigArgs(
                idle_ttl="string",
                kms_key="string",
                network_tags=["string"],
                network_uri="string",
                service_account="string",
                staging_bucket="string",
                subnetwork_uri="string",
                ttl="string",
            ),
            peripherals_config=google_native.dataproc.v1.PeripheralsConfigArgs(
                metastore_service="string",
                spark_history_server_config=google_native.dataproc.v1.SparkHistoryServerConfigArgs(
                    dataproc_cluster="string",
                ),
            ),
        ),
        jupyter_session=google_native.dataproc.v1.JupyterConfigArgs(
            display_name="string",
            kernel=google_native.dataproc.v1.JupyterConfigKernel.KERNEL_UNSPECIFIED,
        ),
        labels={
            "string": "string",
        },
        location="string",
        name="string",
        project="string",
        runtime_config=google_native.dataproc.v1.RuntimeConfigArgs(
            container_image="string",
            properties={
                "string": "string",
            },
            repository_config=google_native.dataproc.v1.RepositoryConfigArgs(
                pypi_repository_config=google_native.dataproc.v1.PyPiRepositoryConfigArgs(
                    pypi_repository="string",
                ),
            ),
            version="string",
        ))
    
    const sessionTemplateResource = new google_native.dataproc.v1.SessionTemplate("sessionTemplateResource", {
        description: "string",
        environmentConfig: {
            executionConfig: {
                idleTtl: "string",
                kmsKey: "string",
                networkTags: ["string"],
                networkUri: "string",
                serviceAccount: "string",
                stagingBucket: "string",
                subnetworkUri: "string",
                ttl: "string",
            },
            peripheralsConfig: {
                metastoreService: "string",
                sparkHistoryServerConfig: {
                    dataprocCluster: "string",
                },
            },
        },
        jupyterSession: {
            displayName: "string",
            kernel: google_native.dataproc.v1.JupyterConfigKernel.KernelUnspecified,
        },
        labels: {
            string: "string",
        },
        location: "string",
        name: "string",
        project: "string",
        runtimeConfig: {
            containerImage: "string",
            properties: {
                string: "string",
            },
            repositoryConfig: {
                pypiRepositoryConfig: {
                    pypiRepository: "string",
                },
            },
            version: "string",
        },
    });
    
    type: google-native:dataproc/v1:SessionTemplate
    properties:
        description: string
        environmentConfig:
            executionConfig:
                idleTtl: string
                kmsKey: string
                networkTags:
                    - string
                networkUri: string
                serviceAccount: string
                stagingBucket: string
                subnetworkUri: string
                ttl: string
            peripheralsConfig:
                metastoreService: string
                sparkHistoryServerConfig:
                    dataprocCluster: string
        jupyterSession:
            displayName: string
            kernel: KERNEL_UNSPECIFIED
        labels:
            string: string
        location: string
        name: string
        project: string
        runtimeConfig:
            containerImage: string
            properties:
                string: string
            repositoryConfig:
                pypiRepositoryConfig:
                    pypiRepository: string
            version: string
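
    For orientation, a more realistic TypeScript sketch follows. Every concrete value below (project ID, region, service account, subnetwork, runtime version) is a hypothetical placeholder for your own configuration, not a provider default.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical template for interactive PySpark sessions.
    const jupyterTemplate = new google_native.dataproc.v1.SessionTemplate("jupyter-template", {
        location: "us-central1",                  // region to create the template in
        project: "my-gcp-project",                // hypothetical project ID
        description: "Interactive PySpark sessions for the data team",
        labels: {
            team: "data-eng",                     // RFC 1035-conformant key and value
        },
        jupyterSession: {
            displayName: "PySpark (team template)",
            kernel: google_native.dataproc.v1.JupyterConfigKernel.Python,
        },
        environmentConfig: {
            executionConfig: {
                serviceAccount: "spark-runner@my-gcp-project.iam.gserviceaccount.com",
                subnetworkUri: "projects/my-gcp-project/regions/us-central1/subnetworks/default",
                idleTtl: "3600s",                 // terminate after 1 hour of idleness
                ttl: "28800s",                    // hard cap of 8 hours
            },
        },
        runtimeConfig: {
            version: "2.1",                       // hypothetical serverless runtime version
        },
    });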
    

    SessionTemplate Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The SessionTemplate resource accepts the following input properties:

    Description string
    Optional. Brief description of the template.
    EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EnvironmentConfig
    Optional. Environment configuration for session execution.
    JupyterSession Pulumi.GoogleNative.Dataproc.V1.Inputs.JupyterConfig
    Optional. Jupyter session config.
    Labels Dictionary<string, string>
    Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
    Location string
    Name string
    The resource name of the session template.
    Project string
    RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RuntimeConfig
    Optional. Runtime configuration for session execution.
    Description string
    Optional. Brief description of the template.
    EnvironmentConfig EnvironmentConfigArgs
    Optional. Environment configuration for session execution.
    JupyterSession JupyterConfigArgs
    Optional. Jupyter session config.
    Labels map[string]string
    Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
    Location string
    Name string
    The resource name of the session template.
    Project string
    RuntimeConfig RuntimeConfigArgs
    Optional. Runtime configuration for session execution.
    description String
    Optional. Brief description of the template.
    environmentConfig EnvironmentConfig
    Optional. Environment configuration for session execution.
    jupyterSession JupyterConfig
    Optional. Jupyter session config.
    labels Map<String,String>
    Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
    location String
    name String
    The resource name of the session template.
    project String
    runtimeConfig RuntimeConfig
    Optional. Runtime configuration for session execution.
    description string
    Optional. Brief description of the template.
    environmentConfig EnvironmentConfig
    Optional. Environment configuration for session execution.
    jupyterSession JupyterConfig
    Optional. Jupyter session config.
    labels {[key: string]: string}
    Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
    location string
    name string
    The resource name of the session template.
    project string
    runtimeConfig RuntimeConfig
    Optional. Runtime configuration for session execution.
    description str
    Optional. Brief description of the template.
    environment_config EnvironmentConfigArgs
    Optional. Environment configuration for session execution.
    jupyter_session JupyterConfigArgs
    Optional. Jupyter session config.
    labels Mapping[str, str]
    Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
    location str
    name str
    The resource name of the session template.
    project str
    runtime_config RuntimeConfigArgs
    Optional. Runtime configuration for session execution.
    description String
    Optional. Brief description of the template.
    environmentConfig Property Map
    Optional. Environment configuration for session execution.
    jupyterSession Property Map
    Optional. Jupyter session config.
    labels Map<String>
    Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
    location String
    name String
    The resource name of the session template.
    project String
    runtimeConfig Property Map
    Optional. Runtime configuration for session execution.
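
    The label constraints quoted above (RFC 1035 keys, optional RFC 1035 values, at most 32 entries) are enforced by the service. As a hedged convenience sketch, labels could be pre-validated in the program before being passed to the resource; the regex below is an approximation of an RFC 1035 label and the helper name is hypothetical.

    // Approximate RFC 1035 label: starts with a letter, then letters, digits,
    // or hyphens, ends with a letter or digit, at most 63 characters.
    const RFC1035 = /^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$/;

    function checkSessionLabels(labels: Record<string, string>): Record<string, string> {
        const entries = Object.entries(labels);
        if (entries.length > 32) {
            throw new Error("no more than 32 labels can be associated with a session");
        }
        for (const [key, value] of entries) {
            if (!RFC1035.test(key)) {
                throw new Error(`label key "${key}" does not conform to RFC 1035`);
            }
            if (value !== "" && !RFC1035.test(value)) {
                throw new Error(`label value "${value}" does not conform to RFC 1035`);
            }
        }
        return labels;
    }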

    Outputs

    All input properties are implicitly available as output properties. Additionally, the SessionTemplate resource produces the following output properties:

    CreateTime string
    The time when the template was created.
    Creator string
    The email address of the user who created the template.
    Id string
    The provider-assigned unique ID for this managed resource.
    UpdateTime string
    The time the template was last updated.
    Uuid string
    A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template.
    CreateTime string
    The time when the template was created.
    Creator string
    The email address of the user who created the template.
    Id string
    The provider-assigned unique ID for this managed resource.
    UpdateTime string
    The time the template was last updated.
    Uuid string
    A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template.
    createTime String
    The time when the template was created.
    creator String
    The email address of the user who created the template.
    id String
    The provider-assigned unique ID for this managed resource.
    updateTime String
    The time the template was last updated.
    uuid String
    A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template.
    createTime string
    The time when the template was created.
    creator string
    The email address of the user who created the template.
    id string
    The provider-assigned unique ID for this managed resource.
    updateTime string
    The time the template was last updated.
    uuid string
    A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template.
    create_time str
    The time when the template was created.
    creator str
    The email address of the user who created the template.
    id str
    The provider-assigned unique ID for this managed resource.
    update_time str
    The time the template was last updated.
    uuid str
    A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template.
    createTime String
    The time when the template was created.
    creator String
    The email address of the user who created the template.
    id String
    The provider-assigned unique ID for this managed resource.
    updateTime String
    The time the template was last updated.
    uuid String
    A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template.
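
    Because these values are only known once the service has created the template, the SDKs surface them as outputs rather than plain strings. A minimal TypeScript sketch of reading them (the region is a hypothetical value):

    import * as google_native from "@pulumi/google-native";

    const template = new google_native.dataproc.v1.SessionTemplate("template", {
        location: "us-central1",                  // hypothetical region
    });

    // Output properties resolve after the create call returns.
    export const templateUuid = template.uuid;          // service-generated UUID
    export const templateCreateTime = template.createTime;
    export const templateCreator = template.creator;    // creating user's email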

    Supporting Types

    EnvironmentConfig, EnvironmentConfigArgs

    ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfig
    Optional. Execution configuration for a workload.
    PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfig
    Optional. Peripherals configuration that the workload has access to.
    ExecutionConfig ExecutionConfig
    Optional. Execution configuration for a workload.
    PeripheralsConfig PeripheralsConfig
    Optional. Peripherals configuration that the workload has access to.
    executionConfig ExecutionConfig
    Optional. Execution configuration for a workload.
    peripheralsConfig PeripheralsConfig
    Optional. Peripherals configuration that the workload has access to.
    executionConfig ExecutionConfig
    Optional. Execution configuration for a workload.
    peripheralsConfig PeripheralsConfig
    Optional. Peripherals configuration that the workload has access to.
    execution_config ExecutionConfig
    Optional. Execution configuration for a workload.
    peripherals_config PeripheralsConfig
    Optional. Peripherals configuration that the workload has access to.
    executionConfig Property Map
    Optional. Execution configuration for a workload.
    peripheralsConfig Property Map
    Optional. Peripherals configuration that the workload has access to.

    EnvironmentConfigResponse, EnvironmentConfigResponseArgs

    ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    ExecutionConfig ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    PeripheralsConfig PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    executionConfig ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    peripheralsConfig PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    executionConfig ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    peripheralsConfig PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    execution_config ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    peripherals_config PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    executionConfig Property Map
    Optional. Execution configuration for a workload.
    peripheralsConfig Property Map
    Optional. Peripherals configuration that the workload has access to.

    ExecutionConfig, ExecutionConfigArgs

    IdleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    KmsKey string
    Optional. The Cloud KMS key to use for encryption.
    NetworkTags List<string>
    Optional. Tags used for network traffic control.
    NetworkUri string
    Optional. Network URI to connect the workload to.
    ServiceAccount string
    Optional. Service account used to execute the workload.
    StagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    SubnetworkUri string
    Optional. Subnetwork URI to connect the workload to.
    Ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    IdleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    KmsKey string
    Optional. The Cloud KMS key to use for encryption.
    NetworkTags []string
    Optional. Tags used for network traffic control.
    NetworkUri string
    Optional. Network URI to connect the workload to.
    ServiceAccount string
    Optional. Service account used to execute the workload.
    StagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    SubnetworkUri string
    Optional. Subnetwork URI to connect the workload to.
    Ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl String
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey String
    Optional. The Cloud KMS key to use for encryption.
    networkTags List<String>
    Optional. Tags used for network traffic control.
    networkUri String
    Optional. Network URI to connect the workload to.
    serviceAccount String
    Optional. Service account used to execute the workload.
    stagingBucket String
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri String
    Optional. Subnetwork URI to connect the workload to.
    ttl String
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey string
    Optional. The Cloud KMS key to use for encryption.
    networkTags string[]
    Optional. Tags used for network traffic control.
    networkUri string
    Optional. Network URI to connect the workload to.
    serviceAccount string
    Optional. Service account used to execute the workload.
    stagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri string
    Optional. Subnetwork URI to connect the workload to.
    ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idle_ttl str
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kms_key str
    Optional. The Cloud KMS key to use for encryption.
    network_tags Sequence[str]
    Optional. Tags used for network traffic control.
    network_uri str
    Optional. Network URI to connect the workload to.
    service_account str
    Optional. Service account used to execute the workload.
    staging_bucket str
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetwork_uri str
    Optional. Subnetwork URI to connect the workload to.
    ttl str
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl String
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey String
    Optional. The Cloud KMS key to use for encryption.
    networkTags List<String>
    Optional. Tags used for network traffic control.
    networkUri String
    Optional. Network URI to connect the workload to.
    serviceAccount String
    Optional. Service account used to execute the workload.
    stagingBucket String
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri String
    Optional. Subnetwork URI to connect the workload to.
    ttl String
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
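
    Both idleTtl and ttl take the JSON form of a protobuf Duration: decimal seconds with an "s" suffix. A small TypeScript sketch of the inline args shape, computing the strings rather than hand-writing them (the OR semantics described above apply when both are set):

    // 30 minutes idle or 8 hours total, whichever is reached first.
    const executionConfig = {
        idleTtl: `${30 * 60}s`,       // "1800s", within the 10 minute..14 day bounds
        ttl: `${8 * 60 * 60}s`,       // "28800s"
    };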

    ExecutionConfigResponse, ExecutionConfigResponseArgs

    IdleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    KmsKey string
    Optional. The Cloud KMS key to use for encryption.
    NetworkTags List<string>
    Optional. Tags used for network traffic control.
    NetworkUri string
    Optional. Network URI to connect the workload to.
    ServiceAccount string
    Optional. Service account used to execute the workload.
    StagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    SubnetworkUri string
    Optional. Subnetwork URI to connect the workload to.
    Ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    IdleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    KmsKey string
    Optional. The Cloud KMS key to use for encryption.
    NetworkTags []string
    Optional. Tags used for network traffic control.
    NetworkUri string
    Optional. Network URI to connect the workload to.
    ServiceAccount string
    Optional. Service account used to execute the workload.
    StagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    SubnetworkUri string
    Optional. Subnetwork URI to connect the workload to.
    Ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl String
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey String
    Optional. The Cloud KMS key to use for encryption.
    networkTags List<String>
    Optional. Tags used for network traffic control.
    networkUri String
    Optional. Network URI to connect the workload to.
    serviceAccount String
    Optional. Service account used to execute the workload.
    stagingBucket String
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri String
    Optional. Subnetwork URI to connect the workload to.
    ttl String
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey string
    Optional. The Cloud KMS key to use for encryption.
    networkTags string[]
    Optional. Tags used for network traffic control.
    networkUri string
    Optional. Network URI to connect the workload to.
    serviceAccount string
    Optional. Service account used to execute the workload.
    stagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri string
    Optional. Subnetwork URI to connect the workload to.
    ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idle_ttl str
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kms_key str
    Optional. The Cloud KMS key to use for encryption.
    network_tags Sequence[str]
    Optional. Tags used for network traffic control.
    network_uri str
    Optional. Network URI to connect the workload to.
    service_account str
    Optional. Service account used to execute the workload.
    staging_bucket str
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetwork_uri str
    Optional. Subnetwork URI to connect the workload to.
    ttl str
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl String
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey String
    Optional. The Cloud KMS key to use for encryption.
    networkTags List<String>
    Optional. Tags used for network traffic control.
    networkUri String
    Optional. Network URI to connect the workload to.
    serviceAccount String
    Optional. Service account used to execute the workload.
    stagingBucket String
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri String
    Optional. Subnetwork URI to connect the workload to.
    ttl String
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

    JupyterConfig, JupyterConfigArgs

    DisplayName string
    Optional. Display name, shown in the Jupyter kernelspec card.
    Kernel Pulumi.GoogleNative.Dataproc.V1.JupyterConfigKernel
    Optional. Kernel
    DisplayName string
    Optional. Display name, shown in the Jupyter kernelspec card.
    Kernel JupyterConfigKernel
    Optional. Kernel
    displayName String
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel JupyterConfigKernel
    Optional. Kernel
    displayName string
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel JupyterConfigKernel
    Optional. Kernel
    display_name str
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel JupyterConfigKernel
    Optional. Kernel
    displayName String
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel "KERNEL_UNSPECIFIED" | "PYTHON" | "SCALA"
    Optional. Kernel

    JupyterConfigKernel, JupyterConfigKernelArgs

    KernelUnspecified
    KERNEL_UNSPECIFIED: The kernel is unknown.
    Python
    PYTHON: Python kernel.
    Scala
    SCALA: Scala kernel.
    JupyterConfigKernelKernelUnspecified
    KERNEL_UNSPECIFIED: The kernel is unknown.
    JupyterConfigKernelPython
    PYTHON: Python kernel.
    JupyterConfigKernelScala
    SCALA: Scala kernel.
    KernelUnspecified
    KERNEL_UNSPECIFIED: The kernel is unknown.
    Python
    PYTHON: Python kernel.
    Scala
    SCALA: Scala kernel.
    KernelUnspecified
    KERNEL_UNSPECIFIED: The kernel is unknown.
    Python
    PYTHON: Python kernel.
    Scala
    SCALA: Scala kernel.
    KERNEL_UNSPECIFIED
    KERNEL_UNSPECIFIED: The kernel is unknown.
    PYTHON
    PYTHON: Python kernel.
    SCALA
    SCALA: Scala kernel.
    "KERNEL_UNSPECIFIED"
    KERNEL_UNSPECIFIED: The kernel is unknown.
    "PYTHON"
    PYTHON: Python kernel.
    "SCALA"
    SCALA: Scala kernel.
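
    In the strongly typed SDKs the kernel is selected through the enum; in YAML (and in the Response types below) it is the raw string value. A TypeScript sketch of the two equivalent forms:

    import * as google_native from "@pulumi/google-native";

    const enumForm = google_native.dataproc.v1.JupyterConfigKernel.Python;  // SDK enum member
    const stringForm = "PYTHON";                                            // raw API / YAML value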

    JupyterConfigResponse, JupyterConfigResponseArgs

    DisplayName string
    Optional. Display name, shown in the Jupyter kernelspec card.
    Kernel string
    Optional. Kernel
    DisplayName string
    Optional. Display name, shown in the Jupyter kernelspec card.
    Kernel string
    Optional. Kernel
    displayName String
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel String
    Optional. Kernel
    displayName string
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel string
    Optional. Kernel
    display_name str
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel str
    Optional. Kernel
    displayName String
    Optional. Display name, shown in the Jupyter kernelspec card.
    kernel String
    Optional. Kernel

    PeripheralsConfig, PeripheralsConfigArgs

    MetastoreService string
    Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
    SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
    Optional. The Spark History Server configuration for the workload.
    MetastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    SparkHistoryServerConfig SparkHistoryServerConfig
    Optional. The Spark History Server configuration for the workload.
    metastoreService String
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig SparkHistoryServerConfig
    Optional. The Spark History Server configuration for the workload.
    metastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig SparkHistoryServerConfig
    Optional. The Spark History Server configuration for the workload.
    metastore_service str
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    spark_history_server_config SparkHistoryServerConfig
    Optional. The Spark History Server configuration for the workload.
    metastoreService String
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig Property Map
    Optional. The Spark History Server configuration for the workload.
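    A minimal sketch wiring both peripherals to existing resources; the project, region, service, and cluster names are placeholders in the formats documented above.

    import * as google_native from "@pulumi/google-native";

    const peripheralsTemplate = new google_native.dataproc.v1.SessionTemplate("peripherals-example", {
        location: "us-central1",
        environmentConfig: {
            peripheralsConfig: {
                // Existing Dataproc Metastore service attached to the workload.
                metastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
                // Existing cluster acting as a persistent Spark History Server.
                sparkHistoryServerConfig: {
                    dataprocCluster: "projects/my-project/regions/us-central1/clusters/my-phs-cluster",
                },
            },
        },
    });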

    PeripheralsConfigResponse, PeripheralsConfigResponseArgs

    MetastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    MetastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    SparkHistoryServerConfig SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastoreService String
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastore_service str
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    spark_history_server_config SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastoreService String
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig Property Map
    Optional. The Spark History Server configuration for the workload.

    PyPiRepositoryConfig, PyPiRepositoryConfigArgs

    PypiRepository string
    Optional. PyPi repository address.
    PypiRepository string
    Optional. PyPi repository address.
    pypiRepository String
    Optional. PyPi repository address.
    pypiRepository string
    Optional. PyPi repository address.
    pypi_repository str
    Optional. PyPi repository address.
    pypiRepository String
    Optional. PyPi repository address.

    PyPiRepositoryConfigResponse, PyPiRepositoryConfigResponseArgs

    PypiRepository string
    Optional. PyPi repository address.
    PypiRepository string
    Optional. PyPi repository address.
    pypiRepository String
    Optional. PyPi repository address.
    pypiRepository string
    Optional. PyPi repository address.
    pypi_repository str
    Optional. PyPi repository address.
    pypiRepository String
    Optional. PyPi repository address.

    RepositoryConfig, RepositoryConfigArgs

    PypiRepositoryConfig PyPiRepositoryConfig
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig PyPiRepositoryConfig
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig PyPiRepositoryConfig
    Optional. Configuration for PyPi repository.
    pypi_repository_config PyPiRepositoryConfig
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig Property Map
    Optional. Configuration for PyPi repository.
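    RepositoryConfig nests under RuntimeConfig, so pointing sessions at a self-hosted PyPi mirror looks like the following sketch; the repository URL is a placeholder (for example, an Artifact Registry Python repository).

    import * as google_native from "@pulumi/google-native";

    const repoTemplate = new google_native.dataproc.v1.SessionTemplate("repo-example", {
        location: "us-central1",
        runtimeConfig: {
            repositoryConfig: {
                pypiRepositoryConfig: {
                    // pip-compatible index used when sessions install Python packages.
                    pypiRepository: "https://us-central1-python.pkg.dev/my-project/my-repo/simple/",
                },
            },
        },
    });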

    RepositoryConfigResponse, RepositoryConfigResponseArgs

    PypiRepositoryConfig PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypi_repository_config PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig Property Map
    Optional. Configuration for PyPi repository.

    RuntimeConfig, RuntimeConfigArgs

    ContainerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfig
    Optional. Dependency repository configuration.
    Version string
    Optional. Version of the batch runtime.
    ContainerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    Properties map[string]string
    Optional. A mapping of property names to values, which are used to configure workload execution.
    RepositoryConfig RepositoryConfig
    Optional. Dependency repository configuration.
    Version string
    Optional. Version of the batch runtime.
    containerImage String
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Map<String,String>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig RepositoryConfig
    Optional. Dependency repository configuration.
    version String
    Optional. Version of the batch runtime.
    containerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig RepositoryConfig
    Optional. Dependency repository configuration.
    version string
    Optional. Version of the batch runtime.
    container_image str
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repository_config RepositoryConfig
    Optional. Dependency repository configuration.
    version str
    Optional. Version of the batch runtime.
    containerImage String
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Map<String>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig Property Map
    Optional. Dependency repository configuration.
    version String
    Optional. Version of the batch runtime.
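    A hedged sketch combining the RuntimeConfig fields above: pin the runtime version, supply a custom container image, and set Spark properties. The image path, version, and property values are placeholders.

    import * as google_native from "@pulumi/google-native";

    const runtimeTemplate = new google_native.dataproc.v1.SessionTemplate("runtime-example", {
        location: "us-central1",
        runtimeConfig: {
            version: "2.1", // runtime version for sessions created from this template
            containerImage: "us-central1-docker.pkg.dev/my-project/my-repo/session-image:1.0",
            properties: {
                // Property names and values configure workload execution.
                "spark.executor.memory": "4g",
                "spark.dynamicAllocation.enabled": "true",
            },
        },
    });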

    RuntimeConfigResponse, RuntimeConfigResponseArgs

    ContainerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse
    Optional. Dependency repository configuration.
    Version string
    Optional. Version of the batch runtime.
    ContainerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    Properties map[string]string
    Optional. A mapping of property names to values, which are used to configure workload execution.
    RepositoryConfig RepositoryConfigResponse
    Optional. Dependency repository configuration.
    Version string
    Optional. Version of the batch runtime.
    containerImage String
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Map<String,String>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig RepositoryConfigResponse
    Optional. Dependency repository configuration.
    version String
    Optional. Version of the batch runtime.
    containerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig RepositoryConfigResponse
    Optional. Dependency repository configuration.
    version string
    Optional. Version of the batch runtime.
    container_image str
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repository_config RepositoryConfigResponse
    Optional. Dependency repository configuration.
    version str
    Optional. Version of the batch runtime.
    containerImage String
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Map<String>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig Property Map
    Optional. Dependency repository configuration.
    version String
    Optional. Version of the batch runtime.

    SparkHistoryServerConfig, SparkHistoryServerConfigArgs

    DataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    DataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster String
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataproc_cluster str
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster String
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
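    Rather than hard-coding the cluster resource name, it can be assembled from stack configuration. A small sketch; the config keys are assumptions for illustration.

    import * as pulumi from "@pulumi/pulumi";

    const cfg = new pulumi.Config();
    // Builds projects/[project_id]/regions/[region]/clusters/[cluster_name].
    const dataprocCluster = `projects/${cfg.require("project")}/regions/${cfg.require("region")}/clusters/${cfg.require("phsCluster")}`;
    // Pass as: sparkHistoryServerConfig: { dataprocCluster }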

    SparkHistoryServerConfigResponse, SparkHistoryServerConfigResponseArgs

    DataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    DataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster String
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataproc_cluster str
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster String
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0