Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1.getBatch

    Gets the batch workload resource representation.

    Using getBatch

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getBatch(args: GetBatchArgs, opts?: InvokeOptions): Promise<GetBatchResult>
    function getBatchOutput(args: GetBatchOutputArgs, opts?: InvokeOptions): Output<GetBatchResult>
    def get_batch(batch_id: Optional[str] = None,
                  location: Optional[str] = None,
                  project: Optional[str] = None,
                  opts: Optional[InvokeOptions] = None) -> GetBatchResult
    def get_batch_output(batch_id: Optional[pulumi.Input[str]] = None,
                         location: Optional[pulumi.Input[str]] = None,
                         project: Optional[pulumi.Input[str]] = None,
                         opts: Optional[InvokeOptions] = None) -> Output[GetBatchResult]
    func LookupBatch(ctx *Context, args *LookupBatchArgs, opts ...InvokeOption) (*LookupBatchResult, error)
    func LookupBatchOutput(ctx *Context, args *LookupBatchOutputArgs, opts ...InvokeOption) LookupBatchResultOutput

    > Note: This function is named LookupBatch in the Go SDK.

    public static class GetBatch 
    {
        public static Task<GetBatchResult> InvokeAsync(GetBatchArgs args, InvokeOptions? opts = null)
        public static Output<GetBatchResult> Invoke(GetBatchInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetBatchResult> getBatch(GetBatchArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: google-native:dataproc/v1:getBatch
      arguments:
        # arguments dictionary
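
    As a minimal illustration of the two forms, the TypeScript sketch below performs the same lookup both ways; the batch, location, and project identifiers are placeholders, not values from this page.

    import * as pulumi from "@pulumi/pulumi";
    import * as google_native from "@pulumi/google-native";

    // Direct form: plain arguments, Promise-wrapped result.
    const direct = google_native.dataproc.v1.getBatch({
        batchId: "my-batch-id",   // placeholder
        location: "us-central1",  // placeholder
        project: "my-project",    // placeholder
    });
    direct.then(b => pulumi.log.info(`Batch state: ${b.state}`));

    // Output form: Input-wrapped arguments, Output-wrapped result.
    const fromOutput = google_native.dataproc.v1.getBatchOutput({
        batchId: "my-batch-id",
        location: "us-central1",
        project: "my-project",
    });
    export const batchState = fromOutput.state;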

    The following arguments are supported:

    BatchId string
    Location string
    Project string
    BatchId string
    Location string
    Project string
    batchId String
    location String
    project String
    batchId string
    location string
    project string
    batchId String
    location String
    project String

    getBatch Result

    The following output properties are available:

    CreateTime string
    The time when the batch was created.
    Creator string
    The email address of the user who created the batch.
    EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.EnvironmentConfigResponse
    Optional. Environment configuration for the batch execution.
    Labels Dictionary<string, string>
    Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
    Name string
    The resource name of the batch.
    Operation string
    The resource name of the operation associated with this batch.
    PysparkBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.PySparkBatchResponse
    Optional. PySpark batch config.
    RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeConfigResponse
    Optional. Runtime configuration for the batch execution.
    RuntimeInfo Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeInfoResponse
    Runtime information about batch execution.
    SparkBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkBatchResponse
    Optional. Spark batch config.
    SparkRBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkRBatchResponse
    Optional. SparkR batch config.
    SparkSqlBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkSqlBatchResponse
    Optional. SparkSql batch config.
    State string
    The state of the batch.
    StateHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.StateHistoryResponse>
    Historical state information for the batch.
    StateMessage string
    Batch state details, such as a failure description if the state is FAILED.
    StateTime string
    The time when the batch entered a current state.
    Uuid string
    A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
    CreateTime string
    The time when the batch was created.
    Creator string
    The email address of the user who created the batch.
    EnvironmentConfig EnvironmentConfigResponse
    Optional. Environment configuration for the batch execution.
    Labels map[string]string
    Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
    Name string
    The resource name of the batch.
    Operation string
    The resource name of the operation associated with this batch.
    PysparkBatch PySparkBatchResponse
    Optional. PySpark batch config.
    RuntimeConfig RuntimeConfigResponse
    Optional. Runtime configuration for the batch execution.
    RuntimeInfo RuntimeInfoResponse
    Runtime information about batch execution.
    SparkBatch SparkBatchResponse
    Optional. Spark batch config.
    SparkRBatch SparkRBatchResponse
    Optional. SparkR batch config.
    SparkSqlBatch SparkSqlBatchResponse
    Optional. SparkSql batch config.
    State string
    The state of the batch.
    StateHistory []StateHistoryResponse
    Historical state information for the batch.
    StateMessage string
    Batch state details, such as a failure description if the state is FAILED.
    StateTime string
    The time when the batch entered a current state.
    Uuid string
    A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
    createTime String
    The time when the batch was created.
    creator String
    The email address of the user who created the batch.
    environmentConfig EnvironmentConfigResponse
    Optional. Environment configuration for the batch execution.
    labels Map<String,String>
    Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
    name String
    The resource name of the batch.
    operation String
    The resource name of the operation associated with this batch.
    pysparkBatch PySparkBatchResponse
    Optional. PySpark batch config.
    runtimeConfig RuntimeConfigResponse
    Optional. Runtime configuration for the batch execution.
    runtimeInfo RuntimeInfoResponse
    Runtime information about batch execution.
    sparkBatch SparkBatchResponse
    Optional. Spark batch config.
    sparkRBatch SparkRBatchResponse
    Optional. SparkR batch config.
    sparkSqlBatch SparkSqlBatchResponse
    Optional. SparkSql batch config.
    state String
    The state of the batch.
    stateHistory List<StateHistoryResponse>
    Historical state information for the batch.
    stateMessage String
    Batch state details, such as a failure description if the state is FAILED.
    stateTime String
    The time when the batch entered a current state.
    uuid String
    A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
    createTime string
    The time when the batch was created.
    creator string
    The email address of the user who created the batch.
    environmentConfig EnvironmentConfigResponse
    Optional. Environment configuration for the batch execution.
    labels {[key: string]: string}
    Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
    name string
    The resource name of the batch.
    operation string
    The resource name of the operation associated with this batch.
    pysparkBatch PySparkBatchResponse
    Optional. PySpark batch config.
    runtimeConfig RuntimeConfigResponse
    Optional. Runtime configuration for the batch execution.
    runtimeInfo RuntimeInfoResponse
    Runtime information about batch execution.
    sparkBatch SparkBatchResponse
    Optional. Spark batch config.
    sparkRBatch SparkRBatchResponse
    Optional. SparkR batch config.
    sparkSqlBatch SparkSqlBatchResponse
    Optional. SparkSql batch config.
    state string
    The state of the batch.
    stateHistory StateHistoryResponse[]
    Historical state information for the batch.
    stateMessage string
    Batch state details, such as a failure description if the state is FAILED.
    stateTime string
    The time when the batch entered a current state.
    uuid string
    A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
    create_time str
    The time when the batch was created.
    creator str
    The email address of the user who created the batch.
    environment_config EnvironmentConfigResponse
    Optional. Environment configuration for the batch execution.
    labels Mapping[str, str]
    Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
    name str
    The resource name of the batch.
    operation str
    The resource name of the operation associated with this batch.
    pyspark_batch PySparkBatchResponse
    Optional. PySpark batch config.
    runtime_config RuntimeConfigResponse
    Optional. Runtime configuration for the batch execution.
    runtime_info RuntimeInfoResponse
    Runtime information about batch execution.
    spark_batch SparkBatchResponse
    Optional. Spark batch config.
    spark_r_batch SparkRBatchResponse
    Optional. SparkR batch config.
    spark_sql_batch SparkSqlBatchResponse
    Optional. SparkSql batch config.
    state str
    The state of the batch.
    state_history Sequence[StateHistoryResponse]
    Historical state information for the batch.
    state_message str
    Batch state details, such as a failure description if the state is FAILED.
    state_time str
    The time when the batch entered a current state.
    uuid str
    A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
    createTime String
    The time when the batch was created.
    creator String
    The email address of the user who created the batch.
    environmentConfig Property Map
    Optional. Environment configuration for the batch execution.
    labels Map<String>
    Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
    name String
    The resource name of the batch.
    operation String
    The resource name of the operation associated with this batch.
    pysparkBatch Property Map
    Optional. PySpark batch config.
    runtimeConfig Property Map
    Optional. Runtime configuration for the batch execution.
    runtimeInfo Property Map
    Runtime information about batch execution.
    sparkBatch Property Map
    Optional. Spark batch config.
    sparkRBatch Property Map
    Optional. SparkR batch config.
    sparkSqlBatch Property Map
    Optional. SparkSql batch config.
    state String
    The state of the batch.
    stateHistory List<Property Map>
    Historical state information for the batch.
    stateMessage String
    Batch state details, such as a failure description if the state is FAILED.
    stateTime String
    The time when the batch entered a current state.
    uuid String
    A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
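
    For reference, a small TypeScript sketch that reads a few of these output properties from a looked-up batch; the identifiers below are placeholders.

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "my-batch-id",   // placeholder
        location: "us-central1",  // placeholder
        project: "my-project",    // placeholder
    });

    // Scalar properties lift directly off the Output-wrapped result.
    export const state = batch.state;            // e.g. "SUCCEEDED" or "FAILED"
    export const createTime = batch.createTime;
    export const uuid = batch.uuid;
    // labels is a string-to-string map.
    export const labelKeys = batch.labels.apply(l => Object.keys(l ?? {}));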

    Supporting Types

    EnvironmentConfigResponse

    ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    ExecutionConfig ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    PeripheralsConfig PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    executionConfig ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    peripheralsConfig PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    executionConfig ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    peripheralsConfig PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    execution_config ExecutionConfigResponse
    Optional. Execution configuration for a workload.
    peripherals_config PeripheralsConfigResponse
    Optional. Peripherals configuration that the workload has access to.
    executionConfig Property Map
    Optional. Execution configuration for a workload.
    peripheralsConfig Property Map
    Optional. Peripherals configuration that the workload has access to.

    ExecutionConfigResponse

    IdleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    KmsKey string
    Optional. The Cloud KMS key to use for encryption.
    NetworkTags List<string>
    Optional. Tags used for network traffic control.
    NetworkUri string
    Optional. Network URI to connect workload to.
    ServiceAccount string
    Optional. Service account used to execute the workload.
    StagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    SubnetworkUri string
    Optional. Subnetwork URI to connect workload to.
    Ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    IdleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    KmsKey string
    Optional. The Cloud KMS key to use for encryption.
    NetworkTags []string
    Optional. Tags used for network traffic control.
    NetworkUri string
    Optional. Network URI to connect workload to.
    ServiceAccount string
    Optional. Service account used to execute the workload.
    StagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    SubnetworkUri string
    Optional. Subnetwork URI to connect workload to.
    Ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl String
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey String
    Optional. The Cloud KMS key to use for encryption.
    networkTags List<String>
    Optional. Tags used for network traffic control.
    networkUri String
    Optional. Network URI to connect workload to.
    serviceAccount String
    Optional. Service account used to execute the workload.
    stagingBucket String
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri String
    Optional. Subnetwork URI to connect workload to.
    ttl String
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl string
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey string
    Optional. The Cloud KMS key to use for encryption.
    networkTags string[]
    Optional. Tags used for network traffic control.
    networkUri string
    Optional. Network URI to connect workload to.
    serviceAccount string
    Optional. Service account used to execute the workload.
    stagingBucket string
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri string
    Optional. Subnetwork URI to connect workload to.
    ttl string
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idle_ttl str
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kms_key str
    Optional. The Cloud KMS key to use for encryption.
    network_tags Sequence[str]
    Optional. Tags used for network traffic control.
    network_uri str
    Optional. Network URI to connect workload to.
    service_account str
    Optional. Service account used to execute the workload.
    staging_bucket str
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetwork_uri str
    Optional. Subnetwork URI to connect workload to.
    ttl str
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    idleTtl String
    Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
    kmsKey String
    Optional. The Cloud KMS key to use for encryption.
    networkTags List<String>
    Optional. Tags used for network traffic control.
    networkUri String
    Optional. Network URI to connect workload to.
    serviceAccount String
    Optional. Service account used to execute the workload.
    stagingBucket String
    Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    subnetworkUri String
    Optional. Subnetwork URI to connect workload to.
    ttl String
    Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
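
    The ttl and idleTtl values use the proto3 JSON form of Duration, i.e. a decimal number of seconds with an "s" suffix (for example "3600s"). As a sketch, reusing the placeholder identifiers from the earlier examples, the batch's ttl can be converted to a plain number of seconds like this:

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "my-batch-id",   // placeholder
        location: "us-central1",  // placeholder
        project: "my-project",    // placeholder
    });

    // executionConfig.ttl is a Duration string such as "14400s".
    export const ttlSeconds = batch.environmentConfig.apply(env => {
        const ttl = env?.executionConfig?.ttl;
        return ttl ? parseFloat(ttl.replace(/s$/, "")) : undefined;
    });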

    PeripheralsConfigResponse

    MetastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    MetastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    SparkHistoryServerConfig SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastoreService String
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastoreService string
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastore_service str
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    spark_history_server_config SparkHistoryServerConfigResponse
    Optional. The Spark History Server configuration for the workload.
    metastoreService String
    Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
    sparkHistoryServerConfig Property Map
    Optional. The Spark History Server configuration for the workload.

    PyPiRepositoryConfigResponse

    PypiRepository string
    Optional. PyPi repository address
    PypiRepository string
    Optional. PyPi repository address
    pypiRepository String
    Optional. PyPi repository address
    pypiRepository string
    Optional. PyPi repository address
    pypi_repository str
    Optional. PyPi repository address
    pypiRepository String
    Optional. PyPi repository address

    PySparkBatchResponse

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
    PythonFileUris List<string>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
    PythonFileUris []string
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    mainPythonFileUri string
    The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
    pythonFileUris string[]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    main_python_file_uri str
    The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
    python_file_uris Sequence[str]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

    RepositoryConfigResponse

    PypiRepositoryConfig PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypi_repository_config PyPiRepositoryConfigResponse
    Optional. Configuration for PyPi repository.
    pypiRepositoryConfig Property Map
    Optional. Configuration for PyPi repository.

    RuntimeConfigResponse

    ContainerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse
    Optional. Dependency repository configuration.
    Version string
    Optional. Version of the batch runtime.
    ContainerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    Properties map[string]string
    Optional. A mapping of property names to values, which are used to configure workload execution.
    RepositoryConfig RepositoryConfigResponse
    Optional. Dependency repository configuration.
    Version string
    Optional. Version of the batch runtime.
    containerImage String
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Map<String,String>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig RepositoryConfigResponse
    Optional. Dependency repository configuration.
    version String
    Optional. Version of the batch runtime.
    containerImage string
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig RepositoryConfigResponse
    Optional. Dependency repository configuration.
    version string
    Optional. Version of the batch runtime.
    container_image str
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repository_config RepositoryConfigResponse
    Optional. Dependency repository configuration.
    version str
    Optional. Version of the batch runtime.
    containerImage String
    Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
    properties Map<String>
    Optional. A mapping of property names to values, which are used to configure workload execution.
    repositoryConfig Property Map
    Optional. Dependency repository configuration.
    version String
    Optional. Version of the batch runtime.

    RuntimeInfoResponse

    ApproximateUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageMetricsResponse
    Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
    CurrentUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageSnapshotResponse
    Snapshot of current workload resource usage.
    DiagnosticOutputUri string
    A URI pointing to the location of the diagnostics tarball.
    Endpoints Dictionary<string, string>
    Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
    OutputUri string
    A URI pointing to the location of the stdout and stderr of the workload.
    ApproximateUsage UsageMetricsResponse
    Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
    CurrentUsage UsageSnapshotResponse
    Snapshot of current workload resource usage.
    DiagnosticOutputUri string
    A URI pointing to the location of the diagnostics tarball.
    Endpoints map[string]string
    Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
    OutputUri string
    A URI pointing to the location of the stdout and stderr of the workload.
    approximateUsage UsageMetricsResponse
    Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
    currentUsage UsageSnapshotResponse
    Snapshot of current workload resource usage.
    diagnosticOutputUri String
    A URI pointing to the location of the diagnostics tarball.
    endpoints Map<String,String>
    Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
    outputUri String
    A URI pointing to the location of the stdout and stderr of the workload.
    approximateUsage UsageMetricsResponse
    Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
    currentUsage UsageSnapshotResponse
    Snapshot of current workload resource usage.
    diagnosticOutputUri string
    A URI pointing to the location of the diagnostics tarball.
    endpoints {[key: string]: string}
    Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
    outputUri string
    A URI pointing to the location of the stdout and stderr of the workload.
    approximate_usage UsageMetricsResponse
    Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
    current_usage UsageSnapshotResponse
    Snapshot of current workload resource usage.
    diagnostic_output_uri str
    A URI pointing to the location of the diagnostics tarball.
    endpoints Mapping[str, str]
    Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
    output_uri str
    A URI pointing to the location of the stdout and stderr of the workload.
    approximateUsage Property Map
    Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
    currentUsage Property Map
    Snapshot of current workload resource usage.
    diagnosticOutputUri String
    A URI pointing to the location of the diagnostics tarball.
    endpoints Map<String>
    Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
    outputUri String
    A URI pointing to the location of the stdout and stderr of the workload.
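
    As a sketch, the driver output location and any remote-access endpoints reported in runtimeInfo can be read like this (identifiers are placeholders):

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "my-batch-id",   // placeholder
        location: "us-central1",  // placeholder
        project: "my-project",    // placeholder
    });

    // Where the workload's stdout and stderr were written.
    export const driverOutputUri = batch.runtimeInfo.outputUri;
    // Names of any remote-access endpoints (web UIs, APIs) exposed for the workload.
    export const endpointNames = batch.runtimeInfo.endpoints.apply(e => Object.keys(e ?? {}));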

    SparkBatchResponse

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    MainClass string
    Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
    MainJarFileUri string
    Optional. The HCFS URI of the jar file that contains the main class.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    MainClass string
    Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
    MainJarFileUri string
    Optional. The HCFS URI of the jar file that contains the main class.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    mainClass String
    Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
    mainJarFileUri String
    Optional. The HCFS URI of the jar file that contains the main class.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    mainClass string
    Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
    mainJarFileUri string
    Optional. The HCFS URI of the jar file that contains the main class.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    main_class str
    Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
    main_jar_file_uri str
    Optional. The HCFS URI of the jar file that contains the main class.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
    mainClass String
    Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
    mainJarFileUri String
    Optional. The HCFS URI of the jar file that contains the main class.

    SparkHistoryServerConfigResponse

    DataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    DataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster String
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster string
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataproc_cluster str
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
    dataprocCluster String
    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    SparkRBatchResponse

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    mainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    main_r_file_uri str
    The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

    SparkSqlBatchResponse

    JarFileUris List<string>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    QueryFileUri string
    The HCFS URI of the script that contains Spark SQL queries to execute.
    QueryVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    JarFileUris []string
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    QueryFileUri string
    The HCFS URI of the script that contains Spark SQL queries to execute.
    QueryVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    queryFileUri String
    The HCFS URI of the script that contains Spark SQL queries to execute.
    queryVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris string[]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    queryFileUri string
    The HCFS URI of the script that contains Spark SQL queries to execute.
    queryVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    query_file_uri str
    The HCFS URI of the script that contains Spark SQL queries to execute.
    query_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    queryFileUri String
    The HCFS URI of the script that contains Spark SQL queries to execute.
    queryVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
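
    For context, a minimal TypeScript sketch that echoes the Spark SQL script location and query variables of an existing batch; the identifiers passed to getBatchOutput are placeholders.

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "example-sparksql-batch",  // placeholder
        location: "us-central1",
        project: "my-project",
    });

    // Echo the script URI and render each query variable as the equivalent SET command.
    export const queryScript = batch.sparkSqlBatch.apply(b => b?.queryFileUri);
    export const queryVars = batch.sparkSqlBatch.apply(b =>
        Object.entries(b?.queryVariables ?? {}).map(([name, value]) => `SET ${name}="${value}";`));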

    StateHistoryResponse

    State string
    The state of the batch at this point in history.
    StateMessage string
    Details about the state at this point in history.
    StateStartTime string
    The time when the batch entered the historical state.
    State string
    The state of the batch at this point in history.
    StateMessage string
    Details about the state at this point in history.
    StateStartTime string
    The time when the batch entered the historical state.
    state String
    The state of the batch at this point in history.
    stateMessage String
    Details about the state at this point in history.
    stateStartTime String
    The time when the batch entered the historical state.
    state string
    The state of the batch at this point in history.
    stateMessage string
    Details about the state at this point in history.
    stateStartTime string
    The time when the batch entered the historical state.
    state str
    The state of the batch at this point in history.
    state_message str
    Details about the state at this point in history.
    state_start_time str
    The time when the batch entered the historical state.
    state String
    The state of the batch at this point in history.
    stateMessage String
    Details about the state at this point in history.
    stateStartTime String
    The time when the batch entered the historical state.
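
    The batch result carries these entries as a stateHistory list, so the transitions a batch went through can be rendered in order. A minimal sketch (placeholder identifiers):

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "example-batch",  // placeholder
        location: "us-central1",
        project: "my-project",
    });

    // Render each recorded transition as "<time>: <state> - <message>".
    export const transitions = batch.stateHistory.apply(history =>
        (history ?? []).map(h => `${h.stateStartTime}: ${h.state} - ${h.stateMessage ?? ""}`));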

    UsageMetricsResponse

    AcceleratorType string
    Optional. Accelerator type being used, if any
    MilliAcceleratorSeconds string
    Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    MilliDcuSeconds string
    Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    ShuffleStorageGbSeconds string
    Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    AcceleratorType string
    Optional. Accelerator type being used, if any
    MilliAcceleratorSeconds string
    Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    MilliDcuSeconds string
    Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    ShuffleStorageGbSeconds string
    Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    acceleratorType String
    Optional. Accelerator type being used, if any
    milliAcceleratorSeconds String
    Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milliDcuSeconds String
    Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffleStorageGbSeconds String
    Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    acceleratorType string
    Optional. Accelerator type being used, if any
    milliAcceleratorSeconds string
    Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milliDcuSeconds string
    Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffleStorageGbSeconds string
    Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    accelerator_type str
    Optional. Accelerator type being used, if any
    milli_accelerator_seconds str
    Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milli_dcu_seconds str
    Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffle_storage_gb_seconds str
    Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    acceleratorType String
    Optional. Accelerator type being used, if any
    milliAcceleratorSeconds String
    Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milliDcuSeconds String
    Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffleStorageGbSeconds String
    Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
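
    All of these metric values are returned as strings of integer counts, so they need to be parsed before doing arithmetic. The sketch below assumes the aggregate metrics are surfaced under the batch's runtimeInfo as approximateUsage (as in the Dataproc Batches API) and converts milliDCU-seconds into DCU-hours.

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "example-batch",  // placeholder
        location: "us-central1",
        project: "my-project",
    });

    // milliDCU-seconds -> DCU-hours: divide by 1000 (milli) and by 3600 (seconds per hour).
    // runtimeInfo.approximateUsage is assumed here; check the SDK's GetBatchResult shape.
    export const dcuHours = batch.runtimeInfo.apply(info => {
        const milliDcuSeconds = Number(info?.approximateUsage?.milliDcuSeconds ?? "0");
        return milliDcuSeconds / 1000 / 3600;
    });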

    UsageSnapshotResponse

    AcceleratorType string
    Optional. Accelerator type being used, if any
    MilliAccelerator string
    Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    MilliDcu string
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    MilliDcuPremium string
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    ShuffleStorageGb string
    Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    ShuffleStorageGbPremium string
    Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    SnapshotTime string
    Optional. The timestamp of the usage snapshot.
    AcceleratorType string
    Optional. Accelerator type being used, if any
    MilliAccelerator string
    Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    MilliDcu string
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    MilliDcuPremium string
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    ShuffleStorageGb string
    Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    ShuffleStorageGbPremium string
    Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    SnapshotTime string
    Optional. The timestamp of the usage snapshot.
    acceleratorType String
    Optional. Accelerator type being used, if any
    milliAccelerator String
    Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    milliDcu String
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milliDcuPremium String
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffleStorageGb String
    Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    shuffleStorageGbPremium String
    Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    snapshotTime String
    Optional. The timestamp of the usage snapshot.
    acceleratorType string
    Optional. Accelerator type being used, if any
    milliAccelerator string
    Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    milliDcu string
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milliDcuPremium string
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffleStorageGb string
    Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    shuffleStorageGbPremium string
    Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    snapshotTime string
    Optional. The timestamp of the usage snapshot.
    accelerator_type str
    Optional. Accelerator type being used, if any
    milli_accelerator str
    Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    milli_dcu str
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milli_dcu_premium str
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffle_storage_gb str
    Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    shuffle_storage_gb_premium str
    Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    snapshot_time str
    Optional. The timestamp of the usage snapshot.
    acceleratorType String
    Optional. Accelerator type being used, if any
    milliAccelerator String
    Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    milliDcu String
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    milliDcuPremium String
    Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
    shuffleStorageGb String
    Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    shuffleStorageGbPremium String
    Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
    snapshotTime String
    Optional. The timestamp of the usage snapshot.
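
    The point-in-time snapshot is likewise reported through the batch's runtime info (as currentUsage in the Dataproc Batches API). A small sketch, assuming that nesting and using placeholder identifiers:

    import * as google_native from "@pulumi/google-native";

    const batch = google_native.dataproc.v1.getBatchOutput({
        batchId: "example-batch",  // placeholder
        location: "us-central1",
        project: "my-project",
    });

    // Report the snapshot time and the current DCU allocation (milliDcu is one-thousandth of a DCU).
    // runtimeInfo.currentUsage is assumed here; check the SDK's GetBatchResult shape.
    export const usageNow = batch.runtimeInfo.apply(info => ({
        at: info?.currentUsage?.snapshotTime,
        dcus: Number(info?.currentUsage?.milliDcu ?? "0") / 1000,
    }));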

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0