Google Cloud Native v0.30.0 (Apr 14, 2023)

google-native.dataproc/v1.Batch


Creates a batch workload that executes asynchronously. Auto-naming is currently not supported for this resource.

Create Batch Resource

TypeScript:
new Batch(name: string, args?: BatchArgs, opts?: CustomResourceOptions);

Python:
@overload
def Batch(resource_name: str,
          opts: Optional[ResourceOptions] = None,
          batch_id: Optional[str] = None,
          environment_config: Optional[EnvironmentConfigArgs] = None,
          labels: Optional[Mapping[str, str]] = None,
          location: Optional[str] = None,
          project: Optional[str] = None,
          pyspark_batch: Optional[PySparkBatchArgs] = None,
          request_id: Optional[str] = None,
          runtime_config: Optional[RuntimeConfigArgs] = None,
          spark_batch: Optional[SparkBatchArgs] = None,
          spark_r_batch: Optional[SparkRBatchArgs] = None,
          spark_sql_batch: Optional[SparkSqlBatchArgs] = None)
@overload
def Batch(resource_name: str,
          args: Optional[BatchArgs] = None,
          opts: Optional[ResourceOptions] = None)

Go:
func NewBatch(ctx *Context, name string, args *BatchArgs, opts ...ResourceOption) (*Batch, error)

C#:
public Batch(string name, BatchArgs? args = null, CustomResourceOptions? opts = null)

Java:
public Batch(String name, BatchArgs args)
public Batch(String name, BatchArgs args, CustomResourceOptions options)

YAML:
type: google-native:dataproc/v1:Batch
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

TypeScript:
name string
The unique name of the resource.
args BatchArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.

Python:
resource_name str
The unique name of the resource.
args BatchArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.

Go:
ctx Context
Context object for the current deployment.
name string
The unique name of the resource.
args BatchArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.

C#:
name string
The unique name of the resource.
args BatchArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.

Java:
name String
The unique name of the resource.
args BatchArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.
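
Example: Create a Batch (TypeScript)

A minimal sketch of submitting a PySpark batch workload. The project is taken from provider configuration; the bucket, script URI, and Spark property values are hypothetical placeholders, not values from this reference.

import * as google from "@pulumi/google-native";

// Submit a serverless PySpark workload to the us-central1 region.
const batch = new google.dataproc.v1.Batch("example-batch", {
    location: "us-central1",
    pysparkBatch: {
        mainPythonFileUri: "gs://my-bucket/jobs/word_count.py", // hypothetical script
        args: ["gs://my-bucket/input/", "gs://my-bucket/output/"],
    },
    runtimeConfig: {
        // Spark properties are passed through to the runtime.
        properties: { "spark.executor.instances": "2" },
    },
});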

Batch Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

The Batch resource accepts the following input properties:

C#:
BatchId string

Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.

EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EnvironmentConfigArgs

Optional. Environment configuration for the batch execution.

Labels Dictionary<string, string>

Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.

Location string
Project string
PysparkBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.PySparkBatchArgs

Optional. PySpark batch config.

RequestId string

Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest messages (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest) with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RuntimeConfigArgs

Optional. Runtime configuration for the batch execution.

SparkBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkBatchArgs

Optional. Spark batch config.

SparkRBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkRBatchArgs

Optional. SparkR batch config.

SparkSqlBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkSqlBatchArgs

Optional. SparkSql batch config.

Go:
BatchId string

Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.

EnvironmentConfig EnvironmentConfigArgs

Optional. Environment configuration for the batch execution.

Labels map[string]string

Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.

Location string
Project string
PysparkBatch PySparkBatchArgs

Optional. PySpark batch config.

RequestId string

Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest messages (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest) with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

RuntimeConfig RuntimeConfigArgs

Optional. Runtime configuration for the batch execution.

SparkBatch SparkBatchArgs

Optional. Spark batch config.

SparkRBatch SparkRBatchArgs

Optional. SparkR batch config.

SparkSqlBatch SparkSqlBatchArgs

Optional. SparkSql batch config.

Java:
batchId String

Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.

environmentConfig EnvironmentConfigArgs

Optional. Environment configuration for the batch execution.

labels Map<String,String>

Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.

location String
project String
pysparkBatch PySparkBatchArgs

Optional. PySpark batch config.

requestId String

Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest messages (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest) with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

runtimeConfig RuntimeConfigArgs

Optional. Runtime configuration for the batch execution.

sparkBatch SparkBatchArgs

Optional. Spark batch config.

sparkRBatch SparkRBatchArgs

Optional. SparkR batch config.

sparkSqlBatch SparkSqlBatchArgs

Optional. SparkSql batch config.

TypeScript:
batchId string

Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.

environmentConfig EnvironmentConfigArgs

Optional. Environment configuration for the batch execution.

labels {[key: string]: string}

Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.

location string
project string
pysparkBatch PySparkBatchArgs

Optional. PySpark batch config.

requestId string

Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest messages (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest) with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

runtimeConfig RuntimeConfigArgs

Optional. Runtime configuration for the batch execution.

sparkBatch SparkBatchArgs

Optional. Spark batch config.

sparkRBatch SparkRBatchArgs

Optional. SparkR batch config.

sparkSqlBatch SparkSqlBatchArgs

Optional. SparkSql batch config.

Python:
batch_id str

Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.

environment_config EnvironmentConfigArgs

Optional. Environment configuration for the batch execution.

labels Mapping[str, str]

Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.

location str
project str
pyspark_batch PySparkBatchArgs

Optional. PySpark batch config.

request_id str

Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest messages (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest) with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

runtime_config RuntimeConfigArgs

Optional. Runtime configuration for the batch execution.

spark_batch SparkBatchArgs

Optional. Spark batch config.

spark_r_batch SparkRBatchArgs

Optional. SparkR batch config.

spark_sql_batch SparkSqlBatchArgs

Optional. SparkSql batch config.

YAML:
batchId String

Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.

environmentConfig Property Map

Optional. Environment configuration for the batch execution.

labels Map<String>

Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.

location String
project String
pysparkBatch Property Map

Optional. PySpark batch config.

requestId String

Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest messages (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest) with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

runtimeConfig Property Map

Optional. Runtime configuration for the batch execution.

sparkBatch Property Map

Optional. Spark batch config.

sparkRBatch Property Map

Optional. SparkR batch config.

sparkSqlBatch Property Map

Optional. SparkSql batch config.
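
Example: IDs and labels (TypeScript)

A short sketch showing the inputs that carry format constraints. The batch ID, labels, and query URI are hypothetical, and the fixed UUID is only an illustration of the request_id recommendation.

import * as google from "@pulumi/google-native";

const reportBatch = new google.dataproc.v1.Batch("report-batch", {
    location: "us-central1",
    batchId: "nightly-report-0414",                    // 4-63 chars from /[a-z][0-9]-/
    labels: { team: "data-eng", env: "prod" },         // RFC 1035 keys/values, at most 32 labels
    requestId: "7f2d9a4e-1b2c-4d3e-8f3a-9e8d7c6b5a40", // letters, digits, _, -; max 40 chars
    sparkSqlBatch: {
        queryFileUri: "gs://my-bucket/queries/report.sql", // hypothetical
    },
});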

Outputs

All input properties are implicitly available as output properties. Additionally, the Batch resource produces the following output properties:

C#:
CreateTime string

The time when the batch was created.

Creator string

The email address of the user who created the batch.

Id string

The provider-assigned unique ID for this managed resource.

Name string

The resource name of the batch.

Operation string

The resource name of the operation associated with this batch.

RuntimeInfo Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeInfoResponse

Runtime information about batch execution.

State string

The state of the batch.

StateHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.StateHistoryResponse>

Historical state information for the batch.

StateMessage string

Batch state details, such as a failure description if the state is FAILED.

StateTime string

The time when the batch entered its current state.

Uuid string

A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

Go:
CreateTime string

The time when the batch was created.

Creator string

The email address of the user who created the batch.

Id string

The provider-assigned unique ID for this managed resource.

Name string

The resource name of the batch.

Operation string

The resource name of the operation associated with this batch.

RuntimeInfo RuntimeInfoResponse

Runtime information about batch execution.

State string

The state of the batch.

StateHistory []StateHistoryResponse

Historical state information for the batch.

StateMessage string

Batch state details, such as a failure description if the state is FAILED.

StateTime string

The time when the batch entered its current state.

Uuid string

A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

Java:
createTime String

The time when the batch was created.

creator String

The email address of the user who created the batch.

id String

The provider-assigned unique ID for this managed resource.

name String

The resource name of the batch.

operation String

The resource name of the operation associated with this batch.

runtimeInfo RuntimeInfoResponse

Runtime information about batch execution.

state String

The state of the batch.

stateHistory List<StateHistoryResponse>

Historical state information for the batch.

stateMessage String

Batch state details, such as a failure description if the state is FAILED.

stateTime String

The time when the batch entered its current state.

uuid String

A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

TypeScript:
createTime string

The time when the batch was created.

creator string

The email address of the user who created the batch.

id string

The provider-assigned unique ID for this managed resource.

name string

The resource name of the batch.

operation string

The resource name of the operation associated with this batch.

runtimeInfo RuntimeInfoResponse

Runtime information about batch execution.

state string

The state of the batch.

stateHistory StateHistoryResponse[]

Historical state information for the batch.

stateMessage string

Batch state details, such as a failure description if the state is FAILED.

stateTime string

The time when the batch entered its current state.

uuid string

A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

Python:
create_time str

The time when the batch was created.

creator str

The email address of the user who created the batch.

id str

The provider-assigned unique ID for this managed resource.

name str

The resource name of the batch.

operation str

The resource name of the operation associated with this batch.

runtime_info RuntimeInfoResponse

Runtime information about batch execution.

state str

The state of the batch.

state_history Sequence[StateHistoryResponse]

Historical state information for the batch.

state_message str

Batch state details, such as a failure description if the state is FAILED.

state_time str

The time when the batch entered its current state.

uuid str

A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

YAML:
createTime String

The time when the batch was created.

creator String

The email address of the user who created the batch.

id String

The provider-assigned unique ID for this managed resource.

name String

The resource name of the batch.

operation String

The resource name of the operation associated with this batch.

runtimeInfo Property Map

Runtime information about batch execution.

state String

The state of the batch.

stateHistory List<Property Map>

Historical state information for the batch.

stateMessage String

Batch state details, such as a failure description if the state is FAILED.

stateTime String

The time when the batch entered its current state.

uuid String

A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
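
Example: Reading output properties (TypeScript)

A sketch of exporting the service-populated outputs after creation. The main class and jar path are the stock Spark examples commonly used in Dataproc samples and may differ per runtime image.

import * as google from "@pulumi/google-native";

const piBatch = new google.dataproc.v1.Batch("pi-batch", {
    location: "us-central1",
    sparkBatch: {
        mainClass: "org.apache.spark.examples.SparkPi",
        jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        args: ["1000"],
    },
});

// Outputs resolve once the service has created the batch.
export const batchName = piBatch.name;       // full resource name of the batch
export const batchState = piBatch.state;     // e.g. PENDING, RUNNING, SUCCEEDED, FAILED
export const batchUuid = piBatch.uuid;       // service-generated UUID
export const createdAt = piBatch.createTime;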

Supporting Types

EnvironmentConfig

C#:
ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfig

Optional. Execution configuration for a workload.

PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfig

Optional. Peripherals configuration that the workload has access to.

Go:
ExecutionConfig ExecutionConfig

Optional. Execution configuration for a workload.

PeripheralsConfig PeripheralsConfig

Optional. Peripherals configuration that the workload has access to.

Java:
executionConfig ExecutionConfig

Optional. Execution configuration for a workload.

peripheralsConfig PeripheralsConfig

Optional. Peripherals configuration that the workload has access to.

TypeScript:
executionConfig ExecutionConfig

Optional. Execution configuration for a workload.

peripheralsConfig PeripheralsConfig

Optional. Peripherals configuration that the workload has access to.

Python:
execution_config ExecutionConfig

Optional. Execution configuration for a workload.

peripherals_config PeripheralsConfig

Optional. Peripherals configuration that the workload has access to.

YAML:
executionConfig Property Map

Optional. Execution configuration for a workload.

peripheralsConfig Property Map

Optional. Peripherals configuration that the workload has access to.

EnvironmentConfigResponse

C#:
ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.ExecutionConfigResponse

Optional. Execution configuration for a workload.

PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.PeripheralsConfigResponse

Optional. Peripherals configuration that the workload has access to.

Go:
ExecutionConfig ExecutionConfigResponse

Optional. Execution configuration for a workload.

PeripheralsConfig PeripheralsConfigResponse

Optional. Peripherals configuration that the workload has access to.

Java:
executionConfig ExecutionConfigResponse

Optional. Execution configuration for a workload.

peripheralsConfig PeripheralsConfigResponse

Optional. Peripherals configuration that the workload has access to.

TypeScript:
executionConfig ExecutionConfigResponse

Optional. Execution configuration for a workload.

peripheralsConfig PeripheralsConfigResponse

Optional. Peripherals configuration that the workload has access to.

Python:
execution_config ExecutionConfigResponse

Optional. Execution configuration for a workload.

peripherals_config PeripheralsConfigResponse

Optional. Peripherals configuration that the workload has access to.

YAML:
executionConfig Property Map

Optional. Execution configuration for a workload.

peripheralsConfig Property Map

Optional. Peripherals configuration that the workload has access to.

ExecutionConfig

C#:
IdleTtl string

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

KmsKey string

Optional. The Cloud KMS key to use for encryption.

NetworkTags List<string>

Optional. Tags used for network traffic control.

NetworkUri string

Optional. Network URI to connect workload to.

ServiceAccount string

Optional. Service account used to execute the workload.

StagingBucket string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

SubnetworkUri string

Optional. Subnetwork URI to connect workload to.

Ttl string

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Go:
IdleTtl string

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

KmsKey string

Optional. The Cloud KMS key to use for encryption.

NetworkTags []string

Optional. Tags used for network traffic control.

NetworkUri string

Optional. Network URI to connect workload to.

ServiceAccount string

Optional. Service account used to execute the workload.

StagingBucket string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

SubnetworkUri string

Optional. Subnetwork URI to connect workload to.

Ttl string

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Java:
idleTtl String

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kmsKey String

Optional. The Cloud KMS key to use for encryption.

networkTags List<String>

Optional. Tags used for network traffic control.

networkUri String

Optional. Network URI to connect workload to.

serviceAccount String

Optional. Service account used to execute the workload.

stagingBucket String

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetworkUri String

Optional. Subnetwork URI to connect workload to.

ttl String

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

TypeScript:
idleTtl string

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kmsKey string

Optional. The Cloud KMS key to use for encryption.

networkTags string[]

Optional. Tags used for network traffic control.

networkUri string

Optional. Network URI to connect workload to.

serviceAccount string

Optional. Service account used to execute the workload.

stagingBucket string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetworkUri string

Optional. Subnetwork URI to connect workload to.

ttl string

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Python:
idle_ttl str

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kms_key str

Optional. The Cloud KMS key to use for encryption.

network_tags Sequence[str]

Optional. Tags used for network traffic control.

network_uri str

Optional. Network URI to connect workload to.

service_account str

Optional. Service account used to execute the workload.

staging_bucket str

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetwork_uri str

Optional. Subnetwork URI to connect workload to.

ttl str

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

YAML:
idleTtl String

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kmsKey String

Optional. The Cloud KMS key to use for encryption.

networkTags List<String>

Optional. Tags used for network traffic control.

networkUri String

Optional. Network URI to connect workload to.

serviceAccount String

Optional. Service account used to execute the workload.

stagingBucket String

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetworkUri String

Optional. Subnetwork URI to connect workload to.

ttl String

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
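
Example: ExecutionConfig for a batch (TypeScript)

A sketch of bounding a batch workload's lifetime and network placement. idleTtl is omitted because it cannot be set on a batch workload; the service account, subnetwork, tag, and bucket names are placeholders.

import * as google from "@pulumi/google-native";

const boundedBatch = new google.dataproc.v1.Batch("bounded-batch", {
    location: "us-central1",
    pysparkBatch: { mainPythonFileUri: "gs://my-bucket/jobs/etl.py" }, // hypothetical
    environmentConfig: {
        executionConfig: {
            ttl: "7200s",                        // Duration JSON form: terminate after 2 hours
            serviceAccount: "batch-runner@my-project.iam.gserviceaccount.com",
            subnetworkUri: "default",
            networkTags: ["dataproc-serverless"],
            stagingBucket: "my-staging-bucket",  // bucket name, not a gs:// URI
        },
    },
});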

ExecutionConfigResponse

C#:
IdleTtl string

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

KmsKey string

Optional. The Cloud KMS key to use for encryption.

NetworkTags List<string>

Optional. Tags used for network traffic control.

NetworkUri string

Optional. Network URI to connect workload to.

ServiceAccount string

Optional. Service account used to execute the workload.

StagingBucket string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

SubnetworkUri string

Optional. Subnetwork URI to connect workload to.

Ttl string

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Go:
IdleTtl string

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

KmsKey string

Optional. The Cloud KMS key to use for encryption.

NetworkTags []string

Optional. Tags used for network traffic control.

NetworkUri string

Optional. Network URI to connect workload to.

ServiceAccount string

Optional. Service account used to execute the workload.

StagingBucket string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

SubnetworkUri string

Optional. Subnetwork URI to connect workload to.

Ttl string

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Java:
idleTtl String

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kmsKey String

Optional. The Cloud KMS key to use for encryption.

networkTags List<String>

Optional. Tags used for network traffic control.

networkUri String

Optional. Network URI to connect workload to.

serviceAccount String

Optional. Service account used to execute the workload.

stagingBucket String

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetworkUri String

Optional. Subnetwork URI to connect workload to.

ttl String

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

TypeScript:
idleTtl string

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kmsKey string

Optional. The Cloud KMS key to use for encryption.

networkTags string[]

Optional. Tags used for network traffic control.

networkUri string

Optional. Network URI to connect workload to.

serviceAccount string

Optional. Service account used to execute the workload.

stagingBucket string

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetworkUri string

Optional. Subnetwork URI to connect workload to.

ttl string

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Python:
idle_ttl str

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kms_key str

Optional. The Cloud KMS key to use for encryption.

network_tags Sequence[str]

Optional. Tags used for network traffic control.

network_uri str

Optional. Network URI to connect workload to.

service_account str

Optional. Service account used to execute the workload.

staging_bucket str

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetwork_uri str

Optional. Subnetwork URI to connect workload to.

ttl str

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

YAML:
idleTtl String

Optional. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 4 hours if not set. If both ttl and idle_ttl are specified, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

kmsKey String

Optional. The Cloud KMS key to use for encryption.

networkTags List<String>

Optional. Tags used for network traffic control.

networkUri String

Optional. Network URI to connect workload to.

serviceAccount String

Optional. Service account used to execute the workload.

stagingBucket String

Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

subnetworkUri String

Optional. Subnetwork URI to connect workload to.

ttl String

Optional. The duration after which the workload will be terminated. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24h. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

PeripheralsConfig

C#:
MetastoreService string

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

Go:
MetastoreService string

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

SparkHistoryServerConfig SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

Java:
metastoreService String

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

sparkHistoryServerConfig SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

TypeScript:
metastoreService string

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

sparkHistoryServerConfig SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

Python:
metastore_service str

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

spark_history_server_config SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

YAML:
metastoreService String

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

sparkHistoryServerConfig Property Map

Optional. The Spark History Server configuration for the workload.
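
Example: PeripheralsConfig (TypeScript)

A sketch of attaching a Dataproc Metastore service and a Spark History Server to a workload. Both resource names are placeholders, and dataprocCluster is the history-server field assumed from the provider schema rather than documented in this section.

import * as google from "@pulumi/google-native";

const batchWithPeripherals = new google.dataproc.v1.Batch("peripherals-demo", {
    location: "us-central1",
    sparkSqlBatch: { queryFileUri: "gs://my-bucket/queries/daily.sql" }, // hypothetical
    environmentConfig: {
        peripheralsConfig: {
            metastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
            sparkHistoryServerConfig: {
                // Existing Dataproc cluster hosting the Spark History Server (assumed field).
                dataprocCluster: "projects/my-project/regions/us-central1/clusters/history-server",
            },
        },
    },
});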

PeripheralsConfigResponse

C#:
MetastoreService string

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

Go:
MetastoreService string

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

SparkHistoryServerConfig SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

Java:
metastoreService String

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

sparkHistoryServerConfig SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

TypeScript:
metastoreService string

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

sparkHistoryServerConfig SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

Python:
metastore_service str

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

spark_history_server_config SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

YAML:
metastoreService String

Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]

sparkHistoryServerConfig Property Map

Optional. The Spark History Server configuration for the workload.

PySparkBatch

C#:
MainPythonFileUri string

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

ArchiveUris List<string>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args List<string>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris List<string>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris List<string>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

PythonFileUris List<string>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Go:
MainPythonFileUri string

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

ArchiveUris []string

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args []string

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris []string

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris []string

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

PythonFileUris []string

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Java:
mainPythonFileUri String

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

pythonFileUris List<String>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

TypeScript:
mainPythonFileUri string

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

archiveUris string[]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args string[]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris string[]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris string[]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

pythonFileUris string[]

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Python:
main_python_file_uri str

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

archive_uris Sequence[str]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args Sequence[str]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

file_uris Sequence[str]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jar_file_uris Sequence[str]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

python_file_uris Sequence[str]

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

YAML:
mainPythonFileUri String

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

pythonFileUris List<String>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
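
Example: PySparkBatch fields (TypeScript)

A sketch exercising each PySparkBatch input; every gs:// URI is a placeholder. Driver arguments that collide with batch properties (such as --conf) are deliberately avoided.

import * as google from "@pulumi/google-native";

const pysparkDemo = new google.dataproc.v1.Batch("pyspark-demo", {
    location: "us-central1",
    pysparkBatch: {
        mainPythonFileUri: "gs://my-bucket/jobs/main.py",     // must be a .py file
        pythonFileUris: ["gs://my-bucket/jobs/helpers.zip"],  // .py, .egg, or .zip
        jarFileUris: ["gs://my-bucket/libs/connector.jar"],   // added to driver/task classpath
        fileUris: ["gs://my-bucket/config/settings.json"],    // placed in each executor's working dir
        archiveUris: ["gs://my-bucket/envs/venv.tar.gz"],     // extracted in each executor's working dir
        args: ["--date", "2023-04-14"],
    },
});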

PySparkBatchResponse

ArchiveUris List<string>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args List<string>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris List<string>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris List<string>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

MainPythonFileUri string

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

PythonFileUris List<string>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

ArchiveUris []string

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args []string

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris []string

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris []string

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

MainPythonFileUri string

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

PythonFileUris []string

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainPythonFileUri String

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

pythonFileUris List<String>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

archiveUris string[]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args string[]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris string[]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris string[]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainPythonFileUri string

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

pythonFileUris string[]

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

archive_uris Sequence[str]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args Sequence[str]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

file_uris Sequence[str]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jar_file_uris Sequence[str]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

main_python_file_uri str

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

python_file_uris Sequence[str]

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainPythonFileUri String

The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.

pythonFileUris List<String>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

RuntimeConfig

ContainerImage string

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

Properties Dictionary<string, string>

Optional. A mapping of property names to values, which are used to configure workload execution.

Version string

Optional. Version of the batch runtime.

ContainerImage string

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

Properties map[string]string

Optional. A mapping of property names to values, which are used to configure workload execution.

Version string

Optional. Version of the batch runtime.

containerImage String

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties Map<String,String>

Optional. A mapping of property names to values, which are used to configure workload execution.

version String

Optional. Version of the batch runtime.

containerImage string

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties {[key: string]: string}

Optional. A mapping of property names to values, which are used to configure workload execution.

version string

Optional. Version of the batch runtime.

container_image str

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties Mapping[str, str]

Optional. A mapping of property names to values, which are used to configure workload execution.

version str

Optional. Version of the batch runtime.

containerImage String

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties Map<String>

Optional. A mapping of property names to values, which are used to configure workload execution.

version String

Optional. Version of the batch runtime.
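As a hedged sketch, a RuntimeConfig in TypeScript could pin a runtime version and tune Spark through properties; the version string and property values below are illustrative, not recommendations.

import * as google_native from "@pulumi/google-native";

const tunedBatch = new google_native.dataproc.v1.Batch("runtime-config-example", {
    location: "us-central1",
    sparkBatch: {
        mainJarFileUri: "gs://example-bucket/jars/app.jar", // placeholder jar with a Main-Class manifest
    },
    runtimeConfig: {
        version: "2.0",                      // batch runtime version (illustrative)
        properties: {
            "spark.executor.instances": "4", // standard Spark properties
            "spark.executor.memory": "4g",
        },
    },
});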

RuntimeConfigResponse

ContainerImage string

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

Properties Dictionary<string, string>

Optional. A mapping of property names to values, which are used to configure workload execution.

Version string

Optional. Version of the batch runtime.

ContainerImage string

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

Properties map[string]string

Optional. A mapping of property names to values, which are used to configure workload execution.

Version string

Optional. Version of the batch runtime.

containerImage String

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties Map<String,String>

Optional. A mapping of property names to values, which are used to configure workload execution.

version String

Optional. Version of the batch runtime.

containerImage string

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties {[key: string]: string}

Optional. A mapping of property names to values, which are used to configure workload execution.

version string

Optional. Version of the batch runtime.

container_image str

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties Mapping[str, str]

Optional. A mapping of property names to values, which are used to configure workload execution.

version str

Optional. Version of the batch runtime.

containerImage String

Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.

properties Map<String>

Optional. A mapping of property names to values, which are used to configure workload execution.

version String

Optional. Version of the batch runtime.

RuntimeInfoResponse

ApproximateUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageMetricsResponse

Approximate workload resource usage calculated after the workload finishes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

CurrentUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageSnapshotResponse

Snapshot of current workload resource usage.

DiagnosticOutputUri string

A URI pointing to the location of the diagnostics tarball.

Endpoints Dictionary<string, string>

Map of remote access endpoints (such as web interfaces and APIs) to their URIs.

OutputUri string

A URI pointing to the location of the stdout and stderr of the workload.

ApproximateUsage UsageMetricsResponse

Approximate workload resource usage calculated after the workload finishes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

CurrentUsage UsageSnapshotResponse

Snapshot of current workload resource usage.

DiagnosticOutputUri string

A URI pointing to the location of the diagnostics tarball.

Endpoints map[string]string

Map of remote access endpoints (such as web interfaces and APIs) to their URIs.

OutputUri string

A URI pointing to the location of the stdout and stderr of the workload.

approximateUsage UsageMetricsResponse

Approximate workload resource usage calculated after the workload finishes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

currentUsage UsageSnapshotResponse

Snapshot of current workload resource usage.

diagnosticOutputUri String

A URI pointing to the location of the diagnostics tarball.

endpoints Map<String,String>

Map of remote access endpoints (such as web interfaces and APIs) to their URIs.

outputUri String

A URI pointing to the location of the stdout and stderr of the workload.

approximateUsage UsageMetricsResponse

Approximate workload resource usage calculated after the workload finishes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

currentUsage UsageSnapshotResponse

Snapshot of current workload resource usage.

diagnosticOutputUri string

A URI pointing to the location of the diagnostics tarball.

endpoints {[key: string]: string}

Map of remote access endpoints (such as web interfaces and APIs) to their URIs.

outputUri string

A URI pointing to the location of the stdout and stderr of the workload.

approximate_usage UsageMetricsResponse

Approximate workload resource usage calculated after the workload finishes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

current_usage UsageSnapshotResponse

Snapshot of current workload resource usage.

diagnostic_output_uri str

A URI pointing to the location of the diagnostics tarball.

endpoints Mapping[str, str]

Map of remote access endpoints (such as web interfaces and APIs) to their URIs.

output_uri str

A URI pointing to the location of the stdout and stderr of the workload.

approximateUsage Property Map

Approximate workload resource usage calculated after the workload finishes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

currentUsage Property Map

Snapshot of current workload resource usage.

diagnosticOutputUri String

A URI pointing to the location of the diagnostics tarball.

endpoints Map<String>

Map of remote access endpoints (such as web interfaces and APIs) to their URIs.

outputUri String

A URI pointing to the location of the stdout and stderr of the workload.
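RuntimeInfoResponse is output-only: the service populates it as the workload runs. The following is a sketch of surfacing the useful pieces as stack outputs; the batch definition and URIs are placeholders.

import * as google_native from "@pulumi/google-native";

const batch = new google_native.dataproc.v1.Batch("runtime-info-example", {
    location: "us-central1",
    pysparkBatch: { mainPythonFileUri: "gs://example-bucket/jobs/main.py" }, // placeholder
});

export const driverOutputUri = batch.runtimeInfo.apply(info => info.outputUri); // stdout/stderr location
export const remoteEndpoints = batch.runtimeInfo.apply(info => info.endpoints); // web UIs and APIs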

SparkBatch

ArchiveUris List<string>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args List<string>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris List<string>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris List<string>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

MainClass string

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

MainJarFileUri string

Optional. The HCFS URI of the jar file that contains the main class.

ArchiveUris []string

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args []string

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris []string

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris []string

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

MainClass string

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

MainJarFileUri string

Optional. The HCFS URI of the jar file that contains the main class.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainClass String

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

mainJarFileUri String

Optional. The HCFS URI of the jar file that contains the main class.

archiveUris string[]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args string[]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris string[]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris string[]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainClass string

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

mainJarFileUri string

Optional. The HCFS URI of the jar file that contains the main class.

archive_uris Sequence[str]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args Sequence[str]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

file_uris Sequence[str]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jar_file_uris Sequence[str]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

main_class str

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

main_jar_file_uri str

Optional. The HCFS URI of the jar file that contains the main class.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainClass String

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

mainJarFileUri String

Optional. The HCFS URI of the jar file that contains the main class.
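For a JVM workload, supply either mainClass (with the containing jar on the classpath via jarFileUris) or mainJarFileUri. The sketch below uses mainClass; all names and URIs are placeholders.

import * as google_native from "@pulumi/google-native";

const sparkJob = new google_native.dataproc.v1.Batch("spark-example", {
    location: "us-central1",
    sparkBatch: {
        mainClass: "com.example.WordCount",                      // driver main class
        jarFileUris: ["gs://example-bucket/jars/wordcount.jar"], // jar that contains the class
        args: ["gs://example-bucket/input/", "gs://example-bucket/output/"],
    },
});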

SparkBatchResponse

ArchiveUris List<string>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args List<string>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris List<string>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris List<string>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

MainClass string

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

MainJarFileUri string

Optional. The HCFS URI of the jar file that contains the main class.

ArchiveUris []string

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args []string

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris []string

Optional. HCFS URIs of files to be placed in the working directory of each executor.

JarFileUris []string

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

MainClass string

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

MainJarFileUri string

Optional. The HCFS URI of the jar file that contains the main class.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainClass String

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

mainJarFileUri String

Optional. The HCFS URI of the jar file that contains the main class.

archiveUris string[]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args string[]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris string[]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris string[]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainClass string

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

mainJarFileUri string

Optional. The HCFS URI of the jar file that contains the main class.

archive_uris Sequence[str]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args Sequence[str]

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

file_uris Sequence[str]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jar_file_uris Sequence[str]

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

main_class str

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

main_jar_file_uri str

Optional. The HCFS URI of the jar file that contains the main class.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

jarFileUris List<String>

Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.

mainClass String

Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.

mainJarFileUri String

Optional. The HCFS URI of the jar file that contains the main class.

SparkHistoryServerConfig

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataproc_cluster str

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
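In this resource, SparkHistoryServerConfig is nested under environmentConfig.peripheralsConfig. A sketch pointing the batch at an existing cluster acting as a persistent history server; the project, region, cluster, and bucket names are placeholders.

import * as google_native from "@pulumi/google-native";

const batchWithPhs = new google_native.dataproc.v1.Batch("phs-example", {
    location: "us-central1",
    pysparkBatch: { mainPythonFileUri: "gs://example-bucket/jobs/main.py" }, // placeholder
    environmentConfig: {
        peripheralsConfig: {
            sparkHistoryServerConfig: {
                dataprocCluster: "projects/example-project/regions/us-central1/clusters/phs-cluster",
            },
        },
    },
});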

SparkHistoryServerConfigResponse

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataproc_cluster str

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

SparkRBatch

MainRFileUri string

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

ArchiveUris List<string>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args List<string>

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris List<string>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

MainRFileUri string

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

ArchiveUris []string

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args []string

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris []string

Optional. HCFS URIs of files to be placed in the working directory of each executor.

mainRFileUri String

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

mainRFileUri string

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archiveUris string[]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args string[]

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris string[]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

main_r_file_uri str

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archive_uris Sequence[str]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args Sequence[str]

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

file_uris Sequence[str]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

mainRFileUri String

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.
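A SparkR batch needs only mainRFileUri (a .R or .r file); the sketch below adds optional data files and arguments, with placeholder URIs.

import * as google_native from "@pulumi/google-native";

const sparkRJob = new google_native.dataproc.v1.Batch("sparkr-example", {
    location: "us-central1",
    sparkRBatch: {
        mainRFileUri: "gs://example-bucket/jobs/analysis.R", // driver script (.R or .r)
        fileUris: ["gs://example-bucket/data/lookup.csv"],   // staged into each executor's working dir
        args: ["--iterations", "10"],
    },
});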

SparkRBatchResponse

ArchiveUris List<string>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args List<string>

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris List<string>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

MainRFileUri string

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

ArchiveUris []string

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Args []string

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

FileUris []string

Optional. HCFS URIs of files to be placed in the working directory of each executor.

MainRFileUri string

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

mainRFileUri String

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archiveUris string[]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args string[]

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris string[]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

mainRFileUri string

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archive_uris Sequence[str]

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args Sequence[str]

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

file_uris Sequence[str]

Optional. HCFS URIs of files to be placed in the working directory of each executor.

main_r_file_uri str

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

archiveUris List<String>

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args List<String>

Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.

fileUris List<String>

Optional. HCFS URIs of files to be placed in the working directory of each executor.

mainRFileUri String

The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

SparkSqlBatch

QueryFileUri string

The HCFS URI of the script that contains Spark SQL queries to execute.

JarFileUris List<string>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

QueryVariables Dictionary<string, string>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

QueryFileUri string

The HCFS URI of the script that contains Spark SQL queries to execute.

JarFileUris []string

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

QueryVariables map[string]string

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

queryFileUri String

The HCFS URI of the script that contains Spark SQL queries to execute.

jarFileUris List<String>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

queryVariables Map<String,String>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

queryFileUri string

The HCFS URI of the script that contains Spark SQL queries to execute.

jarFileUris string[]

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

queryVariables {[key: string]: string}

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

query_file_uri str

The HCFS URI of the script that contains Spark SQL queries to execute.

jar_file_uris Sequence[str]

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

query_variables Mapping[str, str]

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

queryFileUri String

The HCFS URI of the script that contains Spark SQL queries to execute.

jarFileUris List<String>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

queryVariables Map<String>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
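queryVariables behave as if each pair were declared with SET name="value"; before the script runs. A sketch with placeholder URIs and values:

import * as google_native from "@pulumi/google-native";

const sqlJob = new google_native.dataproc.v1.Batch("sparksql-example", {
    location: "us-central1",
    sparkSqlBatch: {
        queryFileUri: "gs://example-bucket/queries/daily_report.sql",
        queryVariables: {
            report_date: "2023-04-14",               // substituted into the script
            output_table: "analytics.daily_report",
        },
        jarFileUris: ["gs://example-bucket/jars/udfs.jar"], // optional UDF jars for the CLASSPATH
    },
});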

SparkSqlBatchResponse

JarFileUris List<string>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

QueryFileUri string

The HCFS URI of the script that contains Spark SQL queries to execute.

QueryVariables Dictionary<string, string>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

JarFileUris []string

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

QueryFileUri string

The HCFS URI of the script that contains Spark SQL queries to execute.

QueryVariables map[string]string

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

jarFileUris List<String>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

queryFileUri String

The HCFS URI of the script that contains Spark SQL queries to execute.

queryVariables Map<String,String>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

jarFileUris string[]

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

queryFileUri string

The HCFS URI of the script that contains Spark SQL queries to execute.

queryVariables {[key: string]: string}

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

jar_file_uris Sequence[str]

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

query_file_uri str

The HCFS URI of the script that contains Spark SQL queries to execute.

query_variables Mapping[str, str]

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

jarFileUris List<String>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

queryFileUri String

The HCFS URI of the script that contains Spark SQL queries to execute.

queryVariables Map<String>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

StateHistoryResponse

State string

The state of the batch at this point in history.

StateMessage string

Details about the state at this point in history.

StateStartTime string

The time when the batch entered the historical state.

State string

The state of the batch at this point in history.

StateMessage string

Details about the state at this point in history.

StateStartTime string

The time when the batch entered the historical state.

state String

The state of the batch at this point in history.

stateMessage String

Details about the state at this point in history.

stateStartTime String

The time when the batch entered the historical state.

state string

The state of the batch at this point in history.

stateMessage string

Details about the state at this point in history.

stateStartTime string

The time when the batch entered the historical state.

state str

The state of the batch at this point in history.

state_message str

Details about the state at this point in history.

state_start_time str

The time when the batch entered the historical state.

state String

The state of the batch at this point in history.

stateMessage String

Details about the state at this point in history.

stateStartTime String

The time when the batch entered the historical state.
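stateHistory is an output-only list of these objects, one per state the batch has passed through. A sketch that surfaces the transitions as a stack output; the batch definition is a placeholder.

import * as google_native from "@pulumi/google-native";

const batch = new google_native.dataproc.v1.Batch("state-history-example", {
    location: "us-central1",
    pysparkBatch: { mainPythonFileUri: "gs://example-bucket/jobs/main.py" }, // placeholder
});

export const stateTransitions = batch.stateHistory.apply(history =>
    (history ?? []).map(h => `${h.stateStartTime}: ${h.state}`));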

UsageMetricsResponse

MilliDcuSeconds string

Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

ShuffleStorageGbSeconds string

Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

MilliDcuSeconds string

Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

ShuffleStorageGbSeconds string

Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

milliDcuSeconds String

Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffleStorageGbSeconds String

Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

milliDcuSeconds string

Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffleStorageGbSeconds string

Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

milli_dcu_seconds str

Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffle_storage_gb_seconds str

Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

milliDcuSeconds String

Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffleStorageGbSeconds String

Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

UsageSnapshotResponse

MilliDcu string

Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

ShuffleStorageGb string

Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

SnapshotTime string

Optional. The timestamp of the usage snapshot.

MilliDcu string

Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

ShuffleStorageGb string

Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

SnapshotTime string

Optional. The timestamp of the usage snapshot.

milliDcu String

Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffleStorageGb String

Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

snapshotTime String

Optional. The timestamp of the usage snapshot.

milliDcu string

Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffleStorageGb string

Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

snapshotTime string

Optional. The timestamp of the usage snapshot.

milli_dcu str

Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffle_storage_gb str

Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

snapshot_time str

Optional. The timestamp of the usage snapshot.

milliDcu String

Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

shuffleStorageGb String

Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

snapshotTime String

Optional. The timestamp of the usage snapshot.

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0