Google Cloud Native v0.30.0 (released Apr 14, 2023)

google-native.dataproc/v1.Cluster


Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). Auto-naming is currently not supported for this resource.

Create Cluster Resource

new Cluster(name: string, args: ClusterArgs, opts?: CustomResourceOptions);
@overload
def Cluster(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            action_on_failed_primary_workers: Optional[str] = None,
            cluster_name: Optional[str] = None,
            config: Optional[ClusterConfigArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            project: Optional[str] = None,
            region: Optional[str] = None,
            request_id: Optional[str] = None,
            virtual_cluster_config: Optional[VirtualClusterConfigArgs] = None)
@overload
def Cluster(resource_name: str,
            args: ClusterArgs,
            opts: Optional[ResourceOptions] = None)
func NewCluster(ctx *Context, name string, args ClusterArgs, opts ...ResourceOption) (*Cluster, error)
public Cluster(string name, ClusterArgs args, CustomResourceOptions? opts = null)
public Cluster(String name, ClusterArgs args)
public Cluster(String name, ClusterArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1:Cluster
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

name string
The unique name of the resource.
args ClusterArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name str
The unique name of the resource.
args ClusterArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name string
The unique name of the resource.
args ClusterArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name string
The unique name of the resource.
args ClusterArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name String
The unique name of the resource.
args ClusterArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.
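
For orientation, here is a minimal TypeScript sketch of the constructor in use. The project, region, machine types, and instance counts are illustrative values, not defaults; the nested config field names follow the inputs documented below and the Dataproc v1 API.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: a small Dataproc cluster with one master and two workers.
// All concrete values (project, region, machine type) are illustrative.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    clusterName: "example-cluster", // must be unique within the project
    region: "us-central1",
    project: "my-project", // hypothetical project ID
    config: {
        masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
    },
});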

Cluster Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

The Cluster resource accepts the following input properties:

ClusterName string

The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

Region string
ActionOnFailedPrimaryWorkers string

Optional. Failure action when primary worker creation fails.

Config Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs

Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

Labels Dictionary<string, string>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

Project string

The Google Cloud Platform project ID that the cluster belongs to.

RequestId string

Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

VirtualClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfigArgs

Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

ClusterName string

The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

Region string
ActionOnFailedPrimaryWorkers string

Optional. Failure action when primary worker creation fails.

Config ClusterConfigArgs

Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

Labels map[string]string

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

Project string

The Google Cloud Platform project ID that the cluster belongs to.

RequestId string

Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

VirtualClusterConfig VirtualClusterConfigArgs

Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

clusterName String

The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

region String
actionOnFailedPrimaryWorkers String

Optional. Failure action when primary worker creation fails.

config ClusterConfigArgs

Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

labels Map<String,String>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

project String

The Google Cloud Platform project ID that the cluster belongs to.

requestId String

Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

virtualClusterConfig VirtualClusterConfigArgs

Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

clusterName string

The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

region string
actionOnFailedPrimaryWorkers string

Optional. Failure action when primary worker creation fails.

config ClusterConfigArgs

Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

labels {[key: string]: string}

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

project string

The Google Cloud Platform project ID that the cluster belongs to.

requestId string

Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

virtualClusterConfig VirtualClusterConfigArgs

Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

cluster_name str

The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

region str
action_on_failed_primary_workers str

Optional. Failure action when primary worker creation fails.

config ClusterConfigArgs

Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

labels Mapping[str, str]

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

project str

The Google Cloud Platform project ID that the cluster belongs to.

request_id str

Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

virtual_cluster_config VirtualClusterConfigArgs

Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

clusterName String

The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

region String
actionOnFailedPrimaryWorkers String

Optional. Failure action when primary worker creation fails.

config Property Map

Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

labels Map<String>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

project String

The Google Cloud Platform project ID that the cluster belongs to.

requestId String

Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

virtualClusterConfig Property Map

Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
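
As a concrete reading of the inputs above, the TypeScript sketch below sets labels and a request ID, and chooses config, which means virtualClusterConfig must be omitted. The label values and the UUID-shaped request ID are illustrative.

import * as google_native from "@pulumi/google-native";

// Sketch: labels plus an idempotency request ID. Exactly one of
// `config` or `virtualClusterConfig` may be set; this example uses `config`.
const labeled = new google_native.dataproc.v1.Cluster("labeled-cluster", {
    clusterName: "labeled-cluster",
    region: "us-central1",
    labels: { env: "dev", team: "data" }, // keys/values: 1-63 chars, RFC 1035
    requestId: "6fa459ea-ee8a-3ca4-894e-db77e160355e", // any UUID-shaped string, <= 40 chars
    config: {}, // an empty config lets Dataproc apply defaults
});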

Outputs

All input properties are implicitly available as output properties. Additionally, the Cluster resource produces the following output properties:

ClusterUuid string

A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

Id string

The provider-assigned unique ID for this managed resource.

Metrics Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Status Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse

Cluster status.

StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse>

The previous cluster status.

ClusterUuid string

A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

Id string

The provider-assigned unique ID for this managed resource.

Metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Status ClusterStatusResponse

Cluster status.

StatusHistory []ClusterStatusResponse

The previous cluster status.

clusterUuid String

A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

id String

The provider-assigned unique ID for this managed resource.

metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

status ClusterStatusResponse

Cluster status.

statusHistory List<ClusterStatusResponse>

The previous cluster status.

clusterUuid string

A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

id string

The provider-assigned unique ID for this managed resource.

metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

status ClusterStatusResponse

Cluster status.

statusHistory ClusterStatusResponse[]

The previous cluster status.

cluster_uuid str

A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

id str

The provider-assigned unique ID for this managed resource.

metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

status ClusterStatusResponse

Cluster status.

status_history Sequence[ClusterStatusResponse]

The previous cluster status.

clusterUuid String

A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

id String

The provider-assigned unique ID for this managed resource.

metrics Property Map

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

status Property Map

Cluster status.

statusHistory List<Property Map>

The previous cluster status.
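
Assuming a cluster resource like the one created earlier, a stack can surface these outputs directly; a minimal TypeScript sketch:

// Sketch: exporting output properties of an existing `cluster` resource.
// `clusterUuid` is generated by Dataproc; `status.state` is assumed to be
// the lifecycle state field of ClusterStatusResponse in the v1 API.
export const clusterUuid = cluster.clusterUuid;
export const clusterState = cluster.status.apply(s => s.state);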

Supporting Types

AcceleratorConfig

AcceleratorCount int

The number of the accelerator cards of this type exposed to this instance.

AcceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AcceleratorCount int

The number of the accelerator cards of this type exposed to this instance.

AcceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount Integer

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri String

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount number

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

accelerator_count int

The number of the accelerator cards of this type exposed to this instance.

accelerator_type_uri str

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount Number

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri String

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
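
A TypeScript sketch of this type in context, assuming the accelerators field of the worker instance group per the Dataproc v1 API; the short-name form shown is the one required under Auto Zone Placement, and all other values are illustrative:

import * as google_native from "@pulumi/google-native";

// Sketch: one NVIDIA Tesla K80 per worker, using the short type name
// (required when Auto Zone Placement chooses the zone).
const gpuCluster = new google_native.dataproc.v1.Cluster("gpu-cluster", {
    clusterName: "gpu-cluster",
    region: "us-central1",
    config: {
        workerConfig: {
            numInstances: 2,
            accelerators: [{ acceleratorTypeUri: "nvidia-tesla-k80", acceleratorCount: 1 }],
        },
    },
});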

AcceleratorConfigResponse

AcceleratorCount int

The number of the accelerator cards of this type exposed to this instance.

AcceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AcceleratorCount int

The number of the accelerator cards of this type exposed to this instance.

AcceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount Integer

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri String

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount number

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

accelerator_count int

The number of the accelerator cards of this type exposed to this instance.

accelerator_type_uri str

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount Number

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri String

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, or nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AutoscalingConfig

PolicyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

PolicyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri String

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policy_uri str

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri String

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
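
A hedged TypeScript sketch of wiring a cluster to an existing autoscaling policy; the policy resource name is hypothetical and, as noted above, must live in the same project and region as the cluster:

import * as google_native from "@pulumi/google-native";

// Sketch: attach an existing autoscaling policy by partial resource name.
const autoscaled = new google_native.dataproc.v1.Cluster("autoscaled-cluster", {
    clusterName: "autoscaled-cluster",
    region: "us-central1",
    config: {
        autoscalingConfig: {
            policyUri: "projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
        },
    },
});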

AutoscalingConfigResponse

PolicyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

PolicyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri String

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policy_uri str

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri String

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] or projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

AuxiliaryNodeGroup

NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroup

Node group configuration.

NodeGroupId string

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

NodeGroup NodeGroupType

Node group configuration.

NodeGroupId string

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

nodeGroup NodeGroup

Node group configuration.

nodeGroupId String

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

nodeGroup NodeGroup

Node group configuration.

nodeGroupId string

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

node_group NodeGroup

Node group configuration.

node_group_id str

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

nodeGroup Property Map

Node group configuration.

nodeGroupId String

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.
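
A TypeScript sketch of an auxiliary node group with an explicit ID. The inner NodeGroup fields and the DRIVER role are assumptions based on the Dataproc v1 NodeGroup type, and all values are illustrative:

import * as google_native from "@pulumi/google-native";

// Sketch: an auxiliary driver node group with a caller-chosen ID.
const withDriverPool = new google_native.dataproc.v1.Cluster("ng-cluster", {
    clusterName: "ng-cluster",
    region: "us-central1",
    config: {
        auxiliaryNodeGroups: [{
            nodeGroupId: "driver-pool-1", // 3-33 chars; letters, digits, _, -
            nodeGroup: { roles: ["DRIVER"] }, // role name per the Dataproc v1 API
        }],
    },
});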

AuxiliaryNodeGroupResponse

NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse

Node group configuration.

NodeGroupId string

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

NodeGroup NodeGroupResponse

Node group configuration.

NodeGroupId string

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

nodeGroup NodeGroupResponse

Node group configuration.

nodeGroupId String

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

nodeGroup NodeGroupResponse

Node group configuration.

nodeGroupId string

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

node_group NodeGroupResponse

Node group configuration.

node_group_id str

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

nodeGroup Property Map

Node group configuration.

nodeGroupId String

Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must be between 3 and 33 characters long.

AuxiliaryServicesConfig

MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig

Optional. The Hive Metastore configuration for this workload.

SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

MetastoreConfig MetastoreConfig

Optional. The Hive Metastore configuration for this workload.

SparkHistoryServerConfig SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

metastoreConfig MetastoreConfig

Optional. The Hive Metastore configuration for this workload.

sparkHistoryServerConfig SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

metastoreConfig MetastoreConfig

Optional. The Hive Metastore configuration for this workload.

sparkHistoryServerConfig SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

metastore_config MetastoreConfig

Optional. The Hive Metastore configuration for this workload.

spark_history_server_config SparkHistoryServerConfig

Optional. The Spark History Server configuration for the workload.

metastoreConfig Property Map

Optional. The Hive Metastore configuration for this workload.

sparkHistoryServerConfig Property Map

Optional. The Spark History Server configuration for the workload.
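
To place this type in context, here is a hedged TypeScript sketch of a Dataproc-on-GKE cluster whose auxiliary services point at an existing metastore and Spark History Server. The nested field names (gkeClusterTarget, dataprocMetastoreService, dataprocCluster) are assumptions based on the Dataproc v1 API, and every resource name is hypothetical:

import * as google_native from "@pulumi/google-native";

// Sketch: virtual cluster on GKE with auxiliary services attached.
const gkeBacked = new google_native.dataproc.v1.Cluster("gke-cluster", {
    clusterName: "gke-cluster",
    region: "us-central1",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                // Hypothetical target GKE cluster.
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke",
            },
        },
        auxiliaryServicesConfig: {
            metastoreConfig: {
                dataprocMetastoreService:
                    "projects/my-project/locations/us-central1/services/my-metastore",
            },
            sparkHistoryServerConfig: {
                dataprocCluster:
                    "projects/my-project/regions/us-central1/clusters/history-server",
            },
        },
    },
});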

AuxiliaryServicesConfigResponse

MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse

Optional. The Hive Metastore configuration for this workload.

SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

MetastoreConfig MetastoreConfigResponse

Optional. The Hive Metastore configuration for this workload.

SparkHistoryServerConfig SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

metastoreConfig MetastoreConfigResponse

Optional. The Hive Metastore configuration for this workload.

sparkHistoryServerConfig SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

metastoreConfig MetastoreConfigResponse

Optional. The Hive Metastore configuration for this workload.

sparkHistoryServerConfig SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

metastore_config MetastoreConfigResponse

Optional. The Hive Metastore configuration for this workload.

spark_history_server_config SparkHistoryServerConfigResponse

Optional. The Spark History Server configuration for the workload.

metastoreConfig Property Map

Optional. The Hive Metastore configuration for this workload.

sparkHistoryServerConfig Property Map

Optional. The Spark History Server configuration for the workload.

ClusterConfig

AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroup>

Optional. The node group settings.

ConfigBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfig

Optional. The config for Dataproc metrics.

EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfig

Optional. Encryption settings for the cluster.

EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfig

Optional. Port/endpoint configuration for this cluster.

GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationAction>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfig

Optional. Lifecycle setting for the cluster.

MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's master instance.

MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig

Optional. Metastore configuration.

SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfig

Optional. Security settings for the cluster.

SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfig

Optional. The config settings for cluster software.

TempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's worker instances.

AutoscalingConfig AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

AuxiliaryNodeGroups []AuxiliaryNodeGroup

Optional. The node group settings.

ConfigBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

DataprocMetricConfig DataprocMetricConfig

Optional. The config for Dataproc metrics.

EncryptionConfig EncryptionConfig

Optional. Encryption settings for the cluster.

EndpointConfig EndpointConfig

Optional. Port/endpoint configuration for this cluster.

GceClusterConfig GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

GkeClusterConfig GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

InitializationActions []NodeInitializationAction

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

LifecycleConfig LifecycleConfig

Optional. Lifecycle setting for the cluster.

MasterConfig InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's master instance.

MetastoreConfig MetastoreConfig

Optional. Metastore configuration.

SecondaryWorkerConfig InstanceGroupConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

SecurityConfig SecurityConfig

Optional. Security settings for the cluster.

SoftwareConfig SoftwareConfig

Optional. The config settings for cluster software.

TempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

WorkerConfig InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscalingConfig AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups List<AuxiliaryNodeGroup>

Optional. The node group settings.

configBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataprocMetricConfig DataprocMetricConfig

Optional. The config for Dataproc metrics.

encryptionConfig EncryptionConfig

Optional. Encryption settings for the cluster.

endpointConfig EndpointConfig

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions List<NodeInitializationAction>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

lifecycleConfig LifecycleConfig

Optional. Lifecycle setting for the cluster.

masterConfig InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig MetastoreConfig

Optional. Metastore configuration.

secondaryWorkerConfig InstanceGroupConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig SecurityConfig

Optional. Security settings for the cluster.

softwareConfig SoftwareConfig

Optional. The config settings for cluster software.

tempBucket String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

workerConfig InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscalingConfig AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups AuxiliaryNodeGroup[]

Optional. The node group settings.

configBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataprocMetricConfig DataprocMetricConfig

Optional. The config for Dataproc metrics.

encryptionConfig EncryptionConfig

Optional. Encryption settings for the cluster.

endpointConfig EndpointConfig

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions NodeInitializationAction[]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

lifecycleConfig LifecycleConfig

Optional. Lifecycle setting for the cluster.

masterConfig InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig MetastoreConfig

Optional. Metastore configuration.

secondaryWorkerConfig InstanceGroupConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig SecurityConfig

Optional. Security settings for the cluster.

softwareConfig SoftwareConfig

Optional. The config settings for cluster software.

tempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

workerConfig InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscaling_config AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliary_node_groups Sequence[AuxiliaryNodeGroup]

Optional. The node group settings.

config_bucket str

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataproc_metric_config DataprocMetricConfig

Optional. The config for Dataproc metrics.

encryption_config EncryptionConfig

Optional. Encryption settings for the cluster.

endpoint_config EndpointConfig

Optional. Port/endpoint configuration for this cluster.

gce_cluster_config GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

gke_cluster_config GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initialization_actions Sequence[NodeInitializationAction]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

lifecycle_config LifecycleConfig

Optional. Lifecycle setting for the cluster.

master_config InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's master instance.

metastore_config MetastoreConfig

Optional. Metastore configuration.

secondary_worker_config InstanceGroupConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

security_config SecurityConfig

Optional. Security settings for the cluster.

software_config SoftwareConfig

Optional. The config settings for cluster software.

temp_bucket str

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

worker_config InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscalingConfig Property Map

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups List<Property Map>

Optional. The node group settings.

configBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataprocMetricConfig Property Map

Optional. The config for Dataproc metrics.

encryptionConfig Property Map

Optional. Encryption settings for the cluster.

endpointConfig Property Map

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig Property Map

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig Property Map

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions List<Property Map>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

lifecycleConfig Property Map

Optional. Lifecycle setting for the cluster.

masterConfig Property Map

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig Property Map

Optional. Metastore configuration.

secondaryWorkerConfig Property Map

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig Property Map

Optional. Security settings for the cluster.

softwareConfig Property Map

Optional. The config settings for cluster software.

tempBucket String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

workerConfig Property Map

Optional. The Compute Engine config settings for the cluster's worker instances.
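Taken together, the fields above form the cluster's config block. Below is a minimal TypeScript sketch of a Compute Engine-based cluster; the region, bucket names, and script path (us-central1, my-staging-bucket, my-temp-bucket, gs://my-bucket/startup.sh) are illustrative placeholders rather than values from this reference.

import * as google_native from "@pulumi/google-native";

// Minimal Compute Engine-based cluster. Exactly one of config
// (ClusterConfig) or virtualClusterConfig may be specified.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    region: "us-central1",                 // placeholder region
    clusterName: "example-cluster",
    config: {
        configBucket: "my-staging-bucket", // bucket name, not a gs:// URI
        tempBucket: "my-temp-bucket",      // bucket name, not a gs:// URI
        initializationActions: [{
            // Runs on each node after configuration completes; branch on the
            // dataproc-role metadata key (see above) for role-specific steps.
            executableFile: "gs://my-bucket/startup.sh",
            executionTimeout: "600s",
        }],
    },
});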

ClusterConfigResponse

AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse>

Optional. The node group settings.

ConfigBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse

Optional. The config for Dataproc metrics.

EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse

Optional. Encryption settings for the cluster.

EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster.

GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse

Optional. Lifecycle setting for the cluster.

MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's master instance.

MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse

Optional. Metastore configuration.

SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse

Optional. Security settings for the cluster.

SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse

Optional. The config settings for cluster software.

TempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's worker instances.

AutoscalingConfig AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse

Optional. The node group settings.

ConfigBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

DataprocMetricConfig DataprocMetricConfigResponse

Optional. The config for Dataproc metrics.

EncryptionConfig EncryptionConfigResponse

Optional. Encryption settings for the cluster.

EndpointConfig EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster.

GceClusterConfig GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

GkeClusterConfig GkeClusterConfigResponse

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

InitializationActions []NodeInitializationActionResponse

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

LifecycleConfig LifecycleConfigResponse

Optional. Lifecycle setting for the cluster.

MasterConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's master instance.

MetastoreConfig MetastoreConfigResponse

Optional. Metastore configuration.

SecondaryWorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

SecurityConfig SecurityConfigResponse

Optional. Security settings for the cluster.

SoftwareConfig SoftwareConfigResponse

Optional. The config settings for cluster software.

TempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

WorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscalingConfig AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse>

Optional. The node group settings.

configBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataprocMetricConfig DataprocMetricConfigResponse

Optional. The config for Dataproc metrics.

encryptionConfig EncryptionConfigResponse

Optional. Encryption settings for the cluster.

endpointConfig EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig GkeClusterConfigResponse

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions List<NodeInitializationActionResponse>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

lifecycleConfig LifecycleConfigResponse

Optional. Lifecycle setting for the cluster.

masterConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig MetastoreConfigResponse

Optional. Metastore configuration.

secondaryWorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig SecurityConfigResponse

Optional. Security settings for the cluster.

softwareConfig SoftwareConfigResponse

Optional. The config settings for cluster software.

tempBucket String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

workerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscalingConfig AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups AuxiliaryNodeGroupResponse[]

Optional. The node group settings.

configBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataprocMetricConfig DataprocMetricConfigResponse

Optional. The config for Dataproc metrics.

encryptionConfig EncryptionConfigResponse

Optional. Encryption settings for the cluster.

endpointConfig EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig GkeClusterConfigResponse

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions NodeInitializationActionResponse[]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

lifecycleConfig LifecycleConfigResponse

Optional. Lifecycle setting for the cluster.

masterConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig MetastoreConfigResponse

Optional. Metastore configuration.

secondaryWorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig SecurityConfigResponse

Optional. Security settings for the cluster.

softwareConfig SoftwareConfigResponse

Optional. The config settings for cluster software.

tempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

workerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscaling_config AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse]

Optional. The node group settings.

config_bucket str

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataproc_metric_config DataprocMetricConfigResponse

Optional. The config for Dataproc metrics.

encryption_config EncryptionConfigResponse

Optional. Encryption settings for the cluster.

endpoint_config EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster.

gce_cluster_config GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

gke_cluster_config GkeClusterConfigResponse

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initialization_actions Sequence[NodeInitializationActionResponse]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

lifecycle_config LifecycleConfigResponse

Optional. Lifecycle setting for the cluster.

master_config InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's master instance.

metastore_config MetastoreConfigResponse

Optional. Metastore configuration.

secondary_worker_config InstanceGroupConfigResponse

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

security_config SecurityConfigResponse

Optional. Security settings for the cluster.

software_config SoftwareConfigResponse

Optional. The config settings for cluster software.

temp_bucket str

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

worker_config InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the cluster's worker instances.

autoscalingConfig Property Map

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups List<Property Map>

Optional. The node group settings.

configBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

dataprocMetricConfig Property Map

Optional. The config for Dataproc metrics.

encryptionConfig Property Map

Optional. Encryption settings for the cluster.

endpointConfig Property Map

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig Property Map

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig Property Map

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions List<Property Map>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):
ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

lifecycleConfig Property Map

Optional. Lifecycle setting for the cluster.

masterConfig Property Map

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig Property Map

Optional. Metastore configuration.

secondaryWorkerConfig Property Map

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig Property Map

Optional. Security settings for the cluster.

softwareConfig Property Map

Optional. The config settings for cluster software.

tempBucket String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

workerConfig Property Map

Optional. The Compute Engine config settings for the cluster's worker instances.

ClusterMetricsResponse

HdfsMetrics Dictionary<string, string>

The HDFS metrics.

YarnMetrics Dictionary<string, string>

YARN metrics.

HdfsMetrics map[string]string

The HDFS metrics.

YarnMetrics map[string]string

YARN metrics.

hdfsMetrics Map<String,String>

The HDFS metrics.

yarnMetrics Map<String,String>

YARN metrics.

hdfsMetrics {[key: string]: string}

The HDFS metrics.

yarnMetrics {[key: string]: string}

YARN metrics.

hdfs_metrics Mapping[str, str]

The HDFS metrics.

yarn_metrics Mapping[str, str]

YARN metrics.

hdfsMetrics Map<String>

The HDFS metrics.

yarnMetrics Map<String>

YARN metrics.
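These daemon metrics appear only on the created resource's outputs. A short sketch, reusing the cluster resource from the sketch above and assuming its metrics output is populated once the cluster is running:

// Export the YARN metric map; it may be empty while the cluster is still
// provisioning, since metrics are reported by the cluster agent.
export const yarnMetrics = cluster.metrics.apply(m => m?.yarnMetrics ?? {});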

ClusterStatusResponse

Detail string

Optional. Output only. Details of cluster's state.

State string

The cluster's state.

StateStartTime string

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

Substate string

Additional state information that includes status reported by the agent.

Detail string

Optional. Output only. Details of cluster's state.

State string

The cluster's state.

StateStartTime string

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

Substate string

Additional state information that includes status reported by the agent.

detail String

Optional. Output only. Details of cluster's state.

state String

The cluster's state.

stateStartTime String

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate String

Additional state information that includes status reported by the agent.

detail string

Optional. Output only. Details of cluster's state.

state string

The cluster's state.

stateStartTime string

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate string

Additional state information that includes status reported by the agent.

detail str

Optional. Output only. Details of cluster's state.

state str

The cluster's state.

state_start_time str

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate str

Additional state information that includes status reported by the agent.

detail String

Optional. Output only. Details of cluster's state.

state String

The cluster's state.

stateStartTime String

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate String

Additional state information that includes status reported by the agent.
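Status is likewise output-only. A small sketch, again reusing the cluster from the first example, that exports the state and its detail message:

// Export the cluster's lifecycle state (e.g. RUNNING) and detail string.
export const clusterState = cluster.status.apply(s => s.state);
export const clusterStateDetail = cluster.status.apply(s => s.detail);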

ConfidentialInstanceConfig

EnableConfidentialCompute bool

Optional. Defines whether the instance should have confidential compute enabled.

EnableConfidentialCompute bool

Optional. Defines whether the instance should have confidential compute enabled.

enableConfidentialCompute Boolean

Optional. Defines whether the instance should have confidential compute enabled.

enableConfidentialCompute boolean

Optional. Defines whether the instance should have confidential compute enabled.

enable_confidential_compute bool

Optional. Defines whether the instance should have confidential compute enabled.

enableConfidentialCompute Boolean

Optional. Defines whether the instance should have confidential compute enabled.
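Confidential compute is enabled through gceClusterConfig. A hedged sketch: the n2d-standard-4 machine type is an assumption, since Confidential VMs generally require supported AMD N2D machine types.

const confidentialCluster = new google_native.dataproc.v1.Cluster("confidential-example", {
    region: "us-central1",
    clusterName: "confidential-example",
    config: {
        gceClusterConfig: {
            confidentialInstanceConfig: {
                enableConfidentialCompute: true,
            },
        },
        // Assumed Confidential-VM-capable machine type (placeholder).
        masterConfig: { machineTypeUri: "n2d-standard-4" },
        workerConfig: { machineTypeUri: "n2d-standard-4", numInstances: 2 },
    },
});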

ConfidentialInstanceConfigResponse

EnableConfidentialCompute bool

Optional. Defines whether the instance should have confidential compute enabled.

EnableConfidentialCompute bool

Optional. Defines whether the instance should have confidential compute enabled.

enableConfidentialCompute Boolean

Optional. Defines whether the instance should have confidential compute enabled.

enableConfidentialCompute boolean

Optional. Defines whether the instance should have confidential compute enabled.

enable_confidential_compute bool

Optional. Defines whether the instance should have confidential compute enabled.

enableConfidentialCompute Boolean

Optional. Defines whether the instance should have confidential compute enabled.

DataprocMetricConfig

Metrics []Metric

Metrics sources to enable.

metrics List<Metric>

Metrics sources to enable.

metrics Metric[]

Metrics sources to enable.

metrics Sequence[Metric]

Metrics sources to enable.

metrics List<Property Map>

Metrics sources to enable.
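A sketch of enabling one metric source. The "SPARK" source value and the optional metricOverrides field are assumptions based on the underlying Dataproc Metric message; check your provider version for the exact values.

const metricsCluster = new google_native.dataproc.v1.Cluster("metrics-example", {
    region: "us-central1",
    clusterName: "metrics-example",
    config: {
        dataprocMetricConfig: {
            metrics: [{
                metricSource: "SPARK", // assumed metric source value
                // metricOverrides (assumed field) would narrow collection to
                // specific metrics; omitted here to collect the default set.
            }],
        },
    },
});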

DataprocMetricConfigResponse

Metrics []MetricResponse

Metrics sources to enable.

metrics List<MetricResponse>

Metrics sources to enable.

metrics MetricResponse[]

Metrics sources to enable.

metrics Sequence[MetricResponse]

Metrics sources to enable.

metrics List<Property Map>

Metrics sources to enable.

DiskConfig

BootDiskSizeGb int

Optional. Size in GB of the boot disk (default is 500GB).

BootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

LocalSsdInterface string

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

NumLocalSsds int

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

BootDiskSizeGb int

Optional. Size in GB of the boot disk (default is 500GB).

BootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

LocalSsdInterface string

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

NumLocalSsds int

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

bootDiskSizeGb Integer

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType String

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

localSsdInterface String

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

numLocalSsds Integer

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

bootDiskSizeGb number

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

localSsdInterface string

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

numLocalSsds number

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

boot_disk_size_gb int

Optional. Size in GB of the boot disk (default is 500GB).

boot_disk_type str

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

local_ssd_interface str

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

num_local_ssds int

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

bootDiskSizeGb Number

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType String

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

localSsdInterface String

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

numLocalSsds Number

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
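DiskConfig is nested under each instance group's diskConfig field. A sketch with illustrative values; pd-ssd and nvme are valid per the descriptions above, while the sizes and counts are placeholders.

const ssdCluster = new google_native.dataproc.v1.Cluster("ssd-example", {
    region: "us-central1",
    clusterName: "ssd-example",
    config: {
        workerConfig: {
            numInstances: 2,
            diskConfig: {
                bootDiskType: "pd-ssd",
                bootDiskSizeGb: 200,
                // With local SSDs attached, HDFS data and runtime logs are
                // spread across them rather than stored on the boot disk.
                numLocalSsds: 2,
                localSsdInterface: "nvme",
            },
        },
    },
});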

DiskConfigResponse

BootDiskSizeGb int

Optional. Size in GB of the boot disk (default is 500GB).

BootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

LocalSsdInterface string

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

NumLocalSsds int

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

BootDiskSizeGb int

Optional. Size in GB of the boot disk (default is 500GB).

BootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

LocalSsdInterface string

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

NumLocalSsds int

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

bootDiskSizeGb Integer

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType String

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

localSsdInterface String

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

numLocalSsds Integer

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

bootDiskSizeGb number

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

localSsdInterface string

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

numLocalSsds number

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

boot_disk_size_gb int

Optional. Size in GB of the boot disk (default is 500GB).

boot_disk_type str

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

local_ssd_interface str

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

num_local_ssds int

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

bootDiskSizeGb Number

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType String

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

localSsdInterface String

Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

numLocalSsds Number

Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

EncryptionConfig

GcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

KmsKey string

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

GcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

KmsKey string

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gcePdKmsKeyName String

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kmsKey String

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kmsKey string

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gce_pd_kms_key_name str

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kms_key str

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gcePdKmsKeyName String

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kmsKey String

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
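A sketch of customer-managed encryption for cluster PD disks; the key name is a placeholder in the standard Cloud KMS resource-name format.

const cmekCluster = new google_native.dataproc.v1.Cluster("cmek-example", {
    region: "us-central1",
    clusterName: "cmek-example",
    config: {
        encryptionConfig: {
            // Full Cloud KMS key resource name (placeholder values).
            gcePdKmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
        },
    },
});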

EncryptionConfigResponse

GcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

KmsKey string

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

GcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

KmsKey string

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gcePdKmsKeyName String

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kmsKey String

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kmsKey string

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gce_pd_kms_key_name str

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kms_key str

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

gcePdKmsKeyName String

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

kmsKey String

Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

EndpointConfig

EnableHttpPortAccess bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

EnableHttpPortAccess bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

enableHttpPortAccess Boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

enableHttpPortAccess boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

enable_http_port_access bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

enableHttpPortAccess Boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
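A sketch that turns on HTTP port access and, assuming the created resource echoes endpointConfig.httpPorts in its config output, exports the resulting URLs.

const gatewayCluster = new google_native.dataproc.v1.Cluster("gateway-example", {
    region: "us-central1",
    clusterName: "gateway-example",
    config: {
        endpointConfig: { enableHttpPortAccess: true },
    },
});

// Map of port descriptions to URLs; populated only because
// enableHttpPortAccess is true.
export const uiUrls = gatewayCluster.config.apply(c => c?.endpointConfig?.httpPorts);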

EndpointConfigResponse

EnableHttpPortAccess bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

HttpPorts Dictionary<string, string>

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

EnableHttpPortAccess bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

HttpPorts map[string]string

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enableHttpPortAccess Boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

httpPorts Map<String,String>

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enableHttpPortAccess boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

httpPorts {[key: string]: string}

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enable_http_port_access bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

http_ports Mapping[str, str]

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enableHttpPortAccess Boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

httpPorts Map<String>

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

GceClusterConfig

ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfig

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

InternalIpOnly bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Metadata Dictionary<string, string>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

NetworkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.

NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinity

Optional. Node Group Affinity for sole-tenant clusters.

PrivateIpv6GoogleAccess Pulumi.GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess

Optional. The type of IPv6 access for a cluster.

ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinity

Optional. Reservation Affinity for consuming Zonal reservation.

ServiceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

ServiceAccountScopes List<string>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.

ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfig

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

SubnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.

Tags List<string>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

ZoneUri string

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

ConfidentialInstanceConfig ConfidentialInstanceConfig

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

InternalIpOnly bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Metadata map[string]string

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

NetworkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

NodeGroupAffinity NodeGroupAffinity

Optional. Node Group Affinity for sole-tenant clusters.

PrivateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess

Optional. The type of IPv6 access for a cluster.

ReservationAffinity ReservationAffinity

Optional. Reservation Affinity for consuming Zonal reservation.

ServiceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

ServiceAccountScopes []string

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

ShieldedInstanceConfig ShieldedInstanceConfig

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

SubnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

Tags []string

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

ZoneUri string

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidentialInstanceConfig ConfidentialInstanceConfig

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internalIpOnly Boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Map<String,String>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri String

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

nodeGroupAffinity NodeGroupAffinity

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess

Optional. The type of IPv6 access for a cluster.

reservationAffinity ReservationAffinity

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount String

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes List<String>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig ShieldedInstanceConfig

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri String

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags List<String>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri String

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidentialInstanceConfig ConfidentialInstanceConfig

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internalIpOnly boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata {[key: string]: string}

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

nodeGroupAffinity NodeGroupAffinity

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess

Optional. The type of IPv6 access for a cluster.

reservationAffinity ReservationAffinity

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes string[]

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig ShieldedInstanceConfig

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags string[]

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri string

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidential_instance_config ConfidentialInstanceConfig

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internal_ip_only bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Mapping[str, str]

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

network_uri str

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

node_group_affinity NodeGroupAffinity

Optional. Node Group Affinity for sole-tenant clusters.

private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess

Optional. The type of IPv6 access for a cluster.

reservation_affinity ReservationAffinity

Optional. Reservation Affinity for consuming Zonal reservation.

service_account str

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

service_account_scopes Sequence[str]

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shielded_instance_config ShieldedInstanceConfig

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetwork_uri str

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags Sequence[str]

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zone_uri str

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidentialInstanceConfig Property Map

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internalIpOnly Boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Map<String>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri String

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

nodeGroupAffinity Property Map

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL"

Optional. The type of IPv6 access for a cluster.

reservationAffinity Property Map

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount String

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes List<String>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig Property Map

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri String

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags List<String>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri String

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]
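
Taken together, the fields above constrain each other: network_uri and subnetwork_uri are mutually exclusive, internal_ip_only requires a subnetwork-enabled network, and service_account_scopes extend the always-included base scopes. A minimal TypeScript sketch wiring these together; the project, subnetwork, and service account names are hypothetical placeholders, not values from this reference.

import * as google_native from "@pulumi/google-native";

// A sketch of a VPC-native, internal-IP-only cluster.
// All resource names below are placeholders.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    project: "my-project",
    region: "us-central1",
    clusterName: "example-cluster",
    config: {
        gceClusterConfig: {
            // Mutually exclusive with networkUri.
            subnetworkUri: "projects/my-project/regions/us-central1/subnetworks/sub0",
            // Requires a subnetwork-enabled network; off-cluster dependencies
            // must be reachable without external IPs.
            internalIpOnly: true,
            serviceAccount: "dataproc-vm@my-project.iam.gserviceaccount.com",
            // Added on top of the base scopes that are always included.
            serviceAccountScopes: ["https://www.googleapis.com/auth/cloud-platform"],
            tags: ["dataproc"],
        },
    },
});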

GceClusterConfigPrivateIpv6GoogleAccess

PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED

If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.

InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK

Private access to and from Google Services is inherited from the subnetwork configuration. This is the default Compute Engine behavior.

Outbound
OUTBOUND

Enables outbound private IPv6 access to Google Services from the Dataproc cluster.

Bidirectional
BIDIRECTIONAL

Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.

GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED

If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.

GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork
INHERIT_FROM_SUBNETWORK

Private access to and from Google Services is inherited from the subnetwork configuration. This is the default Compute Engine behavior.

GceClusterConfigPrivateIpv6GoogleAccessOutbound
OUTBOUND

Enables outbound private IPv6 access to Google Services from the Dataproc cluster.

GceClusterConfigPrivateIpv6GoogleAccessBidirectional
BIDIRECTIONAL

Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.

PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED

If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.

INHERIT_FROM_SUBNETWORK
INHERIT_FROM_SUBNETWORK

Private access to and from Google Services is inherited from the subnetwork configuration. This is the default Compute Engine behavior.

OUTBOUND
OUTBOUND

Enables outbound private IPv6 access to Google Services from the Dataproc cluster.

BIDIRECTIONAL
BIDIRECTIONAL

Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.

"PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED"
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED

If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.

"INHERIT_FROM_SUBNETWORK"
INHERIT_FROM_SUBNETWORK

Private access to and from Google Services is inherited from the subnetwork configuration. This is the default Compute Engine behavior.

"OUTBOUND"
OUTBOUND

Enables outbound private IPv6 access to Google Services from the Dataproc cluster.

"BIDIRECTIONAL"
BIDIRECTIONAL

Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
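
Because the TypeScript type shown above is a union of these string values, the field accepts the raw string directly (the language SDKs also expose the named constants listed here). A minimal sketch:

// Enables outbound-only private IPv6 access to Google Services.
// Any of the four string values above is accepted here.
const gceClusterConfig = {
    privateIpv6GoogleAccess: "OUTBOUND",
};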

GceClusterConfigResponse

ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigResponse

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

InternalIpOnly bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Metadata Dictionary<string, string>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

NetworkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

PrivateIpv6GoogleAccess string

Optional. The type of IPv6 access for a cluster.

ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

ServiceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

ServiceAccountScopes List<string>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

SubnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

Tags List<string>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

ZoneUri string

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

ConfidentialInstanceConfig ConfidentialInstanceConfigResponse

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

InternalIpOnly bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Metadata map[string]string

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

NetworkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

NodeGroupAffinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

PrivateIpv6GoogleAccess string

Optional. The type of IPv6 access for a cluster.

ReservationAffinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

ServiceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

ServiceAccountScopes []string

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

ShieldedInstanceConfig ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

SubnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

Tags []string

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

ZoneUri string

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidentialInstanceConfig ConfidentialInstanceConfigResponse

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internalIpOnly Boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Map<String,String>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri String

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

nodeGroupAffinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess String

Optional. The type of IPv6 access for a cluster.

reservationAffinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount String

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes List<String>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri String

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags List<String>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri String

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidentialInstanceConfig ConfidentialInstanceConfigResponse

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internalIpOnly boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata {[key: string]: string}

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

nodeGroupAffinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess string

Optional. The type of IPv6 access for a cluster.

reservationAffinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes string[]

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags string[]

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri string

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidential_instance_config ConfidentialInstanceConfigResponse

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internal_ip_only bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Mapping[str, str]

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

network_uri str

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

node_group_affinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

private_ipv6_google_access str

Optional. The type of IPv6 access for a cluster.

reservation_affinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

service_account str

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

service_account_scopes Sequence[str]

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shielded_instance_config ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetwork_uri str

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags Sequence[str]

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zone_uri str

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

confidentialInstanceConfig Property Map

Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

internalIpOnly Boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and an ephemeral external IP address is assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Map<String>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri String

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

nodeGroupAffinity Property Map

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess String

Optional. The type of IPv6 access for a cluster.

reservationAffinity Property Map

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount String

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (see also VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes List<String>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig Property Map

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri String

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

tags List<String>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri String

Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]
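
These response fields surface on the resource's config output once the cluster is created; notably, zoneUri is always populated on a read even when it was omitted on input. A small sketch of exporting it, assuming a cluster resource like the one sketched earlier:

import * as google_native from "@pulumi/google-native";

// A cluster created elsewhere in the program (hypothetical).
declare const cluster: google_native.dataproc.v1.Cluster;

// zoneUri is always present on a get request, even if omitted on input.
export const pickedZone = cluster.config.apply(c => c.gceClusterConfig?.zoneUri);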

GkeClusterConfig

GkeClusterTarget string

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTarget

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTarget>

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

GkeClusterTarget string

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget []GkeNodePoolTarget

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gkeClusterTarget String

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget List<GkeNodePoolTarget>

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gkeClusterTarget string

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget GkeNodePoolTarget[]

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gke_cluster_target str

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespaced_gke_deployment_target NamespacedGkeDeploymentTarget

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

node_pool_target Sequence[GkeNodePoolTarget]

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gkeClusterTarget String

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespacedGkeDeploymentTarget Property Map

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget List<Property Map>

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
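
A GkeClusterConfig is not set under config; it reaches the cluster through virtualClusterConfig, which this sketch assumes is nested as virtualClusterConfig.kubernetesClusterConfig.gkeClusterConfig. This is a structural sketch only; a real Dataproc-on-GKE cluster also needs Kubernetes software settings, and every name below is a placeholder.

import * as google_native from "@pulumi/google-native";

// Exactly one of config or virtualClusterConfig may be specified on a Cluster.
const gkeBacked = new google_native.dataproc.v1.Cluster("gke-backed", {
    project: "my-project",
    region: "us-central1",
    clusterName: "gke-backed",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                // Must be in the same project and region as the Dataproc cluster.
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke",
                // At least one node pool must carry the DEFAULT role.
                nodePoolTarget: [{
                    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke/nodePools/dp-default",
                    roles: ["DEFAULT"],
                }],
            },
        },
    },
});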

GkeClusterConfigResponse

GkeClusterTarget string

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

GkeClusterTarget string

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget []GkeNodePoolTargetResponse

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gkeClusterTarget String

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget List<GkeNodePoolTargetResponse>

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gkeClusterTarget string

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated:

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget GkeNodePoolTargetResponse[]

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gke_cluster_target str

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated:

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

node_pool_target Sequence[GkeNodePoolTargetResponse]

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

gkeClusterTarget String

Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

namespacedGkeDeploymentTarget Property Map

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated:

Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget List<Property Map>

Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
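
To make the shape concrete, here is a minimal TypeScript sketch (matching the other languages on this page) of a GkeClusterConfig supplied when creating a Cluster. All project, region, and cluster identifiers are placeholders, and the surrounding kubernetesClusterConfig/kubernetesSoftwareConfig wrapper and its Spark version value are illustrative assumptions rather than settings this page prescribes.

import * as google_native from "@pulumi/google-native";

const project = "my-project";   // placeholder
const region = "us-central1";   // placeholder

const cluster = new google_native.dataproc.v1.Cluster("dataproc-on-gke", {
    region: region,
    clusterName: "dataproc-on-gke",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                // The target GKE cluster must be in the same project and region.
                gkeClusterTarget: `projects/${project}/locations/${region}/clusters/my-gke-cluster`,
                // At least one node pool target must carry the DEFAULT role.
                nodePoolTarget: [{
                    nodePool: `projects/${project}/locations/${region}/clusters/my-gke-cluster/nodePools/default-pool`,
                    roles: ["DEFAULT"],
                }],
            },
            // Illustrative assumption: pin the Spark component version.
            kubernetesSoftwareConfig: {
                componentVersion: { SPARK: "3" },
            },
        },
    },
});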

GkeNodeConfig

Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfig>

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

BootDiskKmsKey string

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

LocalSsdCount int

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

MachineType string

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

MinCpuPlatform string

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

Preemptible bool

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

Spot bool

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

Accelerators []GkeNodePoolAcceleratorConfig

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

BootDiskKmsKey string

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

LocalSsdCount int

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

MachineType string

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

MinCpuPlatform string

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

Preemptible bool

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

Spot bool

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators List<GkeNodePoolAcceleratorConfig>

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

bootDiskKmsKey String

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

localSsdCount Integer

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machineType String

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

minCpuPlatform String

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible Boolean

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot Boolean

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators GkeNodePoolAcceleratorConfig[]

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

bootDiskKmsKey string

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

localSsdCount number

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machineType string

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

minCpuPlatform string

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible boolean

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot boolean

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators Sequence[GkeNodePoolAcceleratorConfig]

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

boot_disk_kms_key str

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

local_ssd_count int

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machine_type str

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

min_cpu_platform str

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible bool

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot bool

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators List<Property Map>

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

bootDiskKmsKey String

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

localSsdCount Number

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machineType String

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

minCpuPlatform String

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible Boolean

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot Boolean

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
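
As a point of reference, a GkeNodeConfig value might look as follows in TypeScript; every value here is a placeholder assumption, and the object would be passed as the config field of a GkeNodePoolConfig (described further down this page).

// Illustrative GkeNodeConfig; spot nodes must not be used for the CONTROLLER role.
const nodeConfig = {
    machineType: "n1-standard-4",
    spot: true,
    localSsdCount: 1,
    minCpuPlatform: "Intel Haswell",
    bootDiskKmsKey: "projects/key-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
};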

GkeNodeConfigResponse

Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

BootDiskKmsKey string

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

LocalSsdCount int

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

MachineType string

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

MinCpuPlatform string

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

Preemptible bool

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

Spot bool

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

Accelerators []GkeNodePoolAcceleratorConfigResponse

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

BootDiskKmsKey string

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

LocalSsdCount int

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

MachineType string

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

MinCpuPlatform string

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

Preemptible bool

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

Spot bool

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators List<GkeNodePoolAcceleratorConfigResponse>

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

bootDiskKmsKey String

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

localSsdCount Integer

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machineType String

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

minCpuPlatform String

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible Boolean

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot Boolean

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators GkeNodePoolAcceleratorConfigResponse[]

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

bootDiskKmsKey string

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

localSsdCount number

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machineType string

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

minCpuPlatform string

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible boolean

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot boolean

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators Sequence[GkeNodePoolAcceleratorConfigResponse]

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

boot_disk_kms_key str

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

local_ssd_count int

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machine_type str

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

min_cpu_platform str

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible bool

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot bool

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

accelerators List<Property Map>

Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

bootDiskKmsKey String

Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.

localSsdCount Number

Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

machineType String

Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

minCpuPlatform String

Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

preemptible Boolean

Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

spot Boolean

Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

GkeNodePoolAcceleratorConfig

AcceleratorCount string

The number of accelerator cards exposed to an instance.

AcceleratorType string

The accelerator type resource name (see GPUs on Compute Engine).

GpuPartitionSize string

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

AcceleratorCount string

The number of accelerator cards exposed to an instance.

AcceleratorType string

The accelerator type resource name (see GPUs on Compute Engine).

GpuPartitionSize string

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

acceleratorCount String

The number of accelerator cards exposed to an instance.

acceleratorType String

The accelerator type resource name (see GPUs on Compute Engine).

gpuPartitionSize String

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

acceleratorCount string

The number of accelerator cards exposed to an instance.

acceleratorType string

The accelerator type resource name (see GPUs on Compute Engine).

gpuPartitionSize string

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

accelerator_count str

The number of accelerator cards exposed to an instance.

accelerator_type str

The accelerator type resource name (see GPUs on Compute Engine).

gpu_partition_size str

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

acceleratorCount String

The number of accelerator cards exposed to an instance.

acceleratorType String

The accelerator type resource name (see GPUs on Compute Engine).

gpuPartitionSize String

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
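
For illustration, a GkeNodePoolAcceleratorConfig in TypeScript might be written as below; the accelerator type and partition size are assumptions taken from the linked Compute Engine and NVIDIA documentation, not values this page prescribes.

// One GPU card per node; note that acceleratorCount is string-typed in this API.
const acceleratorConfig = {
    acceleratorCount: "1",
    acceleratorType: "nvidia-tesla-a100",   // assumed accelerator type
    gpuPartitionSize: "1g.5gb",             // assumed MIG partition size
};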

GkeNodePoolAcceleratorConfigResponse

AcceleratorCount string

The number of accelerator cards exposed to an instance.

AcceleratorType string

The accelerator type resource name (see GPUs on Compute Engine).

GpuPartitionSize string

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

AcceleratorCount string

The number of accelerator cards exposed to an instance.

AcceleratorType string

The accelerator type resource name (see GPUs on Compute Engine).

GpuPartitionSize string

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

acceleratorCount String

The number of accelerator cards exposed to an instance.

acceleratorType String

The accelerator type resource name (see GPUs on Compute Engine).

gpuPartitionSize String

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

acceleratorCount string

The number of accelerator cards exposed to an instance.

acceleratorType string

The accelerator type resource name (see GPUs on Compute Engine).

gpuPartitionSize string

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

accelerator_count str

The number of accelerator cards exposed to an instance.

accelerator_type str

The accelerator type resource name (see GPUs on Compute Engine).

gpu_partition_size str

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

acceleratorCount String

The number of accelerator cards exposed to an instance.

acceleratorType String

The accelerator type resource name (see GPUs on Compute Engine).

gpuPartitionSize String

Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

GkeNodePoolAutoscalingConfig

MaxNodeCount int

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

MinNodeCount int

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

MaxNodeCount int

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

MinNodeCount int

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

maxNodeCount Integer

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

minNodeCount Integer

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

maxNodeCount number

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

minNodeCount number

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

max_node_count int

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

min_node_count int

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

maxNodeCount Number

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

minNodeCount Number

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
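
A minimal TypeScript sketch of a GkeNodePoolAutoscalingConfig, with placeholder bounds chosen to satisfy the constraints above (max must be > 0 and >= min):

// Allow the node pool to scale between 1 and 10 nodes.
const autoscaling = {
    minNodeCount: 1,
    maxNodeCount: 10,
};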

GkeNodePoolAutoscalingConfigResponse

MaxNodeCount int

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

MinNodeCount int

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

MaxNodeCount int

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

MinNodeCount int

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

maxNodeCount Integer

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

minNodeCount Integer

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

maxNodeCount number

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

minNodeCount number

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

max_node_count int

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

min_node_count int

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

maxNodeCount Number

The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

minNodeCount Number

The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

GkeNodePoolConfig

Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfig

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfig

Optional. The node pool configuration.

Locations List<string>

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

Autoscaling GkeNodePoolAutoscalingConfig

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

Config GkeNodeConfig

Optional. The node pool configuration.

Locations []string

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling GkeNodePoolAutoscalingConfig

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config GkeNodeConfig

Optional. The node pool configuration.

locations List<String>

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling GkeNodePoolAutoscalingConfig

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config GkeNodeConfig

Optional. The node pool configuration.

locations string[]

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling GkeNodePoolAutoscalingConfig

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config GkeNodeConfig

Optional. The node pool configuration.

locations Sequence[str]

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling Property Map

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config Property Map

Optional. The node pool configuration.

locations List<String>

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
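
Putting the pieces together, a GkeNodePoolConfig might look like the TypeScript sketch below; the machine type, autoscaler bounds, and zone are placeholder assumptions, and the single zone must lie within the virtual cluster's region.

// Node pool shape: node config, autoscaler bounds, and a single zone.
const nodePoolConfig = {
    config: { machineType: "n1-standard-4", spot: true },
    autoscaling: { minNodeCount: 1, maxNodeCount: 10 },
    locations: ["us-central1-a"],
};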

GkeNodePoolConfigResponse

Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse

Optional. The node pool configuration.

Locations List<string>

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

Autoscaling GkeNodePoolAutoscalingConfigResponse

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

Config GkeNodeConfigResponse

Optional. The node pool configuration.

Locations []string

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling GkeNodePoolAutoscalingConfigResponse

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config GkeNodeConfigResponse

Optional. The node pool configuration.

locations List<String>

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling GkeNodePoolAutoscalingConfigResponse

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config GkeNodeConfigResponse

Optional. The node pool configuration.

locations string[]

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling GkeNodePoolAutoscalingConfigResponse

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config GkeNodeConfigResponse

Optional. The node pool configuration.

locations Sequence[str]

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

autoscaling Property Map

Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

config Property Map

Optional. The node pool configuration.

locations List<String>

Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

GkeNodePoolTarget

NodePool string

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

Roles List<Pulumi.GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem>

The roles associated with the GKE node pool.

NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfig

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

NodePool string

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

Roles []GkeNodePoolTargetRolesItem

The roles associated with the GKE node pool.

NodePoolConfig GkeNodePoolConfig

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

nodePool String

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

roles List<GkeNodePoolTargetRolesItem>

The roles associated with the GKE node pool.

nodePoolConfig GkeNodePoolConfig

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

nodePool string

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

roles GkeNodePoolTargetRolesItem[]

The roles associated with the GKE node pool.

nodePoolConfig GkeNodePoolConfig

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

node_pool str

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

roles Sequence[GkeNodePoolTargetRolesItem]

The roles associated with the GKE node pool.

node_pool_config GkeNodePoolConfig

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

nodePool String

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

roles List<"ROLE_UNSPECIFIED" | "DEFAULT" | "CONTROLLER" | "SPARK_DRIVER" | "SPARK_EXECUTOR">

The roles associated with the GKE node pool.

nodePoolConfig Property Map

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
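
A hedged TypeScript sketch of a complete GkeNodePoolTarget follows; the resource path is a placeholder, and the embedded nodePoolConfig reuses the placeholder shapes sketched above. If the named pool does not exist, Dataproc creates it with this shape.

// A target asking Dataproc to create (or verify) a pool with the DEFAULT role.
const nodePoolTarget = {
    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dataproc-default",
    roles: ["DEFAULT"],
    nodePoolConfig: {
        config: { machineType: "n1-standard-4" },
        autoscaling: { minNodeCount: 1, maxNodeCount: 10 },
        locations: ["us-central1-a"],
    },
};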

GkeNodePoolTargetResponse

NodePool string

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigResponse

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

Roles List<string>

The roles associated with the GKE node pool.

NodePool string

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

NodePoolConfig GkeNodePoolConfigResponse

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

Roles []string

The roles associated with the GKE node pool.

nodePool String

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

nodePoolConfig GkeNodePoolConfigResponse

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

roles List<String>

The roles associated with the GKE node pool.

nodePool string

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

nodePoolConfig GkeNodePoolConfigResponse

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

roles string[]

The roles associated with the GKE node pool.

node_pool str

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

node_pool_config GkeNodePoolConfigResponse

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

roles Sequence[str]

The roles associated with the GKE node pool.

nodePool String

The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

nodePoolConfig Property Map

Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.

roles List<String>

The roles associated with the GKE node pool.

GkeNodePoolTargetRolesItem

RoleUnspecified
ROLE_UNSPECIFIED

Role is unspecified.

Default
DEFAULT

At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.

Controller
CONTROLLER

Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.

SparkDriver
SPARK_DRIVER

Run work associated with a Spark driver of a job.

SparkExecutor
SPARK_EXECUTOR

Run work associated with a Spark executor of a job.

GkeNodePoolTargetRolesItemRoleUnspecified
ROLE_UNSPECIFIED

Role is unspecified.

GkeNodePoolTargetRolesItemDefault
DEFAULT

At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.

GkeNodePoolTargetRolesItemController
CONTROLLER

Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.

GkeNodePoolTargetRolesItemSparkDriver
SPARK_DRIVER

Run work associated with a Spark driver of a job.

GkeNodePoolTargetRolesItemSparkExecutor
SPARK_EXECUTOR

Run work associated with a Spark executor of a job.

RoleUnspecified
ROLE_UNSPECIFIED

Role is unspecified.

Default
DEFAULT

At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.

Controller
CONTROLLER

Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.

SparkDriver
SPARK_DRIVER

Run work associated with a Spark driver of a job.

SparkExecutor
SPARK_EXECUTOR

Run work associated with a Spark executor of a job.

RoleUnspecified
ROLE_UNSPECIFIED

Role is unspecified.

Default
DEFAULT

At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.

Controller
CONTROLLER

Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.

SparkDriver
SPARK_DRIVER

Run work associated with a Spark driver of a job.

SparkExecutor
SPARK_EXECUTOR

Run work associated with a Spark executor of a job.

ROLE_UNSPECIFIED
ROLE_UNSPECIFIED

Role is unspecified.

DEFAULT
DEFAULT

At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.

CONTROLLER
CONTROLLER

Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.

SPARK_DRIVER
SPARK_DRIVER

Run work associated with a Spark driver of a job.

SPARK_EXECUTOR
SPARK_EXECUTOR

Run work associated with a Spark executor of a job.

"ROLE_UNSPECIFIED"
ROLE_UNSPECIFIED

Role is unspecified.

"DEFAULT"
DEFAULT

At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.

"CONTROLLER"
CONTROLLER

Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.

"SPARK_DRIVER"
SPARK_DRIVER

Run work associated with a Spark driver of a job.

"SPARK_EXECUTOR"
SPARK_EXECUTOR

Run work associated with a Spark executor of a job.
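
For orientation, the sketch below (TypeScript) shows how these role values are typically combined in a virtualClusterConfig. It is illustrative only: the project, location, and node pool names are placeholders, and other fields the service may require (such as the Kubernetes software component versions) are omitted.

import * as googleNative from "@pulumi/google-native";

// Illustrative sketch: at least one node pool target must carry the DEFAULT
// role, so work for roles without a dedicated pool (e.g. CONTROLLER) still
// has somewhere to run.
const cluster = new googleNative.dataproc.v1.Cluster("gke-backed", {
    region: "us-central1",
    clusterName: "gke-backed",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                nodePoolTarget: [
                    {
                        nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
                        roles: ["DEFAULT"],
                    },
                    {
                        nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/executor-pool",
                        roles: ["SPARK_EXECUTOR"], // keep executors on their own pool
                    },
                ],
            },
        },
    },
});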

IdentityConfig

UserServiceAccountMapping Dictionary<string, string>

Map of user to service account.

UserServiceAccountMapping map[string]string

Map of user to service account.

userServiceAccountMapping Map<String,String>

Map of user to service account.

userServiceAccountMapping {[key: string]: string}

Map of user to service account.

user_service_account_mapping Mapping[str, str]

Map of user to service account.

userServiceAccountMapping Map<String>

Map of user to service account.
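
As a concrete (and purely illustrative) example, the mapping is simply user principals to service account emails. A minimal TypeScript sketch, assuming the block is passed as config.securityConfig on the cluster:

// Hypothetical users and service accounts (service-account-based
// secure multi-tenancy).
const securityConfig = {
    identityConfig: {
        userServiceAccountMapping: {
            "alice@example.com": "alice-sa@my-project.iam.gserviceaccount.com",
            "bob@example.com": "bob-sa@my-project.iam.gserviceaccount.com",
        },
    },
};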

IdentityConfigResponse

UserServiceAccountMapping Dictionary<string, string>

Map of user to service account.

UserServiceAccountMapping map[string]string

Map of user to service account.

userServiceAccountMapping Map<String,String>

Map of user to service account.

userServiceAccountMapping {[key: string]: string}

Map of user to service account.

user_service_account_mapping Mapping[str, str]

Map of user to service account.

userServiceAccountMapping Map<String>

Map of user to service account.

InstanceGroupConfig

Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfig>

Optional. The Compute Engine accelerator configuration for these instances.

DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfig

Optional. Disk option config settings.

ImageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

MachineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

MinCpuPlatform string

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

NumInstances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

Preemptibility Pulumi.GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

Accelerators []AcceleratorConfig

Optional. The Compute Engine accelerator configuration for these instances.

DiskConfig DiskConfig

Optional. Disk option config settings.

ImageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

MachineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

MinCpuPlatform string

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

NumInstances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

Preemptibility InstanceGroupConfigPreemptibility

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators List<AcceleratorConfig>

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig DiskConfig

Optional. Disk option config settings.

imageUri String

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

machineTypeUri String

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

minCpuPlatform String

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances Integer

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility InstanceGroupConfigPreemptibility

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators AcceleratorConfig[]

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig DiskConfig

Optional. Disk option config settings.

imageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

machineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

minCpuPlatform string

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances number

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility InstanceGroupConfigPreemptibility

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators Sequence[AcceleratorConfig]

Optional. The Compute Engine accelerator configuration for these instances.

disk_config DiskConfig

Optional. Disk option config settings.

image_uri str

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

machine_type_uri str

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

min_cpu_platform str

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

num_instances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility InstanceGroupConfigPreemptibility

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators List<Property Map>

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig Property Map

Optional. Disk option config settings.

imageUri String

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

machineTypeUri String

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

minCpuPlatform String

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances Number

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE" | "SPOT"

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
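
Putting these fields together, a minimal TypeScript sketch of a standard (non-HA) cluster; the machine types, disk size, and instance counts are illustrative:

import * as googleNative from "@pulumi/google-native";

const cluster = new googleNative.dataproc.v1.Cluster("example", {
    region: "us-central1",
    clusterName: "example-cluster",
    config: {
        masterConfig: {
            numInstances: 1, // must be 3 for an HA cluster
            machineTypeUri: "n1-standard-2", // short name, as Auto Zone Placement requires
            diskConfig: { bootDiskSizeGb: 100 },
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
        },
    },
});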

InstanceGroupConfigPreemptibility

PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED

Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.

NonPreemptible
NON_PREEMPTIBLE

Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.

Preemptible
PREEMPTIBLE

Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.

Spot
SPOT

Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.

InstanceGroupConfigPreemptibilityPreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED

Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.

InstanceGroupConfigPreemptibilityNonPreemptible
NON_PREEMPTIBLE

Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.

InstanceGroupConfigPreemptibilityPreemptible
PREEMPTIBLE

Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.

InstanceGroupConfigPreemptibilitySpot
SPOT

Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.

PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED

Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.

NonPreemptible
NON_PREEMPTIBLE

Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.

Preemptible
PREEMPTIBLE

Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.

Spot
SPOT

Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.

PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED

Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.

NonPreemptible
NON_PREEMPTIBLE

Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.

Preemptible
PREEMPTIBLE

Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.

Spot
SPOT

Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.

PREEMPTIBILITY_UNSPECIFIED
PREEMPTIBILITY_UNSPECIFIED

Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.

NON_PREEMPTIBLE
NON_PREEMPTIBLE

Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.

PREEMPTIBLE
PREEMPTIBLE

Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.

SPOT
SPOT

Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.

"PREEMPTIBILITY_UNSPECIFIED"
PREEMPTIBILITY_UNSPECIFIED

Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.

"NON_PREEMPTIBLE"
NON_PREEMPTIBLE

Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.

"PREEMPTIBLE"
PREEMPTIBLE

Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.

"SPOT"
SPOT

Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
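
In practice these values matter only for secondaryWorkerConfig, since master and primary worker groups are always NON_PREEMPTIBLE. A minimal fragment (counts illustrative):

// Only the secondary worker group may be PREEMPTIBLE or SPOT.
const secondaryWorkerConfig = {
    numInstances: 4,
    preemptibility: "SPOT", // reclaimable capacity; "PREEMPTIBLE" also works here
};

Leaving the field unset (PREEMPTIBILITY_UNSPECIFIED) lets the service apply the per-group defaults described above.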

InstanceGroupConfigResponse

Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigResponse>

Optional. The Compute Engine accelerator configuration for these instances.

DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfigResponse

Optional. Disk option config settings.

ImageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

InstanceNames List<string>

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

InstanceReferences List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceReferenceResponse>

List of references to Compute Engine instances.

IsPreemptible bool

Specifies that this instance group contains preemptible instances.

MachineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

MinCpuPlatform string

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

NumInstances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

Preemptibility string

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

Accelerators []AcceleratorConfigResponse

Optional. The Compute Engine accelerator configuration for these instances.

DiskConfig DiskConfigResponse

Optional. Disk option config settings.

ImageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

InstanceNames []string

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

InstanceReferences []InstanceReferenceResponse

List of references to Compute Engine instances.

IsPreemptible bool

Specifies that this instance group contains preemptible instances.

MachineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

ManagedGroupConfig ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

MinCpuPlatform string

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

NumInstances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

Preemptibility string

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators List<AcceleratorConfigResponse>

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig DiskConfigResponse

Optional. Disk option config settings.

imageUri String

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instanceNames List<String>

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instanceReferences List<InstanceReferenceResponse>

List of references to Compute Engine instances.

isPreemptible Boolean

Specifies that this instance group contains preemptible instances.

machineTypeUri String

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managedGroupConfig ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

minCpuPlatform String

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances Integer

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility String

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators AcceleratorConfigResponse[]

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig DiskConfigResponse

Optional. Disk option config settings.

imageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instanceNames string[]

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instanceReferences InstanceReferenceResponse[]

List of references to Compute Engine instances.

isPreemptible boolean

Specifies that this instance group contains preemptible instances.

machineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managedGroupConfig ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

minCpuPlatform string

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances number

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility string

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators Sequence[AcceleratorConfigResponse]

Optional. The Compute Engine accelerator configuration for these instances.

disk_config DiskConfigResponse

Optional. Disk option config settings.

image_uri str

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instance_names Sequence[str]

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instance_references Sequence[InstanceReferenceResponse]

List of references to Compute Engine instances.

is_preemptible bool

Specifies that this instance group contains preemptible instances.

machine_type_uri str

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managed_group_config ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

min_cpu_platform str

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

num_instances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility str

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators List<Property Map>

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig Property Map

Optional. Disk option config settings.

imageUri String

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instanceNames List<String>

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instanceReferences List<Property Map>

List of references to Compute Engine instances.

isPreemptible Boolean

Specifies that this instance group contains preemptible instances.

machineTypeUri String

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managedGroupConfig Property Map

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

minCpuPlatform String

Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances Number

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility String

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
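
The Response-suffixed shape is output-only: fields such as instanceNames, instanceReferences, and managedGroupConfig are populated by the service rather than set by you. A sketch of reading one back, assuming the cluster variable from the earlier example and that the provider surfaces config as an output:

// Export the service-derived master instance names once the cluster exists.
export const masterInstanceNames = cluster.config.apply(
    c => c?.masterConfig?.instanceNames,
);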

InstanceReferenceResponse

InstanceId string

The unique identifier of the Compute Engine instance.

InstanceName string

The user-friendly name of the Compute Engine instance.

PublicEciesKey string

The public ECIES key used for sharing data with this instance.

PublicKey string

The public RSA key used for sharing data with this instance.

InstanceId string

The unique identifier of the Compute Engine instance.

InstanceName string

The user-friendly name of the Compute Engine instance.

PublicEciesKey string

The public ECIES key used for sharing data with this instance.

PublicKey string

The public RSA key used for sharing data with this instance.

instanceId String

The unique identifier of the Compute Engine instance.

instanceName String

The user-friendly name of the Compute Engine instance.

publicEciesKey String

The public ECIES key used for sharing data with this instance.

publicKey String

The public RSA key used for sharing data with this instance.

instanceId string

The unique identifier of the Compute Engine instance.

instanceName string

The user-friendly name of the Compute Engine instance.

publicEciesKey string

The public ECIES key used for sharing data with this instance.

publicKey string

The public RSA key used for sharing data with this instance.

instance_id str

The unique identifier of the Compute Engine instance.

instance_name str

The user-friendly name of the Compute Engine instance.

public_ecies_key str

The public ECIES key used for sharing data with this instance.

public_key str

The public RSA key used for sharing data with this instance.

instanceId String

The unique identifier of the Compute Engine instance.

instanceName String

The user-friendly name of the Compute Engine instance.

publicEciesKey String

The public ECIES key used for sharing data with this instance.

publicKey String

The public RSA key used for sharing data with this instance.

KerberosConfig

CrossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

CrossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

EnableKerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

KdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

KeyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

KeystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

KeystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

KmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

Realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

RootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

TgtLifetimeHours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

TruststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

TruststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

CrossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

CrossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

EnableKerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

KdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

KeyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

KeystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

KeystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

KmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

Realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

RootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

TgtLifetimeHours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

TruststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

TruststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer String

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc String

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm String

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos Boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri String

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri String

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm String

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours Integer

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

truststorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri String

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours number

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

truststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

cross_realm_trust_admin_server str

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_kdc str

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_realm str

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

cross_realm_trust_shared_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enable_kerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdc_db_key_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

key_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystore_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystore_uri str

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kms_key_uri str

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm str

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

root_principal_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgt_lifetime_hours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

truststore_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststore_uri str

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer String

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc String

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm String

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos Boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri String

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri String

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm String

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours Number

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

truststorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri String

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
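
Tying the fields together, a minimal TypeScript sketch of a Kerberized cluster's security config. The KMS key and Cloud Storage paths are placeholders; each password-file URI must point at a file encrypted with the key named in kmsKeyUri.

// Illustrative values only; pass as config.securityConfig on the cluster.
const securityConfig = {
    kerberosConfig: {
        enableKerberos: true,
        kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
        rootPrincipalPasswordUri: "gs://my-secure-bucket/root-principal.encrypted",
        tgtLifetimeHours: 10, // same as the default applied when unset or 0
        // Omitting keystoreUri/truststoreUri makes Dataproc generate a
        // self-signed certificate and the matching passwords.
    },
};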

KerberosConfigResponse

CrossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

CrossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

EnableKerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

KdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

KeyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

KeystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

KeystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

KmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

Realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

RootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

TgtLifetimeHours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

TruststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

TruststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

CrossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

CrossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

EnableKerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

KdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

KeyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

KeystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

KeystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

KmsKeyUri string

Optional. The uri of the KMS key used to encrypt various sensitive files.

Realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

RootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

TgtLifetimeHours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 is used.

TruststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

TruststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer String

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc String

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm String

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos Boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri String

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri String

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm String

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours Integer

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 is used.

truststorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri String

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours number

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 is used.

truststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

cross_realm_trust_admin_server str

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_kdc str

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_realm str

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

cross_realm_trust_shared_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enable_kerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdc_db_key_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

key_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystore_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystore_uri str

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kms_key_uri str

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm str

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

root_principal_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgt_lifetime_hours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 is used.

truststore_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststore_uri str

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer String

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc String

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm String

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos Boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri String

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri String

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm String

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours Number

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 is used.

truststorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri String

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
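
As a hedged illustration of how these fields fit together, the following minimal TypeScript sketch enables Kerberos on a cluster. KerberosConfig nests under config.securityConfig; the project, KMS key, and Cloud Storage paths are hypothetical placeholders, and everything else falls back to Dataproc defaults.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: Kerberize a cluster. All resource names below are
// placeholders to replace with your own project, key ring, and bucket.
const kerberized = new google_native.dataproc.v1.Cluster("kerberized", {
    clusterName: "kerberized",
    region: "us-central1",
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                // KMS key that encrypted the password files referenced below.
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
                // KMS-encrypted root principal password stored in Cloud Storage.
                rootPrincipalPasswordUri: "gs://my-secrets/root-password.encrypted",
                tgtLifetimeHours: 10, // same as the documented default
            },
        },
    },
});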

KubernetesClusterConfig

GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig

The configuration for running the Dataproc cluster on GKE.

KubernetesNamespace string

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

KubernetesSoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfig

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

GkeClusterConfig GkeClusterConfig

The configuration for running the Dataproc cluster on GKE.

KubernetesNamespace string

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

KubernetesSoftwareConfig KubernetesSoftwareConfig

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gkeClusterConfig GkeClusterConfig

The configuration for running the Dataproc cluster on GKE.

kubernetesNamespace String

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetesSoftwareConfig KubernetesSoftwareConfig

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gkeClusterConfig GkeClusterConfig

The configuration for running the Dataproc cluster on GKE.

kubernetesNamespace string

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetesSoftwareConfig KubernetesSoftwareConfig

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gke_cluster_config GkeClusterConfig

The configuration for running the Dataproc cluster on GKE.

kubernetes_namespace str

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetes_software_config KubernetesSoftwareConfig

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gkeClusterConfig Property Map

The configuration for running the Dataproc cluster on GKE.

kubernetesNamespace String

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetesSoftwareConfig Property Map

Optional. The software configuration for this Dataproc cluster running on Kubernetes.
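
A hedged sketch of a Dataproc-on-GKE virtual cluster follows. The GKE cluster, node pool, and component version are hypothetical placeholders; substitute a GKE cluster you own and a Spark version supported by your Dataproc release.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: run Dataproc on an existing GKE cluster via
// virtualClusterConfig.kubernetesClusterConfig. Names are placeholders.
const dataprocOnGke = new google_native.dataproc.v1.Cluster("dataproc-on-gke", {
    clusterName: "dataproc-on-gke",
    region: "us-central1",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            kubernetesNamespace: "dataproc", // created if it does not exist
            gkeClusterConfig: {
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                nodePoolTarget: [{
                    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
                    roles: ["DEFAULT"],
                }],
            },
            kubernetesSoftwareConfig: {
                componentVersion: { SPARK: "3.1-dataproc-7" }, // placeholder version
            },
        },
    },
});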

KubernetesClusterConfigResponse

GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse

The configuration for running the Dataproc cluster on GKE.

KubernetesNamespace string

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

KubernetesSoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigResponse

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

GkeClusterConfig GkeClusterConfigResponse

The configuration for running the Dataproc cluster on GKE.

KubernetesNamespace string

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

KubernetesSoftwareConfig KubernetesSoftwareConfigResponse

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gkeClusterConfig GkeClusterConfigResponse

The configuration for running the Dataproc cluster on GKE.

kubernetesNamespace String

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetesSoftwareConfig KubernetesSoftwareConfigResponse

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gkeClusterConfig GkeClusterConfigResponse

The configuration for running the Dataproc cluster on GKE.

kubernetesNamespace string

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetesSoftwareConfig KubernetesSoftwareConfigResponse

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gke_cluster_config GkeClusterConfigResponse

The configuration for running the Dataproc cluster on GKE.

kubernetes_namespace str

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetes_software_config KubernetesSoftwareConfigResponse

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

gkeClusterConfig Property Map

The configuration for running the Dataproc cluster on GKE.

kubernetesNamespace String

Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

kubernetesSoftwareConfig Property Map

Optional. The software configuration for this Dataproc cluster running on Kubernetes.

KubernetesSoftwareConfig

ComponentVersion Dictionary<string, string>

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

Properties Dictionary<string, string>

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

ComponentVersion map[string]string

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

Properties map[string]string

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

componentVersion Map<String,String>

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties Map<String,String>

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

componentVersion {[key: string]: string}

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties {[key: string]: string}

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

component_version Mapping[str, str]

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties Mapping[str, str]

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

componentVersion Map<String>

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties Map<String>

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
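
For illustration, a small TypeScript sketch of a KubernetesSoftwareConfig value is shown below. The type path, component version, and container image are assumptions to adapt to your setup.

import * as google_native from "@pulumi/google-native";

// Sketch: pick component versions and set daemon properties using the
// prefix:property format described above. Values are placeholders.
const softwareConfig: google_native.types.input.dataproc.v1.KubernetesSoftwareConfigArgs = {
    componentVersion: {
        SPARK: "3.1-dataproc-7", // placeholder; use a supported version
    },
    properties: {
        // The spark: prefix maps to spark-defaults.conf.
        "spark:spark.kubernetes.container.image": "gcr.io/my-project/my-spark:latest",
    },
};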

KubernetesSoftwareConfigResponse

ComponentVersion Dictionary<string, string>

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

Properties Dictionary<string, string>

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

ComponentVersion map[string]string

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

Properties map[string]string

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

componentVersion Map<String,String>

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties Map<String,String>

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

componentVersion {[key: string]: string}

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties {[key: string]: string}

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

component_version Mapping[str, str]

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties Mapping[str, str]

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

componentVersion Map<String>

The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

properties Map<String>

The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

LifecycleConfig

AutoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime String

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl String

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl String

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

auto_delete_time str

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

auto_delete_ttl str

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idle_delete_ttl str

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime String

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl String

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl String

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
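
To make the TTL fields concrete, here is a hedged TypeScript sketch of an ephemeral cluster. The durations use the JSON Duration string form (for example "3600s"), and the cluster name and region are placeholders.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: a cluster that deletes itself after one idle hour,
// and in any case within 24 hours of creation.
const ephemeral = new google_native.dataproc.v1.Cluster("ephemeral", {
    clusterName: "ephemeral",
    region: "us-central1",
    config: {
        lifecycleConfig: {
            idleDeleteTtl: "3600s",  // minimum 5 minutes, maximum 14 days
            autoDeleteTtl: "86400s", // minimum 10 minutes, maximum 14 days
        },
    },
});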

LifecycleConfigResponse

AutoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleStartTime string

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleStartTime string

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime String

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl String

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl String

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleStartTime String

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleStartTime string

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

auto_delete_time str

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

auto_delete_ttl str

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idle_delete_ttl str

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idle_start_time str

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime String

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl String

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl String

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleStartTime String

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

ManagedGroupConfigResponse

InstanceGroupManagerName string

The name of the Instance Group Manager for this group.

InstanceTemplateName string

The name of the Instance Template used for the Managed Instance Group.

InstanceGroupManagerName string

The name of the Instance Group Manager for this group.

InstanceTemplateName string

The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName String

The name of the Instance Group Manager for this group.

instanceTemplateName String

The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName string

The name of the Instance Group Manager for this group.

instanceTemplateName string

The name of the Instance Template used for the Managed Instance Group.

instance_group_manager_name str

The name of the Instance Group Manager for this group.

instance_template_name str

The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName String

The name of the Instance Group Manager for this group.

instanceTemplateName String

The name of the Instance Template used for the Managed Instance Group.

MetastoreConfig

DataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

DataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService String

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataproc_metastore_service str

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService String

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
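
For example, attaching an existing Dataproc Metastore service looks roughly like the TypeScript sketch below; the project and service names are placeholders.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: point the cluster at an existing Dataproc Metastore
// service (placeholder resource name).
const withMetastore = new google_native.dataproc.v1.Cluster("with-metastore", {
    clusterName: "with-metastore",
    region: "us-central1",
    config: {
        metastoreConfig: {
            dataprocMetastoreService:
                "projects/my-project/locations/us-central1/services/my-metastore",
        },
    },
});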

MetastoreConfigResponse

DataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

DataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService String

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataproc_metastore_service str

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService String

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

Metric

MetricSource Pulumi.GoogleNative.Dataproc.V1.MetricMetricSource

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

MetricOverrides List<string>

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

MetricSource MetricMetricSource

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

MetricOverrides []string

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

metricSource MetricMetricSource

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metricOverrides List<String>

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

metricSource MetricMetricSource

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metricOverrides string[]

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

metric_source MetricMetricSource

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metric_overrides Sequence[str]

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

metricSource "METRIC_SOURCE_UNSPECIFIED" | "MONITORING_AGENT_DEFAULTS" | "HDFS" | "SPARK" | "YARN" | "SPARK_HISTORY_SERVER" | "HIVESERVER2" | "HIVEMETASTORE"

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metricOverrides List<String>

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.
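
As a hedged sketch of the override format, the following TypeScript snippet collects only one Spark metric while leaving other sources' defaults alone. It assumes Metric entries live under config.dataprocMetricConfig.metrics; the cluster name and region are placeholders.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: with an override list for SPARK, only the listed
// SPARK metrics are collected; defaults for other sources are unaffected.
const monitored = new google_native.dataproc.v1.Cluster("monitored", {
    clusterName: "monitored",
    region: "us-central1",
    config: {
        dataprocMetricConfig: {
            metrics: [{
                metricSource: "SPARK",
                metricOverrides: ["spark:driver:DAGScheduler:job.allJobs"],
            }],
        },
    },
});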

MetricMetricSource

MetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED

Required unspecified metric source.

MonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS

Default monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects default monitoring agent metrics, which are published with an agent.googleapis.com prefix.

Hdfs
HDFS

HDFS metric source.

Spark
SPARK

Spark metric source.

Yarn
YARN

YARN metric source.

SparkHistoryServer
SPARK_HISTORY_SERVER

Spark History Server metric source.

Hiveserver2
HIVESERVER2

Hiveserver2 metric source.

Hivemetastore
HIVEMETASTORE

Hivemetastore metric source.

MetricMetricSourceMetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED

Required unspecified metric source.

MetricMetricSourceMonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS

Default monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects default monitoring agent metrics, which are published with an agent.googleapis.com prefix.

MetricMetricSourceHdfs
HDFS

HDFS metric source.

MetricMetricSourceSpark
SPARK

Spark metric source.

MetricMetricSourceYarn
YARN

YARN metric source.

MetricMetricSourceSparkHistoryServer
SPARK_HISTORY_SERVER

Spark History Server metric source.

MetricMetricSourceHiveserver2
HIVESERVER2

Hiveserver2 metric source.

MetricMetricSourceHivemetastore
HIVEMETASTORE

Hivemetastore metric source.

MetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED

Required unspecified metric source.

MonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS

Default monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects default monitoring agent metrics, which are published with an agent.googleapis.com prefix.

Hdfs
HDFS

HDFS metric source.

Spark
SPARK

Spark metric source.

Yarn
YARN

YARN metric source.

SparkHistoryServer
SPARK_HISTORY_SERVER

Spark History Server metric source.

Hiveserver2
HIVESERVER2

Hiveserver2 metric source.

Hivemetastore
HIVEMETASTORE

Hivemetastore metric source.

MetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED

Required unspecified metric source.

MonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS

Default monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects default monitoring agent metrics, which are published with an agent.googleapis.com prefix.

Hdfs
HDFS

HDFS metric source.

Spark
SPARK

Spark metric source.

Yarn
YARN

YARN metric source.

SparkHistoryServer
SPARK_HISTORY_SERVER

Spark History Server metric source.

Hiveserver2
HIVESERVER2

Hiveserver2 metric source.

Hivemetastore
HIVEMETASTORE

Hivemetastore metric source.

METRIC_SOURCE_UNSPECIFIED
METRIC_SOURCE_UNSPECIFIED

Required unspecified metric source.

MONITORING_AGENT_DEFAULTS
MONITORING_AGENT_DEFAULTS

Default monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects default monitoring agent metrics, which are published with an agent.googleapis.com prefix.

HDFS
HDFS

HDFS metric source.

SPARK
SPARK

Spark metric source.

YARN
YARN

YARN metric source.

SPARK_HISTORY_SERVER
SPARK_HISTORY_SERVER

Spark History Server metric source.

HIVESERVER2
HIVESERVER2

Hiveserver2 metric source.

HIVEMETASTORE
HIVEMETASTORE

Hivemetastore metric source.

"METRIC_SOURCE_UNSPECIFIED"
METRIC_SOURCE_UNSPECIFIED

Required unspecified metric source.

"MONITORING_AGENT_DEFAULTS"
MONITORING_AGENT_DEFAULTS

Default monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects default monitoring agent metrics, which are published with an agent.googleapis.com prefix.

"HDFS"
HDFS

HDFS metric source.

"SPARK"
SPARK

Spark metric source.

"YARN"
YARN

YARN metric source.

"SPARK_HISTORY_SERVER"
SPARK_HISTORY_SERVER

Spark History Server metric source.

"HIVESERVER2"
HIVESERVER2

Hiveserver2 metric source.

"HIVEMETASTORE"
HIVEMETASTORE

Hivemetastore metric source.

MetricResponse

MetricOverrides List<string>

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

MetricSource string

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

MetricOverrides []string

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

MetricSource string

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metricOverrides List<String>

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

metricSource String

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metricOverrides string[]

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics are still collected.

metricSource string

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metric_overrides Sequence[str]

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics will be collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics will not be collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics will be collected.

metric_source str

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).

metricOverrides List<String>

Optional. Specify one or more available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics will be collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics will not be collected. The collection of the default metrics for other OSS metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all default YARN metrics will be collected.

metricSource String

Default metrics are collected unless metricOverrides are specified for the metric source (see Available OSS metrics (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics) for more information).
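
As an illustration, here is a minimal TypeScript sketch of a cluster that overrides Spark metric collection. The project, region, cluster name, and metric names are placeholders, and the nesting (config.dataprocMetricConfig.metrics) follows the type structure described above:

import * as google_native from "@pulumi/google-native";

// Collect only the listed Spark metric; default metrics for other
// enabled sources (for example, YARN) are still collected.
const cluster = new google_native.dataproc.v1.Cluster("metrics-cluster", {
    clusterName: "metrics-cluster",
    region: "us-central1",
    config: {
        dataprocMetricConfig: {
            metrics: [{
                metricSource: "SPARK", // a MetricMetricSource value
                metricOverrides: ["spark:driver:DAGScheduler:job.allJobs"],
            }],
        },
    },
});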

NamespacedGkeDeploymentTarget

ClusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

TargetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

ClusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

TargetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace String

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster String

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

cluster_namespace str

Optional. A namespace within the GKE cluster to deploy into.

target_gke_cluster str

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace String

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster String

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
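
As a sketch only (the project, cluster, and namespace names are placeholders, and a working virtual cluster requires additional Kubernetes software settings not shown here), the deployment target nests under the virtual cluster's GKE configuration:

// Fragment of VirtualClusterConfigArgs, not a complete deployable config.
const virtualClusterFragment = {
    kubernetesClusterConfig: {
        gkeClusterConfig: {
            namespacedGkeDeploymentTarget: {
                targetGkeCluster: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                clusterNamespace: "dataproc-jobs",
            },
        },
    },
};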

NamespacedGkeDeploymentTargetResponse

ClusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

TargetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

ClusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

TargetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace String

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster String

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

cluster_namespace str

Optional. A namespace within the GKE cluster to deploy into.

target_gke_cluster str

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace String

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster String

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NodeGroup

Roles List<Pulumi.GoogleNative.Dataproc.V1.NodeGroupRolesItem>

Node group roles.

Labels Dictionary<string, string>

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

Name string

The Node group resource name (https://aip.dev/122).

NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig

Optional. The node group instance group configuration.

Roles []NodeGroupRolesItem

Node group roles.

Labels map[string]string

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

Name string

The Node group resource name (https://aip.dev/122).

NodeGroupConfig InstanceGroupConfig

Optional. The node group instance group configuration.

roles List<NodeGroupRolesItem>

Node group roles.

labels Map<String,String>

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name String

The Node group resource name (https://aip.dev/122).

nodeGroupConfig InstanceGroupConfig

Optional. The node group instance group configuration.

roles NodeGroupRolesItem[]

Node group roles.

labels {[key: string]: string}

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name string

The Node group resource name (https://aip.dev/122).

nodeGroupConfig InstanceGroupConfig

Optional. The node group instance group configuration.

roles Sequence[NodeGroupRolesItem]

Node group roles.

labels Mapping[str, str]

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name str

The Node group resource name (https://aip.dev/122).

node_group_config InstanceGroupConfig

Optional. The node group instance group configuration.

roles List<"ROLE_UNSPECIFIED" | "DRIVER">

Node group roles.

labels Map<String>

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name String

The Node group resource name (https://aip.dev/122).

nodeGroupConfig Property Map

Optional. The node group instance group configuration.
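
For illustration, a hypothetical TypeScript fragment describing a driver node group; the labels and machine type are placeholders, and the group is assumed to be attached through the cluster config's auxiliary node groups:

// A node group whose nodes host job drivers.
const driverNodeGroup = {
    roles: ["DRIVER"],
    labels: { purpose: "job-drivers" },
    nodeGroupConfig: {
        numInstances: 2,
        machineTypeUri: "n1-standard-4",
    },
};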

NodeGroupAffinity

NodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

NodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

nodeGroupUri String

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

nodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

node_group_uri str

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

nodeGroupUri String

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
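
A short sketch of referencing a sole-tenant node group from the Compute Engine cluster config; the project, zone, and group name are placeholders:

// Place cluster VMs on the given sole-tenant node group.
const gceClusterConfigFragment = {
    nodeGroupAffinity: {
        nodeGroupUri: "projects/my-project/zones/us-central1-a/nodeGroups/node-group-1",
    },
};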

NodeGroupAffinityResponse

NodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

NodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

nodeGroupUri String

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

nodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

node_group_uri str

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

nodeGroupUri String

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

NodeGroupResponse

Labels Dictionary<string, string>

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

Name string

The Node group resource name (https://aip.dev/122).

NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

Optional. The node group instance group configuration.

Roles List<string>

Node group roles.

Labels map[string]string

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

Name string

The Node group resource name (https://aip.dev/122).

NodeGroupConfig InstanceGroupConfigResponse

Optional. The node group instance group configuration.

Roles []string

Node group roles.

labels Map<String,String>

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name String

The Node group resource name (https://aip.dev/122).

nodeGroupConfig InstanceGroupConfigResponse

Optional. The node group instance group configuration.

roles List<String>

Node group roles.

labels {[key: string]: string}

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name string

The Node group resource name (https://aip.dev/122).

nodeGroupConfig InstanceGroupConfigResponse

Optional. The node group instance group configuration.

roles string[]

Node group roles.

labels Mapping[str, str]

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name str

The Node group resource name (https://aip.dev/122).

node_group_config InstanceGroupConfigResponse

Optional. The node group instance group configuration.

roles Sequence[str]

Node group roles.

labels Map<String>

Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

name String

The Node group resource name (https://aip.dev/122).

nodeGroupConfig Property Map

Optional. The node group instance group configuration.

roles List<String>

Node group roles.

NodeGroupRolesItem

RoleUnspecified
ROLE_UNSPECIFIED

Required unspecified role.

Driver
DRIVER

Job drivers run on the node pool.

NodeGroupRolesItemRoleUnspecified
ROLE_UNSPECIFIED

Required unspecified role.

NodeGroupRolesItemDriver
DRIVER

Job drivers run on the node pool.

RoleUnspecified
ROLE_UNSPECIFIED

Required unspecified role.

Driver
DRIVER

Job drivers run on the node pool.

RoleUnspecified
ROLE_UNSPECIFIED

Required unspecified role.

Driver
DRIVER

Job drivers run on the node pool.

ROLE_UNSPECIFIED
ROLE_UNSPECIFIED

Required unspecified role.

DRIVER
DRIVER

Job drivers run on the node pool.

"ROLE_UNSPECIFIED"
ROLE_UNSPECIFIED

Required unspecified role.

"DRIVER"
DRIVER

Job drivers run on the node pool.

NodeInitializationAction

ExecutableFile string

Cloud Storage URI of executable file.

ExecutionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

ExecutableFile string

Cloud Storage URI of executable file.

ExecutionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executableFile String

Cloud Storage URI of executable file.

executionTimeout String

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executableFile string

Cloud Storage URI of executable file.

executionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executable_file str

Cloud Storage URI of executable file.

execution_timeout str

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executableFile String

Cloud Storage URI of executable file.

executionTimeout String

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
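
A minimal sketch of an initialization action with an explicit timeout; the bucket and script path are placeholders, and the timeout uses the JSON Duration form noted above:

// Run a startup script on each node; creation fails if it exceeds the timeout.
const initializationActions = [{
    executableFile: "gs://my-bucket/scripts/install-deps.sh",
    executionTimeout: "600s", // JSON representation of Duration
}];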

NodeInitializationActionResponse

ExecutableFile string

Cloud Storage URI of executable file.

ExecutionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

ExecutableFile string

Cloud Storage URI of executable file.

ExecutionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executableFile String

Cloud Storage URI of executable file.

executionTimeout String

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executableFile string

Cloud Storage URI of executable file.

executionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executable_file str

Cloud Storage URI of executable file.

execution_timeout str

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

executableFile String

Cloud Storage URI of executable file.

executionTimeout String

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

ReservationAffinity

ConsumeReservationType Pulumi.GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType

Optional. Type of reservation to consume

Key string

Optional. Corresponds to the label key of reservation resource.

Values List<string>

Optional. Corresponds to the label values of reservation resource.

ConsumeReservationType ReservationAffinityConsumeReservationType

Optional. Type of reservation to consume

Key string

Optional. Corresponds to the label key of reservation resource.

Values []string

Optional. Corresponds to the label values of reservation resource.

consumeReservationType ReservationAffinityConsumeReservationType

Optional. Type of reservation to consume

key String

Optional. Corresponds to the label key of reservation resource.

values List<String>

Optional. Corresponds to the label values of reservation resource.

consumeReservationType ReservationAffinityConsumeReservationType

Optional. Type of reservation to consume

key string

Optional. Corresponds to the label key of reservation resource.

values string[]

Optional. Corresponds to the label values of reservation resource.

consume_reservation_type ReservationAffinityConsumeReservationType

Optional. Type of reservation to consume

key str

Optional. Corresponds to the label key of reservation resource.

values Sequence[str]

Optional. Corresponds to the label values of reservation resource.

consumeReservationType "TYPE_UNSPECIFIED" | "NO_RESERVATION" | "ANY_RESERVATION" | "SPECIFIC_RESERVATION"

Optional. Type of reservation to consume

key String

Optional. Corresponds to the label key of reservation resource.

values List<String>

Optional. Corresponds to the label values of reservation resource.
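
A sketch of consuming a specific reservation; the reservation name is a placeholder, and the key shown is the label Compute Engine conventionally uses for named reservations:

// Consume capacity only from the named reservation.
const reservationAffinity = {
    consumeReservationType: "SPECIFIC_RESERVATION",
    key: "compute.googleapis.com/reservation-name",
    values: ["my-reservation"],
};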

ReservationAffinityConsumeReservationType

TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION

Do not consume from any allocated capacity.

AnyReservation
ANY_RESERVATION

Consume any reservation available.

SpecificReservation
SPECIFIC_RESERVATION

Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

ReservationAffinityConsumeReservationTypeTypeUnspecified
TYPE_UNSPECIFIED
ReservationAffinityConsumeReservationTypeNoReservation
NO_RESERVATION

Do not consume from any allocated capacity.

ReservationAffinityConsumeReservationTypeAnyReservation
ANY_RESERVATION

Consume any reservation available.

ReservationAffinityConsumeReservationTypeSpecificReservation
SPECIFIC_RESERVATION

Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION

Do not consume from any allocated capacity.

AnyReservation
ANY_RESERVATION

Consume any reservation available.

SpecificReservation
SPECIFIC_RESERVATION

Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION

Do not consume from any allocated capacity.

AnyReservation
ANY_RESERVATION

Consume any reservation available.

SpecificReservation
SPECIFIC_RESERVATION

Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

TYPE_UNSPECIFIED
TYPE_UNSPECIFIED
NO_RESERVATION
NO_RESERVATION

Do not consume from any allocated capacity.

ANY_RESERVATION
ANY_RESERVATION

Consume any reservation available.

SPECIFIC_RESERVATION
SPECIFIC_RESERVATION

Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

"TYPE_UNSPECIFIED"
TYPE_UNSPECIFIED
"NO_RESERVATION"
NO_RESERVATION

Do not consume from any allocated capacity.

"ANY_RESERVATION"
ANY_RESERVATION

Consume any reservation available.

"SPECIFIC_RESERVATION"
SPECIFIC_RESERVATION

Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

ReservationAffinityResponse

ConsumeReservationType string

Optional. Type of reservation to consume

Key string

Optional. Corresponds to the label key of reservation resource.

Values List<string>

Optional. Corresponds to the label values of reservation resource.

ConsumeReservationType string

Optional. Type of reservation to consume

Key string

Optional. Corresponds to the label key of reservation resource.

Values []string

Optional. Corresponds to the label values of reservation resource.

consumeReservationType String

Optional. Type of reservation to consume

key String

Optional. Corresponds to the label key of reservation resource.

values List<String>

Optional. Corresponds to the label values of reservation resource.

consumeReservationType string

Optional. Type of reservation to consume

key string

Optional. Corresponds to the label key of reservation resource.

values string[]

Optional. Corresponds to the label values of reservation resource.

consume_reservation_type str

Optional. Type of reservation to consume

key str

Optional. Corresponds to the label key of reservation resource.

values Sequence[str]

Optional. Corresponds to the label values of reservation resource.

consumeReservationType String

Optional. Type of reservation to consume

key String

Optional. Corresponds to the label key of reservation resource.

values List<String>

Optional. Corresponds to the label values of reservation resource.

SecurityConfig

IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfig

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfig

Optional. Kerberos related configuration.

IdentityConfig IdentityConfig

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

KerberosConfig KerberosConfig

Optional. Kerberos related configuration.

identityConfig IdentityConfig

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberosConfig KerberosConfig

Optional. Kerberos related configuration.

identityConfig IdentityConfig

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberosConfig KerberosConfig

Optional. Kerberos related configuration.

identity_config IdentityConfig

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberos_config KerberosConfig

Optional. Kerberos related configuration.

identityConfig Property Map

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberosConfig Property Map

Optional. Kerberos related configuration.
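
A hedged sketch of enabling Kerberos on a cluster; the KMS key and password URIs are placeholders, and KerberosConfig accepts further fields (realm, truststore settings, and so on) that are omitted here:

// Enable Kerberos with a KMS-encrypted root principal password.
const securityConfig = {
    kerberosConfig: {
        enableKerberos: true,
        kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
        rootPrincipalPasswordUri: "gs://my-bucket/kerberos/root-password.encrypted",
    },
};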

SecurityConfigResponse

IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfigResponse

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfigResponse

Optional. Kerberos related configuration.

IdentityConfig IdentityConfigResponse

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

KerberosConfig KerberosConfigResponse

Optional. Kerberos related configuration.

identityConfig IdentityConfigResponse

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberosConfig KerberosConfigResponse

Optional. Kerberos related configuration.

identityConfig IdentityConfigResponse

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberosConfig KerberosConfigResponse

Optional. Kerberos related configuration.

identity_config IdentityConfigResponse

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberos_config KerberosConfigResponse

Optional. Kerberos related configuration.

identityConfig Property Map

Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

kerberosConfig Property Map

Optional. Kerberos related configuration.

ShieldedInstanceConfig

EnableIntegrityMonitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

EnableSecureBoot bool

Optional. Defines whether instances have Secure Boot enabled.

EnableVtpm bool

Optional. Defines whether instances have the vTPM enabled.

EnableIntegrityMonitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

EnableSecureBoot bool

Optional. Defines whether instances have Secure Boot enabled.

EnableVtpm bool

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring Boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot Boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm Boolean

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm boolean

Optional. Defines whether instances have the vTPM enabled.

enable_integrity_monitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

enable_secure_boot bool

Optional. Defines whether instances have Secure Boot enabled.

enable_vtpm bool

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring Boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot Boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm Boolean

Optional. Defines whether instances have the vTPM enabled.
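
A short sketch turning on all three Shielded VM features; this fragment would sit under the cluster config's gceClusterConfig:

// Shielded VM settings for all cluster instances.
const shieldedInstanceConfig = {
    enableSecureBoot: true,
    enableVtpm: true,
    enableIntegrityMonitoring: true,
};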

ShieldedInstanceConfigResponse

EnableIntegrityMonitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

EnableSecureBoot bool

Optional. Defines whether instances have Secure Boot enabled.

EnableVtpm bool

Optional. Defines whether instances have the vTPM enabled.

EnableIntegrityMonitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

EnableSecureBoot bool

Optional. Defines whether instances have Secure Boot enabled.

EnableVtpm bool

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring Boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot Boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm Boolean

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm boolean

Optional. Defines whether instances have the vTPM enabled.

enable_integrity_monitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

enable_secure_boot bool

Optional. Defines whether instances have Secure Boot enabled.

enable_vtpm bool

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring Boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot Boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm Boolean

Optional. Defines whether instances have the vTPM enabled.

SoftwareConfig

ImageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

OptionalComponents List<Pulumi.GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem>

Optional. The set of components to activate on the cluster.

Properties Dictionary<string, string>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

ImageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

OptionalComponents []SoftwareConfigOptionalComponentsItem

Optional. The set of components to activate on the cluster.

Properties map[string]string

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion String

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents List<SoftwareConfigOptionalComponentsItem>

Optional. The set of components to activate on the cluster.

properties Map<String,String>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents SoftwareConfigOptionalComponentsItem[]

Optional. The set of components to activate on the cluster.

properties {[key: string]: string}

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

image_version str

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optional_components Sequence[SoftwareConfigOptionalComponentsItem]

Optional. The set of components to activate on the cluster.

properties Mapping[str, str]

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion String

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents List<"COMPONENT_UNSPECIFIED" | "ANACONDA" | "DOCKER" | "DRUID" | "FLINK" | "HBASE" | "HIVE_WEBHCAT" | "HUDI" | "JUPYTER" | "PRESTO" | "TRINO" | "RANGER" | "SOLR" | "ZEPPELIN" | "ZOOKEEPER">

Optional. The set of components to activate on the cluster.

properties Map<String>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
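
A sketch of a software config that pins an image version, activates optional components, and sets prefixed daemon properties; the version string and property values are placeholders:

// Image version, optional components, and prefix:property daemon settings.
const softwareConfig = {
    imageVersion: "2.0-debian10",
    optionalComponents: ["JUPYTER", "ZEPPELIN"],
    properties: {
        "core:hadoop.tmp.dir": "/tmp/hadoop",
        "spark:spark.executor.memory": "4g",
    },
};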

SoftwareConfigOptionalComponentsItem

ComponentUnspecified
COMPONENT_UNSPECIFIED

Unspecified component. Specifying this will cause Cluster creation to fail.

Anaconda
ANACONDA

The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.

Docker
DOCKER

Docker

Druid
DRUID

The Druid query engine. (alpha)

Flink
FLINK

Flink

Hbase
HBASE

HBase. (beta)

HiveWebhcat
HIVE_WEBHCAT

The Hive Web HCatalog (the REST service for accessing HCatalog).

Hudi
HUDI

Hudi.

Jupyter
JUPYTER

The Jupyter Notebook.

Presto
PRESTO

The Presto query engine.

Trino
TRINO

The Trino query engine.

Ranger
RANGER

The Ranger service.

Solr
SOLR

The Solr service.

Zeppelin
ZEPPELIN

The Zeppelin notebook.

Zookeeper
ZOOKEEPER

The Zookeeper service.

SoftwareConfigOptionalComponentsItemComponentUnspecified
COMPONENT_UNSPECIFIED

Unspecified component. Specifying this will cause Cluster creation to fail.

SoftwareConfigOptionalComponentsItemAnaconda
ANACONDA

The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.

SoftwareConfigOptionalComponentsItemDocker
DOCKER

Docker

SoftwareConfigOptionalComponentsItemDruid
DRUID

The Druid query engine. (alpha)

SoftwareConfigOptionalComponentsItemFlink
FLINK

Flink

SoftwareConfigOptionalComponentsItemHbase
HBASE

HBase. (beta)

SoftwareConfigOptionalComponentsItemHiveWebhcat
HIVE_WEBHCAT

The Hive Web HCatalog (the REST service for accessing HCatalog).

SoftwareConfigOptionalComponentsItemHudi
HUDI

Hudi.

SoftwareConfigOptionalComponentsItemJupyter
JUPYTER

The Jupyter Notebook.

SoftwareConfigOptionalComponentsItemPresto
PRESTO

The Presto query engine.

SoftwareConfigOptionalComponentsItemTrino
TRINO

The Trino query engine.

SoftwareConfigOptionalComponentsItemRanger
RANGER

The Ranger service.

SoftwareConfigOptionalComponentsItemSolr
SOLR

The Solr service.

SoftwareConfigOptionalComponentsItemZeppelin
ZEPPELIN

The Zeppelin notebook.

SoftwareConfigOptionalComponentsItemZookeeper
ZOOKEEPER

The Zookeeper service.

ComponentUnspecified
COMPONENT_UNSPECIFIED

Unspecified component. Specifying this will cause Cluster creation to fail.

Anaconda
ANACONDA

The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.

Docker
DOCKER

Docker

Druid
DRUID

The Druid query engine. (alpha)

Flink
FLINK

Flink

Hbase
HBASE

HBase. (beta)

HiveWebhcat
HIVE_WEBHCAT

The Hive Web HCatalog (the REST service for accessing HCatalog).

Hudi
HUDI

Hudi.

Jupyter
JUPYTER

The Jupyter Notebook.

Presto
PRESTO

The Presto query engine.

Trino
TRINO

The Trino query engine.

Ranger
RANGER

The Ranger service.

Solr
SOLR

The Solr service.

Zeppelin
ZEPPELIN

The Zeppelin notebook.

Zookeeper
ZOOKEEPER

The Zookeeper service.

ComponentUnspecified
COMPONENT_UNSPECIFIED

Unspecified component. Specifying this will cause Cluster creation to fail.

Anaconda
ANACONDA

The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.

Docker
DOCKER

Docker

Druid
DRUID

The Druid query engine. (alpha)

Flink
FLINK

Flink

Hbase
HBASE

HBase. (beta)

HiveWebhcat
HIVE_WEBHCAT

The Hive Web HCatalog (the REST service for accessing HCatalog).

Hudi
HUDI

Hudi.

Jupyter
JUPYTER

The Jupyter Notebook.

Presto
PRESTO

The Presto query engine.

Trino
TRINO

The Trino query engine.

Ranger
RANGER

The Ranger service.

Solr
SOLR

The Solr service.

Zeppelin
ZEPPELIN

The Zeppelin notebook.

Zookeeper
ZOOKEEPER

The Zookeeper service.

COMPONENT_UNSPECIFIED
COMPONENT_UNSPECIFIED

Unspecified component. Specifying this will cause Cluster creation to fail.

ANACONDA
ANACONDA

The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.

DOCKER
DOCKER

Docker

DRUID
DRUID

The Druid query engine. (alpha)

FLINK
FLINK

Flink

HBASE
HBASE

HBase. (beta)

HIVE_WEBHCAT
HIVE_WEBHCAT

The Hive Web HCatalog (the REST service for accessing HCatalog).

HUDI
HUDI

Hudi.

JUPYTER
JUPYTER

The Jupyter Notebook.

PRESTO
PRESTO

The Presto query engine.

TRINO
TRINO

The Trino query engine.

RANGER
RANGER

The Ranger service.

SOLR
SOLR

The Solr service.

ZEPPELIN
ZEPPELIN

The Zeppelin notebook.

ZOOKEEPER
ZOOKEEPER

The Zookeeper service.

"COMPONENT_UNSPECIFIED"
COMPONENT_UNSPECIFIED

Unspecified component. Specifying this will cause Cluster creation to fail.

"ANACONDA"
ANACONDA

The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.

"DOCKER"
DOCKER

Docker

"DRUID"
DRUID

The Druid query engine. (alpha)

"FLINK"
FLINK

Flink

"HBASE"
HBASE

HBase. (beta)

"HIVE_WEBHCAT"
HIVE_WEBHCAT

The Hive Web HCatalog (the REST service for accessing HCatalog).

"HUDI"
HUDI

Hudi.

"JUPYTER"
JUPYTER

The Jupyter Notebook.

"PRESTO"
PRESTO

The Presto query engine.

"TRINO"
TRINO

The Trino query engine.

"RANGER"
RANGER

The Ranger service.

"SOLR"
SOLR

The Solr service.

"ZEPPELIN"
ZEPPELIN

The Zeppelin notebook.

"ZOOKEEPER"
ZOOKEEPER

The Zookeeper service.

SoftwareConfigResponse

ImageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

OptionalComponents List<string>

Optional. The set of components to activate on the cluster.

Properties Dictionary<string, string>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

ImageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

OptionalComponents []string

Optional. The set of components to activate on the cluster.

Properties map[string]string

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion String

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents List<String>

Optional. The set of components to activate on the cluster.

properties Map<String,String>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents string[]

Optional. The set of components to activate on the cluster.

properties {[key: string]: string}

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

image_version str

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optional_components Sequence[str]

Optional. The set of components to activate on the cluster.

properties Mapping[str, str]

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion String

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents List<String>

Optional. The set of components to activate on the cluster.

properties Map<String>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

SparkHistoryServerConfig

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataproc_cluster str

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
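
A sketch pointing a workload at an existing history server cluster; the resource name is a placeholder in the documented format:

// Reuse an existing Dataproc cluster as the Spark History Server.
const sparkHistoryServerConfig = {
    dataprocCluster: "projects/my-project/regions/us-central1/clusters/history-server",
};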

SparkHistoryServerConfigResponse

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

DataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster string

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataproc_cluster str

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

dataprocCluster String

Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

VirtualClusterConfig

KubernetesClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigArgs

The configuration for running the Dataproc cluster on Kubernetes.

AuxiliaryServicesConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigArgs

Optional. Configuration of auxiliary services used by this cluster.

StagingBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

KubernetesClusterConfig KubernetesClusterConfig

The configuration for running the Dataproc cluster on Kubernetes.

AuxiliaryServicesConfig AuxiliaryServicesConfig

Optional. Configuration of auxiliary services used by this cluster.

StagingBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

kubernetesClusterConfig KubernetesClusterConfig

The configuration for running the Dataproc cluster on Kubernetes.

auxiliaryServicesConfig AuxiliaryServicesConfig

Optional. Configuration of auxiliary services used by this cluster.

stagingBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

kubernetesClusterConfig KubernetesClusterConfig

The configuration for running the Dataproc cluster on Kubernetes.

auxiliaryServicesConfig AuxiliaryServicesConfig

Optional. Configuration of auxiliary services used by this cluster.

stagingBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

kubernetes_cluster_config KubernetesClusterConfig

The configuration for running the Dataproc cluster on Kubernetes.

auxiliary_services_config AuxiliaryServicesConfig

Optional. Configuration of auxiliary services used by this cluster.

staging_bucket str

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

kubernetesClusterConfig Property Map

The configuration for running the Dataproc cluster on Kubernetes.

auxiliaryServicesConfig Property Map

Optional. Configuration of auxiliary services used by this cluster.

stagingBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
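
Putting the pieces together, a hedged TypeScript sketch of a Kubernetes-backed virtual cluster; it assumes an existing GKE cluster and node pool, and every resource name below (project, GKE cluster, node pool, buckets) is a hypothetical placeholder.

import * as google_native from "@pulumi/google-native";

const virtualCluster = new google_native.dataproc.v1.Cluster("gke-virtual", {
    region: "us-central1",
    clusterName: "gke-virtual",
    virtualClusterConfig: {
        // A bucket name, not a gs://... URI.
        stagingBucket: "my-dataproc-staging-bucket",
        kubernetesClusterConfig: {
            kubernetesNamespace: "dataproc",
            gkeClusterConfig: {
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                nodePoolTarget: [{
                    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
                    roles: ["DEFAULT"],
                }],
            },
            kubernetesSoftwareConfig: {
                // Spark component version; the exact value depends on the image.
                componentVersion: { SPARK: "3.1-dataproc-7" },
            },
        },
        auxiliaryServicesConfig: {
            sparkHistoryServerConfig: {
                dataprocCluster: "projects/my-project/regions/us-central1/clusters/history-cluster",
            },
        },
    },
});

Since exactly one of config or virtualClusterConfig may be specified on a Cluster, this example omits config entirely.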

VirtualClusterConfigResponse

AuxiliaryServicesConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigResponse

Optional. Configuration of auxiliary services used by this cluster.

KubernetesClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigResponse

The configuration for running the Dataproc cluster on Kubernetes.

StagingBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.