Google Cloud Native v0.28.0, published Feb 2, 2023

google-native.dataproc/v1beta2.getCluster

Gets the resource representation for a cluster in a project.

Using getCluster

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getCluster(args: GetClusterArgs, opts?: InvokeOptions): Promise<GetClusterResult>
function getClusterOutput(args: GetClusterOutputArgs, opts?: InvokeOptions): Output<GetClusterResult>
def get_cluster(cluster_name: Optional[str] = None,
                project: Optional[str] = None,
                region: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetClusterResult
def get_cluster_output(cluster_name: Optional[pulumi.Input[str]] = None,
                project: Optional[pulumi.Input[str]] = None,
                region: Optional[pulumi.Input[str]] = None,
                opts: Optional[InvokeOptions] = None) -> Output[GetClusterResult]
func LookupCluster(ctx *Context, args *LookupClusterArgs, opts ...InvokeOption) (*LookupClusterResult, error)
func LookupClusterOutput(ctx *Context, args *LookupClusterOutputArgs, opts ...InvokeOption) LookupClusterResultOutput

> Note: This function is named LookupCluster in the Go SDK.

public static class GetCluster 
{
    public static Task<GetClusterResult> InvokeAsync(GetClusterArgs args, InvokeOptions? opts = null)
    public static Output<GetClusterResult> Invoke(GetClusterInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetClusterResult> getCluster(GetClusterArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:dataproc/v1beta2:getCluster
  arguments:
    # arguments dictionary
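
For reference, here is a minimal TypeScript sketch of both invocation forms, assuming the @pulumi/google-native Node SDK; the cluster name, region, and project values are placeholders.

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, result available as a Promise.
const cluster = google_native.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",     // placeholder
    region: "us-central1",         // placeholder
    project: "my-gcp-project",     // placeholder; may be omitted to use the provider's configured project
});

// Output form: Input-wrapped arguments, Output-wrapped result.
const clusterOutput = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",
    region: "us-central1",
});

// Export a field from each form.
export const clusterUuid = cluster.then(c => c.clusterUuid);
export const clusterState = clusterOutput.status.apply(s => s.state);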

The following arguments are supported:

ClusterName string
Region string
Project string
ClusterName string
Region string
Project string
clusterName String
region String
project String
clusterName string
region string
project string
clusterName String
region String
project String

getCluster Result

The following output properties are available:

ClusterName string

The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

ClusterUuid string

A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

Config Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterConfigResponse

The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

Labels Dictionary<string, string>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

Metrics Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Project string

The Google Cloud Platform project ID that the cluster belongs to.

Status Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterStatusResponse

Cluster status.

StatusHistory List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterStatusResponse>

The previous cluster status.

ClusterName string

The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

ClusterUuid string

A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

Config ClusterConfigResponse

The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

Labels map[string]string

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

Metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Project string

The Google Cloud Platform project ID that the cluster belongs to.

Status ClusterStatusResponse

Cluster status.

StatusHistory []ClusterStatusResponse

The previous cluster status.

clusterName String

The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

clusterUuid String

A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

config ClusterConfigResponse

The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

labels Map<String,String>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

project String

The Google Cloud Platform project ID that the cluster belongs to.

status ClusterStatusResponse

Cluster status.

statusHistory List<ClusterStatusResponse>

The previous cluster status.

clusterName string

The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

clusterUuid string

A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

config ClusterConfigResponse

The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

labels {[key: string]: string}

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

project string

The Google Cloud Platform project ID that the cluster belongs to.

status ClusterStatusResponse

Cluster status.

statusHistory ClusterStatusResponse[]

The previous cluster status.

cluster_name str

The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

cluster_uuid str

A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

config ClusterConfigResponse

The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

labels Mapping[str, str]

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

metrics ClusterMetricsResponse

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

project str

The Google Cloud Platform project ID that the cluster belongs to.

status ClusterStatusResponse

Cluster status.

status_history Sequence[ClusterStatusResponse]

The previous cluster status.

clusterName String

The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

clusterUuid String

A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

config Property Map

The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

labels Map<String>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

metrics Property Map

Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

project String

The Google Cloud Platform project ID that the cluster belongs to.

status Property Map

Cluster status.

statusHistory List<Property Map>

The previous cluster status.
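
As a rough illustration of how these output properties surface in code, the following TypeScript sketch reads a few of them from the Output form (same assumptions as above: @pulumi/google-native Node SDK, placeholder names).

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});

// Top-level result properties documented above.
export const uuid = cluster.clusterUuid;                   // string
export const labels = cluster.labels;                      // {[key: string]: string}
export const stagingBucket = cluster.config.configBucket;  // from ClusterConfigResponse
export const currentState = cluster.status.state;          // from ClusterStatusResponse
export const pastStates = cluster.statusHistory.apply(
    history => history.map(s => `${s.state} since ${s.stateStartTime}`));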

Supporting Types

AcceleratorConfigResponse

AcceleratorCount int

The number of the accelerator cards of this type exposed to this instance.

AcceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AcceleratorCount int

The number of the accelerator cards of this type exposed to this instance.

AcceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount Integer

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri String

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount number

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

accelerator_count int

The number of the accelerator cards of this type exposed to this instance.

accelerator_type_uri str

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount Number

The number of the accelerator cards of this type exposed to this instance.

acceleratorTypeUri String

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
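
Accelerator configs are returned as part of the instance group configs (for example, the worker config). The sketch below is illustrative only: it assumes the @pulumi/google-native Node SDK and that accelerators are exposed at config.workerConfig.accelerators, mirroring the Dataproc API's InstanceGroupConfig; names are placeholders.

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});

// Summarize the accelerator cards attached to worker instances (assumed field path).
export const workerAccelerators = cluster.config.workerConfig.accelerators.apply(accs =>
    accs.map(a => `${a.acceleratorCount} x ${a.acceleratorTypeUri}`));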

AutoscalingConfigResponse

PolicyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

PolicyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri String

Optional. The autoscaling policy used by the cluster. Only resource names that include project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri string

Optional. The autoscaling policy used by the cluster. Only resource names that include project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policy_uri str

Optional. The autoscaling policy used by the cluster. Only resource names that include project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

policyUri String

Optional. The autoscaling policy used by the cluster. Only resource names that include project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

ClusterConfigResponse

AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

ConfigBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

EncryptionConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigResponse

Optional. Encryption settings for the cluster.

EndpointConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster

GceClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigResponse

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

InitializationActions List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionResponse>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

LifecycleConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigResponse

Optional. The config setting for auto delete cluster schedule.

MasterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the master instance in a cluster.

MetastoreConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigResponse

Optional. Metastore configuration.

SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse

Optional. The Compute Engine config settings for additional worker instances in a cluster.

SecurityConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigResponse

Optional. Security related configuration.

SoftwareConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigResponse

Optional. The config settings for software inside the cluster.

TempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

WorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse

Optional. The Compute Engine config settings for worker instances in a cluster.

AutoscalingConfig AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

ConfigBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

EncryptionConfig EncryptionConfigResponse

Optional. Encryption settings for the cluster.

EndpointConfig EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster

GceClusterConfig GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

GkeClusterConfig GkeClusterConfigResponse

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

InitializationActions []NodeInitializationActionResponse

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

LifecycleConfig LifecycleConfigResponse

Optional. The config setting for auto delete cluster schedule.

MasterConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the master instance in a cluster.

MetastoreConfig MetastoreConfigResponse

Optional. Metastore configuration.

SecondaryWorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for additional worker instances in a cluster.

SecurityConfig SecurityConfigResponse

Optional. Security related configuration.

SoftwareConfig SoftwareConfigResponse

Optional. The config settings for software inside the cluster.

TempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

WorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for worker instances in a cluster.

autoscalingConfig AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

configBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

encryptionConfig EncryptionConfigResponse

Optional. Encryption settings for the cluster.

endpointConfig EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster

gceClusterConfig GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig GkeClusterConfigResponse

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions List<NodeInitializationActionResponse>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

lifecycleConfig LifecycleConfigResponse

Optional. The config setting for auto delete cluster schedule.

masterConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the master instance in a cluster.

metastoreConfig MetastoreConfigResponse

Optional. Metastore configuration.

secondaryWorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for additional worker instances in a cluster.

securityConfig SecurityConfigResponse

Optional. Security related configuration.

softwareConfig SoftwareConfigResponse

Optional. The config settings for software inside the cluster.

tempBucket String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

workerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for worker instances in a cluster.

autoscalingConfig AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

configBucket string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

encryptionConfig EncryptionConfigResponse

Optional. Encryption settings for the cluster.

endpointConfig EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster

gceClusterConfig GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig GkeClusterConfigResponse

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions NodeInitializationActionResponse[]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

lifecycleConfig LifecycleConfigResponse

Optional. The config setting for auto delete cluster schedule.

masterConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the master instance in a cluster.

metastoreConfig MetastoreConfigResponse

Optional. Metastore configuration.

secondaryWorkerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for additional worker instances in a cluster.

securityConfig SecurityConfigResponse

Optional. Security related configuration.

softwareConfig SoftwareConfigResponse

Optional. The config settings for software inside the cluster.

tempBucket string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

workerConfig InstanceGroupConfigResponse

Optional. The Compute Engine config settings for worker instances in a cluster.

autoscaling_config AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

config_bucket str

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

encryption_config EncryptionConfigResponse

Optional. Encryption settings for the cluster.

endpoint_config EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster

gce_cluster_config GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

gke_cluster_config GkeClusterConfigResponse

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initialization_actions Sequence[NodeInitializationActionResponse]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

lifecycle_config LifecycleConfigResponse

Optional. The config setting for auto delete cluster schedule.

master_config InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the master instance in a cluster.

metastore_config MetastoreConfigResponse

Optional. Metastore configuration.

secondary_worker_config InstanceGroupConfigResponse

Optional. The Compute Engine config settings for additional worker instances in a cluster.

security_config SecurityConfigResponse

Optional. Security related configuration.

software_config SoftwareConfigResponse

Optional. The config settings for software inside the cluster.

temp_bucket str

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

worker_config InstanceGroupConfigResponse

Optional. The Compute Engine config settings for worker instances in a cluster.

autoscalingConfig Property Map

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

configBucket String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

encryptionConfig Property Map

Optional. Encryption settings for the cluster.

endpointConfig Property Map

Optional. Port/endpoint configuration for this cluster

gceClusterConfig Property Map

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig Property Map

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

initializationActions List<Property Map>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi

lifecycleConfig Property Map

Optional. The config setting for auto delete cluster schedule.

masterConfig Property Map

Optional. The Compute Engine config settings for the master instance in a cluster.

metastoreConfig Property Map

Optional. Metastore configuration.

secondaryWorkerConfig Property Map

Optional. The Compute Engine config settings for additional worker instances in a cluster.

securityConfig Property Map

Optional. Security related configuration.

softwareConfig Property Map

Optional. The config settings for software inside the cluster.

tempBucket String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

workerConfig Property Map

Optional. The Compute Engine config settings for worker instances in a cluster.
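
The cluster config is where most of the read-only detail lives. A hedged TypeScript sketch (same SDK assumption and placeholder names as above) that drills into a few of the documented fields:

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});
const config = cluster.config;

// Buckets managed by (or supplied to) Dataproc.
export const stagingBucket = config.configBucket;
export const tempBucket = config.tempBucket;

// Nested configs documented in the supporting types below.
export const zone = config.gceClusterConfig.zoneUri;
export const autoscalingPolicy = config.autoscalingConfig.policyUri;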

ClusterMetricsResponse

HdfsMetrics Dictionary<string, string>

The HDFS metrics.

YarnMetrics Dictionary<string, string>

The YARN metrics.

HdfsMetrics map[string]string

The HDFS metrics.

YarnMetrics map[string]string

The YARN metrics.

hdfsMetrics Map<String,String>

The HDFS metrics.

yarnMetrics Map<String,String>

The YARN metrics.

hdfsMetrics {[key: string]: string}

The HDFS metrics.

yarnMetrics {[key: string]: string}

The YARN metrics.

hdfs_metrics Mapping[str, str]

The HDFS metrics.

yarn_metrics Mapping[str, str]

The YARN metrics.

hdfsMetrics Map<String>

The HDFS metrics.

yarnMetrics Map<String>

The YARN metrics.
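
Both metric maps are plain string-to-string dictionaries. A small TypeScript sketch under the same SDK assumption; the HDFS metric key shown is hypothetical and depends on what the cluster actually reports:

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});

// List the reported YARN metric names and look up one hypothetical HDFS metric key.
export const yarnMetricNames = cluster.metrics.yarnMetrics.apply(m => Object.keys(m));
export const hdfsCapacity = cluster.metrics.hdfsMetrics.apply(
    m => m["dfs-capacity-total"] ?? "not reported");   // hypothetical key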

ClusterStatusResponse

Detail string

Optional details of cluster's state.

State string

The cluster's state.

StateStartTime string

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

Substate string

Additional state information that includes status reported by the agent.

Detail string

Optional details of cluster's state.

State string

The cluster's state.

StateStartTime string

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

Substate string

Additional state information that includes status reported by the agent.

detail String

Optional details of cluster's state.

state String

The cluster's state.

stateStartTime String

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate String

Additional state information that includes status reported by the agent.

detail string

Optional details of cluster's state.

state string

The cluster's state.

stateStartTime string

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate string

Additional state information that includes status reported by the agent.

detail str

Optional details of cluster's state.

state str

The cluster's state.

state_start_time str

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate str

Additional state information that includes status reported by the agent.

detail String

Optional details of cluster's state.

state String

The cluster's state.

stateStartTime String

Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

substate String

Additional state information that includes status reported by the agent.
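
Status and statusHistory share this shape, so the current state and the most recent prior state can be read the same way. A minimal TypeScript sketch under the same SDK assumption:

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});

export const state = cluster.status.state;                 // e.g. RUNNING
export const stateSince = cluster.status.stateStartTime;   // RFC 3339 timestamp
export const previousState = cluster.statusHistory.apply(
    h => h.length > 0 ? h[h.length - 1].state : undefined);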

DiskConfigResponse

BootDiskSizeGb int

Optional. Size in GB of the boot disk (default is 500GB).

BootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

NumLocalSsds int

Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

BootDiskSizeGb int

Optional. Size in GB of the boot disk (default is 500GB).

BootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

NumLocalSsds int

Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

bootDiskSizeGb Integer

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType String

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

numLocalSsds Integer

Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

bootDiskSizeGb number

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

numLocalSsds number

Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

boot_disk_size_gb int

Optional. Size in GB of the boot disk (default is 500GB).

boot_disk_type str

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

num_local_ssds int

Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

bootDiskSizeGb Number

Optional. Size in GB of the boot disk (default is 500GB).

bootDiskType String

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

numLocalSsds Number

Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
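
Disk config appears under each instance group config rather than at the top level of the result. The sketch below assumes (not confirmed by this page) that the master group exposes it as config.masterConfig.diskConfig, mirroring the Dataproc API; same SDK assumption and placeholder names as above.

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});

// Summarize the master node's boot disk and local SSDs (assumed field path).
export const masterDisk = cluster.config.masterConfig.apply(mc =>
    `${mc.diskConfig.bootDiskSizeGb} GB ${mc.diskConfig.bootDiskType}, ` +
    `${mc.diskConfig.numLocalSsds} local SSD(s)`);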

EncryptionConfigResponse

GcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

GcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

gcePdKmsKeyName String

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

gcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

gce_pd_kms_key_name str

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

gcePdKmsKeyName String

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

EndpointConfigResponse

EnableHttpPortAccess bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

HttpPorts Dictionary<string, string>

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

EnableHttpPortAccess bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

HttpPorts map[string]string

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enableHttpPortAccess Boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

httpPorts Map<String,String>

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enableHttpPortAccess boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

httpPorts {[key: string]: string}

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enable_http_port_access bool

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

http_ports Mapping[str, str]

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enableHttpPortAccess Boolean

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

httpPorts Map<String>

The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
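
When enableHttpPortAccess is true, httpPorts maps port descriptions to URLs (for example, component web UIs). A hedged TypeScript sketch under the same assumptions:

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    region: "us-central1",       // placeholder
});

// Only expose the URL map when HTTP port access is actually enabled.
export const componentGatewayUrls = cluster.config.endpointConfig.apply(ec =>
    ec.enableHttpPortAccess ? ec.httpPorts : {});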

GceClusterConfigResponse

InternalIpOnly bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Metadata Dictionary<string, string>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

NetworkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default

NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

PrivateIpv6GoogleAccess string

Optional. The type of IPv6 access for a cluster.

ReservationAffinity Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

ServiceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

ServiceAccountScopes List<string>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

SubnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0

Tags List<string>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

ZoneUri string

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f

InternalIpOnly bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Metadata map[string]string

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

NetworkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default

NodeGroupAffinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

PrivateIpv6GoogleAccess string

Optional. The type of IPv6 access for a cluster.

ReservationAffinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

ServiceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

ServiceAccountScopes []string

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

ShieldedInstanceConfig ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

SubnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0

Tags []string

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

ZoneUri string

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f

internalIpOnly Boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Map<String,String>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri String

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default

nodeGroupAffinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess String

Optional. The type of IPv6 access for a cluster.

reservationAffinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount String

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes List<String>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri String

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0

tags List<String>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri String

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f

internalIpOnly boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata {[key: string]: string}

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default

nodeGroupAffinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess string

Optional. The type of IPv6 access for a cluster.

reservationAffinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount string

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes string[]

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0

tags string[]

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri string

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f

internal_ip_only bool

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Mapping[str, str]

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

network_uri str

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default

node_group_affinity NodeGroupAffinityResponse

Optional. Node Group Affinity for sole-tenant clusters.

private_ipv6_google_access str

Optional. The type of IPv6 access for a cluster.

reservation_affinity ReservationAffinityResponse

Optional. Reservation Affinity for consuming Zonal reservation.

service_account str

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

service_account_scopes Sequence[str]

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control

shielded_instance_config ShieldedInstanceConfigResponse

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetwork_uri str

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0

tags Sequence[str]

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zone_uri str

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f

internalIpOnly Boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

metadata Map<String>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

networkUri String

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default

nodeGroupAffinity Property Map

Optional. Node Group Affinity for sole-tenant clusters.

privateIpv6GoogleAccess String

Optional. The type of IPv6 access for a cluster.

reservationAffinity Property Map

Optional. Reservation Affinity for consuming Zonal reservation.

serviceAccount String

Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

serviceAccountScopes List<String>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control

shieldedInstanceConfig Property Map

Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

subnetworkUri String

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0

tags List<String>

The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

zoneUri String

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
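
As a quick illustration of how these GCE settings surface on a getCluster result, here is a minimal TypeScript sketch. The cluster name, project, and region are placeholders, and the config.gceClusterConfig path is assumed from the Dataproc ClusterConfig shape as exposed by the @pulumi/google-native SDK.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own cluster, project, and region.
const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",
    project: "example-project",
    region: "us-central1",
});

// Read a few GceClusterConfigResponse fields off the cluster config
// (path assumed: result.config.gceClusterConfig).
export const clusterZone = cluster.config.apply(c => c.gceClusterConfig.zoneUri);
export const clusterServiceAccount = cluster.config.apply(c => c.gceClusterConfig.serviceAccount);
export const internalOnly = cluster.config.apply(c => c.gceClusterConfig.internalIpOnly);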

GkeClusterConfigResponse

namespacedGkeDeploymentTarget Property Map

Optional. A target for the deployment.

InstanceGroupConfigResponse

Accelerators List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigResponse>

Optional. The Compute Engine accelerator configuration for these instances.

DiskConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigResponse

Optional. Disk option config settings.

ImageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

InstanceNames List<string>

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

InstanceReferences List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceReferenceResponse>

List of references to Compute Engine instances.

IsPreemptible bool

Specifies that this instance group contains preemptible instances.

MachineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

MinCpuPlatform string

Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

NumInstances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

Preemptibility string

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

Accelerators []AcceleratorConfigResponse

Optional. The Compute Engine accelerator configuration for these instances.

DiskConfig DiskConfigResponse

Optional. Disk option config settings.

ImageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

InstanceNames []string

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

InstanceReferences []InstanceReferenceResponse

List of references to Compute Engine instances.

IsPreemptible bool

Specifies that this instance group contains preemptible instances.

MachineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

ManagedGroupConfig ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

MinCpuPlatform string

Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

NumInstances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

Preemptibility string

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators List<AcceleratorConfigResponse>

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig DiskConfigResponse

Optional. Disk option config settings.

imageUri String

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instanceNames List<String>

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instanceReferences List<InstanceReferenceResponse>

List of references to Compute Engine instances.

isPreemptible Boolean

Specifies that this instance group contains preemptible instances.

machineTypeUri String

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managedGroupConfig ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

minCpuPlatform String

Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances Integer

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility String

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators AcceleratorConfigResponse[]

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig DiskConfigResponse

Optional. Disk option config settings.

imageUri string

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instanceNames string[]

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instanceReferences InstanceReferenceResponse[]

List of references to Compute Engine instances.

isPreemptible boolean

Specifies that this instance group contains preemptible instances.

machineTypeUri string

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managedGroupConfig ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

minCpuPlatform string

Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances number

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility string

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators Sequence[AcceleratorConfigResponse]

Optional. The Compute Engine accelerator configuration for these instances.

disk_config DiskConfigResponse

Optional. Disk option config settings.

image_uri str

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instance_names Sequence[str]

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instance_references Sequence[InstanceReferenceResponse]

List of references to Compute Engine instances.

is_preemptible bool

Specifies that this instance group contains preemptible instances.

machine_type_uri str

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managed_group_config ManagedGroupConfigResponse

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

min_cpu_platform str

Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

num_instances int

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility str

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

accelerators List<Property Map>

Optional. The Compute Engine accelerator configuration for these instances.

diskConfig Property Map

Optional. Disk option config settings.

imageUri String

Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

instanceNames List<String>

The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

instanceReferences List<Property Map>

List of references to Compute Engine instances.

isPreemptible Boolean

Specifies that this instance group contains preemptible instances.

machineTypeUri String

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

managedGroupConfig Property Map

The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

minCpuPlatform String

Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

numInstances Number

Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

preemptibility String

Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
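
An InstanceGroupConfigResponse is what comes back for the master, worker, and secondary worker groups of the cluster config. A minimal TypeScript sketch (placeholder names; the config.workerConfig path is assumed from the Dataproc ClusterConfig shape):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// Inspect the worker group (path assumed: result.config.workerConfig).
export const workerCount = cluster.config.apply(c => c.workerConfig.numInstances);
export const workerMachineType = cluster.config.apply(c => c.workerConfig.machineTypeUri);
export const workerPreemptibility = cluster.config.apply(c => c.workerConfig.preemptibility);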

InstanceReferenceResponse

InstanceId string

The unique identifier of the Compute Engine instance.

InstanceName string

The user-friendly name of the Compute Engine instance.

PublicKey string

The public key used for sharing data with this instance.

InstanceId string

The unique identifier of the Compute Engine instance.

InstanceName string

The user-friendly name of the Compute Engine instance.

PublicKey string

The public key used for sharing data with this instance.

instanceId String

The unique identifier of the Compute Engine instance.

instanceName String

The user-friendly name of the Compute Engine instance.

publicKey String

The public key used for sharing data with this instance.

instanceId string

The unique identifier of the Compute Engine instance.

instanceName string

The user-friendly name of the Compute Engine instance.

publicKey string

The public key used for sharing data with this instance.

instance_id str

The unique identifier of the Compute Engine instance.

instance_name str

The user-friendly name of the Compute Engine instance.

public_key str

The public key used for sharing data with this instance.

instanceId String

The unique identifier of the Compute Engine instance.

instanceName String

The user-friendly name of the Compute Engine instance.

publicKey String

The public key used for sharing data with this instance.
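
Instance references can be read off an instance group in the same way. A short TypeScript sketch (placeholder names; the config.workerConfig.instanceReferences path is assumed from the types documented above):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// Map each worker InstanceReferenceResponse to its name and unique identifier.
export const workerInstances = cluster.config.apply(c =>
    (c.workerConfig.instanceReferences ?? []).map(ref => ({
        name: ref.instanceName,
        id: ref.instanceId,
    })));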

KerberosConfigResponse

CrossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

CrossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

EnableKerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

KdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

KeyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

KeystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

KeystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

KmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

Realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

RootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

TgtLifetimeHours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 hours is used.

TruststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

TruststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

CrossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

CrossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

CrossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

EnableKerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

KdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

KeyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

KeystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

KeystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

KmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

Realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

RootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

TgtLifetimeHours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 hours is used.

TruststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

TruststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer String

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc String

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm String

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos Boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri String

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri String

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm String

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours Integer

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 hours is used.

truststorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri String

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer string

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc string

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm string

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri string

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri string

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm string

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours number

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 hours is used.

truststorePasswordUri string

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri string

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

cross_realm_trust_admin_server str

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_kdc str

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_realm str

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

cross_realm_trust_shared_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enable_kerberos bool

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdc_db_key_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

key_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystore_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystore_uri str

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kms_key_uri str

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm str

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

root_principal_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgt_lifetime_hours int

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 hours is used.

truststore_password_uri str

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststore_uri str

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

crossRealmTrustAdminServer String

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustKdc String

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

crossRealmTrustRealm String

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

crossRealmTrustSharedPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

enableKerberos Boolean

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

kdcDbKeyUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

keyPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

keystorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

keystoreUri String

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

kmsKeyUri String

Optional. The URI of the KMS key used to encrypt various sensitive files.

realm String

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

rootPrincipalPasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

tgtLifetimeHours Number

Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 hours is used.

truststorePasswordUri String

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

truststoreUri String

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
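
To check whether Kerberos is turned on for an existing cluster, these settings can be read from the cluster's security config. A minimal TypeScript sketch (placeholder names; the config.securityConfig.kerberosConfig path is assumed from the Dataproc ClusterConfig shape):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// Surface the Kerberos flag and realm (path assumed: result.config.securityConfig.kerberosConfig).
export const kerberosEnabled = cluster.config.apply(
    c => c.securityConfig?.kerberosConfig?.enableKerberos ?? false);
export const kerberosRealm = cluster.config.apply(
    c => c.securityConfig?.kerberosConfig?.realm);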

LifecycleConfigResponse

AutoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleStartTime string

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

AutoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

IdleStartTime string

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime String

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl String

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl String

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleStartTime String

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime string

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl string

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl string

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleStartTime string

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

auto_delete_time str

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

auto_delete_ttl str

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idle_delete_ttl str

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idle_start_time str

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTime String

Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

autoDeleteTtl String

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleDeleteTtl String

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

idleStartTime String

The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
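
The lifecycle settings are useful for checking when an existing cluster will be reclaimed. A minimal TypeScript sketch (placeholder names; the config.lifecycleConfig path is assumed from the Dataproc ClusterConfig shape):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// Report the idle-delete TTL and the scheduled auto-delete time, if configured.
export const idleDeleteTtl = cluster.config.apply(c => c.lifecycleConfig?.idleDeleteTtl);
export const autoDeleteTime = cluster.config.apply(c => c.lifecycleConfig?.autoDeleteTime);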

ManagedGroupConfigResponse

InstanceGroupManagerName string

The name of the Instance Group Manager for this group.

InstanceTemplateName string

The name of the Instance Template used for the Managed Instance Group.

InstanceGroupManagerName string

The name of the Instance Group Manager for this group.

InstanceTemplateName string

The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName String

The name of the Instance Group Manager for this group.

instanceTemplateName String

The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName string

The name of the Instance Group Manager for this group.

instanceTemplateName string

The name of the Instance Template used for the Managed Instance Group.

instance_group_manager_name str

The name of the Instance Group Manager for this group.

instance_template_name str

The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName String

The name of the Instance Group Manager for this group.

instanceTemplateName String

The name of the Instance Template used for the Managed Instance Group.
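
A ManagedGroupConfigResponse only appears on groups that Dataproc backs with a managed instance group (for example, preemptible secondary workers). A short TypeScript sketch (placeholder names; the config.secondaryWorkerConfig.managedGroupConfig path is assumed from the Dataproc ClusterConfig shape):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// Look up the Instance Group Manager and template backing the secondary workers.
export const secondaryWorkerIgm = cluster.config.apply(
    c => c.secondaryWorkerConfig?.managedGroupConfig?.instanceGroupManagerName);
export const secondaryWorkerTemplate = cluster.config.apply(
    c => c.secondaryWorkerConfig?.managedGroupConfig?.instanceTemplateName);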

MetastoreConfigResponse

DataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

DataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService String

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService string

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataproc_metastore_service str

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

dataprocMetastoreService String

Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
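
A cluster attached to Dataproc Metastore exposes the service name here. A minimal TypeScript sketch (placeholder names; the config.metastoreConfig path is assumed from the Dataproc ClusterConfig shape):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// The attached Dataproc Metastore service, in the form
// projects/[project_id]/locations/[dataproc_region]/services/[service-name].
export const metastoreService = cluster.config.apply(
    c => c.metastoreConfig?.dataprocMetastoreService);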

NamespacedGkeDeploymentTargetResponse

ClusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

TargetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

ClusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

TargetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace String

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster String

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace string

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster string

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

cluster_namespace str

Optional. A namespace within the GKE cluster to deploy into.

target_gke_cluster str

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

clusterNamespace String

Optional. A namespace within the GKE cluster to deploy into.

targetGkeCluster String

Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
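
For Dataproc-on-GKE clusters, the deployment target sits under the GKE cluster config. A short TypeScript sketch (placeholder names; the config.gkeClusterConfig.namespacedGkeDeploymentTarget path is assumed from the Dataproc ClusterConfig shape):

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "example-cluster",   // placeholder
    project: "example-project",       // placeholder
    region: "us-central1",            // placeholder
});

// Where the virtual cluster is deployed on GKE: target cluster and namespace.
export const gkeTargetCluster = cluster.config.apply(
    c => c.gkeClusterConfig?.namespacedGkeDeploymentTarget?.targetGkeCluster);
export const gkeNamespace = cluster.config.apply(
    c => c.gkeClusterConfig?.namespacedGkeDeploymentTarget?.clusterNamespace);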

NodeGroupAffinityResponse

NodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1

NodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1

nodeGroupUri String

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1

nodeGroupUri string

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1

node_group_uri str

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1

nodeGroupUri String

The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
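
Continuing the same sketch, and assuming the affinity is surfaced under config.gceClusterConfig.nodeGroupAffinity:

// URI of the sole-tenant node group the cluster was created on, if any.
export const soleTenantNodeGroup = cluster.config.apply(
    c => c.gceClusterConfig?.nodeGroupAffinity?.nodeGroupUri);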

NodeInitializationActionResponse

ExecutableFile string

Cloud Storage URI of executable file.

ExecutionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

ExecutableFile string

Cloud Storage URI of executable file.

ExecutionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

executableFile String

Cloud Storage URI of executable file.

executionTimeout String

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

executableFile string

Cloud Storage URI of executable file.

executionTimeout string

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

executable_file str

Cloud Storage URI of executable file.

execution_timeout str

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

executableFile String

Cloud Storage URI of executable file.

executionTimeout String

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
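
Continuing the sketch, and assuming the actions are surfaced as the config.initializationActions list:

// Executable URI and timeout for each initialization action, in order.
export const initActions = cluster.config.apply(c =>
    (c.initializationActions ?? []).map(a =>
        `${a.executableFile} (timeout: ${a.executionTimeout})`));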

ReservationAffinityResponse

ConsumeReservationType string

Optional. Type of reservation to consume.

Key string

Optional. Corresponds to the label key of the reservation resource.

Values List<string>

Optional. Corresponds to the label values of the reservation resource.

ConsumeReservationType string

Optional. Type of reservation to consume.

Key string

Optional. Corresponds to the label key of the reservation resource.

Values []string

Optional. Corresponds to the label values of the reservation resource.

consumeReservationType String

Optional. Type of reservation to consume.

key String

Optional. Corresponds to the label key of the reservation resource.

values List<String>

Optional. Corresponds to the label values of the reservation resource.

consumeReservationType string

Optional. Type of reservation to consume.

key string

Optional. Corresponds to the label key of the reservation resource.

values string[]

Optional. Corresponds to the label values of the reservation resource.

consume_reservation_type str

Optional. Type of reservation to consume.

key str

Optional. Corresponds to the label key of the reservation resource.

values Sequence[str]

Optional. Corresponds to the label values of the reservation resource.

consumeReservationType String

Optional. Type of reservation to consume.

key String

Optional. Corresponds to the label key of the reservation resource.

values List<String>

Optional. Corresponds to the label values of the reservation resource.
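
Continuing the sketch, and assuming the affinity is surfaced under config.gceClusterConfig.reservationAffinity:

// Reservation the cluster's instances consume, summarized as "TYPE key=v1,v2".
export const reservation = cluster.config.apply(c => {
    const r = c.gceClusterConfig?.reservationAffinity;
    return r && `${r.consumeReservationType} ${r.key}=${(r.values ?? []).join(",")}`;
});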

SecurityConfigResponse

KerberosConfig KerberosConfigResponse

Optional. Kerberos related configuration.

kerberosConfig KerberosConfigResponse

Optional. Kerberos related configuration.

kerberosConfig KerberosConfigResponse

Optional. Kerberos related configuration.

kerberos_config KerberosConfigResponse

Optional. Kerberos related configuration.

kerberosConfig Property Map

Optional. Kerberos related configuration.
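
Continuing the sketch, and assuming the Kerberos settings are surfaced under config.securityConfig.kerberosConfig:

// Kerberos-related security configuration for the cluster, if one is set.
export const kerberosConfig = cluster.config.apply(
    c => c.securityConfig?.kerberosConfig);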

ShieldedInstanceConfigResponse

EnableIntegrityMonitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

EnableSecureBoot bool

Optional. Defines whether instances have Secure Boot enabled.

EnableVtpm bool

Optional. Defines whether instances have the vTPM enabled.

EnableIntegrityMonitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

EnableSecureBoot bool

Optional. Defines whether instances have Secure Boot enabled.

EnableVtpm bool

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring Boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot Boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm Boolean

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm boolean

Optional. Defines whether instances have the vTPM enabled.

enable_integrity_monitoring bool

Optional. Defines whether instances have integrity monitoring enabled.

enable_secure_boot bool

Optional. Defines whether instances have Secure Boot enabled.

enable_vtpm bool

Optional. Defines whether instances have the vTPM enabled.

enableIntegrityMonitoring Boolean

Optional. Defines whether instances have integrity monitoring enabled.

enableSecureBoot Boolean

Optional. Defines whether instances have Secure Boot enabled.

enableVtpm Boolean

Optional. Defines whether instances have the vTPM enabled.
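
Continuing the sketch, and assuming the Shielded VM settings are surfaced under config.gceClusterConfig.shieldedInstanceConfig:

// Shielded VM settings for the cluster's instances, defaulting to false when unset.
export const shieldedInstanceSettings = cluster.config.apply(c => {
    const s = c.gceClusterConfig?.shieldedInstanceConfig;
    return {
        secureBoot: s?.enableSecureBoot ?? false,
        vtpm: s?.enableVtpm ?? false,
        integrityMonitoring: s?.enableIntegrityMonitoring ?? false,
    };
});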

SoftwareConfigResponse

ImageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

OptionalComponents List<string>

The set of optional components to activate on the cluster.

Properties Dictionary<string, string>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

ImageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

OptionalComponents []string

The set of optional components to activate on the cluster.

Properties map[string]string

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion String

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents List<String>

The set of optional components to activate on the cluster.

properties Map<String,String>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents string[]

The set of optional components to activate on the cluster.

properties {[key: string]: string}

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

image_version str

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optional_components Sequence[str]

The set of optional components to activate on the cluster.

properties Mapping[str, str]

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

imageVersion String

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

optionalComponents List<String>

The set of optional components to activate on the cluster.

properties Map<String>

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
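
Continuing the sketch, and assuming the software settings are surfaced under config.softwareConfig (the hdfs: prefix filter simply illustrates the prefix:property key format described above):

// Image version, activated optional components, and any hdfs:-prefixed properties.
export const imageVersion = cluster.config.apply(c => c.softwareConfig?.imageVersion);
export const optionalComponents = cluster.config.apply(
    c => c.softwareConfig?.optionalComponents ?? []);
export const hdfsProperties = cluster.config.apply(c =>
    Object.entries(c.softwareConfig?.properties ?? {})
        .filter(([key]) => key.startsWith("hdfs:")));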

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0