
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.31.1 published on Thursday, Jul 20, 2023 by Pulumi

google-native.dataproc/v1.getCluster


    Gets the resource representation for a cluster in a project.

    Using getCluster

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getCluster(args: GetClusterArgs, opts?: InvokeOptions): Promise<GetClusterResult>
    function getClusterOutput(args: GetClusterOutputArgs, opts?: InvokeOptions): Output<GetClusterResult>
    def get_cluster(cluster_name: Optional[str] = None,
                    project: Optional[str] = None,
                    region: Optional[str] = None,
                    opts: Optional[InvokeOptions] = None) -> GetClusterResult
    def get_cluster_output(cluster_name: Optional[pulumi.Input[str]] = None,
                           project: Optional[pulumi.Input[str]] = None,
                           region: Optional[pulumi.Input[str]] = None,
                           opts: Optional[InvokeOptions] = None) -> Output[GetClusterResult]
    func LookupCluster(ctx *Context, args *LookupClusterArgs, opts ...InvokeOption) (*LookupClusterResult, error)
    func LookupClusterOutput(ctx *Context, args *LookupClusterOutputArgs, opts ...InvokeOption) LookupClusterResultOutput

    > Note: This function is named LookupCluster in the Go SDK.

    public static class GetCluster 
    {
        public static Task<GetClusterResult> InvokeAsync(GetClusterArgs args, InvokeOptions? opts = null)
        public static Output<GetClusterResult> Invoke(GetClusterInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetClusterResult> getCluster(GetClusterArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: google-native:dataproc/v1:getCluster
      arguments:
        # arguments dictionary
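
    As a quick usage sketch, a TypeScript program could call both forms as shown below. The cluster, region, and project values are placeholders, and the module path assumes the @pulumi/google-native Node.js SDK layout:

    import * as googleNative from "@pulumi/google-native";

    // Direct form: returns a Promise-wrapped result.
    const cluster = googleNative.dataproc.v1.getCluster({
        clusterName: "my-cluster",   // placeholder
        region: "us-central1",       // placeholder
        project: "my-project",       // placeholder
    });
    export const clusterUuid = cluster.then(c => c.clusterUuid);

    // Output form: accepts Input-wrapped arguments and returns an Output,
    // so it composes with values that are not yet known.
    const clusterOutput = googleNative.dataproc.v1.getClusterOutput({
        clusterName: "my-cluster",
        region: "us-central1",
        project: "my-project",
    });
    export const clusterState = clusterOutput.status.apply(s => s.state);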

    The following arguments are supported:

    clusterName string
    region string
    project string

    Argument names follow each language's casing conventions: ClusterName in C# and Go, clusterName in Java, TypeScript, and YAML, and cluster_name in Python.

    getCluster Result

    The following output properties are available:

    C#

    ClusterName string

    The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

    ClusterUuid string

    A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

    Config Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterConfigResponse

    Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

    Labels Dictionary<string, string>

    Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

    Metrics Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterMetricsResponse

    Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

    Project string

    The Google Cloud Platform project ID that the cluster belongs to.

    Status Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse

    Cluster status.

    StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse>

    The previous cluster status.

    VirtualClusterConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.VirtualClusterConfigResponse

    Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

    Go

    ClusterName string

    The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

    ClusterUuid string

    A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

    Config ClusterConfigResponse

    Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

    Labels map[string]string

    Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

    Metrics ClusterMetricsResponse

    Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

    Project string

    The Google Cloud Platform project ID that the cluster belongs to.

    Status ClusterStatusResponse

    Cluster status.

    StatusHistory []ClusterStatusResponse

    The previous cluster status.

    VirtualClusterConfig VirtualClusterConfigResponse

    Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

    Java

    clusterName String

    The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

    clusterUuid String

    A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

    config ClusterConfigResponse

    Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

    labels Map<String,String>

    Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

    metrics ClusterMetricsResponse

    Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

    project String

    The Google Cloud Platform project ID that the cluster belongs to.

    status ClusterStatusResponse

    Cluster status.

    statusHistory List<ClusterStatusResponse>

    The previous cluster status.

    virtualClusterConfig VirtualClusterConfigResponse

    Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

    TypeScript

    clusterName string

    The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

    clusterUuid string

    A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

    config ClusterConfigResponse

    Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

    labels {[key: string]: string}

    Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

    metrics ClusterMetricsResponse

    Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

    project string

    The Google Cloud Platform project ID that the cluster belongs to.

    status ClusterStatusResponse

    Cluster status.

    statusHistory ClusterStatusResponse[]

    The previous cluster status.

    virtualClusterConfig VirtualClusterConfigResponse

    Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

    Python

    cluster_name str

    The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

    cluster_uuid str

    A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

    config ClusterConfigResponse

    Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

    labels Mapping[str, str]

    Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

    metrics ClusterMetricsResponse

    Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

    project str

    The Google Cloud Platform project ID that the cluster belongs to.

    status ClusterStatusResponse

    Cluster status.

    status_history Sequence[ClusterStatusResponse]

    The previous cluster status.

    virtual_cluster_config VirtualClusterConfigResponse

    Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.

    YAML

    clusterName String

    The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.

    clusterUuid String

    A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.

    config Property Map

    Optional. The cluster config for a cluster of Compute Engine instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.

    labels Map<String>

    Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.

    metrics Property Map

    Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

    project String

    The Google Cloud Platform project ID that the cluster belongs to.

    status Property Map

    Cluster status.

    statusHistory List<Property Map>

    The previous cluster status.

    virtualClusterConfig Property Map

    Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
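
    Because exactly one of config and virtualClusterConfig is set, a consumer should check which one is present before drilling in. A minimal TypeScript sketch (argument values are placeholders):

    import * as googleNative from "@pulumi/google-native";

    const c = googleNative.dataproc.v1.getClusterOutput({
        clusterName: "my-cluster",
        region: "us-central1",
        project: "my-project",
    });

    // Lifted property access returns Outputs; apply() unwraps them.
    export const stagingBucket = c.config.apply(cfg => cfg?.configBucket);
    export const isVirtual = c.virtualClusterConfig.apply(v => v !== undefined);
    export const labelKeys = c.labels.apply(l => Object.keys(l ?? {}));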

    Supporting Types

    AcceleratorConfigResponse

    C#

    AcceleratorCount int

    The number of the accelerator cards of this type exposed to this instance.

    AcceleratorTypeUri string

    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

    Go

    AcceleratorCount int

    The number of the accelerator cards of this type exposed to this instance.

    AcceleratorTypeUri string

    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

    Java

    acceleratorCount Integer

    The number of the accelerator cards of this type exposed to this instance.

    acceleratorTypeUri String

    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

    TypeScript

    acceleratorCount number

    The number of the accelerator cards of this type exposed to this instance.

    acceleratorTypeUri string

    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

    Python

    accelerator_count int

    The number of the accelerator cards of this type exposed to this instance.

    accelerator_type_uri str

    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

    YAML

    acceleratorCount Number

    The number of the accelerator cards of this type exposed to this instance.

    acceleratorTypeUri String

    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
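
    For clarity, the three equivalent spellings called out above look like this in TypeScript (project and zone are placeholders; under Auto Zone Placement only the short form is valid):

    // Full URL
    const fullUrl = "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/acceleratorTypes/nvidia-tesla-k80";
    // Partial URI
    const partialUri = "projects/my-project/zones/us-central1-a/acceleratorTypes/nvidia-tesla-k80";
    // Short name (required with Auto Zone Placement)
    const shortName = "nvidia-tesla-k80";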

    AutoscalingConfigResponse

    C#

    PolicyUri string

    Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

    Go

    PolicyUri string

    Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

    Java

    policyUri String

    Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

    TypeScript

    policyUri string

    Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

    Python

    policy_uri str

    Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

    YAML

    policyUri String

    Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

    AuxiliaryNodeGroupResponse

    C#

    NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse

    Node group configuration.

    NodeGroupId string

    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

    Go

    NodeGroup NodeGroupResponse

    Node group configuration.

    NodeGroupId string

    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

    Java

    nodeGroup NodeGroupResponse

    Node group configuration.

    nodeGroupId String

    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

    TypeScript

    nodeGroup NodeGroupResponse

    Node group configuration.

    nodeGroupId string

    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

    Python

    node_group NodeGroupResponse

    Node group configuration.

    node_group_id str

    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

    YAML

    nodeGroup Property Map

    Node group configuration.

    nodeGroupId String

    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

    AuxiliaryServicesConfigResponse

    C#

    MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse

    Optional. The Hive Metastore configuration for this workload.

    SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse

    Optional. The Spark History Server configuration for the workload.

    Go

    MetastoreConfig MetastoreConfigResponse

    Optional. The Hive Metastore configuration for this workload.

    SparkHistoryServerConfig SparkHistoryServerConfigResponse

    Optional. The Spark History Server configuration for the workload.

    Java

    metastoreConfig MetastoreConfigResponse

    Optional. The Hive Metastore configuration for this workload.

    sparkHistoryServerConfig SparkHistoryServerConfigResponse

    Optional. The Spark History Server configuration for the workload.

    TypeScript

    metastoreConfig MetastoreConfigResponse

    Optional. The Hive Metastore configuration for this workload.

    sparkHistoryServerConfig SparkHistoryServerConfigResponse

    Optional. The Spark History Server configuration for the workload.

    Python

    metastore_config MetastoreConfigResponse

    Optional. The Hive Metastore configuration for this workload.

    spark_history_server_config SparkHistoryServerConfigResponse

    Optional. The Spark History Server configuration for the workload.

    YAML

    metastoreConfig Property Map

    Optional. The Hive Metastore configuration for this workload.

    sparkHistoryServerConfig Property Map

    Optional. The Spark History Server configuration for the workload.

    ClusterConfigResponse

    C#

    AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse

    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

    AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse>

    Optional. The node group settings.

    ConfigBucket string

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse

    Optional. The config for Dataproc metrics.

    EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse

    Optional. Encryption settings for the cluster.

    EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse

    Optional. Port/endpoint configuration for this cluster.

    GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse

    Optional. The shared Compute Engine config settings for all instances in a cluster.

    GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse

    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

    InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse>

    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi

    LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse

    Optional. Lifecycle setting for the cluster.

    MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's master instance.

    MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse

    Optional. Metastore configuration.

    SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for a cluster's secondary worker instances.

    SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse

    Optional. Security settings for the cluster.

    SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse

    Optional. The config settings for cluster software.

    TempBucket string

    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's worker instances.

    Go

    AutoscalingConfig AutoscalingConfigResponse

    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

    AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse

    Optional. The node group settings.

    ConfigBucket string

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    DataprocMetricConfig DataprocMetricConfigResponse

    Optional. The config for Dataproc metrics.

    EncryptionConfig EncryptionConfigResponse

    Optional. Encryption settings for the cluster.

    EndpointConfig EndpointConfigResponse

    Optional. Port/endpoint configuration for this cluster.

    GceClusterConfig GceClusterConfigResponse

    Optional. The shared Compute Engine config settings for all instances in a cluster.

    GkeClusterConfig GkeClusterConfigResponse

    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

    InitializationActions []NodeInitializationActionResponse

    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi

    LifecycleConfig LifecycleConfigResponse

    Optional. Lifecycle setting for the cluster.

    MasterConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's master instance.

    MetastoreConfig MetastoreConfigResponse

    Optional. Metastore configuration.

    SecondaryWorkerConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for a cluster's secondary worker instances.

    SecurityConfig SecurityConfigResponse

    Optional. Security settings for the cluster.

    SoftwareConfig SoftwareConfigResponse

    Optional. The config settings for cluster software.

    TempBucket string

    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    WorkerConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's worker instances.

    Java

    autoscalingConfig AutoscalingConfigResponse

    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

    auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse>

    Optional. The node group settings.

    configBucket String

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    dataprocMetricConfig DataprocMetricConfigResponse

    Optional. The config for Dataproc metrics.

    encryptionConfig EncryptionConfigResponse

    Optional. Encryption settings for the cluster.

    endpointConfig EndpointConfigResponse

    Optional. Port/endpoint configuration for this cluster.

    gceClusterConfig GceClusterConfigResponse

    Optional. The shared Compute Engine config settings for all instances in a cluster.

    gkeClusterConfig GkeClusterConfigResponse

    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

    initializationActions List<NodeInitializationActionResponse>

    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi

    lifecycleConfig LifecycleConfigResponse

    Optional. Lifecycle setting for the cluster.

    masterConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's master instance.

    metastoreConfig MetastoreConfigResponse

    Optional. Metastore configuration.

    secondaryWorkerConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for a cluster's secondary worker instances.

    securityConfig SecurityConfigResponse

    Optional. Security settings for the cluster.

    softwareConfig SoftwareConfigResponse

    Optional. The config settings for cluster software.

    tempBucket String

    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    workerConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's worker instances.

    TypeScript

    autoscalingConfig AutoscalingConfigResponse

    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

    auxiliaryNodeGroups AuxiliaryNodeGroupResponse[]

    Optional. The node group settings.

    configBucket string

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    dataprocMetricConfig DataprocMetricConfigResponse

    Optional. The config for Dataproc metrics.

    encryptionConfig EncryptionConfigResponse

    Optional. Encryption settings for the cluster.

    endpointConfig EndpointConfigResponse

    Optional. Port/endpoint configuration for this cluster.

    gceClusterConfig GceClusterConfigResponse

    Optional. The shared Compute Engine config settings for all instances in a cluster.

    gkeClusterConfig GkeClusterConfigResponse

    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

    initializationActions NodeInitializationActionResponse[]

    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi

    lifecycleConfig LifecycleConfigResponse

    Optional. Lifecycle setting for the cluster.

    masterConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's master instance.

    metastoreConfig MetastoreConfigResponse

    Optional. Metastore configuration.

    secondaryWorkerConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for a cluster's secondary worker instances.

    securityConfig SecurityConfigResponse

    Optional. Security settings for the cluster.

    softwareConfig SoftwareConfigResponse

    Optional. The config settings for cluster software.

    tempBucket string

    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    workerConfig InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's worker instances.

    Python

    autoscaling_config AutoscalingConfigResponse

    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

    auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse]

    Optional. The node group settings.

    config_bucket str

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    dataproc_metric_config DataprocMetricConfigResponse

    Optional. The config for Dataproc metrics.

    encryption_config EncryptionConfigResponse

    Optional. Encryption settings for the cluster.

    endpoint_config EndpointConfigResponse

    Optional. Port/endpoint configuration for this cluster.

    gce_cluster_config GceClusterConfigResponse

    Optional. The shared Compute Engine config settings for all instances in a cluster.

    gke_cluster_config GkeClusterConfigResponse

    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

    initialization_actions Sequence[NodeInitializationActionResponse]

    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi

    lifecycle_config LifecycleConfigResponse

    Optional. Lifecycle setting for the cluster.

    master_config InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's master instance.

    metastore_config MetastoreConfigResponse

    Optional. Metastore configuration.

    secondary_worker_config InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for a cluster's secondary worker instances.

    security_config SecurityConfigResponse

    Optional. Security settings for the cluster.

    software_config SoftwareConfigResponse

    Optional. The config settings for cluster software.

    temp_bucket str

    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    worker_config InstanceGroupConfigResponse

    Optional. The Compute Engine config settings for the cluster's worker instances.

    YAML

    autoscalingConfig Property Map

    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

    auxiliaryNodeGroups List<Property Map>

    Optional. The node group settings.

    configBucket String

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    dataprocMetricConfig Property Map

    Optional. The config for Dataproc metrics.

    encryptionConfig Property Map

    Optional. Encryption settings for the cluster.

    endpointConfig Property Map

    Optional. Port/endpoint configuration for this cluster.

    gceClusterConfig Property Map

    Optional. The shared Compute Engine config settings for all instances in a cluster.

    gkeClusterConfig Property Map

    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

    initializationActions List<Property Map>

    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi

    lifecycleConfig Property Map

    Optional. Lifecycle setting for the cluster.

    masterConfig Property Map

    Optional. The Compute Engine config settings for the cluster's master instance.

    metastoreConfig Property Map

    Optional. Metastore configuration.

    secondaryWorkerConfig Property Map

    Optional. The Compute Engine config settings for a cluster's secondary worker instances.

    securityConfig Property Map

    Optional. Security settings for the cluster.

    softwareConfig Property Map

    Optional. The config settings for cluster software.

    tempBucket String

    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    workerConfig Property Map

    Optional. The Compute Engine config settings for the cluster's worker instances.
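
    As a sketch of drilling into this type from the lookup result (argument values are placeholders; machineTypeUri is a field of the InstanceGroupConfigResponse type referenced above):

    import * as googleNative from "@pulumi/google-native";

    const cfg = googleNative.dataproc.v1.getClusterOutput({
        clusterName: "my-cluster",
        region: "us-central1",
        project: "my-project",
    }).config;

    // Both buckets are auto-created and managed by Dataproc when unset.
    export const stagingBucket = cfg.apply(c => c.configBucket);
    export const tempBucket = cfg.apply(c => c.tempBucket);
    export const workerMachineType = cfg.apply(c => c.workerConfig?.machineTypeUri);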

    ClusterMetricsResponse

    C#

    HdfsMetrics Dictionary<string, string>

    The HDFS metrics.

    YarnMetrics Dictionary<string, string>

    YARN metrics.

    Go

    HdfsMetrics map[string]string

    The HDFS metrics.

    YarnMetrics map[string]string

    YARN metrics.

    Java

    hdfsMetrics Map<String,String>

    The HDFS metrics.

    yarnMetrics Map<String,String>

    YARN metrics.

    TypeScript

    hdfsMetrics {[key: string]: string}

    The HDFS metrics.

    yarnMetrics {[key: string]: string}

    YARN metrics.

    Python

    hdfs_metrics Mapping[str, str]

    The HDFS metrics.

    yarn_metrics Mapping[str, str]

    YARN metrics.

    YAML

    hdfsMetrics Map<String>

    The HDFS metrics.

    yarnMetrics Map<String>

    YARN metrics.
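
    Both maps are plain string-to-string dictionaries (and, per the note above, a beta feature). A TypeScript sketch of listing the reported metric names (lookup arguments are placeholders):

    import * as googleNative from "@pulumi/google-native";

    const metrics = googleNative.dataproc.v1.getClusterOutput({
        clusterName: "my-cluster",
        region: "us-central1",
        project: "my-project",
    }).metrics;

    // Metric keys vary by image version, so iterate rather than hard-coding names.
    export const hdfsMetricNames = metrics.apply(m => Object.keys(m?.hdfsMetrics ?? {}));
    export const yarnMetricNames = metrics.apply(m => Object.keys(m?.yarnMetrics ?? {}));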

    ClusterStatusResponse

    C#

    Detail string

    Optional. Output only. Details of the cluster's state.

    State string

    The cluster's state.

    StateStartTime string

    Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    Substate string

    Additional state information that includes status reported by the agent.

    Go

    Detail string

    Optional. Output only. Details of the cluster's state.

    State string

    The cluster's state.

    StateStartTime string

    Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    Substate string

    Additional state information that includes status reported by the agent.

    Java

    detail String

    Optional. Output only. Details of the cluster's state.

    state String

    The cluster's state.

    stateStartTime String

    Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    substate String

    Additional state information that includes status reported by the agent.

    TypeScript

    detail string

    Optional. Output only. Details of the cluster's state.

    state string

    The cluster's state.

    stateStartTime string

    Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    substate string

    Additional state information that includes status reported by the agent.

    Python

    detail str

    Optional. Output only. Details of the cluster's state.

    state str

    The cluster's state.

    state_start_time str

    Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    substate str

    Additional state information that includes status reported by the agent.

    YAML

    detail String

    Optional. Output only. Details of the cluster's state.

    state String

    The cluster's state.

    stateStartTime String

    Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    substate String

    Additional state information that includes status reported by the agent.
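
    A TypeScript sketch of reading the current state and the state history (arguments are placeholders; state values such as "RUNNING" come from the Dataproc ClusterStatus.State enum):

    import * as googleNative from "@pulumi/google-native";

    const cluster = googleNative.dataproc.v1.getClusterOutput({
        clusterName: "my-cluster",
        region: "us-central1",
        project: "my-project",
    });

    export const currentState = cluster.status.apply(s => s.state);
    // Each history entry records the state and when it was entered (RFC 3339 timestamp).
    export const history = cluster.statusHistory.apply(h =>
        (h ?? []).map(s => `${s.state} @ ${s.stateStartTime}`));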

    ConfidentialInstanceConfigResponse

    C#

    EnableConfidentialCompute bool

    Optional. Defines whether the instance should have confidential compute enabled.

    Go

    EnableConfidentialCompute bool

    Optional. Defines whether the instance should have confidential compute enabled.

    Java

    enableConfidentialCompute Boolean

    Optional. Defines whether the instance should have confidential compute enabled.

    TypeScript

    enableConfidentialCompute boolean

    Optional. Defines whether the instance should have confidential compute enabled.

    Python

    enable_confidential_compute bool

    Optional. Defines whether the instance should have confidential compute enabled.

    YAML

    enableConfidentialCompute Boolean

    Optional. Defines whether the instance should have confidential compute enabled.

    DataprocMetricConfigResponse

    Go

    Metrics []MetricResponse

    Metrics sources to enable.

    Java

    metrics List<MetricResponse>

    Metrics sources to enable.

    TypeScript

    metrics MetricResponse[]

    Metrics sources to enable.

    Python

    metrics Sequence[MetricResponse]

    Metrics sources to enable.

    YAML

    metrics List<Property Map>

    Metrics sources to enable.

    DiskConfigResponse

    C#

    BootDiskSizeGb int

    Optional. Size in GB of the boot disk (default is 500GB).

    BootDiskType string

    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

    LocalSsdInterface string

    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

    NumLocalSsds int

    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

    Go

    BootDiskSizeGb int

    Optional. Size in GB of the boot disk (default is 500GB).

    BootDiskType string

    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

    LocalSsdInterface string

    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

    NumLocalSsds int

    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

    Java

    bootDiskSizeGb Integer

    Optional. Size in GB of the boot disk (default is 500GB).

    bootDiskType String

    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

    localSsdInterface String

    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

    numLocalSsds Integer

    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

    bootDiskSizeGb number

    Optional. Size in GB of the boot disk (default is 500GB).

    bootDiskType string

    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

    localSsdInterface string

    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

    numLocalSsds number

    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

    boot_disk_size_gb int

    Optional. Size in GB of the boot disk (default is 500GB).

    boot_disk_type str

    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

    local_ssd_interface str

    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

    num_local_ssds int

    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

    bootDiskSizeGb Number

    Optional. Size in GB of the boot disk (default is 500GB).

    bootDiskType String

    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).

    localSsdInterface String

    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).

    numLocalSsds Number

    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
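
    To make the defaults above concrete, here is a hedged TypeScript sketch that reads the master group's disk settings from a fetched cluster. The identifiers are placeholders, and reaching DiskConfigResponse through config.masterConfig.diskConfig is an assumption based on the v1 InstanceGroupConfig shape:

    import * as google_native from "@pulumi/google-native";

    async function masterBootDiskGb(): Promise<number> {
        const cluster = await google_native.dataproc.v1.getCluster({
            clusterName: "my-cluster", project: "my-project", region: "us-central1",
        });
        const disk = cluster.config.masterConfig?.diskConfig;
        // Per the description above, the boot disk defaults to 500 GB ("pd-standard").
        return disk?.bootDiskSizeGb ?? 500;
    }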

    EncryptionConfigResponse

    GcePdKmsKeyName string

    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

    KmsKey string

    Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

    GcePdKmsKeyName string

    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

    KmsKey string

    Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

    gcePdKmsKeyName String

    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

    kmsKey String

    Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

    gcePdKmsKeyName string

    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

    kmsKey string

    Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

    gce_pd_kms_key_name str

    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

    kms_key str

    Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.

    gcePdKmsKeyName String

    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

    kmsKey String

    Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
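
    A short TypeScript sketch can distinguish customer-managed from Google-managed disk encryption on a fetched cluster. The identifiers are placeholders, and the encryptionConfig field on config is an assumption based on the v1 API shape:

    import * as google_native from "@pulumi/google-native";

    google_native.dataproc.v1
        .getCluster({ clusterName: "my-cluster", project: "my-project", region: "us-central1" })
        .then(cluster => {
            const kmsKey = cluster.config.encryptionConfig?.gcePdKmsKeyName;
            console.log(kmsKey ? `CMEK for PD disks: ${kmsKey}` : "Google-managed encryption");
        });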

    EndpointConfigResponse

    EnableHttpPortAccess bool

    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

    HttpPorts Dictionary<string, string>

    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

    EnableHttpPortAccess bool

    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

    HttpPorts map[string]string

    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

    enableHttpPortAccess Boolean

    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

    httpPorts Map<String,String>

    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

    enableHttpPortAccess boolean

    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

    httpPorts {[key: string]: string}

    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

    enable_http_port_access bool

    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

    http_ports Mapping[str, str]

    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

    enableHttpPortAccess Boolean

    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

    httpPorts Map<String>

    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
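
    Because httpPorts is only populated when enable_http_port_access is true, a consumer should treat the map as possibly empty. A minimal TypeScript sketch with placeholder identifiers:

    import * as google_native from "@pulumi/google-native";

    google_native.dataproc.v1
        .getCluster({ clusterName: "my-cluster", project: "my-project", region: "us-central1" })
        .then(cluster => {
            // Empty object when HTTP port access is disabled on the cluster.
            const ports = cluster.config.endpointConfig?.httpPorts ?? {};
            for (const [name, url] of Object.entries(ports)) {
                console.log(`${name}: ${url}`);
            }
        });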

    GceClusterConfigResponse

    ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigResponse

    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

    InternalIpOnly bool

    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

    Metadata Dictionary<string, string>

    The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

    NetworkUri string

    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

    NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityResponse

    Optional. Node Group Affinity for sole-tenant clusters.

    PrivateIpv6GoogleAccess string

    Optional. The type of IPv6 access for a cluster.

    ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinityResponse

    Optional. Reservation Affinity for consuming Zonal reservation.

    ServiceAccount string

    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

    ServiceAccountScopes List<string>

    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

    ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigResponse

    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

    SubnetworkUri string

    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

    Tags List<string>

    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

    ZoneUri string

    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

    ConfidentialInstanceConfig ConfidentialInstanceConfigResponse

    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

    InternalIpOnly bool

    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

    Metadata map[string]string

    The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

    NetworkUri string

    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

    NodeGroupAffinity NodeGroupAffinityResponse

    Optional. Node Group Affinity for sole-tenant clusters.

    PrivateIpv6GoogleAccess string

    Optional. The type of IPv6 access for a cluster.

    ReservationAffinity ReservationAffinityResponse

    Optional. Reservation Affinity for consuming Zonal reservation.

    ServiceAccount string

    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

    ServiceAccountScopes []string

    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

    ShieldedInstanceConfig ShieldedInstanceConfigResponse

    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

    SubnetworkUri string

    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

    Tags []string

    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

    ZoneUri string

    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

    confidentialInstanceConfig ConfidentialInstanceConfigResponse

    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

    internalIpOnly Boolean

    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

    metadata Map<String,String>

    The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

    networkUri String

    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

    nodeGroupAffinity NodeGroupAffinityResponse

    Optional. Node Group Affinity for sole-tenant clusters.

    privateIpv6GoogleAccess String

    Optional. The type of IPv6 access for a cluster.

    reservationAffinity ReservationAffinityResponse

    Optional. Reservation Affinity for consuming Zonal reservation.

    serviceAccount String

    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

    serviceAccountScopes List<String>

    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

    shieldedInstanceConfig ShieldedInstanceConfigResponse

    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

    subnetworkUri String

    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

    tags List<String>

    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

    zoneUri String

    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

    confidentialInstanceConfig ConfidentialInstanceConfigResponse

    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

    internalIpOnly boolean

    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

    metadata {[key: string]: string}

    The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

    networkUri string

    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

    nodeGroupAffinity NodeGroupAffinityResponse

    Optional. Node Group Affinity for sole-tenant clusters.

    privateIpv6GoogleAccess string

    Optional. The type of IPv6 access for a cluster.

    reservationAffinity ReservationAffinityResponse

    Optional. Reservation Affinity for consuming Zonal reservation.

    serviceAccount string

    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

    serviceAccountScopes string[]

    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

    shieldedInstanceConfig ShieldedInstanceConfigResponse

    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

    subnetworkUri string

    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

    tags string[]

    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

    zoneUri string

    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

    confidential_instance_config ConfidentialInstanceConfigResponse

    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

    internal_ip_only bool

    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

    metadata Mapping[str, str]

    The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

    network_uri str

    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

    node_group_affinity NodeGroupAffinityResponse

    Optional. Node Group Affinity for sole-tenant clusters.

    private_ipv6_google_access str

    Optional. The type of IPv6 access for a cluster.

    reservation_affinity ReservationAffinityResponse

    Optional. Reservation Affinity for consuming Zonal reservation.

    service_account str

    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

    service_account_scopes Sequence[str]

    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

    shielded_instance_config ShieldedInstanceConfigResponse

    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

    subnetwork_uri str

    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

    tags Sequence[str]

    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

    zone_uri str

    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]

    confidentialInstanceConfig Property Map

    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).

    internalIpOnly Boolean

    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

    metadata Map<String>

    The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).

    networkUri String

    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default

    nodeGroupAffinity Property Map

    Optional. Node Group Affinity for sole-tenant clusters.

    privateIpv6GoogleAccess String

    Optional. The type of IPv6 access for a cluster.

    reservationAffinity Property Map

    Optional. Reservation Affinity for consuming Zonal reservation.

    serviceAccount String

    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.

    serviceAccountScopes List<String>

    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control

    shieldedInstanceConfig Property Map

    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).

    subnetworkUri String

    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0

    tags List<String>

    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).

    zoneUri String

    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone]
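
    These fields also compose well with the Output form of the function, which suits passing values on to other resources. A sketch with placeholder identifiers, assuming the gceClusterConfig field on config from the v1 API shape:

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        clusterName: "my-cluster", // placeholder
        project: "my-project",     // placeholder
        region: "us-central1",     // placeholder
    });

    // Surface the zone and the internal-IP-only setting as stack outputs.
    export const zone = cluster.config.apply(c => c.gceClusterConfig?.zoneUri);
    export const internalIpOnly = cluster.config.apply(c => c.gceClusterConfig?.internalIpOnly ?? false);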

    GkeClusterConfigResponse

    GkeClusterTarget string

    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated:

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>

    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

    GkeClusterTarget string

    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated:

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    NodePoolTarget []GkeNodePoolTargetResponse

    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

    gkeClusterTarget String

    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated:

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget List<GkeNodePoolTargetResponse>

    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

    gkeClusterTarget string

    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated:

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget GkeNodePoolTargetResponse[]

    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

    gke_cluster_target str

    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated:

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    node_pool_target Sequence[GkeNodePoolTargetResponse]

    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

    gkeClusterTarget String

    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    namespacedGkeDeploymentTarget Property Map

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated:

    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget List<Property Map>

    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
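
    For a Dataproc-on-GKE virtual cluster, this structure is reached through the cluster's virtual cluster config rather than through config. The sketch below uses placeholder identifiers, and the virtualClusterConfig.kubernetesClusterConfig.gkeClusterConfig path is an assumption based on the v1 API shape:

    import * as google_native from "@pulumi/google-native";

    google_native.dataproc.v1
        .getCluster({ clusterName: "my-virtual-cluster", project: "my-project", region: "us-central1" })
        .then(cluster => {
            const gke = cluster.virtualClusterConfig?.kubernetesClusterConfig?.gkeClusterConfig;
            console.log(`target GKE cluster: ${gke?.gkeClusterTarget}`);
            for (const target of gke?.nodePoolTarget ?? []) {
                console.log(`  node pool: ${target.nodePool}`);
            }
        });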

    GkeNodeConfigResponse

    Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>

    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

    BootDiskKmsKey string

    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}

    LocalSsdCount int

    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

    MachineType string

    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

    MinCpuPlatform string

    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

    Preemptible bool

    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    Spot bool

    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    Accelerators []GkeNodePoolAcceleratorConfigResponse

    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

    BootDiskKmsKey string

    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}

    LocalSsdCount int

    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

    MachineType string

    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

    MinCpuPlatform string

    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

    Preemptible bool

    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    Spot bool

    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    accelerators List<GkeNodePoolAcceleratorConfigResponse>

    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

    bootDiskKmsKey String

    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}

    localSsdCount Integer

    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

    machineType String

    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

    minCpuPlatform String

    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

    preemptible Boolean

    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    spot Boolean

    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    accelerators GkeNodePoolAcceleratorConfigResponse[]

    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

    bootDiskKmsKey string

    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}

    localSsdCount number

    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

    machineType string

    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

    minCpuPlatform string

    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

    preemptible boolean

    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    spot boolean

    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    accelerators Sequence[GkeNodePoolAcceleratorConfigResponse]

    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

    boot_disk_kms_key str

    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}

    local_ssd_count int

    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

    machine_type str

    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

    min_cpu_platform str

    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

    preemptible bool

    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    spot bool

    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    accelerators List<Property Map>

    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.

    bootDiskKmsKey String

    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}

    localSsdCount Number

    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).

    machineType String

    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).

    minCpuPlatform String

    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".

    preemptible Boolean

    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    spot Boolean

    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
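
    The preemptible/Spot constraint above reduces to a simple predicate. The following TypeScript sketch encodes it against a hypothetical narrowing of GkeNodeConfigResponse (the NodeConfigLike interface and function name are illustrative, not part of the SDK):

    // True when the node config may back a pool holding the CONTROLLER role.
    interface NodeConfigLike {
        preemptible?: boolean;
        spot?: boolean;
    }

    function usableForControllerRole(config: NodeConfigLike): boolean {
        // Legacy preemptible and Spot nodes are both excluded per the docs above.
        return !(config.preemptible || config.spot);
    }

    // usableForControllerRole({ spot: true })         -> false
    // usableForControllerRole({ preemptible: false }) -> true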

    GkeNodePoolAcceleratorConfigResponse

    AcceleratorCount string

    The number of accelerator cards exposed to an instance.

    AcceleratorType string

    The accelerator type resource name (see GPUs on Compute Engine).

    GpuPartitionSize string

    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

    AcceleratorCount string

    The number of accelerator cards exposed to an instance.

    AcceleratorType string

    The accelerator type resource name (see GPUs on Compute Engine).

    GpuPartitionSize string

    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

    acceleratorCount String

    The number of accelerator cards exposed to an instance.

    acceleratorType String

    The accelerator type resource name (see GPUs on Compute Engine).

    gpuPartitionSize String

    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

    acceleratorCount string

    The number of accelerator cards exposed to an instance.

    acceleratorType string

    The accelerator type resource name (see GPUs on Compute Engine).

    gpuPartitionSize string

    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

    accelerator_count str

    The number of accelerator cards exposed to an instance.

    accelerator_type str

    The accelerator type resource name (see GPUs on Compute Engine).

    gpu_partition_size str

    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

    acceleratorCount String

    The number of accelerator cards exposed to an instance.

    acceleratorType String

    The accelerator type resource name (see GPUs on Compute Engine).

    gpuPartitionSize String

    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
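
    Note that acceleratorCount is a string in every SDK. A small TypeScript sketch that totals cards across a list of accelerator configs, tolerating unparsable counts (the AcceleratorLike shape is a hypothetical narrowing of GkeNodePoolAcceleratorConfigResponse):

    interface AcceleratorLike {
        acceleratorCount: string; // string-typed in the API, e.g. "2"
        acceleratorType: string;
    }

    function totalAcceleratorCards(accelerators: AcceleratorLike[]): number {
        return accelerators.reduce((sum, a) => {
            const n = Number.parseInt(a.acceleratorCount, 10);
            return sum + (Number.isNaN(n) ? 0 : n); // skip malformed counts
        }, 0);
    }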

    GkeNodePoolAutoscalingConfigResponse

    MaxNodeCount int

    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

    MinNodeCount int

    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

    MaxNodeCount int

    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

    MinNodeCount int

    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

    maxNodeCount Integer

    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

    minNodeCount Integer

    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

    maxNodeCount number

    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

    minNodeCount number

    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

    max_node_count int

    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

    min_node_count int

    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

    maxNodeCount Number

    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.

    minNodeCount Number

    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
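
    The documented bounds (max_node_count > 0, and 0 <= min_node_count <= max_node_count) can be checked before submitting a configuration. A hedged TypeScript sketch over a GkeNodePoolAutoscalingConfigResponse-shaped value (the function name and the default of 0 for unset fields are assumptions):

    function validAutoscalingBounds(cfg: { minNodeCount?: number; maxNodeCount?: number }): boolean {
        const min = cfg.minNodeCount ?? 0; // treat unset min as 0 (assumption)
        const max = cfg.maxNodeCount ?? 0;
        return max > 0 && min >= 0 && min <= max;
    }

    // validAutoscalingBounds({ minNodeCount: 1, maxNodeCount: 5 }) -> true
    // validAutoscalingBounds({ minNodeCount: 6, maxNodeCount: 5 }) -> false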

    GkeNodePoolConfigResponse

    Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse

    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

    Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse

    Optional. The node pool configuration.

    Locations List<string>

    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    Autoscaling GkeNodePoolAutoscalingConfigResponse

    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

    Config GkeNodeConfigResponse

    Optional. The node pool configuration.

    Locations []string

    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    autoscaling GkeNodePoolAutoscalingConfigResponse

    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

    config GkeNodeConfigResponse

    Optional. The node pool configuration.

    locations List<String>

    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    autoscaling GkeNodePoolAutoscalingConfigResponse

    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

    config GkeNodeConfigResponse

    Optional. The node pool configuration.

    locations string[]

    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    autoscaling GkeNodePoolAutoscalingConfigResponse

    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

    config GkeNodeConfigResponse

    Optional. The node pool configuration.

    locations Sequence[str]

    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    autoscaling Property Map

    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.

    config Property Map

    Optional. The node pool configuration.

    locations List<String>

    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    GkeNodePoolTargetResponse

    NodePool string

    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

    NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigResponse

    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields; if a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used; if a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.

    Roles List<string>

    The roles associated with the GKE node pool.

    NodePool string

    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

    NodePoolConfig GkeNodePoolConfigResponse

    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields; if a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used; if a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.

    Roles []string

    The roles associated with the GKE node pool.

    nodePool String

    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

    nodePoolConfig GkeNodePoolConfigResponse

    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields; if a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used; if a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.

    roles List<String>

    The roles associated with the GKE node pool.

    nodePool string

    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

    nodePoolConfig GkeNodePoolConfigResponse

    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields; if a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used; if a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.

    roles string[]

    The roles associated with the GKE node pool.

    node_pool str

    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

    node_pool_config GkeNodePoolConfigResponse

    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields; if a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used; if a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.

    roles Sequence[str]

    The roles associated with the GKE node pool.

    nodePool String

    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'

    nodePoolConfig Property Map

    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields; if a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used; if a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.

    roles List<String>

    The roles associated with the GKE node pool.
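
    As a hedged illustration of how these targets surface in a lookup, the sketch below lists each node pool with its roles. All resource names are placeholders, and the path through virtualClusterConfig is assumed from the surrounding tables.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-gke-cluster",
    });

    // One line per target, e.g. "projects/.../nodePools/default: DEFAULT".
    export const poolRoles = cluster.virtualClusterConfig.apply(vcc =>
        (vcc?.kubernetesClusterConfig?.gkeClusterConfig?.nodePoolTarget ?? [])
            .map(t => `${t.nodePool}: ${t.roles.join(", ")}`));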

    IdentityConfigResponse

    UserServiceAccountMapping Dictionary<string, string>

    Map of user to service account.

    UserServiceAccountMapping map[string]string

    Map of user to service account.

    userServiceAccountMapping Map<String,String>

    Map of user to service account.

    userServiceAccountMapping {[key: string]: string}

    Map of user to service account.

    user_service_account_mapping Mapping[str, str]

    Map of user to service account.

    userServiceAccountMapping Map<String>

    Map of user to service account.
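
    A lookup can surface this mapping directly. The sketch below is a non-authoritative sample; it assumes the identity config hangs off config.securityConfig, consistent with the rest of this page, and all names are placeholders.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-cluster",
    });

    // Render each entry as "user -> service account".
    export const userMappings = cluster.config.apply(c => {
        const mapping = c?.securityConfig?.identityConfig?.userServiceAccountMapping ?? {};
        return Object.keys(mapping).map(user => `${user} -> ${mapping[user]}`);
    });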

    InstanceGroupConfigResponse

    Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigResponse>

    Optional. The Compute Engine accelerator configuration for these instances.

    DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfigResponse

    Optional. Disk option config settings.

    ImageUri string

    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

    InstanceNames List<string>

    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

    InstanceReferences List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceReferenceResponse>

    List of references to Compute Engine instances.

    IsPreemptible bool

    Specifies that this instance group contains preemptible instances.

    MachineTypeUri string

    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

    ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedGroupConfigResponse

    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

    MinCpuPlatform string

    Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

    NumInstances int

    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

    Preemptibility string

    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

    Accelerators []AcceleratorConfigResponse

    Optional. The Compute Engine accelerator configuration for these instances.

    DiskConfig DiskConfigResponse

    Optional. Disk option config settings.

    ImageUri string

    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

    InstanceNames []string

    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

    InstanceReferences []InstanceReferenceResponse

    List of references to Compute Engine instances.

    IsPreemptible bool

    Specifies that this instance group contains preemptible instances.

    MachineTypeUri string

    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

    ManagedGroupConfig ManagedGroupConfigResponse

    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

    MinCpuPlatform string

    Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

    NumInstances int

    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

    Preemptibility string

    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

    accelerators List<AcceleratorConfigResponse>

    Optional. The Compute Engine accelerator configuration for these instances.

    diskConfig DiskConfigResponse

    Optional. Disk option config settings.

    imageUri String

    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

    instanceNames List<String>

    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

    instanceReferences List<InstanceReferenceResponse>

    List of references to Compute Engine instances.

    isPreemptible Boolean

    Specifies that this instance group contains preemptible instances.

    machineTypeUri String

    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

    managedGroupConfig ManagedGroupConfigResponse

    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

    minCpuPlatform String

    Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

    numInstances Integer

    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

    preemptibility String

    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

    accelerators AcceleratorConfigResponse[]

    Optional. The Compute Engine accelerator configuration for these instances.

    diskConfig DiskConfigResponse

    Optional. Disk option config settings.

    imageUri string

    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

    instanceNames string[]

    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

    instanceReferences InstanceReferenceResponse[]

    List of references to Compute Engine instances.

    isPreemptible boolean

    Specifies that this instance group contains preemptible instances.

    machineTypeUri string

    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

    managedGroupConfig ManagedGroupConfigResponse

    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

    minCpuPlatform string

    Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

    numInstances number

    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

    preemptibility string

    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

    accelerators Sequence[AcceleratorConfigResponse]

    Optional. The Compute Engine accelerator configuration for these instances.

    disk_config DiskConfigResponse

    Optional. Disk option config settings.

    image_uri str

    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

    instance_names Sequence[str]

    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

    instance_references Sequence[InstanceReferenceResponse]

    List of references to Compute Engine instances.

    is_preemptible bool

    Specifies that this instance group contains preemptible instances.

    machine_type_uri str

    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

    managed_group_config ManagedGroupConfigResponse

    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

    min_cpu_platform str

    Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

    num_instances int

    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

    preemptibility str

    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

    accelerators List<Property Map>

    Optional. The Compute Engine accelerator configuration for these instances.

    diskConfig Property Map

    Optional. Disk option config settings.

    imageUri String

    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

    instanceNames List<String>

    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

    instanceReferences List<Property Map>

    List of references to Compute Engine instances.

    isPreemptible Boolean

    Specifies that this instance group contains preemptible instances.

    machineTypeUri String

    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.

    managedGroupConfig Property Map

    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

    minCpuPlatform String

    Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

    numInstances Number

    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.

    preemptibility String

    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
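
    To see these fields in context, the following sketch reads a few master-group settings from an existing Compute Engine-based cluster. Project, region, and cluster names are placeholders; a virtual cluster would have no config to read.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-cluster",
    });

    // Pick out the settings most often inspected on the master group.
    export const masterGroup = cluster.config.masterConfig.apply(m => ({
        machineType: m.machineTypeUri,
        instances: m.numInstances,
        preemptibility: m.preemptibility,
    }));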

    InstanceReferenceResponse

    InstanceId string

    The unique identifier of the Compute Engine instance.

    InstanceName string

    The user-friendly name of the Compute Engine instance.

    PublicEciesKey string

    The public ECIES key used for sharing data with this instance.

    PublicKey string

    The public RSA key used for sharing data with this instance.

    InstanceId string

    The unique identifier of the Compute Engine instance.

    InstanceName string

    The user-friendly name of the Compute Engine instance.

    PublicEciesKey string

    The public ECIES key used for sharing data with this instance.

    PublicKey string

    The public RSA key used for sharing data with this instance.

    instanceId String

    The unique identifier of the Compute Engine instance.

    instanceName String

    The user-friendly name of the Compute Engine instance.

    publicEciesKey String

    The public ECIES key used for sharing data with this instance.

    publicKey String

    The public RSA key used for sharing data with this instance.

    instanceId string

    The unique identifier of the Compute Engine instance.

    instanceName string

    The user-friendly name of the Compute Engine instance.

    publicEciesKey string

    The public ECIES key used for sharing data with this instance.

    publicKey string

    The public RSA key used for sharing data with this instance.

    instance_id str

    The unique identifier of the Compute Engine instance.

    instance_name str

    The user-friendly name of the Compute Engine instance.

    public_ecies_key str

    The public ECIES key used for sharing data with this instance.

    public_key str

    The public RSA key used for sharing data with this instance.

    instanceId String

    The unique identifier of the Compute Engine instance.

    instanceName String

    The user-friendly name of the Compute Engine instance.

    publicEciesKey String

    The public ECIES key used for sharing data with this instance.

    publicKey String

    The public RSA key used for sharing data with this instance.
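
    A small, hedged example of consuming these references: the sketch pairs each worker instance name with its unique identifier. Names are placeholders, and the field paths follow the tables above.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-cluster",
    });

    // One "name=id" string per worker VM.
    export const workerInstanceIds = cluster.config.workerConfig.apply(w =>
        (w.instanceReferences ?? []).map(r => `${r.instanceName}=${r.instanceId}`));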

    KerberosConfigResponse

    CrossRealmTrustAdminServer string

    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    CrossRealmTrustKdc string

    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    CrossRealmTrustRealm string

    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

    CrossRealmTrustSharedPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

    EnableKerberos bool

    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

    KdcDbKeyUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

    KeyPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

    KeystorePasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

    KeystoreUri string

    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    KmsKeyUri string

    Optional. The URI of the KMS key used to encrypt various sensitive files.

    Realm string

    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

    RootPrincipalPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

    TgtLifetimeHours int

    Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

    TruststorePasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

    TruststoreUri string

    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    CrossRealmTrustAdminServer string

    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    CrossRealmTrustKdc string

    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    CrossRealmTrustRealm string

    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

    CrossRealmTrustSharedPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

    EnableKerberos bool

    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

    KdcDbKeyUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

    KeyPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

    KeystorePasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

    KeystoreUri string

    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    KmsKeyUri string

    Optional. The URI of the KMS key used to encrypt various sensitive files.

    Realm string

    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

    RootPrincipalPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

    TgtLifetimeHours int

    Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

    TruststorePasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

    TruststoreUri string

    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    crossRealmTrustAdminServer String

    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    crossRealmTrustKdc String

    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    crossRealmTrustRealm String

    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

    crossRealmTrustSharedPasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

    enableKerberos Boolean

    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

    kdcDbKeyUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

    keyPasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

    keystorePasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

    keystoreUri String

    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    kmsKeyUri String

    Optional. The URI of the KMS key used to encrypt various sensitive files.

    realm String

    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

    rootPrincipalPasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

    tgtLifetimeHours Integer

    Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

    truststorePasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

    truststoreUri String

    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    crossRealmTrustAdminServer string

    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    crossRealmTrustKdc string

    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    crossRealmTrustRealm string

    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

    crossRealmTrustSharedPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

    enableKerberos boolean

    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

    kdcDbKeyUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

    keyPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

    keystorePasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

    keystoreUri string

    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    kmsKeyUri string

    Optional. The URI of the KMS key used to encrypt various sensitive files.

    realm string

    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

    rootPrincipalPasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

    tgtLifetimeHours number

    Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

    truststorePasswordUri string

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

    truststoreUri string

    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    cross_realm_trust_admin_server str

    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    cross_realm_trust_kdc str

    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    cross_realm_trust_realm str

    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

    cross_realm_trust_shared_password_uri str

    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

    enable_kerberos bool

    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

    kdc_db_key_uri str

    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

    key_password_uri str

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

    keystore_password_uri str

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

    keystore_uri str

    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    kms_key_uri str

    Optional. The URI of the KMS key used to encrypt various sensitive files.

    realm str

    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

    root_principal_password_uri str

    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

    tgt_lifetime_hours int

    Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

    truststore_password_uri str

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

    truststore_uri str

    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    crossRealmTrustAdminServer String

    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    crossRealmTrustKdc String

    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

    crossRealmTrustRealm String

    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

    crossRealmTrustSharedPasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

    enableKerberos Boolean

    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

    kdcDbKeyUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

    keyPasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

    keystorePasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

    keystoreUri String

    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    kmsKeyUri String

    Optional. The URI of the KMS key used to encrypt various sensitive files.

    realm String

    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

    rootPrincipalPasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

    tgtLifetimeHours Number

    Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if 0 is specified, the default value of 10 is used.

    truststorePasswordUri String

    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

    truststoreUri String

    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
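
    As a quick sanity check on a live cluster, the hedged sketch below reports whether Kerberos is enabled and which realm is in use. It assumes the Kerberos config is reachable via config.securityConfig, consistent with the rest of this page; names are placeholders.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-cluster",
    });

    // "enabled (realm: EXAMPLE.COM)" or "disabled".
    export const kerberosStatus = cluster.config.apply(c => {
        const krb = c?.securityConfig?.kerberosConfig;
        return krb?.enableKerberos ? `enabled (realm: ${krb.realm})` : "disabled";
    });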

    KubernetesClusterConfigResponse

    GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse

    The configuration for running the Dataproc cluster on GKE.

    KubernetesNamespace string

    Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

    KubernetesSoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigResponse

    Optional. The software configuration for this Dataproc cluster running on Kubernetes.

    GkeClusterConfig GkeClusterConfigResponse

    The configuration for running the Dataproc cluster on GKE.

    KubernetesNamespace string

    Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

    KubernetesSoftwareConfig KubernetesSoftwareConfigResponse

    Optional. The software configuration for this Dataproc cluster running on Kubernetes.

    gkeClusterConfig GkeClusterConfigResponse

    The configuration for running the Dataproc cluster on GKE.

    kubernetesNamespace String

    Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

    kubernetesSoftwareConfig KubernetesSoftwareConfigResponse

    Optional. The software configuration for this Dataproc cluster running on Kubernetes.

    gkeClusterConfig GkeClusterConfigResponse

    The configuration for running the Dataproc cluster on GKE.

    kubernetesNamespace string

    Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

    kubernetesSoftwareConfig KubernetesSoftwareConfigResponse

    Optional. The software configuration for this Dataproc cluster running on Kubernetes.

    gke_cluster_config GkeClusterConfigResponse

    The configuration for running the Dataproc cluster on GKE.

    kubernetes_namespace str

    Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

    kubernetes_software_config KubernetesSoftwareConfigResponse

    Optional. The software configuration for this Dataproc cluster running on Kubernetes.

    gkeClusterConfig Property Map

    The configuration for running the Dataproc cluster on GKE.

    kubernetesNamespace String

    Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.

    kubernetesSoftwareConfig Property Map

    Optional. The software configuration for this Dataproc cluster running on Kubernetes.

    KubernetesSoftwareConfigResponse

    ComponentVersion Dictionary<string, string>

    The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

    Properties Dictionary<string, string>

    The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    ComponentVersion map[string]string

    The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

    Properties map[string]string

    The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    componentVersion Map<String,String>

    The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

    properties Map<String,String>

    The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    componentVersion {[key: string]: string}

    The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

    properties {[key: string]: string}

    The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    component_version Mapping[str, str]

    The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

    properties Mapping[str, str]

    The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    componentVersion Map<String>

    The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.

    properties Map<String>

    The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
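
    For instance, the component-version map can be read back from a virtual cluster as shown in this hedged sketch. Names are placeholders, and the virtualClusterConfig path is assumed from the types above.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-gke-cluster",
    });

    // e.g. ["SPARK@<version>"].
    export const components = cluster.virtualClusterConfig.apply(vcc => {
        const versions = vcc?.kubernetesClusterConfig?.kubernetesSoftwareConfig?.componentVersion ?? {};
        return Object.keys(versions).map(k => `${k}@${versions[k]}`);
    });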

    LifecycleConfigResponse

    AutoDeleteTime string

    Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    AutoDeleteTtl string

    Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    IdleDeleteTtl string

    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    IdleStartTime string

    The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    AutoDeleteTime string

    Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    AutoDeleteTtl string

    Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    IdleDeleteTtl string

    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    IdleStartTime string

    The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    autoDeleteTime String

    Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    autoDeleteTtl String

    Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idleDeleteTtl String

    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idleStartTime String

    The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    autoDeleteTime string

    Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    autoDeleteTtl string

    Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idleDeleteTtl string

    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idleStartTime string

    The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    auto_delete_time str

    Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    auto_delete_ttl str

    Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idle_delete_ttl str

    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idle_start_time str

    The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    autoDeleteTime String

    Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    autoDeleteTtl String

    Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idleDeleteTtl String

    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    idleStartTime String

    The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
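
    These timers are easy to surface from a lookup. The sketch below (placeholder names, field paths as documented above) exposes them for inspection.

    import * as google_native from "@pulumi/google-native";

    const cluster = google_native.dataproc.v1.getClusterOutput({
        project: "my-project",
        region: "us-central1",
        clusterName: "my-cluster",
    });

    // Timestamp and Duration fields are returned in their JSON string forms.
    export const lifecycleTimers = cluster.config.lifecycleConfig.apply(lc => ({
        autoDeleteTime: lc?.autoDeleteTime,
        idleDeleteTtl: lc?.idleDeleteTtl,
        idleStartTime: lc?.idleStartTime,
    }));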

    ManagedGroupConfigResponse

    InstanceGroupManagerName string

    The name of the Instance Group Manager for this group.

    InstanceTemplateName string

    The name of the Instance Template used for the Managed Instance Group.

    InstanceGroupManagerName string

    The name of the Instance Group Manager for this group.

    InstanceTemplateName string

    The name of the Instance Template used for the Managed Instance Group.

    instanceGroupManagerName String

    The name of the Instance Group Manager for this group.

    instanceTemplateName String

    The name of the Instance Template used for the Managed Instance Group.

    instanceGroupManagerName string

    The name of the Instance Group Manager for this group.

    instanceTemplateName string

    The name of the Instance Template used for the Managed Instance Group.

    instance_group_manager_name str

    The name of the Instance Group Manager for this group.

    instance_template_name str

    The name of the Instance Template used for the Managed Instance Group.

    instanceGroupManagerName String

    The name of the Instance Group Manager for this group.

    instanceTemplateName String

    The name of the Instance Template used for the Managed Instance Group.

    MetastoreConfigResponse

    DataprocMetastoreService string

    Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

    DataprocMetastoreService string

    Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

    dataprocMetastoreService String

    Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

    dataprocMetastoreService string

    Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

    dataproc_metastore_service str

    Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

    dataprocMetastoreService String

    Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
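
    To illustrate the documented resource-name shape with hypothetical identifiers, and where the value surfaces on a looked-up cluster:

    // Hypothetical Dataproc Metastore service name in the documented format:
    // projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    const metastoreService =
        "projects/my-project/locations/us-central1/services/my-metastore";

    // On a getClusterOutput result, the same value is exposed at:
    // cluster.config.metastoreConfig.dataprocMetastoreService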

    MetricResponse

    MetricOverrides List<string>

    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

    MetricSource string

    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).

    MetricOverrides []string

    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

    MetricSource string

    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).

    metricOverrides List<String>

    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

    metricSource String

    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).

    metricOverrides string[]

    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

    metricSource string

    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).

    metric_overrides Sequence[str]

    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

    metric_source str

    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).

    metricOverrides List<String>

    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

    metricSource String

    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
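
    As a hedged sketch of the override format, pairing a metric source with an override drawn from the documented examples (the values are illustrative only):

    // One MetricResponse-shaped entry: only the listed SPARK metrics
    // would be collected for the SPARK source.
    const sparkMetric = {
        metricSource: "SPARK",
        metricOverrides: [
            "spark:driver:DAGScheduler:job.allJobs",
        ],
    };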

    NamespacedGkeDeploymentTargetResponse

    ClusterNamespace string

    Optional. A namespace within the GKE cluster to deploy into.

    TargetGkeCluster string

    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    ClusterNamespace string

    Optional. A namespace within the GKE cluster to deploy into.

    TargetGkeCluster string

    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    clusterNamespace String

    Optional. A namespace within the GKE cluster to deploy into.

    targetGkeCluster String

    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    clusterNamespace string

    Optional. A namespace within the GKE cluster to deploy into.

    targetGkeCluster string

    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    cluster_namespace str

    Optional. A namespace within the GKE cluster to deploy into.

    target_gke_cluster str

    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    clusterNamespace String

    Optional. A namespace within the GKE cluster to deploy into.

    targetGkeCluster String

    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    NodeGroupAffinityResponse

    NodeGroupUri string

    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1

    NodeGroupUri string

    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1

    nodeGroupUri String

    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1

    nodeGroupUri string

    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1

    node_group_uri str

    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1

    nodeGroupUri String

    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1
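
    The three documented forms all name the same node group; the project, zone, and group below are placeholders:

    // Full URL, partial URI, and bare name are equally valid (hypothetical values):
    const fullUrl = "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/nodeGroups/node-group-1";
    const partialUri = "projects/my-project/zones/us-central1-a/nodeGroups/node-group-1";
    const bareName = "node-group-1";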

    NodeGroupResponse

    Labels Dictionary<string, string>

    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

    Name string

    The Node group resource name (https://aip.dev/122).

    NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse

    Optional. The node group instance group configuration.

    Roles List<string>

    Node group roles.

    Labels map[string]string

    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

    Name string

    The Node group resource name (https://aip.dev/122).

    NodeGroupConfig InstanceGroupConfigResponse

    Optional. The node group instance group configuration.

    Roles []string

    Node group roles.

    labels Map<String,String>

    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

    name String

    The Node group resource name (https://aip.dev/122).

    nodeGroupConfig InstanceGroupConfigResponse

    Optional. The node group instance group configuration.

    roles List<String>

    Node group roles.

    labels {[key: string]: string}

    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

    name string

    The Node group resource name (https://aip.dev/122).

    nodeGroupConfig InstanceGroupConfigResponse

    Optional. The node group instance group configuration.

    roles string[]

    Node group roles.

    labels Mapping[str, str]

    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

    name str

    The Node group resource name (https://aip.dev/122).

    node_group_config InstanceGroupConfigResponse

    Optional. The node group instance group configuration.

    roles Sequence[str]

    Node group roles.

    labels Map<String>

    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.

    name String

    The Node group resource name (https://aip.dev/122).

    nodeGroupConfig Property Map

    Optional. The node group instance group configuration.

    roles List<String>

    Node group roles.

    NodeInitializationActionResponse

    ExecutableFile string

    Cloud Storage URI of the executable file.

    ExecutionTimeout string

    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

    ExecutableFile string

    Cloud Storage URI of the executable file.

    ExecutionTimeout string

    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

    executableFile String

    Cloud Storage URI of the executable file.

    executionTimeout String

    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

    executableFile string

    Cloud Storage URI of the executable file.

    executionTimeout string

    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

    executable_file str

    Cloud Storage URI of the executable file.

    execution_timeout str

    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

    executableFile String

    Cloud Storage URI of the executable file.

    executionTimeout String

    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
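
    A brief sketch of how the two fields fit together; the bucket and script path are placeholders, and the timeout uses the Duration JSON form:

    // Hypothetical initialization action: runs a staged script with the
    // documented 10-minute default spelled out explicitly.
    const initAction = {
        executableFile: "gs://my-bucket/scripts/install-deps.sh",
        executionTimeout: "600s",
    };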

    ReservationAffinityResponse

    ConsumeReservationType string

    Optional. Type of reservation to consume.

    Key string

    Optional. Corresponds to the label key of the reservation resource.

    Values List<string>

    Optional. Corresponds to the label values of the reservation resource.

    ConsumeReservationType string

    Optional. Type of reservation to consume.

    Key string

    Optional. Corresponds to the label key of the reservation resource.

    Values []string

    Optional. Corresponds to the label values of the reservation resource.

    consumeReservationType String

    Optional. Type of reservation to consume.

    key String

    Optional. Corresponds to the label key of the reservation resource.

    values List<String>

    Optional. Corresponds to the label values of the reservation resource.

    consumeReservationType string

    Optional. Type of reservation to consume.

    key string

    Optional. Corresponds to the label key of the reservation resource.

    values string[]

    Optional. Corresponds to the label values of the reservation resource.

    consume_reservation_type str

    Optional. Type of reservation to consume.

    key str

    Optional. Corresponds to the label key of the reservation resource.

    values Sequence[str]

    Optional. Corresponds to the label values of the reservation resource.

    consumeReservationType String

    Optional. Type of reservation to consume.

    key String

    Optional. Corresponds to the label key of the reservation resource.

    values List<String>

    Optional. Corresponds to the label values of the reservation resource.

    SecurityConfigResponse

    IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfigResponse

    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

    KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfigResponse

    Optional. Kerberos related configuration.

    IdentityConfig IdentityConfigResponse

    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

    KerberosConfig KerberosConfigResponse

    Optional. Kerberos related configuration.

    identityConfig IdentityConfigResponse

    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

    kerberosConfig KerberosConfigResponse

    Optional. Kerberos related configuration.

    identityConfig IdentityConfigResponse

    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

    kerberosConfig KerberosConfigResponse

    Optional. Kerberos related configuration.

    identity_config IdentityConfigResponse

    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

    kerberos_config KerberosConfigResponse

    Optional. Kerberos related configuration.

    identityConfig Property Map

    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.

    kerberosConfig Property Map

    Optional. Kerberos related configuration.

    ShieldedInstanceConfigResponse

    EnableIntegrityMonitoring bool

    Optional. Defines whether instances have integrity monitoring enabled.

    EnableSecureBoot bool

    Optional. Defines whether instances have Secure Boot enabled.

    EnableVtpm bool

    Optional. Defines whether instances have the vTPM enabled.

    EnableIntegrityMonitoring bool

    Optional. Defines whether instances have integrity monitoring enabled.

    EnableSecureBoot bool

    Optional. Defines whether instances have Secure Boot enabled.

    EnableVtpm bool

    Optional. Defines whether instances have the vTPM enabled.

    enableIntegrityMonitoring Boolean

    Optional. Defines whether instances have integrity monitoring enabled.

    enableSecureBoot Boolean

    Optional. Defines whether instances have Secure Boot enabled.

    enableVtpm Boolean

    Optional. Defines whether instances have the vTPM enabled.

    enableIntegrityMonitoring boolean

    Optional. Defines whether instances have integrity monitoring enabled.

    enableSecureBoot boolean

    Optional. Defines whether instances have Secure Boot enabled.

    enableVtpm boolean

    Optional. Defines whether instances have the vTPM enabled.

    enable_integrity_monitoring bool

    Optional. Defines whether instances have integrity monitoring enabled.

    enable_secure_boot bool

    Optional. Defines whether instances have Secure Boot enabled.

    enable_vtpm bool

    Optional. Defines whether instances have the vTPM enabled.

    enableIntegrityMonitoring Boolean

    Optional. Defines whether instances have integrity monitoring enabled.

    enableSecureBoot Boolean

    Optional. Defines whether instances have Secure Boot enabled.

    enableVtpm Boolean

    Optional. Defines whether instances have the vTPM enabled.
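
    Reading these flags off a looked-up cluster might look like the following sketch, where cluster is the hypothetical getClusterOutput lookup shown earlier in this section; the Shielded VM flags hang off the Compute Engine cluster config:

    // `cluster` is the hypothetical getClusterOutput lookup shown earlier.
    export const secureBootEnabled =
        cluster.config.gceClusterConfig.shieldedInstanceConfig.enableSecureBoot;
    export const vtpmEnabled =
        cluster.config.gceClusterConfig.shieldedInstanceConfig.enableVtpm;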

    SoftwareConfigResponse

    ImageVersion string

    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

    OptionalComponents List<string>

    Optional. The set of components to activate on the cluster.

    Properties Dictionary<string, string>

    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler (capacity-scheduler.xml), core (core-site.xml), distcp (distcp-default.xml), hdfs (hdfs-site.xml), hive (hive-site.xml), mapred (mapred-site.xml), pig (pig.properties), spark (spark-defaults.conf), yarn (yarn-site.xml). For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    ImageVersion string

    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

    OptionalComponents []string

    Optional. The set of components to activate on the cluster.

    Properties map[string]string

    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler (capacity-scheduler.xml), core (core-site.xml), distcp (distcp-default.xml), hdfs (hdfs-site.xml), hive (hive-site.xml), mapred (mapred-site.xml), pig (pig.properties), spark (spark-defaults.conf), yarn (yarn-site.xml). For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    imageVersion String

    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

    optionalComponents List<String>

    Optional. The set of components to activate on the cluster.

    properties Map<String,String>

    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler (capacity-scheduler.xml), core (core-site.xml), distcp (distcp-default.xml), hdfs (hdfs-site.xml), hive (hive-site.xml), mapred (mapred-site.xml), pig (pig.properties), spark (spark-defaults.conf), yarn (yarn-site.xml). For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    imageVersion string

    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

    optionalComponents string[]

    Optional. The set of components to activate on the cluster.

    properties {[key: string]: string}

    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler (capacity-scheduler.xml), core (core-site.xml), distcp (distcp-default.xml), hdfs (hdfs-site.xml), hive (hive-site.xml), mapred (mapred-site.xml), pig (pig.properties), spark (spark-defaults.conf), yarn (yarn-site.xml). For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    image_version str

    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

    optional_components Sequence[str]

    Optional. The set of components to activate on the cluster.

    properties Mapping[str, str]

    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler (capacity-scheduler.xml), core (core-site.xml), distcp (distcp-default.xml), hdfs (hdfs-site.xml), hive (hive-site.xml), mapred (mapred-site.xml), pig (pig.properties), spark (spark-defaults.conf), yarn (yarn-site.xml). For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    imageVersion String

    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.

    optionalComponents List<String>

    Optional. The set of components to activate on the cluster.

    properties Map<String>

    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings: capacity-scheduler (capacity-scheduler.xml), core (core-site.xml), distcp (distcp-default.xml), hdfs (hdfs-site.xml), hive (hive-site.xml), mapred (mapred-site.xml), pig (pig.properties), spark (spark-defaults.conf), yarn (yarn-site.xml). For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
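
    For illustration, a properties map in the documented prefix:property form; the specific values are hypothetical:

    // Each key routes to the config file implied by its prefix.
    const properties: {[key: string]: string} = {
        "core:hadoop.tmp.dir": "/tmp/hadoop",                  // core-site.xml
        "spark:spark.executor.memory": "4g",                   // spark-defaults.conf
        "yarn:yarn.nodemanager.resource.memory-mb": "8192",    // yarn-site.xml
    };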

    SparkHistoryServerConfigResponse

    DataprocCluster string

    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    DataprocCluster string

    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    dataprocCluster String

    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    dataprocCluster string

    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    dataproc_cluster str

    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    dataprocCluster String

    Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

    VirtualClusterConfigResponse

    AuxiliaryServicesConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigResponse

    Optional. Configuration of auxiliary services used by this cluster.

    KubernetesClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigResponse

    The configuration for running the Dataproc cluster on Kubernetes.

    StagingBucket string

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    AuxiliaryServicesConfig AuxiliaryServicesConfigResponse

    Optional. Configuration of auxiliary services used by this cluster.

    KubernetesClusterConfig KubernetesClusterConfigResponse

    The configuration for running the Dataproc cluster on Kubernetes.

    StagingBucket string

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    auxiliaryServicesConfig AuxiliaryServicesConfigResponse

    Optional. Configuration of auxiliary services used by this cluster.

    kubernetesClusterConfig KubernetesClusterConfigResponse

    The configuration for running the Dataproc cluster on Kubernetes.

    stagingBucket String

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    auxiliaryServicesConfig AuxiliaryServicesConfigResponse

    Optional. Configuration of auxiliary services used by this cluster.

    kubernetesClusterConfig KubernetesClusterConfigResponse

    The configuration for running the Dataproc cluster on Kubernetes.

    stagingBucket string

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    auxiliary_services_config AuxiliaryServicesConfigResponse

    Optional. Configuration of auxiliary services used by this cluster.

    kubernetes_cluster_config KubernetesClusterConfigResponse

    The configuration for running the Dataproc cluster on Kubernetes.

    staging_bucket str

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

    auxiliaryServicesConfig Property Map

    Optional. Configuration of auxiliary services used by this cluster.

    kubernetesClusterConfig Property Map

    The configuration for running the Dataproc cluster on Kubernetes.

    stagingBucket String

    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
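
    Note the final requirement: the field takes a bare bucket name, not a gs:// URI. A quick sketch with a hypothetical bucket:

    const stagingBucket = "my-staging-bucket";         // correct: bucket name only
    // const stagingBucket = "gs://my-staging-bucket"; // wrong: gs:// URI is rejected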

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0