Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1.getCluster
Gets the resource representation for a cluster in a project.
Using getCluster
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getCluster(args: GetClusterArgs, opts?: InvokeOptions): Promise<GetClusterResult>
function getClusterOutput(args: GetClusterOutputArgs, opts?: InvokeOptions): Output<GetClusterResult>
def get_cluster(cluster_name: Optional[str] = None,
                project: Optional[str] = None,
                region: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetClusterResult
def get_cluster_output(cluster_name: Optional[pulumi.Input[str]] = None,
                       project: Optional[pulumi.Input[str]] = None,
                       region: Optional[pulumi.Input[str]] = None,
                       opts: Optional[InvokeOptions] = None) -> Output[GetClusterResult]
func LookupCluster(ctx *Context, args *LookupClusterArgs, opts ...InvokeOption) (*LookupClusterResult, error)
func LookupClusterOutput(ctx *Context, args *LookupClusterOutputArgs, opts ...InvokeOption) LookupClusterResultOutput
> Note: This function is named LookupCluster in the Go SDK.
public static class GetCluster
{
public static Task<GetClusterResult> InvokeAsync(GetClusterArgs args, InvokeOptions? opts = null)
public static Output<GetClusterResult> Invoke(GetClusterInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetClusterResult> getCluster(GetClusterArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:dataproc/v1:getCluster
  arguments:
    # arguments dictionary
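For example, the following TypeScript sketch exercises both forms. The cluster name, project, and region are placeholder values, and the result fields used are those documented below:

```typescript
import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
const cluster = google_native.dataproc.v1.getCluster({
    clusterName: "example-cluster", // placeholder
    project: "example-project",     // placeholder
    region: "us-central1",
});
export const clusterUuid = cluster.then(c => c.clusterUuid);

// Output form: Input-wrapped arguments, Output-wrapped result.
const clusterOut = google_native.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster",
    project: "example-project",
    region: "us-central1",
});
export const clusterState = clusterOut.status.state;
```

The output form is preferable when the arguments themselves come from other resources, since it defers the invoke until those values resolve.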
The following arguments are supported:
- ClusterName string
- Region string
- Project string
- ClusterName string
- Region string
- Project string
- clusterName String
- region String
- project String
- clusterName string
- region string
- project string
- cluster_name str
- region str
- project str
- clusterName String
- region String
- project String
getCluster Result
The following output properties are available:
- ClusterName string
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- ClusterUuid string
A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Config Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterConfigResponse
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- Labels Dictionary<string, string>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Metrics Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Project string
The Google Cloud Platform project ID that the cluster belongs to.
- Status Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse
Cluster status.
- StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse>
The previous cluster status.
- VirtualClusterConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.VirtualClusterConfigResponse
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- ClusterName string
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- ClusterUuid string
A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Config ClusterConfigResponse
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- Labels map[string]string
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Project string
The Google Cloud Platform project ID that the cluster belongs to.
- Status ClusterStatusResponse
Cluster status.
- StatusHistory []ClusterStatusResponse
The previous cluster status.
- VirtualClusterConfig VirtualClusterConfigResponse
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName String
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- clusterUuid String
A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config ClusterConfigResponse
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Map<String,String>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project String
The Google Cloud Platform project ID that the cluster belongs to.
- status ClusterStatusResponse
Cluster status.
- statusHistory List<ClusterStatusResponse>
The previous cluster status.
- virtualClusterConfig VirtualClusterConfigResponse
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName string
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- clusterUuid string
A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config ClusterConfigResponse
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels {[key: string]: string}
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project string
The Google Cloud Platform project ID that the cluster belongs to.
- status ClusterStatusResponse
Cluster status.
- statusHistory ClusterStatusResponse[]
The previous cluster status.
- virtualClusterConfig VirtualClusterConfigResponse
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- cluster_name str
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- cluster_uuid str
A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config ClusterConfigResponse
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Mapping[str, str]
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project str
The Google Cloud Platform project ID that the cluster belongs to.
- status ClusterStatusResponse
Cluster status.
- status_history Sequence[ClusterStatusResponse]
The previous cluster status.
- virtual_cluster_config VirtualClusterConfigResponse
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName String
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- clusterUuid String
A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config Property Map
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Map<String>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics Property Map
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project String
The Google Cloud Platform project ID that the cluster belongs to.
- status Property Map
Cluster status.
- statusHistory List<Property Map>
The previous cluster status.
- virtualClusterConfig Property Map
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
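As a minimal sketch of consuming these properties in TypeScript (placeholder argument values; nested field names follow the tables above):

```typescript
import * as google_native from "@pulumi/google-native";

const result = google_native.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster", // placeholder
    project: "example-project",
    region: "us-central1",
});

// Nested response objects can be traversed with lifted property access.
export const stagingBucket = result.config.configBucket;
// labels is a string map; count the entries defensively.
export const labelCount = result.labels.apply(l => Object.keys(l ?? {}).length);
```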
Supporting Types
AcceleratorConfigResponse
- AcceleratorCount int
The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AutoscalingConfigResponse
- PolicyUri string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policy_uri str
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
AuxiliaryNodeGroupResponse
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse
Node group configuration.
- NodeGroupId string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- NodeGroup NodeGroupResponse
Node group configuration.
- NodeGroupId string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroupResponse
Node group configuration.
- nodeGroupId String
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroupResponse
Node group configuration.
- nodeGroupId string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- node_group NodeGroupResponse
Node group configuration.
- node_group_id str
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup Property Map
Node group configuration.
- nodeGroupId String
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
AuxiliaryServicesConfigResponse
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
- MetastoreConfig MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
- metastore_config MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
- spark_history_server_config SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
- metastoreConfig Property Map
Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig Property Map
Optional. The Spark History Server configuration for the workload.
ClusterConfigResponse
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse>
Optional. The node group settings.
- ConfigBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse
Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse
Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse
Optional. The config settings for cluster software.
- TempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse
Optional. The node group settings.
- ConfigBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfigResponse
Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationActionResponse
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfigResponse
Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig SecurityConfigResponse
Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfigResponse
Optional. The config settings for cluster software.
- TempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse>
Optional. The node group settings.
- configBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse
Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse
Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse
Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse
Optional. The config settings for cluster software.
- tempBucket String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroupResponse[]
Optional. The node group settings.
- configBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse
Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationActionResponse[]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse
Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse
Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse
Optional. The config settings for cluster software.
- tempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse]
Optional. The node group settings.
- config_bucket str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfigResponse
Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationActionResponse]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfigResponse
Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- security_config SecurityConfigResponse
Optional. Security settings for the cluster.
- software_config SoftwareConfigResponse
Optional. The config settings for cluster software.
- temp_bucket str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map>
Optional. The node group settings.
- configBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map
Optional. The config for Dataproc metrics.
- encryptionConfig Property Map
Optional. Encryption settings for the cluster.
- endpointConfig Property Map
Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map
Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map
Optional. Lifecycle setting for the cluster.
- masterConfig Property Map
Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map
Optional. Metastore configuration.
- secondaryWorkerConfig Property Map
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map
Optional. Security settings for the cluster.
- softwareConfig Property Map
Optional. The config settings for cluster software.
- tempBucket String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map
Optional. The Compute Engine config settings for the cluster's worker instances.
ClusterMetricsResponse
- HdfsMetrics Dictionary<string, string>
The HDFS metrics.
- YarnMetrics Dictionary<string, string>
YARN metrics.
- HdfsMetrics map[string]string
The HDFS metrics.
- YarnMetrics map[string]string
YARN metrics.
- hdfsMetrics Map<String,String>
The HDFS metrics.
- yarnMetrics Map<String,String>
YARN metrics.
- hdfsMetrics {[key: string]: string}
The HDFS metrics.
- yarnMetrics {[key: string]: string}
YARN metrics.
- hdfs_metrics Mapping[str, str]
The HDFS metrics.
- yarn_metrics Mapping[str, str]
YARN metrics.
- hdfsMetrics Map<String>
The HDFS metrics.
- yarnMetrics Map<String>
YARN metrics.
ClusterStatusResponse
- Detail string
Optional. Output only. Details of cluster's state.
- State string
The cluster's state.
- StateStartTime string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
Additional state information that includes status reported by the agent.
- Detail string
Optional. Output only. Details of cluster's state.
- State string
The cluster's state.
- StateStartTime string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
Additional state information that includes status reported by the agent.
- detail String
Optional. Output only. Details of cluster's state.
- state String
The cluster's state.
- stateStartTime String
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
Additional state information that includes status reported by the agent.
- detail string
Optional. Output only. Details of cluster's state.
- state string
The cluster's state.
- stateStartTime string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate string
Additional state information that includes status reported by the agent.
- detail str
Optional. Output only. Details of cluster's state.
- state str
The cluster's state.
- state_start_time str
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate str
Additional state information that includes status reported by the agent.
- detail String
Optional. Output only. Details of cluster's state.
- state String
The cluster's state.
- stateStartTime String
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
Additional state information that includes status reported by the agent.
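For example, statusHistory entries (each a ClusterStatusResponse) can be flattened into readable transition strings; this TypeScript sketch reuses the placeholder lookup arguments from earlier:

```typescript
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster", // placeholder
    project: "example-project",
    region: "us-central1",
});

// Map each prior status to "<stateStartTime>: <state>".
export const transitions = cluster.statusHistory.apply(history =>
    (history ?? []).map(s => `${s.stateStartTime}: ${s.state}`));
```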
ConfidentialInstanceConfigResponse
- EnableConfidentialCompute bool
Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool
Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean
Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean
Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool
Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean
Optional. Defines whether the instance should have confidential compute enabled.
DataprocMetricConfigResponse
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.MetricResponse>
Metrics sources to enable.
- Metrics []MetricResponse
Metrics sources to enable.
- metrics List<MetricResponse>
Metrics sources to enable.
- metrics MetricResponse[]
Metrics sources to enable.
- metrics Sequence[MetricResponse]
Metrics sources to enable.
- metrics List<Property Map>
Metrics sources to enable.
DiskConfigResponse
- Boot
Disk intSize Gb Optional. Size in GB of the boot disk (default is 500GB).
- Boot
Disk stringType Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Local
Ssd stringInterface Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- Num
Local intSsds Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.Note: Local SSD options may vary by machine type and number of vCPUs selected.
- Boot
Disk intSize Gb Optional. Size in GB of the boot disk (default is 500GB).
- Boot
Disk Type string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer
Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number
Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int
Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number
Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
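For example, the disk settings above can be read straight off a getCluster result. A minimal TypeScript sketch, assuming a Compute Engine based cluster whose DiskConfigResponse hangs off config.masterConfig (the master instance group config, which is not documented in this section); the cluster coordinates are placeholders:

import * as google from "@pulumi/google-native";

// Placeholder coordinates; substitute your own cluster.
const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster",
    project: "example-project",
    region: "us-central1",
});

// Dataproc fills defaults, so bootDiskSizeGb should read 500 unless overridden.
export const masterDisk = cluster.config.apply(c => ({
    bootDiskSizeGb: c.masterConfig?.diskConfig?.bootDiskSizeGb,
    bootDiskType: c.masterConfig?.diskConfig?.bootDiskType,
    numLocalSsds: c.masterConfig?.diskConfig?.numLocalSsds,
}));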
EncryptionConfigResponse
- GcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string
Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string
Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String
Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string
Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str
Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String
Optional. The Cloud KMS key name to use for encrypting customer core content and cluster PD disk for all instances in the cluster.
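A minimal sketch of checking the encryption settings above; an unset gcePdKmsKeyName is taken here to mean Google-managed encryption. Placeholder names throughout:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// Report the CMEK key protecting persistent disks, if one was configured.
export const pdCmek = cluster.config.apply(
    c => c.encryptionConfig?.gcePdKmsKeyName ?? "google-managed");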
EndpointConfigResponse
- EnableHttpPortAccess bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts Dictionary<string, string>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- EnableHttpPortAccess bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts map[string]string
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String,String>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts {[key: string]: string}
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_ports Mapping[str, str]
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
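The two endpoint fields pair naturally: a sketch that exports the component gateway URLs, falling back to an empty map for clusters where enableHttpPortAccess was never set. Placeholder names throughout:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// httpPorts is only populated when enableHttpPortAccess was true at creation.
export const componentGatewayUrls = cluster.config.apply(
    c => c.endpointConfig?.httpPorts ?? {});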
GceClusterConfigResponse
- ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.
- NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess string
Optional. The type of IPv6 access for a cluster.
- ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinityResponse
Optional. Reservation Affinity for consuming zonal reservations.
- ServiceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes List<string>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.
- ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.
- Tags List<string>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone].
- ConfidentialInstanceConfig ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.
- NodeGroupAffinity NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess string
Optional. The type of IPv6 access for a cluster.
- ReservationAffinity ReservationAffinityResponse
Optional. Reservation Affinity for consuming zonal reservations.
- ServiceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes []string
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.
- ShieldedInstanceConfig ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.
- Tags []string
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone].
- confidentialInstanceConfig ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.
- nodeGroupAffinity NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse
Optional. Reservation Affinity for consuming zonal reservations.
- serviceAccount String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.
- shieldedInstanceConfig ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.
- tags List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone].
- confidentialInstanceConfig ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.
- nodeGroupAffinity NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess string
Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse
Optional. Reservation Affinity for consuming zonal reservations.
- serviceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.
- shieldedInstanceConfig ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.
- tags string[]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone].
- confidential_instance_config ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.
- node_group_affinity NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access str
Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinityResponse
Optional. Reservation Affinity for consuming zonal reservations.
- service_account str
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.
- shielded_instance_config ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.
- tags Sequence[str]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone].
- confidentialInstanceConfig Property Map
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default, projects/[project_id]/global/networks/default, default.
- nodeGroupAffinity Property Map
Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map
Optional. Reservation Affinity for consuming zonal reservations.
- serviceAccount String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control.
- shieldedInstanceConfig Property Map
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0, projects/[project_id]/regions/[region]/subnetworks/sub0, sub0.
- tags List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], [zone].
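A sketch of inspecting the networking fields above. Since networkUri and subnetworkUri are mutually exclusive on input, it reports whichever one the service recorded; names are placeholders:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// zoneUri is always present on a get request, per the description above.
export const placement = cluster.config.apply(c => ({
    network: c.gceClusterConfig?.subnetworkUri ?? c.gceClusterConfig?.networkUri,
    internalIpOnly: c.gceClusterConfig?.internalIpOnly,
    serviceAccount: c.gceClusterConfig?.serviceAccount,
    zone: c.gceClusterConfig?.zoneUri,
}));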
GkeClusterConfigResponse
- GkeClusterTarget string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTargetResponse
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTargetResponse>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTargetResponse[]
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTargetResponse]
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
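For a Dataproc-on-GKE virtual cluster, the config above is assumed to sit under virtualClusterConfig.kubernetesClusterConfig in the result (a plain Compute Engine cluster yields nothing here). A hedged sketch that lists each node pool target and its roles:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-gke-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// Each target carries the pool's resource name plus its assigned roles.
export const nodePoolRoles = cluster.virtualClusterConfig.apply(vc =>
    (vc?.kubernetesClusterConfig?.gkeClusterConfig?.nodePoolTarget ?? [])
        .map(t => ({ pool: t.nodePool, roles: t.roles })));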
GkeNodeConfigResponse
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Accelerators []GkeNodePoolAcceleratorConfigResponse
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<GkeNodePoolAcceleratorConfigResponse>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Integer
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators GkeNodePoolAcceleratorConfigResponse[]
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount number
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators Sequence[GkeNodePoolAcceleratorConfigResponse]
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- boot_disk_kms_key str
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- local_ssd_count int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machine_type str
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- min_cpu_platform str
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<Property Map>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Number
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
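A sketch that summarizes the node shape of the first pool target using the fields above; the path through virtualClusterConfig is the same assumption as in the previous sketch:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-gke-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// GkeNodeConfigResponse sits at nodePoolConfig.config on each pool target.
export const firstPoolNodes = cluster.virtualClusterConfig.apply(vc => {
    const cfg = vc?.kubernetesClusterConfig?.gkeClusterConfig
        ?.nodePoolTarget?.[0]?.nodePoolConfig?.config;
    return cfg && {
        machineType: cfg.machineType,
        spot: cfg.spot,
        preemptible: cfg.preemptible,
        localSsdCount: cfg.localSsdCount,
    };
});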
GkeNodePoolAcceleratorConfigResponse
- AcceleratorCount string
The number of accelerator cards exposed to an instance.
- AcceleratorType string
The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- AcceleratorCount string
The number of accelerator cards exposed to an instance.
- AcceleratorType string
The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
The number of accelerator cards exposed to an instance.
- acceleratorType String
The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount string
The number of accelerator cards exposed to an instance.
- acceleratorType string
The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- accelerator_count str
The number of accelerator cards exposed to an instance.
- accelerator_type str
The accelerator type resource name (see GPUs on Compute Engine).
- gpu_partition_size str
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
The number of accelerator cards exposed to an instance.
- acceleratorType String
The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
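Note that acceleratorCount is surfaced as a string, so arithmetic needs a parse first. A sketch under the same virtual-cluster path assumption as above:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-gke-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// Sum the accelerator cards configured on the first pool target's nodes.
export const gpusPerNode = cluster.virtualClusterConfig.apply(vc =>
    (vc?.kubernetesClusterConfig?.gkeClusterConfig?.nodePoolTarget?.[0]
        ?.nodePoolConfig?.config?.accelerators ?? [])
        .reduce((total, a) => total + parseInt(a.acceleratorCount ?? "0", 10), 0));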
GkeNodePoolAutoscalingConfigResponse
- MaxNodeCount int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- MaxNodeCount int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Integer
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Integer
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount number
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount number
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- max_node_count int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- min_node_count int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Number
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Number
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
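A sketch that surfaces the autoscaler bounds above (recall 0 <= min_node_count <= max_node_count and max_node_count > 0); same path assumption as the previous sketches:

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-gke-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// The autoscaler is only active when this configuration is present at all.
export const autoscalerBounds = cluster.virtualClusterConfig.apply(vc => {
    const a = vc?.kubernetesClusterConfig?.gkeClusterConfig
        ?.nodePoolTarget?.[0]?.nodePoolConfig?.autoscaling;
    return a && { min: a.minNodeCount, max: a.maxNodeCount };
});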
GkeNodePoolConfigResponse
- Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse
Optional. The node pool configuration.
- Locations List<string>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- Autoscaling GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config GkeNodeConfigResponse
Optional. The node pool configuration.
- Locations []string
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfigResponse
Optional. The node pool configuration.
- locations List<String>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfigResponse
Optional. The node pool configuration.
- locations string[]
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfigResponse
Optional. The node pool configuration.
- locations Sequence[str]
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling Property Map
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config Property Map
Optional. The node pool configuration.
- locations List<String>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
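Since every pool of a virtual cluster must sit in the cluster's region, a sketch that collects the zones each pool landed in (locations may be service-chosen when omitted; the path assumption is as above):

import * as google from "@pulumi/google-native";

const cluster = google.dataproc.v1.getClusterOutput({
    clusterName: "example-gke-cluster",   // placeholder values
    project: "example-project",
    region: "us-central1",
});

// One zone list per pool target; empty when Dataproc on GKE chose the zone.
export const poolZones = cluster.virtualClusterConfig.apply(vc =>
    (vc?.kubernetesClusterConfig?.gkeClusterConfig?.nodePoolTarget ?? [])
        .map(t => t.nodePoolConfig?.locations ?? []));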
GkeNodePoolTargetResponse
- Node Pool string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- Node Pool Config Pulumi.Google Native. Dataproc. V1. Inputs. Gke Node Pool Config Response
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- Roles List<string>
The roles associated with the GKE node pool.
- Node Pool string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- Node Pool Config Gke Node Pool Config Response
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- Roles []string
The roles associated with the GKE node pool.
- node Pool String
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- node Pool Config Gke Node Pool Config Response
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles List<String>
The roles associated with the GKE node pool.
- node Pool string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- node Pool Config Gke Node Pool Config Response
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles string[]
The roles associated with the GKE node pool.
- node_pool str
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- node_pool_config Gke Node Pool Config Response
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles Sequence[str]
The roles associated with the GKE node pool.
- node Pool String
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- node Pool Config Property Map
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles List<String>
The roles associated with the GKE node pool.
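A minimal TypeScript sketch (placeholder cluster, project, and region) that lists each node pool target and its roles; the role values in the comment are examples of enum names, not output of this program:
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

// Roles are enum names such as DEFAULT, CONTROLLER, SPARK_DRIVER, or SPARK_EXECUTOR.
export const nodePoolRoles = cluster.apply(c =>
    c.virtualClusterConfig?.kubernetesClusterConfig?.gkeClusterConfig?.nodePoolTarget?.map(t => ({
        nodePool: t.nodePool,
        roles: t.roles,
    })));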
IdentityConfigResponse
- User Service Account Mapping Dictionary<string, string>
Map of user to service account.
- User Service Account Mapping map[string]string
Map of user to service account.
- user Service Account Mapping Map<String,String>
Map of user to service account.
- user Service Account Mapping {[key: string]: string}
Map of user to service account.
- user_service_account_mapping Mapping[str, str]
Map of user to service account.
- user Service Account Mapping Map<String>
Map of user to service account.
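This map is only populated on clusters created with service-account-based secure multi-tenancy. A minimal TypeScript sketch (placeholder names; the map entry in the comment is illustrative) for reading it:
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

// A map such as { "alice@example.com": "alice-sa@my-project.iam.gserviceaccount.com" }.
export const userServiceAccountMapping = cluster.apply(c =>
    c.config?.securityConfig?.identityConfig?.userServiceAccountMapping);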
InstanceGroupConfigResponse
- Accelerators List<Pulumi.Google Native. Dataproc. V1. Inputs. Accelerator Config Response>
Optional. The Compute Engine accelerator configuration for these instances.
- Disk Config Pulumi.Google Native. Dataproc. V1. Inputs. Disk Config Response
Optional. Disk option config settings.
- Image Uri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Instance Names List<string>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Instance References List<Pulumi.Google Native. Dataproc. V1. Inputs. Instance Reference Response>
List of references to Compute Engine instances.
- Is Preemptible bool
Specifies that this instance group contains preemptible instances.
- Machine Type Uri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Managed Group Config Pulumi.Google Native. Dataproc. V1. Inputs. Managed Group Config Response
The config for the Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- Min Cpu Platform string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num Instances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- Accelerators []Accelerator Config Response
Optional. The Compute Engine accelerator configuration for these instances.
- Disk Config Disk Config Response
Optional. Disk option config settings.
- Image Uri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Instance Names []string
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Instance References []Instance Reference Response
List of references to Compute Engine instances.
- Is Preemptible bool
Specifies that this instance group contains preemptible instances.
- Machine Type Uri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Managed Group Config Managed Group Config Response
The config for the Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- Min Cpu Platform string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num Instances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- accelerators List<Accelerator Config Response>
Optional. The Compute Engine accelerator configuration for these instances.
- disk Config Disk Config Response
Optional. Disk option config settings.
- image Uri String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance Names List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance References List<Instance Reference Response>
List of references to Compute Engine instances.
- is Preemptible Boolean
Specifies that this instance group contains preemptible instances.
- machine Type Uri String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed Group Config Managed Group Config Response
The config for the Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min Cpu Platform String
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num Instances Integer
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- accelerators Accelerator Config Response[]
Optional. The Compute Engine accelerator configuration for these instances.
- disk Config Disk Config Response
Optional. Disk option config settings.
- image Uri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance Names string[]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance References Instance Reference Response[]
List of references to Compute Engine instances.
- is Preemptible boolean
Specifies that this instance group contains preemptible instances.
- machine Type Uri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed Group Config Managed Group Config Response
The config for the Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min Cpu Platform string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num Instances number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- accelerators Sequence[Accelerator Config Response]
Optional. The Compute Engine accelerator configuration for these instances.
- disk_config Disk Config Response
Optional. Disk option config settings.
- image_uri str
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_names Sequence[str]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance_references Sequence[Instance Reference Response]
List of references to Compute Engine instances.
- is_preemptible bool
Specifies that this instance group contains preemptible instances.
- machine_type_uri str
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed_group_config Managed Group Config Response
The config for the Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min_cpu_platform str
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num_instances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility str
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- accelerators List<Property Map>
Optional. The Compute Engine accelerator configuration for these instances.
- disk Config Property Map
Optional. Disk option config settings.
- image Uri String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance Names List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance References List<Property Map>
List of references to Compute Engine instances.
- is Preemptible Boolean
Specifies that this instance group contains preemptible instances.
- machine Type Uri String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed Group Config Property Map
The config for the Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min Cpu Platform String
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num Instances Number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
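For example, this TypeScript sketch (promise form; cluster, project, and region are placeholders) summarizes the master group's shape from these fields:
import * as google_native from "@pulumi/google-native";

export async function describeMasterGroup() {
    const cluster = await google_native.dataproc.v1.getCluster({
        clusterName: "my-cluster",  // placeholder
        project: "my-project",      // placeholder
        region: "us-central1",      // placeholder
    });
    const master = cluster.config?.masterConfig;
    return {
        machineType: master?.machineTypeUri,     // e.g. a short name such as n1-standard-2
        numInstances: master?.numInstances,      // 3 for HA clusters, 1 otherwise
        preemptibility: master?.preemptibility,  // NON_PREEMPTIBLE for master groups
        instanceNames: master?.instanceNames,
    };
}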
InstanceReferenceResponse
- Instance Id string
The unique identifier of the Compute Engine instance.
- Instance Name string
The user-friendly name of the Compute Engine instance.
- Public Ecies Key string
The public ECIES key used for sharing data with this instance.
- Public Key string
The public RSA key used for sharing data with this instance.
- Instance Id string
The unique identifier of the Compute Engine instance.
- Instance Name string
The user-friendly name of the Compute Engine instance.
- Public Ecies Key string
The public ECIES key used for sharing data with this instance.
- Public Key string
The public RSA key used for sharing data with this instance.
- instance Id String
The unique identifier of the Compute Engine instance.
- instance Name String
The user-friendly name of the Compute Engine instance.
- public Ecies Key String
The public ECIES key used for sharing data with this instance.
- public Key String
The public RSA key used for sharing data with this instance.
- instance Id string
The unique identifier of the Compute Engine instance.
- instance Name string
The user-friendly name of the Compute Engine instance.
- public Ecies Key string
The public ECIES key used for sharing data with this instance.
- public Key string
The public RSA key used for sharing data with this instance.
- instance_id str
The unique identifier of the Compute Engine instance.
- instance_name str
The user-friendly name of the Compute Engine instance.
- public_ecies_key str
The public ECIES key used for sharing data with this instance.
- public_key str
The public RSA key used for sharing data with this instance.
- instance Id String
The unique identifier of the Compute Engine instance.
- instance Name String
The user-friendly name of the Compute Engine instance.
- public Ecies Key String
The public ECIES key used for sharing data with this instance.
- public Key String
The public RSA key used for sharing data with this instance.
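A short TypeScript sketch (placeholder names) that collects these references for the worker group:
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

// One entry per worker VM: its name, unique ID, and public RSA key.
export const workerInstances = cluster.apply(c =>
    c.config?.workerConfig?.instanceReferences?.map(r => ({
        name: r.instanceName,
        id: r.instanceId,
        publicKey: r.publicKey,
    })));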
KerberosConfigResponse
- Cross Realm Trust Admin Server string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross Realm Trust Kdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross Realm Trust Realm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- Cross Realm Trust Shared Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable Kerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc Db Key Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore Uri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms Key Uri string
Optional. The URI of the KMS key used to encrypt various sensitive files.
- Realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root Principal Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt Lifetime Hours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- Truststore Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore Uri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Cross Realm Trust Admin Server string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross Realm Trust Kdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross Realm Trust Realm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- Cross Realm Trust Shared Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable Kerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc Db Key Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore Uri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms Key Uri string
Optional. The URI of the KMS key used to encrypt various sensitive files.
- Realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root Principal Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt Lifetime Hours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- Truststore Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore Uri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross Realm Trust Admin Server String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross Realm Trust Kdc String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross Realm Trust Realm String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross Realm Trust Shared Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable Kerberos Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc Db Key Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore Uri String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms Key Uri String
Optional. The URI of the KMS key used to encrypt various sensitive files.
- realm String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root Principal Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt Lifetime Hours Integer
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- truststore Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore Uri String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross Realm Trust Admin Server string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross Realm Trust Kdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross Realm Trust Realm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross Realm Trust Shared Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable Kerberos boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc Db Key Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore Uri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms Key Uri string
Optional. The URI of the KMS key used to encrypt various sensitive files.
- realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root Principal Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt Lifetime Hours number
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- truststore Password Uri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore Uri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_realm_trust_admin_server str
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_kdc str
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_realm str
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_realm_trust_shared_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_kerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_db_key_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_uri str
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_key_uri str
Optional. The URI of the KMS key used to encrypt various sensitive files.
- realm str
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_principal_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_lifetime_hours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- truststore_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_uri str
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross Realm Trust Admin Server String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross Realm Trust Kdc String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross Realm Trust Realm String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross Realm Trust Shared Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable Kerberos Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc Db Key Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore Uri String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms Key Uri String
Optional. The URI of the KMS key used to encrypt various sensitive files.
- realm String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root Principal Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt Lifetime Hours Number
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- truststore Password Uri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore Uri String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
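A minimal TypeScript sketch (placeholder names) that checks whether Kerberos is enabled and reads a few of these settings; note that the Kerberos config lives under config.securityConfig in the getCluster result:
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

export const kerberosSummary = cluster.apply(c => {
    const k = c.config?.securityConfig?.kerberosConfig;
    return {
        enabled: k?.enableKerberos,
        realm: k?.realm,
        kmsKeyUri: k?.kmsKeyUri,
        tgtLifetimeHours: k?.tgtLifetimeHours,
    };
});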
KubernetesClusterConfigResponse
- Gke Cluster Config Pulumi.Google Native. Dataproc. V1. Inputs. Gke Cluster Config Response
The configuration for running the Dataproc cluster on GKE.
- Kubernetes Namespace string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- Kubernetes Software Config Pulumi.Google Native. Dataproc. V1. Inputs. Kubernetes Software Config Response
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- Gke Cluster Config Gke Cluster Config Response
The configuration for running the Dataproc cluster on GKE.
- Kubernetes Namespace string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- Kubernetes Software Config Kubernetes Software Config Response
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gke Cluster Config Gke Cluster Config Response
The configuration for running the Dataproc cluster on GKE.
- kubernetes Namespace String
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetes Software Config Kubernetes Software Config Response
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gke Cluster Config Gke Cluster Config Response
The configuration for running the Dataproc cluster on GKE.
- kubernetes Namespace string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetes Software Config Kubernetes Software Config Response
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gke_cluster_config Gke Cluster Config Response
The configuration for running the Dataproc cluster on GKE.
- kubernetes_namespace str
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetes_software_config Kubernetes Software Config Response
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gke Cluster Config Property Map
The configuration for running the Dataproc cluster on GKE.
- kubernetes Namespace String
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetes Software Config Property Map
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
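For a virtual cluster these fields hang off virtualClusterConfig in the getCluster result, as in this TypeScript sketch (placeholder names):
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

export const gkeDeployment = cluster.apply(c => {
    const k8s = c.virtualClusterConfig?.kubernetesClusterConfig;
    return {
        gkeClusterTarget: k8s?.gkeClusterConfig?.gkeClusterTarget,
        namespace: k8s?.kubernetesNamespace,  // defaults to the Dataproc cluster name
    };
});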
KubernetesSoftwareConfigResponse
- Component Version Dictionary<string, string>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- Properties Dictionary<string, string>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- Component Version map[string]string
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- Properties map[string]string
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- component Version Map<String,String>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Map<String,String>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- component Version {[key: string]: string}
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties {[key: string]: string}
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- component_version Mapping[str, str]
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Mapping[str, str]
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- component Version Map<String>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Map<String>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
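A TypeScript sketch (placeholder names; the spark property key in the code is the documented example above) that reads the component versions and daemon properties:
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

export const k8sSoftware = cluster.apply(c => {
    const sw = c.virtualClusterConfig?.kubernetesClusterConfig?.kubernetesSoftwareConfig;
    return {
        componentVersion: sw?.componentVersion,  // keys come from the KubernetesComponent enumeration
        containerImage: sw?.properties?.["spark:spark.kubernetes.container.image"],
    };
});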
LifecycleConfigResponse
- Auto Delete Time string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Auto Delete Ttl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle Delete Ttl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle Start Time string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Auto Delete Time string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Auto Delete Ttl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle Delete Ttl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle Start Time string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto Delete Time String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto Delete Ttl String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle Delete Ttl String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle Start Time String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto Delete Time string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto Delete Ttl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle Delete Ttl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle Start Time string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_time str
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_ttl str
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_delete_ttl str
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_start_time str
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto Delete Time String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto Delete Ttl String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle Delete Ttl String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle Start Time String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
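Durations come back as proto3 JSON strings such as "1800s" and timestamps as RFC 3339 strings. A TypeScript sketch (placeholder names):
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

export const lifecycle = cluster.apply(c => ({
    idleDeleteTtl: c.config?.lifecycleConfig?.idleDeleteTtl,    // e.g. "1800s"
    autoDeleteTime: c.config?.lifecycleConfig?.autoDeleteTime,  // RFC 3339 timestamp
    idleStartTime: c.config?.lifecycleConfig?.idleStartTime,
}));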
ManagedGroupConfigResponse
- Instance Group Manager Name string
The name of the Instance Group Manager for this group.
- Instance Template Name string
The name of the Instance Template used for the Managed Instance Group.
- Instance Group Manager Name string
The name of the Instance Group Manager for this group.
- Instance Template Name string
The name of the Instance Template used for the Managed Instance Group.
- instance Group Manager Name String
The name of the Instance Group Manager for this group.
- instance Template Name String
The name of the Instance Template used for the Managed Instance Group.
- instance Group Manager Name string
The name of the Instance Group Manager for this group.
- instance Template Name string
The name of the Instance Template used for the Managed Instance Group.
- instance_group_manager_name str
The name of the Instance Group Manager for this group.
- instance_template_name str
The name of the Instance Template used for the Managed Instance Group.
- instance Group Manager Name String
The name of the Instance Group Manager for this group.
- instance Template Name String
The name of the Instance Template used for the Managed Instance Group.
MetastoreConfigResponse
- Dataproc Metastore Service string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- Dataproc Metastore Service string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc Metastore Service String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc Metastore Service string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_metastore_service str
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc Metastore Service String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
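A one-line read of the attached metastore service in TypeScript (cluster, project, and region are placeholders):
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1.getClusterOutput({
    clusterName: "my-cluster",  // placeholder
    project: "my-project",      // placeholder
    region: "us-central1",      // placeholder
});

// The full resource name, in the projects/.../locations/.../services/... format shown above.
export const metastoreService = cluster.apply(c =>
    c.config?.metastoreConfig?.dataprocMetastoreService);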
MetricResponse
- Metric
Overrides List<string> Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric course (for the SPARK metric source (any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified).Provide metrics in the following format: METRIC_SOURCE: INSTANCE:GROUP:METRIC Use camelcase as appropriate.Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK andd YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- Metric
Source string A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- Metric
Overrides []string Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric course (for the SPARK metric source (any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified).Provide metrics in the following format: METRIC_SOURCE: INSTANCE:GROUP:METRIC Use camelcase as appropriate.Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK andd YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- Metric
Source string A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric
Overrides List<String> Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric course (for the SPARK metric source (any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified).Provide metrics in the following format: METRIC_SOURCE: INSTANCE:GROUP:METRIC Use camelcase as appropriate.Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK andd YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metric
Source String A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric
Overrides string[] Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric course (for the SPARK metric source (any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified).Provide metrics in the following format: METRIC_SOURCE: INSTANCE:GROUP:METRIC Use camelcase as appropriate.Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executive metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK andd YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metric
Source string A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric_
overrides Sequence[str] Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metric_
source str A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric
Overrides List<String> Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metric
Source String A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
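For illustration only, a minimal TypeScript sketch of how these metric fields might be read from a getCluster result. The cluster coordinates are placeholders, and the config.dataprocMetricConfig path is assumed from the response shape documented on this page.
import * as googleNative from "@pulumi/google-native";

// Placeholder cluster coordinates; replace with real values.
const cluster = googleNative.dataproc.v1.getClusterOutput({
    clusterName: "example-cluster",
    project: "example-project",
    region: "us-central1",
});

// Override strings follow METRIC_SOURCE:INSTANCE:GROUP:METRIC, for example:
const exampleOverrides = [
    "spark:driver:DAGScheduler:job.allJobs",
    "yarn:ResourceManager:QueueMetrics:AppsCompleted",
];

// Overrides configured on the cluster (property path assumed from this reference).
export const configuredOverrides = cluster.config.apply(
    c => c?.dataprocMetricConfig?.metrics?.map(m => m.metricOverrides));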
NamespacedGkeDeploymentTargetResponse
- Cluster
Namespace string Optional. A namespace within the GKE cluster to deploy into.
- Target
Gke stringCluster Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- Cluster
Namespace string Optional. A namespace within the GKE cluster to deploy into.
- Target
Gke stringCluster Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster
Namespace String Optional. A namespace within the GKE cluster to deploy into.
- target
Gke StringCluster Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster
Namespace string Optional. A namespace within the GKE cluster to deploy into.
- target
Gke stringCluster Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_
namespace str Optional. A namespace within the GKE cluster to deploy into.
- target_
gke_ strcluster Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster
Namespace String Optional. A namespace within the GKE cluster to deploy into.
- target
Gke StringCluster Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NodeGroupAffinityResponse
- Node
Group stringUri The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- Node
Group stringUri The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- node
Group StringUri The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- node
Group stringUri The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- node_
group_ struri The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- node
Group StringUri The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
NodeGroupResponse
- Labels Dictionary<string, string>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- Name string
The Node group resource name (https://aip.dev/122).
- Node
Group Pulumi.Config Google Native. Dataproc. V1. Inputs. Instance Group Config Response Optional. The node group instance group configuration.
- Roles List<string>
Node group roles.
- Labels map[string]string
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- Name string
The Node group resource name (https://aip.dev/122).
- Node
Group InstanceConfig Group Config Response Optional. The node group instance group configuration.
- Roles []string
Node group roles.
- labels Map<String,String>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name String
The Node group resource name (https://aip.dev/122).
- node
Group InstanceConfig Group Config Response Optional. The node group instance group configuration.
- roles List<String>
Node group roles.
- labels {[key: string]: string}
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name string
The Node group resource name (https://aip.dev/122).
- node
Group InstanceConfig Group Config Response Optional. The node group instance group configuration.
- roles string[]
Node group roles.
- labels Mapping[str, str]
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name str
The Node group resource name (https://aip.dev/122).
- node_
group_ Instanceconfig Group Config Response Optional. The node group instance group configuration.
- roles Sequence[str]
Node group roles.
- labels Map<String>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name String
The Node group resource name (https://aip.dev/122).
- node
Group Property MapConfig Optional. The node group instance group configuration.
- roles List<String>
Node group roles.
NodeInitializationActionResponse
- Executable
File string Cloud Storage URI of executable file.
- Execution
Timeout string Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
- Executable
File string Cloud Storage URI of executable file.
- Execution
Timeout string Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
- executable
File String Cloud Storage URI of executable file.
- execution
Timeout String Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
- executable
File string Cloud Storage URI of executable file.
- execution
Timeout string Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
- executable_
file str Cloud Storage URI of executable file.
- execution_
timeout str Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
- executable
File String Cloud Storage URI of executable file.
- execution
Timeout String Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
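As a rough TypeScript sketch of the shape described above (the script URI and timeout value are placeholders), an initialization action pairs a Cloud Storage executable with a JSON-encoded Duration timeout:
// Hypothetical initialization action; values are illustrative only.
const initAction = {
    executableFile: "gs://example-bucket/scripts/bootstrap.sh",
    // Duration in its JSON representation, e.g. "600s" for the 10-minute default.
    executionTimeout: "600s",
};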
ReservationAffinityResponse
- Consume
Reservation stringType Optional. Type of reservation to consume
- Key string
Optional. Corresponds to the label key of reservation resource.
- Values List<string>
Optional. Corresponds to the label values of reservation resource.
- Consume
Reservation stringType Optional. Type of reservation to consume
- Key string
Optional. Corresponds to the label key of reservation resource.
- Values []string
Optional. Corresponds to the label values of reservation resource.
- consume
Reservation StringType Optional. Type of reservation to consume
- key String
Optional. Corresponds to the label key of reservation resource.
- values List<String>
Optional. Corresponds to the label values of reservation resource.
- consume
Reservation stringType Optional. Type of reservation to consume
- key string
Optional. Corresponds to the label key of reservation resource.
- values string[]
Optional. Corresponds to the label values of reservation resource.
- consume_
reservation_ strtype Optional. Type of reservation to consume
- key str
Optional. Corresponds to the label key of reservation resource.
- values Sequence[str]
Optional. Corresponds to the label values of reservation resource.
- consume
Reservation StringType Optional. Type of reservation to consume
- key String
Optional. Corresponds to the label key of reservation resource.
- values List<String>
Optional. Corresponds to the label values of reservation resource.
SecurityConfigResponse
- Identity
Config Pulumi.Google Native. Dataproc. V1. Inputs. Identity Config Response Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- Kerberos
Config Pulumi.Google Native. Dataproc. V1. Inputs. Kerberos Config Response Optional. Kerberos related configuration.
- Identity
Config IdentityConfig Response Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- Kerberos
Config KerberosConfig Response Optional. Kerberos related configuration.
- identity
Config IdentityConfig Response Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberos
Config KerberosConfig Response Optional. Kerberos related configuration.
- identity
Config IdentityConfig Response Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberos
Config KerberosConfig Response Optional. Kerberos related configuration.
- identity_
config IdentityConfig Response Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberos_
config KerberosConfig Response Optional. Kerberos related configuration.
- identity
Config Property Map Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberos
Config Property Map Optional. Kerberos related configuration.
ShieldedInstanceConfigResponse
- Enable
Integrity boolMonitoring Optional. Defines whether instances have integrity monitoring enabled.
- Enable
Secure boolBoot Optional. Defines whether instances have Secure Boot enabled.
- Enable
Vtpm bool Optional. Defines whether instances have the vTPM enabled.
- Enable
Integrity boolMonitoring Optional. Defines whether instances have integrity monitoring enabled.
- Enable
Secure boolBoot Optional. Defines whether instances have Secure Boot enabled.
- Enable
Vtpm bool Optional. Defines whether instances have the vTPM enabled.
- enable
Integrity BooleanMonitoring Optional. Defines whether instances have integrity monitoring enabled.
- enable
Secure BooleanBoot Optional. Defines whether instances have Secure Boot enabled.
- enable
Vtpm Boolean Optional. Defines whether instances have the vTPM enabled.
- enable
Integrity booleanMonitoring Optional. Defines whether instances have integrity monitoring enabled.
- enable
Secure booleanBoot Optional. Defines whether instances have Secure Boot enabled.
- enable
Vtpm boolean Optional. Defines whether instances have the vTPM enabled.
- enable_
integrity_ boolmonitoring Optional. Defines whether instances have integrity monitoring enabled.
- enable_
secure_ boolboot Optional. Defines whether instances have Secure Boot enabled.
- enable_
vtpm bool Optional. Defines whether instances have the vTPM enabled.
- enable
Integrity BooleanMonitoring Optional. Defines whether instances have integrity monitoring enabled.
- enable
Secure BooleanBoot Optional. Defines whether instances have Secure Boot enabled.
- enable
Vtpm Boolean Optional. Defines whether instances have the vTPM enabled.
SoftwareConfigResponse
- Image
Version string Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- Optional
Components List<string> Optional. The set of components to activate on the cluster.
- Properties Dictionary<string, string>
Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- Image
Version string Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- Optional
Components []string Optional. The set of components to activate on the cluster.
- Properties map[string]string
Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image
Version String Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional
Components List<String> Optional. The set of components to activate on the cluster.
- properties Map<String,String>
Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image
Version string Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional
Components string[] Optional. The set of components to activate on the cluster.
- properties {[key: string]: string}
Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_
version str Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_
components Sequence[str] Optional. The set of components to activate on the cluster.
- properties Mapping[str, str]
Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image
Version String Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional
Components List<String> Optional. The set of components to activate on the cluster.
- properties Map<String>
Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
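A small TypeScript sketch of the prefix:property key form described above; the image version, component, and property values are illustrative assumptions, not recommendations:
// Hypothetical software config; all values are placeholders.
const softwareConfig = {
    imageVersion: "2.1",
    optionalComponents: ["JUPYTER"],
    properties: {
        "core:hadoop.tmp.dir": "/tmp/hadoop",
        "spark:spark.executor.memory": "4g",
        "yarn:yarn.nodemanager.resource.memory-mb": "8192",
    },
};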
SparkHistoryServerConfigResponse
- Dataproc
Cluster string Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- Dataproc
Cluster string Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc
Cluster String Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc
Cluster string Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_
cluster str Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc
Cluster String Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
VirtualClusterConfigResponse
- Auxiliary
Services Pulumi.Config Google Native. Dataproc. V1. Inputs. Auxiliary Services Config Response Optional. Configuration of auxiliary services used by this cluster.
- Kubernetes
Cluster Pulumi.Config Google Native. Dataproc. V1. Inputs. Kubernetes Cluster Config Response The configuration for running the Dataproc cluster on Kubernetes.
- Staging
Bucket string Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- Auxiliary
Services AuxiliaryConfig Services Config Response Optional. Configuration of auxiliary services used by this cluster.
- Kubernetes
Cluster KubernetesConfig Cluster Config Response The configuration for running the Dataproc cluster on Kubernetes.
- Staging
Bucket string Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliary
Services AuxiliaryConfig Services Config Response Optional. Configuration of auxiliary services used by this cluster.
- kubernetes
Cluster KubernetesConfig Cluster Config Response The configuration for running the Dataproc cluster on Kubernetes.
- staging
Bucket String Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliary
Services AuxiliaryConfig Services Config Response Optional. Configuration of auxiliary services used by this cluster.
- kubernetes
Cluster KubernetesConfig Cluster Config Response The configuration for running the Dataproc cluster on Kubernetes.
- staging
Bucket string Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliary_
services_ Auxiliaryconfig Services Config Response Optional. Configuration of auxiliary services used by this cluster.
- kubernetes_
cluster_ Kubernetesconfig Cluster Config Response The configuration for running the Dataproc cluster on Kubernetes.
- staging_
bucket str Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliary
Services Property MapConfig Optional. Configuration of auxiliary services used by this cluster.
- kubernetes
Cluster Property MapConfig The configuration for running the Dataproc cluster on Kubernetes.
- staging
Bucket String Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
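To make the staging-bucket format concrete (bucket name is a placeholder), the field takes a bare Cloud Storage bucket name rather than a gs:// URI:
// Accepted: a plain bucket name.
const stagingBucket = "example-dataproc-staging-bucket";
// Not accepted: a gs:// URI such as "gs://example-dataproc-staging-bucket".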
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0