Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1.Cluster
Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). Auto-naming is currently not supported for this resource.
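For orientation before the generated reference below, here is a minimal, hedged sketch in TypeScript that creates a small Compute Engine based cluster; the project, region, zone, and machine types are placeholder values you would replace with your own.

import * as google_native from "@pulumi/google-native";

// Minimal illustrative cluster: one master and two workers (placeholder values throughout).
const basicCluster = new google_native.dataproc.v1.Cluster("basic-cluster", {
    project: "my-project-id",         // placeholder project ID
    region: "us-central1",            // placeholder region
    clusterName: "example-cluster",
    config: {
        gceClusterConfig: {
            zoneUri: "us-central1-a", // placeholder zone
        },
        masterConfig: {
            numInstances: 1,
            machineTypeUri: "n1-standard-4",
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
        },
    },
});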
Create Cluster Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Cluster(name: string, args: ClusterArgs, opts?: CustomResourceOptions);
@overload
def Cluster(resource_name: str,
            args: ClusterArgs,
            opts: Optional[ResourceOptions] = None)
@overload
def Cluster(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            cluster_name: Optional[str] = None,
            region: Optional[str] = None,
            action_on_failed_primary_workers: Optional[str] = None,
            config: Optional[ClusterConfigArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            project: Optional[str] = None,
            request_id: Optional[str] = None,
            virtual_cluster_config: Optional[VirtualClusterConfigArgs] = None)
func NewCluster(ctx *Context, name string, args ClusterArgs, opts ...ResourceOption) (*Cluster, error)
public Cluster(string name, ClusterArgs args, CustomResourceOptions? opts = null)
public Cluster(String name, ClusterArgs args)
public Cluster(String name, ClusterArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1:Cluster
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
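The opts/options parameter listed above is the standard Pulumi resource-options bag rather than anything specific to Dataproc. As a small hedged sketch in TypeScript, it can be used, for example, to protect a cluster from accidental deletion; the protect flag is a generic Pulumi resource option, and the property values are placeholders.

import * as google_native from "@pulumi/google-native";

// Resource options are passed as the third constructor argument.
const protectedCluster = new google_native.dataproc.v1.Cluster(
    "protected-cluster",
    {
        region: "us-central1",        // placeholder region
        clusterName: "protected-cluster",
    },
    { protect: true },                // Pulumi will refuse to delete this resource while protected
);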
Constructor example
The following reference example uses placeholder values for all input properties.
var exampleclusterResourceResourceFromDataprocv1 = new GoogleNative.Dataproc.V1.Cluster("exampleclusterResourceResourceFromDataprocv1", new()
{
ClusterName = "string",
Region = "string",
ActionOnFailedPrimaryWorkers = "string",
Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
{
AutoscalingConfig = new GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigArgs
{
PolicyUri = "string",
},
AuxiliaryNodeGroups = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupArgs
{
NodeGroup = new GoogleNative.Dataproc.V1.Inputs.NodeGroupArgs
{
Roles = new[]
{
GoogleNative.Dataproc.V1.NodeGroupRolesItem.RoleUnspecified,
},
Labels =
{
{ "string", "string" },
},
Name = "string",
NodeGroupConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
},
NodeGroupId = "string",
},
},
ConfigBucket = "string",
DataprocMetricConfig = new GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigArgs
{
Metrics = new[]
{
new GoogleNative.Dataproc.V1.Inputs.MetricArgs
{
MetricSource = GoogleNative.Dataproc.V1.MetricMetricSource.MetricSourceUnspecified,
MetricOverrides = new[]
{
"string",
},
},
},
},
EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.EncryptionConfigArgs
{
GcePdKmsKeyName = "string",
KmsKey = "string",
},
EndpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
{
EnableHttpPortAccess = false,
},
GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
ConfidentialInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigArgs
{
EnableConfidentialCompute = false,
},
InternalIpOnly = false,
Metadata =
{
{ "string", "string" },
},
NetworkUri = "string",
NodeGroupAffinity = new GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityArgs
{
NodeGroupUri = "string",
},
PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
ReservationAffinity = new GoogleNative.Dataproc.V1.Inputs.ReservationAffinityArgs
{
ConsumeReservationType = GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType.TypeUnspecified,
Key = "string",
Values = new[]
{
"string",
},
},
ServiceAccount = "string",
ServiceAccountScopes = new[]
{
"string",
},
ShieldedInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigArgs
{
EnableIntegrityMonitoring = false,
EnableSecureBoot = false,
EnableVtpm = false,
},
SubnetworkUri = "string",
Tags = new[]
{
"string",
},
ZoneUri = "string",
},
GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
{
GkeClusterTarget = "string",
NodePoolTarget = new[]
{
new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
{
NodePool = "string",
Roles = new[]
{
GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
},
NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
{
Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
{
MaxNodeCount = 0,
MinNodeCount = 0,
},
Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
{
AcceleratorCount = "string",
AcceleratorType = "string",
GpuPartitionSize = "string",
},
},
BootDiskKmsKey = "string",
LocalSsdCount = 0,
MachineType = "string",
MinCpuPlatform = "string",
Preemptible = false,
Spot = false,
},
Locations = new[]
{
"string",
},
},
},
},
},
InitializationActions = new[]
{
new GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionArgs
{
ExecutableFile = "string",
ExecutionTimeout = "string",
},
},
LifecycleConfig = new GoogleNative.Dataproc.V1.Inputs.LifecycleConfigArgs
{
AutoDeleteTime = "string",
AutoDeleteTtl = "string",
IdleDeleteTtl = "string",
},
MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
{
DataprocMetastoreService = "string",
},
SecondaryWorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
SecurityConfig = new GoogleNative.Dataproc.V1.Inputs.SecurityConfigArgs
{
IdentityConfig = new GoogleNative.Dataproc.V1.Inputs.IdentityConfigArgs
{
UserServiceAccountMapping =
{
{ "string", "string" },
},
},
KerberosConfig = new GoogleNative.Dataproc.V1.Inputs.KerberosConfigArgs
{
CrossRealmTrustAdminServer = "string",
CrossRealmTrustKdc = "string",
CrossRealmTrustRealm = "string",
CrossRealmTrustSharedPasswordUri = "string",
EnableKerberos = false,
KdcDbKeyUri = "string",
KeyPasswordUri = "string",
KeystorePasswordUri = "string",
KeystoreUri = "string",
KmsKeyUri = "string",
Realm = "string",
RootPrincipalPasswordUri = "string",
TgtLifetimeHours = 0,
TruststorePasswordUri = "string",
TruststoreUri = "string",
},
},
SoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.SoftwareConfigArgs
{
ImageVersion = "string",
OptionalComponents = new[]
{
GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
},
Properties =
{
{ "string", "string" },
},
},
TempBucket = "string",
WorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
},
Labels =
{
{ "string", "string" },
},
Project = "string",
RequestId = "string",
VirtualClusterConfig = new GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfigArgs
{
KubernetesClusterConfig = new GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigArgs
{
GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
{
GkeClusterTarget = "string",
NodePoolTarget = new[]
{
new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
{
NodePool = "string",
Roles = new[]
{
GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
},
NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
{
Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
{
MaxNodeCount = 0,
MinNodeCount = 0,
},
Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
{
AcceleratorCount = "string",
AcceleratorType = "string",
GpuPartitionSize = "string",
},
},
BootDiskKmsKey = "string",
LocalSsdCount = 0,
MachineType = "string",
MinCpuPlatform = "string",
Preemptible = false,
Spot = false,
},
Locations = new[]
{
"string",
},
},
},
},
},
KubernetesNamespace = "string",
KubernetesSoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigArgs
{
ComponentVersion =
{
{ "string", "string" },
},
Properties =
{
{ "string", "string" },
},
},
},
AuxiliaryServicesConfig = new GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigArgs
{
MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
{
DataprocMetastoreService = "string",
},
SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
{
DataprocCluster = "string",
},
},
StagingBucket = "string",
},
});
example, err := dataproc.NewCluster(ctx, "exampleclusterResourceResourceFromDataprocv1", &dataproc.ClusterArgs{
ClusterName: pulumi.String("string"),
Region: pulumi.String("string"),
ActionOnFailedPrimaryWorkers: pulumi.String("string"),
Config: &dataproc.ClusterConfigArgs{
AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
PolicyUri: pulumi.String("string"),
},
AuxiliaryNodeGroups: dataproc.AuxiliaryNodeGroupArray{
&dataproc.AuxiliaryNodeGroupArgs{
NodeGroup: &dataproc.NodeGroupTypeArgs{
Roles: dataproc.NodeGroupRolesItemArray{
dataproc.NodeGroupRolesItemRoleUnspecified,
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Name: pulumi.String("string"),
NodeGroupConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
},
NodeGroupId: pulumi.String("string"),
},
},
ConfigBucket: pulumi.String("string"),
DataprocMetricConfig: &dataproc.DataprocMetricConfigArgs{
Metrics: dataproc.MetricArray{
&dataproc.MetricArgs{
MetricSource: dataproc.MetricMetricSourceMetricSourceUnspecified,
MetricOverrides: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
EncryptionConfig: &dataproc.EncryptionConfigArgs{
GcePdKmsKeyName: pulumi.String("string"),
KmsKey: pulumi.String("string"),
},
EndpointConfig: &dataproc.EndpointConfigArgs{
EnableHttpPortAccess: pulumi.Bool(false),
},
GceClusterConfig: &dataproc.GceClusterConfigArgs{
ConfidentialInstanceConfig: &dataproc.ConfidentialInstanceConfigArgs{
EnableConfidentialCompute: pulumi.Bool(false),
},
InternalIpOnly: pulumi.Bool(false),
Metadata: pulumi.StringMap{
"string": pulumi.String("string"),
},
NetworkUri: pulumi.String("string"),
NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
NodeGroupUri: pulumi.String("string"),
},
PrivateIpv6GoogleAccess: dataproc.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
ReservationAffinity: &dataproc.ReservationAffinityArgs{
ConsumeReservationType: dataproc.ReservationAffinityConsumeReservationTypeTypeUnspecified,
Key: pulumi.String("string"),
Values: pulumi.StringArray{
pulumi.String("string"),
},
},
ServiceAccount: pulumi.String("string"),
ServiceAccountScopes: pulumi.StringArray{
pulumi.String("string"),
},
ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
EnableIntegrityMonitoring: pulumi.Bool(false),
EnableSecureBoot: pulumi.Bool(false),
EnableVtpm: pulumi.Bool(false),
},
SubnetworkUri: pulumi.String("string"),
Tags: pulumi.StringArray{
pulumi.String("string"),
},
ZoneUri: pulumi.String("string"),
},
GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
GkeClusterTarget: pulumi.String("string"),
NodePoolTarget: dataproc.GkeNodePoolTargetArray{
&dataproc.GkeNodePoolTargetArgs{
NodePool: pulumi.String("string"),
Roles: dataproc.GkeNodePoolTargetRolesItemArray{
dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
},
NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
MaxNodeCount: pulumi.Int(0),
MinNodeCount: pulumi.Int(0),
},
Config: &dataproc.GkeNodeConfigArgs{
Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
&dataproc.GkeNodePoolAcceleratorConfigArgs{
AcceleratorCount: pulumi.String("string"),
AcceleratorType: pulumi.String("string"),
GpuPartitionSize: pulumi.String("string"),
},
},
BootDiskKmsKey: pulumi.String("string"),
LocalSsdCount: pulumi.Int(0),
MachineType: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
Preemptible: pulumi.Bool(false),
Spot: pulumi.Bool(false),
},
Locations: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
},
InitializationActions: dataproc.NodeInitializationActionArray{
&dataproc.NodeInitializationActionArgs{
ExecutableFile: pulumi.String("string"),
ExecutionTimeout: pulumi.String("string"),
},
},
LifecycleConfig: &dataproc.LifecycleConfigArgs{
AutoDeleteTime: pulumi.String("string"),
AutoDeleteTtl: pulumi.String("string"),
IdleDeleteTtl: pulumi.String("string"),
},
MasterConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
MetastoreConfig: &dataproc.MetastoreConfigArgs{
DataprocMetastoreService: pulumi.String("string"),
},
SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
SecurityConfig: &dataproc.SecurityConfigArgs{
IdentityConfig: &dataproc.IdentityConfigArgs{
UserServiceAccountMapping: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
KerberosConfig: &dataproc.KerberosConfigArgs{
CrossRealmTrustAdminServer: pulumi.String("string"),
CrossRealmTrustKdc: pulumi.String("string"),
CrossRealmTrustRealm: pulumi.String("string"),
CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
EnableKerberos: pulumi.Bool(false),
KdcDbKeyUri: pulumi.String("string"),
KeyPasswordUri: pulumi.String("string"),
KeystorePasswordUri: pulumi.String("string"),
KeystoreUri: pulumi.String("string"),
KmsKeyUri: pulumi.String("string"),
Realm: pulumi.String("string"),
RootPrincipalPasswordUri: pulumi.String("string"),
TgtLifetimeHours: pulumi.Int(0),
TruststorePasswordUri: pulumi.String("string"),
TruststoreUri: pulumi.String("string"),
},
},
SoftwareConfig: &dataproc.SoftwareConfigArgs{
ImageVersion: pulumi.String("string"),
OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
dataproc.SoftwareConfigOptionalComponentsItemComponentUnspecified,
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
TempBucket: pulumi.String("string"),
WorkerConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Project: pulumi.String("string"),
RequestId: pulumi.String("string"),
VirtualClusterConfig: &dataproc.VirtualClusterConfigArgs{
KubernetesClusterConfig: &dataproc.KubernetesClusterConfigArgs{
GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
GkeClusterTarget: pulumi.String("string"),
NodePoolTarget: dataproc.GkeNodePoolTargetArray{
&dataproc.GkeNodePoolTargetArgs{
NodePool: pulumi.String("string"),
Roles: dataproc.GkeNodePoolTargetRolesItemArray{
dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
},
NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
MaxNodeCount: pulumi.Int(0),
MinNodeCount: pulumi.Int(0),
},
Config: &dataproc.GkeNodeConfigArgs{
Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
&dataproc.GkeNodePoolAcceleratorConfigArgs{
AcceleratorCount: pulumi.String("string"),
AcceleratorType: pulumi.String("string"),
GpuPartitionSize: pulumi.String("string"),
},
},
BootDiskKmsKey: pulumi.String("string"),
LocalSsdCount: pulumi.Int(0),
MachineType: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
Preemptible: pulumi.Bool(false),
Spot: pulumi.Bool(false),
},
Locations: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
},
KubernetesNamespace: pulumi.String("string"),
KubernetesSoftwareConfig: &dataproc.KubernetesSoftwareConfigArgs{
ComponentVersion: pulumi.StringMap{
"string": pulumi.String("string"),
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
},
AuxiliaryServicesConfig: &dataproc.AuxiliaryServicesConfigArgs{
MetastoreConfig: &dataproc.MetastoreConfigArgs{
DataprocMetastoreService: pulumi.String("string"),
},
SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
DataprocCluster: pulumi.String("string"),
},
},
StagingBucket: pulumi.String("string"),
},
})
var exampleclusterResourceResourceFromDataprocv1 = new Cluster("exampleclusterResourceResourceFromDataprocv1", ClusterArgs.builder()
.clusterName("string")
.region("string")
.actionOnFailedPrimaryWorkers("string")
.config(ClusterConfigArgs.builder()
.autoscalingConfig(AutoscalingConfigArgs.builder()
.policyUri("string")
.build())
.auxiliaryNodeGroups(AuxiliaryNodeGroupArgs.builder()
.nodeGroup(NodeGroupArgs.builder()
.roles("ROLE_UNSPECIFIED")
.labels(Map.of("string", "string"))
.name("string")
.nodeGroupConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.build())
.nodeGroupId("string")
.build())
.configBucket("string")
.dataprocMetricConfig(DataprocMetricConfigArgs.builder()
.metrics(MetricArgs.builder()
.metricSource("METRIC_SOURCE_UNSPECIFIED")
.metricOverrides("string")
.build())
.build())
.encryptionConfig(EncryptionConfigArgs.builder()
.gcePdKmsKeyName("string")
.kmsKey("string")
.build())
.endpointConfig(EndpointConfigArgs.builder()
.enableHttpPortAccess(false)
.build())
.gceClusterConfig(GceClusterConfigArgs.builder()
.confidentialInstanceConfig(ConfidentialInstanceConfigArgs.builder()
.enableConfidentialCompute(false)
.build())
.internalIpOnly(false)
.metadata(Map.of("string", "string"))
.networkUri("string")
.nodeGroupAffinity(NodeGroupAffinityArgs.builder()
.nodeGroupUri("string")
.build())
.privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
.reservationAffinity(ReservationAffinityArgs.builder()
.consumeReservationType("TYPE_UNSPECIFIED")
.key("string")
.values("string")
.build())
.serviceAccount("string")
.serviceAccountScopes("string")
.shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
.enableIntegrityMonitoring(false)
.enableSecureBoot(false)
.enableVtpm(false)
.build())
.subnetworkUri("string")
.tags("string")
.zoneUri("string")
.build())
.gkeClusterConfig(GkeClusterConfigArgs.builder()
.gkeClusterTarget("string")
.nodePoolTarget(GkeNodePoolTargetArgs.builder()
.nodePool("string")
.roles("ROLE_UNSPECIFIED")
.nodePoolConfig(GkeNodePoolConfigArgs.builder()
.autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
.maxNodeCount(0)
.minNodeCount(0)
.build())
.config(GkeNodeConfigArgs.builder()
.accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
.acceleratorCount("string")
.acceleratorType("string")
.gpuPartitionSize("string")
.build())
.bootDiskKmsKey("string")
.localSsdCount(0)
.machineType("string")
.minCpuPlatform("string")
.preemptible(false)
.spot(false)
.build())
.locations("string")
.build())
.build())
.build())
.initializationActions(NodeInitializationActionArgs.builder()
.executableFile("string")
.executionTimeout("string")
.build())
.lifecycleConfig(LifecycleConfigArgs.builder()
.autoDeleteTime("string")
.autoDeleteTtl("string")
.idleDeleteTtl("string")
.build())
.masterConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.metastoreConfig(MetastoreConfigArgs.builder()
.dataprocMetastoreService("string")
.build())
.secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.securityConfig(SecurityConfigArgs.builder()
.identityConfig(IdentityConfigArgs.builder()
.userServiceAccountMapping(Map.of("string", "string"))
.build())
.kerberosConfig(KerberosConfigArgs.builder()
.crossRealmTrustAdminServer("string")
.crossRealmTrustKdc("string")
.crossRealmTrustRealm("string")
.crossRealmTrustSharedPasswordUri("string")
.enableKerberos(false)
.kdcDbKeyUri("string")
.keyPasswordUri("string")
.keystorePasswordUri("string")
.keystoreUri("string")
.kmsKeyUri("string")
.realm("string")
.rootPrincipalPasswordUri("string")
.tgtLifetimeHours(0)
.truststorePasswordUri("string")
.truststoreUri("string")
.build())
.build())
.softwareConfig(SoftwareConfigArgs.builder()
.imageVersion("string")
.optionalComponents("COMPONENT_UNSPECIFIED")
.properties(Map.of("string", "string"))
.build())
.tempBucket("string")
.workerConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.build())
.labels(Map.of("string", "string"))
.project("string")
.requestId("string")
.virtualClusterConfig(VirtualClusterConfigArgs.builder()
.kubernetesClusterConfig(KubernetesClusterConfigArgs.builder()
.gkeClusterConfig(GkeClusterConfigArgs.builder()
.gkeClusterTarget("string")
.nodePoolTarget(GkeNodePoolTargetArgs.builder()
.nodePool("string")
.roles("ROLE_UNSPECIFIED")
.nodePoolConfig(GkeNodePoolConfigArgs.builder()
.autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
.maxNodeCount(0)
.minNodeCount(0)
.build())
.config(GkeNodeConfigArgs.builder()
.accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
.acceleratorCount("string")
.acceleratorType("string")
.gpuPartitionSize("string")
.build())
.bootDiskKmsKey("string")
.localSsdCount(0)
.machineType("string")
.minCpuPlatform("string")
.preemptible(false)
.spot(false)
.build())
.locations("string")
.build())
.build())
.build())
.kubernetesNamespace("string")
.kubernetesSoftwareConfig(KubernetesSoftwareConfigArgs.builder()
.componentVersion(Map.of("string", "string"))
.properties(Map.of("string", "string"))
.build())
.build())
.auxiliaryServicesConfig(AuxiliaryServicesConfigArgs.builder()
.metastoreConfig(MetastoreConfigArgs.builder()
.dataprocMetastoreService("string")
.build())
.sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
.dataprocCluster("string")
.build())
.build())
.stagingBucket("string")
.build())
.build());
examplecluster_resource_resource_from_dataprocv1 = google_native.dataproc.v1.Cluster("exampleclusterResourceResourceFromDataprocv1",
cluster_name="string",
region="string",
action_on_failed_primary_workers="string",
config=google_native.dataproc.v1.ClusterConfigArgs(
autoscaling_config=google_native.dataproc.v1.AutoscalingConfigArgs(
policy_uri="string",
),
auxiliary_node_groups=[google_native.dataproc.v1.AuxiliaryNodeGroupArgs(
node_group=google_native.dataproc.v1.NodeGroupArgs(
roles=[google_native.dataproc.v1.NodeGroupRolesItem.ROLE_UNSPECIFIED],
labels={
"string": "string",
},
name="string",
node_group_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
local_ssd_interface="string",
num_local_ssds=0,
),
image_uri="string",
instance_flexibility_policy=google_native.dataproc.v1.InstanceFlexibilityPolicyArgs(
instance_selection_list=[google_native.dataproc.v1.InstanceSelectionArgs(
machine_types=["string"],
rank=0,
)],
),
machine_type_uri="string",
min_cpu_platform="string",
min_num_instances=0,
num_instances=0,
preemptibility=google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
startup_config=google_native.dataproc.v1.StartupConfigArgs(
required_registration_fraction=0,
),
),
),
node_group_id="string",
)],
config_bucket="string",
dataproc_metric_config=google_native.dataproc.v1.DataprocMetricConfigArgs(
metrics=[google_native.dataproc.v1.MetricArgs(
metric_source=google_native.dataproc.v1.MetricMetricSource.METRIC_SOURCE_UNSPECIFIED,
metric_overrides=["string"],
)],
),
encryption_config=google_native.dataproc.v1.EncryptionConfigArgs(
gce_pd_kms_key_name="string",
kms_key="string",
),
endpoint_config=google_native.dataproc.v1.EndpointConfigArgs(
enable_http_port_access=False,
),
gce_cluster_config=google_native.dataproc.v1.GceClusterConfigArgs(
confidential_instance_config=google_native.dataproc.v1.ConfidentialInstanceConfigArgs(
enable_confidential_compute=False,
),
internal_ip_only=False,
metadata={
"string": "string",
},
network_uri="string",
node_group_affinity=google_native.dataproc.v1.NodeGroupAffinityArgs(
node_group_uri="string",
),
private_ipv6_google_access=google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
reservation_affinity=google_native.dataproc.v1.ReservationAffinityArgs(
consume_reservation_type=google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
key="string",
values=["string"],
),
service_account="string",
service_account_scopes=["string"],
shielded_instance_config=google_native.dataproc.v1.ShieldedInstanceConfigArgs(
enable_integrity_monitoring=False,
enable_secure_boot=False,
enable_vtpm=False,
),
subnetwork_uri="string",
tags=["string"],
zone_uri="string",
),
gke_cluster_config=google_native.dataproc.v1.GkeClusterConfigArgs(
gke_cluster_target="string",
node_pool_target=[google_native.dataproc.v1.GkeNodePoolTargetArgs(
node_pool="string",
roles=[google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
node_pool_config=google_native.dataproc.v1.GkeNodePoolConfigArgs(
autoscaling=google_native.dataproc.v1.GkeNodePoolAutoscalingConfigArgs(
max_node_count=0,
min_node_count=0,
),
config=google_native.dataproc.v1.GkeNodeConfigArgs(
accelerators=[google_native.dataproc.v1.GkeNodePoolAcceleratorConfigArgs(
accelerator_count="string",
accelerator_type="string",
gpu_partition_size="string",
)],
boot_disk_kms_key="string",
local_ssd_count=0,
machine_type="string",
min_cpu_platform="string",
preemptible=False,
spot=False,
),
locations=["string"],
),
)],
),
initialization_actions=[google_native.dataproc.v1.NodeInitializationActionArgs(
executable_file="string",
execution_timeout="string",
)],
lifecycle_config=google_native.dataproc.v1.LifecycleConfigArgs(
auto_delete_time="string",
auto_delete_ttl="string",
idle_delete_ttl="string",
),
master_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
local_ssd_interface="string",
num_local_ssds=0,
),
image_uri="string",
instance_flexibility_policy=google_native.dataproc.v1.InstanceFlexibilityPolicyArgs(
instance_selection_list=[google_native.dataproc.v1.InstanceSelectionArgs(
machine_types=["string"],
rank=0,
)],
),
machine_type_uri="string",
min_cpu_platform="string",
min_num_instances=0,
num_instances=0,
preemptibility=google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
startup_config=google_native.dataproc.v1.StartupConfigArgs(
required_registration_fraction=0,
),
),
metastore_config=google_native.dataproc.v1.MetastoreConfigArgs(
dataproc_metastore_service="string",
),
secondary_worker_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
local_ssd_interface="string",
num_local_ssds=0,
),
image_uri="string",
instance_flexibility_policy=google_native.dataproc.v1.InstanceFlexibilityPolicyArgs(
instance_selection_list=[google_native.dataproc.v1.InstanceSelectionArgs(
machine_types=["string"],
rank=0,
)],
),
machine_type_uri="string",
min_cpu_platform="string",
min_num_instances=0,
num_instances=0,
preemptibility=google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
startup_config=google_native.dataproc.v1.StartupConfigArgs(
required_registration_fraction=0,
),
),
security_config=google_native.dataproc.v1.SecurityConfigArgs(
identity_config=google_native.dataproc.v1.IdentityConfigArgs(
user_service_account_mapping={
"string": "string",
},
),
kerberos_config=google_native.dataproc.v1.KerberosConfigArgs(
cross_realm_trust_admin_server="string",
cross_realm_trust_kdc="string",
cross_realm_trust_realm="string",
cross_realm_trust_shared_password_uri="string",
enable_kerberos=False,
kdc_db_key_uri="string",
key_password_uri="string",
keystore_password_uri="string",
keystore_uri="string",
kms_key_uri="string",
realm="string",
root_principal_password_uri="string",
tgt_lifetime_hours=0,
truststore_password_uri="string",
truststore_uri="string",
),
),
software_config=google_native.dataproc.v1.SoftwareConfigArgs(
image_version="string",
optional_components=[google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
properties={
"string": "string",
},
),
temp_bucket="string",
worker_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
local_ssd_interface="string",
num_local_ssds=0,
),
image_uri="string",
instance_flexibility_policy=google_native.dataproc.v1.InstanceFlexibilityPolicyArgs(
instance_selection_list=[google_native.dataproc.v1.InstanceSelectionArgs(
machine_types=["string"],
rank=0,
)],
),
machine_type_uri="string",
min_cpu_platform="string",
min_num_instances=0,
num_instances=0,
preemptibility=google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
startup_config=google_native.dataproc.v1.StartupConfigArgs(
required_registration_fraction=0,
),
),
),
labels={
"string": "string",
},
project="string",
request_id="string",
virtual_cluster_config=google_native.dataproc.v1.VirtualClusterConfigArgs(
kubernetes_cluster_config=google_native.dataproc.v1.KubernetesClusterConfigArgs(
gke_cluster_config=google_native.dataproc.v1.GkeClusterConfigArgs(
gke_cluster_target="string",
node_pool_target=[google_native.dataproc.v1.GkeNodePoolTargetArgs(
node_pool="string",
roles=[google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
node_pool_config=google_native.dataproc.v1.GkeNodePoolConfigArgs(
autoscaling=google_native.dataproc.v1.GkeNodePoolAutoscalingConfigArgs(
max_node_count=0,
min_node_count=0,
),
config=google_native.dataproc.v1.GkeNodeConfigArgs(
accelerators=[google_native.dataproc.v1.GkeNodePoolAcceleratorConfigArgs(
accelerator_count="string",
accelerator_type="string",
gpu_partition_size="string",
)],
boot_disk_kms_key="string",
local_ssd_count=0,
machine_type="string",
min_cpu_platform="string",
preemptible=False,
spot=False,
),
locations=["string"],
),
)],
),
kubernetes_namespace="string",
kubernetes_software_config=google_native.dataproc.v1.KubernetesSoftwareConfigArgs(
component_version={
"string": "string",
},
properties={
"string": "string",
},
),
),
auxiliary_services_config=google_native.dataproc.v1.AuxiliaryServicesConfigArgs(
metastore_config=google_native.dataproc.v1.MetastoreConfigArgs(
dataproc_metastore_service="string",
),
spark_history_server_config=google_native.dataproc.v1.SparkHistoryServerConfigArgs(
dataproc_cluster="string",
),
),
staging_bucket="string",
))
const exampleclusterResourceResourceFromDataprocv1 = new google_native.dataproc.v1.Cluster("exampleclusterResourceResourceFromDataprocv1", {
clusterName: "string",
region: "string",
actionOnFailedPrimaryWorkers: "string",
config: {
autoscalingConfig: {
policyUri: "string",
},
auxiliaryNodeGroups: [{
nodeGroup: {
roles: [google_native.dataproc.v1.NodeGroupRolesItem.RoleUnspecified],
labels: {
string: "string",
},
name: "string",
nodeGroupConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
},
nodeGroupId: "string",
}],
configBucket: "string",
dataprocMetricConfig: {
metrics: [{
metricSource: google_native.dataproc.v1.MetricMetricSource.MetricSourceUnspecified,
metricOverrides: ["string"],
}],
},
encryptionConfig: {
gcePdKmsKeyName: "string",
kmsKey: "string",
},
endpointConfig: {
enableHttpPortAccess: false,
},
gceClusterConfig: {
confidentialInstanceConfig: {
enableConfidentialCompute: false,
},
internalIpOnly: false,
metadata: {
string: "string",
},
networkUri: "string",
nodeGroupAffinity: {
nodeGroupUri: "string",
},
privateIpv6GoogleAccess: google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
reservationAffinity: {
consumeReservationType: google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TypeUnspecified,
key: "string",
values: ["string"],
},
serviceAccount: "string",
serviceAccountScopes: ["string"],
shieldedInstanceConfig: {
enableIntegrityMonitoring: false,
enableSecureBoot: false,
enableVtpm: false,
},
subnetworkUri: "string",
tags: ["string"],
zoneUri: "string",
},
gkeClusterConfig: {
gkeClusterTarget: "string",
nodePoolTarget: [{
nodePool: "string",
roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
nodePoolConfig: {
autoscaling: {
maxNodeCount: 0,
minNodeCount: 0,
},
config: {
accelerators: [{
acceleratorCount: "string",
acceleratorType: "string",
gpuPartitionSize: "string",
}],
bootDiskKmsKey: "string",
localSsdCount: 0,
machineType: "string",
minCpuPlatform: "string",
preemptible: false,
spot: false,
},
locations: ["string"],
},
}],
},
initializationActions: [{
executableFile: "string",
executionTimeout: "string",
}],
lifecycleConfig: {
autoDeleteTime: "string",
autoDeleteTtl: "string",
idleDeleteTtl: "string",
},
masterConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
metastoreConfig: {
dataprocMetastoreService: "string",
},
secondaryWorkerConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
securityConfig: {
identityConfig: {
userServiceAccountMapping: {
string: "string",
},
},
kerberosConfig: {
crossRealmTrustAdminServer: "string",
crossRealmTrustKdc: "string",
crossRealmTrustRealm: "string",
crossRealmTrustSharedPasswordUri: "string",
enableKerberos: false,
kdcDbKeyUri: "string",
keyPasswordUri: "string",
keystorePasswordUri: "string",
keystoreUri: "string",
kmsKeyUri: "string",
realm: "string",
rootPrincipalPasswordUri: "string",
tgtLifetimeHours: 0,
truststorePasswordUri: "string",
truststoreUri: "string",
},
},
softwareConfig: {
imageVersion: "string",
optionalComponents: [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
properties: {
string: "string",
},
},
tempBucket: "string",
workerConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
},
labels: {
string: "string",
},
project: "string",
requestId: "string",
virtualClusterConfig: {
kubernetesClusterConfig: {
gkeClusterConfig: {
gkeClusterTarget: "string",
nodePoolTarget: [{
nodePool: "string",
roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
nodePoolConfig: {
autoscaling: {
maxNodeCount: 0,
minNodeCount: 0,
},
config: {
accelerators: [{
acceleratorCount: "string",
acceleratorType: "string",
gpuPartitionSize: "string",
}],
bootDiskKmsKey: "string",
localSsdCount: 0,
machineType: "string",
minCpuPlatform: "string",
preemptible: false,
spot: false,
},
locations: ["string"],
},
}],
},
kubernetesNamespace: "string",
kubernetesSoftwareConfig: {
componentVersion: {
string: "string",
},
properties: {
string: "string",
},
},
},
auxiliaryServicesConfig: {
metastoreConfig: {
dataprocMetastoreService: "string",
},
sparkHistoryServerConfig: {
dataprocCluster: "string",
},
},
stagingBucket: "string",
},
});
type: google-native:dataproc/v1:Cluster
properties:
actionOnFailedPrimaryWorkers: string
clusterName: string
config:
autoscalingConfig:
policyUri: string
auxiliaryNodeGroups:
- nodeGroup:
labels:
string: string
name: string
nodeGroupConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
roles:
- ROLE_UNSPECIFIED
nodeGroupId: string
configBucket: string
dataprocMetricConfig:
metrics:
- metricOverrides:
- string
metricSource: METRIC_SOURCE_UNSPECIFIED
encryptionConfig:
gcePdKmsKeyName: string
kmsKey: string
endpointConfig:
enableHttpPortAccess: false
gceClusterConfig:
confidentialInstanceConfig:
enableConfidentialCompute: false
internalIpOnly: false
metadata:
string: string
networkUri: string
nodeGroupAffinity:
nodeGroupUri: string
privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
reservationAffinity:
consumeReservationType: TYPE_UNSPECIFIED
key: string
values:
- string
serviceAccount: string
serviceAccountScopes:
- string
shieldedInstanceConfig:
enableIntegrityMonitoring: false
enableSecureBoot: false
enableVtpm: false
subnetworkUri: string
tags:
- string
zoneUri: string
gkeClusterConfig:
gkeClusterTarget: string
nodePoolTarget:
- nodePool: string
nodePoolConfig:
autoscaling:
maxNodeCount: 0
minNodeCount: 0
config:
accelerators:
- acceleratorCount: string
acceleratorType: string
gpuPartitionSize: string
bootDiskKmsKey: string
localSsdCount: 0
machineType: string
minCpuPlatform: string
preemptible: false
spot: false
locations:
- string
roles:
- ROLE_UNSPECIFIED
initializationActions:
- executableFile: string
executionTimeout: string
lifecycleConfig:
autoDeleteTime: string
autoDeleteTtl: string
idleDeleteTtl: string
masterConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
metastoreConfig:
dataprocMetastoreService: string
secondaryWorkerConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
securityConfig:
identityConfig:
userServiceAccountMapping:
string: string
kerberosConfig:
crossRealmTrustAdminServer: string
crossRealmTrustKdc: string
crossRealmTrustRealm: string
crossRealmTrustSharedPasswordUri: string
enableKerberos: false
kdcDbKeyUri: string
keyPasswordUri: string
keystorePasswordUri: string
keystoreUri: string
kmsKeyUri: string
realm: string
rootPrincipalPasswordUri: string
tgtLifetimeHours: 0
truststorePasswordUri: string
truststoreUri: string
softwareConfig:
imageVersion: string
optionalComponents:
- COMPONENT_UNSPECIFIED
properties:
string: string
tempBucket: string
workerConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
labels:
string: string
project: string
region: string
requestId: string
virtualClusterConfig:
auxiliaryServicesConfig:
metastoreConfig:
dataprocMetastoreService: string
sparkHistoryServerConfig:
dataprocCluster: string
kubernetesClusterConfig:
gkeClusterConfig:
gkeClusterTarget: string
nodePoolTarget:
- nodePool: string
nodePoolConfig:
autoscaling:
maxNodeCount: 0
minNodeCount: 0
config:
accelerators:
- acceleratorCount: string
acceleratorType: string
gpuPartitionSize: string
bootDiskKmsKey: string
localSsdCount: 0
machineType: string
minCpuPlatform: string
preemptible: false
spot: false
locations:
- string
roles:
- ROLE_UNSPECIFIED
kubernetesNamespace: string
kubernetesSoftwareConfig:
componentVersion:
string: string
properties:
string: string
stagingBucket: string
Cluster Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Cluster resource accepts the following input properties:
- ClusterName string
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- Region string
- ActionOnFailedPrimaryWorkers string
- Optional. Failure action when primary worker creation fails.
- Config Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterConfig
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Project string
- The Google Cloud Platform project ID that the cluster belongs to.
- RequestId string
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- VirtualClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfig
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
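Two of the notes above benefit from a concrete illustration: the recommendation to set requestId to a UUID, and the rule that exactly one of config or virtualClusterConfig may be specified. The following is a hedged TypeScript sketch; Node's built-in crypto.randomUUID and the placeholder values are assumptions rather than part of the generated reference, and in a real program you would likely pin a stable UUID in stack configuration instead of generating a new one on every update.

import * as google_native from "@pulumi/google-native";
import { randomUUID } from "crypto";    // Node's built-in UUID generator

// requestId makes the create request idempotent: a retry carrying the same UUID is ignored.
const cluster = new google_native.dataproc.v1.Cluster("idempotent-cluster", {
    region: "us-central1",              // placeholder region
    clusterName: "example-cluster",
    requestId: randomUUID(),            // 36 characters, within the 40-character limit
    // Exactly one of `config` (Compute Engine) or `virtualClusterConfig` (Dataproc on GKE) may be set.
    config: {
        masterConfig: { numInstances: 1 },
        workerConfig: { numInstances: 2 },
    },
});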
- ClusterName string
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- Region string
- ActionOnFailedPrimaryWorkers string
- Optional. Failure action when primary worker creation fails.
- Config ClusterConfigArgs
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- Labels map[string]string
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Project string
- The Google Cloud Platform project ID that the cluster belongs to.
- RequestId string
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- VirtualClusterConfig VirtualClusterConfigArgs
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName String
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region String
- actionOnFailedPrimaryWorkers String
- Optional. Failure action when primary worker creation fails.
- config ClusterConfig
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Map<String,String>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project String
- The Google Cloud Platform project ID that the cluster belongs to.
- requestId String
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtualClusterConfig VirtualClusterConfig
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName string
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region string
- actionOnFailedPrimaryWorkers string
- Optional. Failure action when primary worker creation fails.
- config ClusterConfig
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels {[key: string]: string}
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project string
- The Google Cloud Platform project ID that the cluster belongs to.
- requestId string
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtualClusterConfig VirtualClusterConfig
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- cluster_name str
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region str
- action_on_failed_primary_workers str
- Optional. Failure action when primary worker creation fails.
- config ClusterConfigArgs
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Mapping[str, str]
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project str
- The Google Cloud Platform project ID that the cluster belongs to.
- request_id str
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtual_cluster_config VirtualClusterConfigArgs
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName String
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region String
- actionOnFailedPrimaryWorkers String
- Optional. Failure action when primary worker creation fails.
- config Property Map
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Map<String>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project String
- The Google Cloud Platform project ID that the cluster belongs to.
- requestId String
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtualClusterConfig Property Map
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
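Taken together, the inputs above reduce to: pick a region and set exactly one of config or virtual_cluster_config. The following is a minimal Python sketch, assuming the pulumi_google_native SDK exposes the args classes listed in this reference; the project ID, zone, and machine types are placeholder values.
import pulumi
import pulumi_google_native as google_native
import uuid

# Compute Engine-based cluster: `config` is set, so `virtual_cluster_config` is omitted.
cluster = google_native.dataproc.v1.Cluster(
    "example-cluster",
    cluster_name="example-cluster",      # must be unique within the project, <= 51 chars
    region="us-central1",
    project="my-project",                # placeholder project ID
    request_id=str(uuid.uuid4()),        # recommended: a UUID, for idempotent retries
    labels={"env": "dev"},               # at most 32 labels, RFC 1035-conformant keys
    config=google_native.dataproc.v1.ClusterConfigArgs(
        gce_cluster_config=google_native.dataproc.v1.GceClusterConfigArgs(
            zone_uri="us-central1-a",
        ),
        master_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
            num_instances=1,
            machine_type_uri="n1-standard-4",
        ),
        worker_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
            num_instances=2,
            machine_type_uri="n1-standard-4",
        ),
    ),
)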
Outputs
All input properties are implicitly available as output properties. Additionally, the Cluster resource produces the following output properties:
- ClusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Id string
- The provider-assigned unique ID for this managed resource.
- Metrics Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Status Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse
- Cluster status.
- StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse>
- The previous cluster status.
- ClusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Id string
- The provider-assigned unique ID for this managed resource.
- Metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Status ClusterStatusResponse
- Cluster status.
- StatusHistory []ClusterStatusResponse
- The previous cluster status.
- clusterUuid String
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id String
- The provider-assigned unique ID for this managed resource.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status ClusterStatusResponse
- Cluster status.
- statusHistory List<ClusterStatusResponse>
- The previous cluster status.
- clusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id string
- The provider-assigned unique ID for this managed resource.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status ClusterStatusResponse
- Cluster status.
- statusHistory ClusterStatusResponse[]
- The previous cluster status.
- cluster_uuid str
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id str
- The provider-assigned unique ID for this managed resource.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status ClusterStatusResponse
- Cluster status.
- status_history Sequence[ClusterStatusResponse]
- The previous cluster status.
- clusterUuid String
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id String
- The provider-assigned unique ID for this managed resource.
- metrics Property Map
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status Property Map
- Cluster status.
- statusHistory List<Property Map>
- The previous cluster status.
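These output properties can be read from the created resource like any other Pulumi outputs. A short sketch continuing the Python example above, assuming the ClusterStatusResponse output exposes a state field as in the underlying API:
# Export the Dataproc-generated UUID and the current cluster state.
pulumi.export("cluster_uuid", cluster.cluster_uuid)
pulumi.export("cluster_state", cluster.status.apply(lambda s: s.state))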
Supporting Types
AcceleratorConfig, AcceleratorConfigArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorConfigResponse, AcceleratorConfigResponseArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
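Accelerators are attached through an instance group's accelerators list. A hedged Python sketch follows; the GPU type and counts are illustrative placeholders, and with Auto Zone Placement only the short type name is valid.
# Worker group with one GPU per VM. Use the short accelerator type name when
# relying on Dataproc Auto Zone Placement; a full or partial URI also works
# when the zone is set explicitly.
gpu_workers = google_native.dataproc.v1.InstanceGroupConfigArgs(
    num_instances=2,
    machine_type_uri="n1-standard-8",
    accelerators=[
        google_native.dataproc.v1.AcceleratorConfigArgs(
            accelerator_count=1,
            accelerator_type_uri="nvidia-tesla-t4",
        ),
    ],
)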
AutoscalingConfig, AutoscalingConfigArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
AutoscalingConfigResponse, AutoscalingConfigResponseArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
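To attach an autoscaling policy, pass its resource name. A brief Python sketch with a hypothetical policy ID; the policy must live in the same project and Dataproc region as the cluster.
# Reference an existing autoscaling policy by resource name (same project and region).
autoscaling = google_native.dataproc.v1.AutoscalingConfigArgs(
    policy_uri="projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
)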
AuxiliaryNodeGroup, AuxiliaryNodeGroupArgs
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroup
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- NodeGroup NodeGroupType
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroup
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroup
- Node group configuration.
- nodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- node_group NodeGroup
- Node group configuration.
- node_group_id str
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup Property Map
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
AuxiliaryNodeGroupResponse, AuxiliaryNodeGroupResponseArgs
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- NodeGroup NodeGroupResponse
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroupResponse
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroupResponse
- Node group configuration.
- nodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- node_group NodeGroupResponse
- Node group configuration.
- node_group_id str
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup Property Map
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
AuxiliaryServicesConfig, AuxiliaryServicesConfigArgs
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- MetastoreConfig MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastore_config MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- spark_history_server_config SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig Property Map
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
AuxiliaryServicesConfigResponse, AuxiliaryServicesConfigResponseArgs
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- MetastoreConfig MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastore_config MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- spark_history_server_config SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig Property Map
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
ClusterConfig, ClusterConfigArgs
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroup>
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfig
- Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfig
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroup
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationAction
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances
- SecurityConfig SecurityConfig
- Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroup>
- Optional. The node group settings.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances
- securityConfig SecurityConfig
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroup[]
- Optional. The node group settings.
- configBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationAction[]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances
- securityConfig SecurityConfig
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- tempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroup]
- Optional. The node group settings.
- config_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- gce_cluster_config GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationAction]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfig
- Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances
- security_config SecurityConfig
- Optional. Security settings for the cluster.
- software_config SoftwareConfig
- Optional. The config settings for cluster software.
- temp_bucket str
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map> - Optional. The node group settings.
- configBucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map - Optional. The config for Dataproc metrics.
- encryptionConfig Property Map - Optional. Encryption settings for the cluster.
- endpointConfig Property Map - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map - Optional. Lifecycle setting for the cluster.
- masterConfig Property Map - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map - Optional. Metastore configuration.
- secondaryWorkerConfig Property Map - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map - Optional. Security settings for the cluster.
- softwareConfig Property Map - Optional. The config settings for cluster software.
- tempBucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map - Optional. The Compute Engine config settings for the cluster's worker instances.
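The property map above corresponds to the config input of the Cluster resource. The following TypeScript sketch is illustrative only; the cluster name, region, zone, machine types, and image version are assumed values, not defaults.
import * as google_native from "@pulumi/google-native";

// Minimal ClusterConfig: one master, two workers, a pinned image version.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    clusterName: "example-cluster",
    region: "us-central1",
    config: {
        gceClusterConfig: { zoneUri: "us-central1-a" },
        masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
        softwareConfig: { imageVersion: "2.1-debian11" },
    },
});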
ClusterConfigResponse, ClusterConfigResponseArgs
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse> - Optional. The node group settings.
- ConfigBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse - Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse - Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse - Optional. The config settings for cluster software.
- TempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse - Optional. The node group settings.
- ConfigBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationActionResponse - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfigResponse - Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig SecurityConfigResponse - Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfigResponse - Optional. The config settings for cluster software.
- TempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse> - Optional. The node group settings.
- configBucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationActionResponse> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse - Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse - Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse - Optional. The config settings for cluster software.
- tempBucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroupResponse[] - Optional. The node group settings.
- configBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationActionResponse[] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse - Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse - Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse - Optional. The config settings for cluster software.
- tempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse] - Optional. The node group settings.
- config_bucket str - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationActionResponse] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfigResponse - Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- security_config SecurityConfigResponse - Optional. Security settings for the cluster.
- software_config SoftwareConfigResponse - Optional. The config settings for cluster software.
- temp_bucket str - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map> - Optional. The node group settings.
- configBucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map - Optional. The config for Dataproc metrics.
- encryptionConfig Property Map - Optional. Encryption settings for the cluster.
- endpointConfig Property Map - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map - Optional. Lifecycle setting for the cluster.
- masterConfig Property Map - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map - Optional. Metastore configuration.
- secondaryWorkerConfig Property Map - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map - Optional. Security settings for the cluster.
- softwareConfig Property Map - Optional. The config settings for cluster software.
- tempBucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map - Optional. The Compute Engine config settings for the cluster's worker instances.
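The Response variants above describe the resolved configuration that the provider reports back on the cluster's config output. A short TypeScript sketch of reading one of those fields, assuming the cluster variable from the example earlier on this page:
// configBucket resolves to the staging bucket Dataproc chose (or the one you supplied).
export const stagingBucket = cluster.config.apply(c => c.configBucket);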
ClusterMetricsResponse, ClusterMetricsResponseArgs
- HdfsMetrics Dictionary<string, string>
- The HDFS metrics.
- YarnMetrics Dictionary<string, string>
- YARN metrics.
- HdfsMetrics map[string]string
- The HDFS metrics.
- YarnMetrics map[string]string
- YARN metrics.
- hdfsMetrics Map<String,String>
- The HDFS metrics.
- yarnMetrics Map<String,String>
- YARN metrics.
- hdfsMetrics {[key: string]: string}
- The HDFS metrics.
- yarnMetrics {[key: string]: string}
- YARN metrics.
- hdfs_metrics Mapping[str, str]
- The HDFS metrics.
- yarn_metrics Mapping[str, str]
- YARN metrics.
- hdfsMetrics Map<String>
- The HDFS metrics.
- yarnMetrics Map<String>
- YARN metrics.
ClusterStatusResponse, ClusterStatusResponseArgs
- Detail string
- Optional. Output only. Details of cluster's state.
- State string
- The cluster's state.
- StateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
- Additional state information that includes status reported by the agent.
- Detail string
- Optional. Output only. Details of cluster's state.
- State string
- The cluster's state.
- StateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
- Additional state information that includes status reported by the agent.
- detail String
- Optional. Output only. Details of cluster's state.
- state String
- The cluster's state.
- stateStartTime String
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
- Additional state information that includes status reported by the agent.
- detail string
- Optional. Output only. Details of cluster's state.
- state string
- The cluster's state.
- stateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate string
- Additional state information that includes status reported by the agent.
- detail str
- Optional. Output only. Details of cluster's state.
- state str
- The cluster's state.
- state_start_time str
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate str
- Additional state information that includes status reported by the agent.
- detail String
- Optional. Output only. Details of cluster's state.
- state String
- The cluster's state.
- stateStartTime String
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
- Additional state information that includes status reported by the agent.
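ClusterStatusResponse is surfaced through the cluster's status and statusHistory outputs. Continuing the same assumed cluster variable from the earlier TypeScript sketch:
// Export the current state (for example "RUNNING") and when it was entered.
export const clusterState = cluster.status.apply(s => s.state);
export const clusterStateSince = cluster.status.apply(s => s.stateStartTime);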
ConfidentialInstanceConfig, ConfidentialInstanceConfigArgs
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
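Confidential compute is enabled through gceClusterConfig. A hedged TypeScript sketch; the N2D machine type is an assumption, since Confidential VMs require AMD SEV-capable shapes:
import * as google_native from "@pulumi/google-native";

const confidentialCluster = new google_native.dataproc.v1.Cluster("confidential-cluster", {
    region: "us-central1",
    config: {
        gceClusterConfig: {
            zoneUri: "us-central1-a",
            confidentialInstanceConfig: { enableConfidentialCompute: true },
        },
        // Confidential VMs need an AMD SEV-capable machine type such as n2d-standard-4.
        masterConfig: { numInstances: 1, machineTypeUri: "n2d-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n2d-standard-4" },
    },
});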
ConfidentialInstanceConfigResponse, ConfidentialInstanceConfigResponseArgs
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
DataprocMetricConfig, DataprocMetricConfigArgs
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.Metric>
- Metrics sources to enable.
- metrics List<Metric>
- Metrics sources to enable.
- metrics Sequence[Metric]
- Metrics sources to enable.
- metrics List<Property Map>
- Metrics sources to enable.
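Each Metric entry names a metric source and may carry metric overrides. A TypeScript sketch enabling Spark and YARN collection; the "SPARK" and "YARN" strings are the Dataproc v1 MetricSource enum names, passed here as literals:
import * as google_native from "@pulumi/google-native";

const metricsCluster = new google_native.dataproc.v1.Cluster("metrics-cluster", {
    region: "us-central1",
    config: {
        dataprocMetricConfig: {
            metrics: [
                { metricSource: "SPARK" },  // collect Spark metrics
                { metricSource: "YARN" },   // collect YARN metrics
            ],
        },
    },
});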
DataprocMetricConfigResponse, DataprocMetricConfigResponseArgs
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.MetricResponse>
- Metrics sources to enable.
- Metrics []MetricResponse
- Metrics sources to enable.
- metrics List<MetricResponse>
- Metrics sources to enable.
- metrics MetricResponse[]
- Metrics sources to enable.
- metrics Sequence[MetricResponse]
- Metrics sources to enable.
- metrics List<Property Map>
- Metrics sources to enable.
DiskConfig, DiskConfigArgs
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
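DiskConfig is set per instance group. A TypeScript sketch that gives workers SSD boot disks and one local SSD each; the sizes and types are illustrative, not recommendations:
import * as google_native from "@pulumi/google-native";

const diskCluster = new google_native.dataproc.v1.Cluster("disk-cluster", {
    region: "us-central1",
    config: {
        workerConfig: {
            numInstances: 2,
            diskConfig: {
                bootDiskType: "pd-ssd",  // default is "pd-standard"
                bootDiskSizeGb: 200,     // default is 500
                numLocalSsds: 1,         // spreads HDFS/runtime data onto the local SSD
            },
        },
    },
});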
DiskConfigResponse, DiskConfigResponseArgs
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
EncryptionConfig, EncryptionConfigArgs
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
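Cluster disks can be protected with a customer-managed key by passing its full Cloud KMS resource name. The project, key ring, and key in this TypeScript sketch are placeholders:
import * as google_native from "@pulumi/google-native";

const cmekCluster = new google_native.dataproc.v1.Cluster("cmek-cluster", {
    region: "us-central1",
    config: {
        encryptionConfig: {
            // A key in the same region as the cluster.
            gcePdKmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
        },
    },
});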
EncryptionConfigResponse, EncryptionConfigResponseArgs
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
EndpointConfig, EndpointConfigArgs
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable_http_port_access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
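Enabling HTTP port access is a single flag; the resolved URLs come back on the response type documented next (httpPorts). A TypeScript sketch, with the output read back via apply:
import * as google_native from "@pulumi/google-native";

const gatewayCluster = new google_native.dataproc.v1.Cluster("gateway-cluster", {
    region: "us-central1",
    config: {
        endpointConfig: { enableHttpPortAccess: true },
    },
});

// The map of port descriptions to URLs is only populated when the flag is true.
export const httpPorts = gatewayCluster.config.apply(c => c.endpointConfig?.httpPorts);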
EndpointConfigResponse, EndpointConfigResponseArgs
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts Dictionary<string, string> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts map[string]string - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String,String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts {[key: string]: string} - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_ports Mapping[str, str] - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
GceClusterConfig, GceClusterConfigArgs
- Confidential
Instance Pulumi.Config Google Native. Dataproc. V1. Inputs. Confidential Instance Config - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- Node
Group Pulumi.Affinity Google Native. Dataproc. V1. Inputs. Node Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google Pulumi.Access Google Native. Dataproc. V1. Gce Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity Pulumi.Google Native. Dataproc. V1. Inputs. Reservation Affinity - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account List<string>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Pulumi.Config Google Native. Dataproc. V1. Inputs. Shielded Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- Confidential
Instance ConfidentialConfig Instance Config - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- Node
Group NodeAffinity Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google GceAccess Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account []stringScopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance ShieldedConfig Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string} - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags string[] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential_instance_config ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str] - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- node_group_affinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- service_account str - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags Sequence[str] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig Property Map - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity Property Map - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL" - Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
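The properties listed above map onto the gce_cluster_config field of the cluster's config input. The following minimal sketch uses the Python SDK to show how a few of them fit together; the project, region, subnetwork, and service account values are placeholders, and the arg class names are assumed to follow the GceClusterConfigArgs naming used throughout this reference.

```python
import pulumi_google_native as google_native

# A sketch (not a complete production setup): a Dataproc cluster whose VMs use
# only internal IP addresses on a custom subnetwork. All resource names below
# are placeholders.
cluster = google_native.dataproc.v1.Cluster(
    "example-cluster",
    cluster_name="example-cluster",
    project="my-project",       # placeholder project
    region="us-central1",       # placeholder region
    config=google_native.dataproc.v1.ClusterConfigArgs(
        gce_cluster_config=google_native.dataproc.v1.GceClusterConfigArgs(
            internal_ip_only=True,
            subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/sub0",
            service_account="dataproc-vm@my-project.iam.gserviceaccount.com",
            service_account_scopes=["https://www.googleapis.com/auth/cloud-platform"],
            tags=["dataproc", "internal-only"],
            private_ipv6_google_access="OUTBOUND",  # one of the enum values documented below
        ),
    ),
)
```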
GceClusterConfigPrivateIpv6GoogleAccess, GceClusterConfigPrivateIpv6GoogleAccessArgs
- PrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork - INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork - INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- GceClusterConfigPrivateIpv6GoogleAccessOutbound - OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- GceClusterConfigPrivateIpv6GoogleAccessBidirectional - BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork - INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork - INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED - If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- INHERIT_FROM_SUBNETWORK - Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- OUTBOUND - Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- BIDIRECTIONAL - Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" - If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- "INHERIT_FROM_SUBNETWORK" - Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- "OUTBOUND" - Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- "BIDIRECTIONAL" - Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
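In the typed SDKs these four values are also exposed as enum members. A small sketch in Python, assuming the enum class is exported from the same dataproc.v1 module as the other input types in this reference; the raw string form shown earlier is equally valid.

```python
from pulumi_google_native.dataproc import v1 as dataproc

# Request outbound-only private IPv6 access to Google Services using the enum
# member that mirrors the OUTBOUND value documented above.
gce_config = dataproc.GceClusterConfigArgs(
    private_ipv6_google_access=dataproc.GceClusterConfigPrivateIpv6GoogleAccess.OUTBOUND,
    # private_ipv6_google_access="OUTBOUND",  # the plain string is also accepted
)
```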
GceClusterConfigResponse, GceClusterConfigResponseArgs
- ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigResponse - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityResponse - Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess string - Optional. The type of IPv6 access for a cluster.
- ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinityResponse - Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes List<string> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigResponse - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- Tags List<string> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- ConfidentialInstanceConfig ConfidentialInstanceConfigResponse - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroupAffinity NodeGroupAffinityResponse - Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess string - Optional. The type of IPv6 access for a cluster.
- ReservationAffinity ReservationAffinityResponse - Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes []string - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig ShieldedInstanceConfigResponse - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- Tags []string - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfigResponse - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinityResponse - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String - Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfigResponse - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string} - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinityResponse - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess string - Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags string[] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential_instance_config ConfidentialInstanceConfigResponse - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str] - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- node_group_affinity NodeGroupAffinityResponse - Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access str - Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinityResponse - Optional. Reservation Affinity for consuming Zonal reservation.
- service_account str - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfigResponse - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags Sequence[str] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig Property Map - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity Property Map - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String - Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
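The Response variant above is output-only: it describes what the service reports back for a created cluster rather than what you pass in. A small sketch of reading one of these fields from the resource's outputs, assuming a cluster resource like the one constructed earlier in this reference:

```python
import pulumi

# zone_uri is populated by the service even when omitted on create ("On a get
# request, zone will always be present"), so it can be exported from the
# GceClusterConfigResponse nested inside the cluster's config output.
pulumi.export(
    "clusterZoneUri",
    cluster.config.apply(
        lambda c: c.gce_cluster_config.zone_uri if c and c.gce_cluster_config else None
    ),
)
```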
GkeClusterConfig, GkeClusterConfigArgs
- GkeClusterTarget string - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTarget - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTarget> - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTarget - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTarget> - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTarget[] - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTarget - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTarget] - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map> - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
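For Dataproc on GKE, this type is supplied through the cluster's virtual_cluster_config rather than config. The sketch below only constructs the GkeClusterConfig input itself; the GKE cluster and node pool paths are placeholders, and wiring the value into VirtualClusterConfigArgs goes through the provider's Kubernetes cluster config input (an assumption based on the Dataproc v1 API shape, not shown in this section).

```python
from pulumi_google_native.dataproc import v1 as dataproc

# A sketch of a GkeClusterConfig input: one existing GKE cluster target plus a
# single node pool target carrying the required DEFAULT role. Both resource
# paths below are placeholders.
gke_cluster_config = dataproc.GkeClusterConfigArgs(
    gke_cluster_target="projects/my-project/locations/us-central1/clusters/my-gke-cluster",
    node_pool_target=[
        dataproc.GkeNodePoolTargetArgs(
            node_pool="projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
            roles=["DEFAULT"],
        )
    ],
)
```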
GkeClusterConfigResponse, GkeClusterConfigResponseArgs
- GkeClusterTarget string - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse> - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTargetResponse - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTargetResponse> - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTargetResponse[] - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTargetResponse] - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String - Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map - Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map> - Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
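As with the GCE variant, the Response type is what you read back from a created cluster. Assuming the value is nested under the virtual cluster config's Kubernetes cluster config output (an assumption based on the API shape, not stated in this section), the effective target can be exported roughly like this:

```python
import pulumi

# Export the GKE cluster target echoed back by the service. The nesting under
# kubernetes_cluster_config is assumed from the Dataproc v1 API shape.
pulumi.export(
    "gkeClusterTarget",
    cluster.virtual_cluster_config.apply(
        lambda v: v.kubernetes_cluster_config.gke_cluster_config.gke_cluster_target
        if v and v.kubernetes_cluster_config and v.kubernetes_cluster_config.gke_cluster_config
        else None
    ),
)
```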
GkeNodeConfig, GkeNodeConfigArgs
- Accelerators
List<Pulumi.
Google Native. Dataproc. V1. Inputs. Gke Node Pool Accelerator Config> - Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- Boot
Disk stringKms Key - Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- Local
Ssd intCount - Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- Machine
Type string - Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- Min
Cpu stringPlatform - Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell"` or Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Accelerators []GkeNodePoolAcceleratorConfig
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<GkeNodePoolAcceleratorConfig>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Integer
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators GkeNodePoolAcceleratorConfig[]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators Sequence[GkeNodePoolAcceleratorConfig]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- boot_disk_kms_key str
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- local_ssd_count int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machine_type str
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- min_cpu_platform str
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<Property Map>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
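As a rough illustration, a hypothetical GkeNodeConfig object literal in TypeScript, matching the fields above with placeholder values; it would typically be supplied as the config field of a GkeNodePoolConfig:
// Placeholder node configuration for a Dataproc-managed GKE node pool.
const nodeConfig = {
    machineType: "n1-standard-4",     // Compute Engine machine type name
    localSsdCount: 1,                 // local SSDs per node
    minCpuPlatform: "Intel Haswell",  // friendly CPU platform name
    spot: true,                       // Spot VMs; not allowed for the pool carrying the CONTROLLER role
};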
GkeNodeConfigResponse, GkeNodeConfigResponseArgs
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Accelerators []GkeNodePoolAcceleratorConfigResponse
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<GkeNodePoolAcceleratorConfigResponse>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Integer
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators GkeNodePoolAcceleratorConfigResponse[]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators Sequence[GkeNodePoolAcceleratorConfigResponse]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- boot_disk_kms_key str
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- local_ssd_count int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machine_type str
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- min_cpu_platform str
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<Property Map>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
GkeNodePoolAcceleratorConfig, GkeNodePoolAcceleratorConfigArgs
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount string
- The number of accelerator cards exposed to an instance.
- acceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- accelerator_count str
- The number of accelerator cards exposed to an instance.
- accelerator_type str
- The accelerator type resource name (see GPUs on Compute Engine).
- gpu_partition_size str
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
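Note that the accelerator count is a string in this schema. A hypothetical accelerator entry in TypeScript, with placeholder type and partition values, might look like:
const gpuAccelerator = {
    acceleratorCount: "1",               // the count is a string, per the schema above
    acceleratorType: "nvidia-tesla-t4",  // Compute Engine accelerator type resource name (placeholder)
    gpuPartitionSize: "1g.5gb",          // optional NVIDIA MIG partition size (placeholder)
};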
GkeNodePoolAcceleratorConfigResponse, GkeNodePoolAcceleratorConfigResponseArgs
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount string
- The number of accelerator cards exposed to an instance.
- acceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- accelerator_count str
- The number of accelerator cards exposed to an instance.
- accelerator_type str
- The accelerator type resource name (see GPUs on Compute Engine).
- gpu_partition_size str
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
GkeNodePoolAutoscalingConfig, GkeNodePoolAutoscalingConfigArgs
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Integer
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Integer
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- max_node_count int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- min_node_count int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
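A minimal sketch of the bounds in TypeScript, keeping 0 <= minNodeCount <= maxNodeCount and maxNodeCount > 0 (the numbers are placeholders):
const autoscaling = {
    minNodeCount: 0,  // the pool may scale to zero
    maxNodeCount: 5,  // must be > 0 and covered by available quota
};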
GkeNodePoolAutoscalingConfigResponse, GkeNodePoolAutoscalingConfigResponseArgs
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Integer
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Integer
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- max_node_count int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- min_node_count int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
GkeNodePoolConfig, GkeNodePoolConfigArgs
- Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfig
- Optional. The node pool configuration.
- Locations List<string>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- Autoscaling GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config GkeNodeConfig
- Optional. The node pool configuration.
- Locations []string
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfig
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfig
- Optional. The node pool configuration.
- locations string[]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfig
- Optional. The node pool configuration.
- locations Sequence[str]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling Property Map
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config Property Map
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
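Combining the three fields above, a hypothetical GkeNodePoolConfig in TypeScript (placeholder zone and machine shapes) might read:
const nodePoolConfig = {
    locations: ["us-central1-a"],  // all pools of a virtual cluster must share one zone in its region
    autoscaling: { minNodeCount: 1, maxNodeCount: 3 },
    config: { machineType: "n1-standard-4", spot: false },
};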
GkeNodePoolConfigResponse, GkeNodePoolConfigResponseArgs
- Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse
- Optional. The node pool configuration.
- Locations List<string>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- Autoscaling GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config GkeNodeConfigResponse
- Optional. The node pool configuration.
- Locations []string
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfigResponse
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfigResponse
- Optional. The node pool configuration.
- locations string[]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config GkeNodeConfigResponse
- Optional. The node pool configuration.
- locations Sequence[str]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling Property Map
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config Property Map
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
GkeNodePoolTarget, GkeNodePoolTargetArgs
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- Roles List<Pulumi.GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem>
- The roles associated with the GKE node pool.
- NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- Roles []GkeNodePoolTargetRolesItem
- The roles associated with the GKE node pool.
- NodePoolConfig GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles List<GkeNodePoolTargetRolesItem>
- The roles associated with the GKE node pool.
- nodePoolConfig GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- nodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles GkeNodePoolTargetRolesItem[]
- The roles associated with the GKE node pool.
- nodePoolConfig GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- node_pool str
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles Sequence[GkeNodePoolTargetRolesItem]
- The roles associated with the GKE node pool.
- node_pool_config GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles List<"ROLE_UNSPECIFIED" | "DEFAULT" | "CONTROLLER" | "SPARK_DRIVER" | "SPARK_EXECUTOR">
- The roles associated with the GKE node pool.
- nodePoolConfig Property Map
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
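Putting the pieces together, a hypothetical DEFAULT node pool target in TypeScript (placeholder project, region, and pool names) could be declared as follows; nodePoolConfig is input only and will not be echoed back by the API:
const defaultPoolTarget = {
    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dp-default",
    roles: ["DEFAULT"],
    nodePoolConfig: {   // input only; used if the node pool does not already exist
        locations: ["us-central1-a"],
        config: { machineType: "n1-standard-4" },
    },
};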
GkeNodePoolTargetResponse, GkeNodePoolTargetResponseArgs
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- Roles List<string>
- The roles associated with the GKE node pool.
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- NodePoolConfig GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- Roles []string
- The roles associated with the GKE node pool.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- nodePoolConfig GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles List<String>
- The roles associated with the GKE node pool.
- nodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- nodePoolConfig GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles string[]
- The roles associated with the GKE node pool.
- node_pool str
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- node_pool_config GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles Sequence[str]
- The roles associated with the GKE node pool.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- nodePoolConfig Property Map
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
- roles List<String>
- The roles associated with the GKE node pool.
GkeNodePoolTargetRolesItem, GkeNodePoolTargetRolesItemArgs
- RoleUnspecified
- ROLE_UNSPECIFIED - Role is unspecified.
- Default
- DEFAULT - At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- Controller
- CONTROLLER - Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SparkDriver
- SPARK_DRIVER - Run work associated with a Spark driver of a job.
- SparkExecutor
- SPARK_EXECUTOR - Run work associated with a Spark executor of a job.
- GkeNodePoolTargetRolesItemRoleUnspecified
- ROLE_UNSPECIFIED - Role is unspecified.
- GkeNodePoolTargetRolesItemDefault
- DEFAULT - At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- GkeNodePoolTargetRolesItemController
- CONTROLLER - Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- GkeNodePoolTargetRolesItemSparkDriver
- SPARK_DRIVER - Run work associated with a Spark driver of a job.
- GkeNodePoolTargetRolesItemSparkExecutor
- SPARK_EXECUTOR - Run work associated with a Spark executor of a job.
- RoleUnspecified
- ROLE_UNSPECIFIED - Role is unspecified.
- Default
- DEFAULT - At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- Controller
- CONTROLLER - Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SparkDriver
- SPARK_DRIVER - Run work associated with a Spark driver of a job.
- SparkExecutor
- SPARK_EXECUTOR - Run work associated with a Spark executor of a job.
- RoleUnspecified
- ROLE_UNSPECIFIED - Role is unspecified.
- Default
- DEFAULT - At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- Controller
- CONTROLLER - Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SparkDriver
- SPARK_DRIVER - Run work associated with a Spark driver of a job.
- SparkExecutor
- SPARK_EXECUTOR - Run work associated with a Spark executor of a job.
- ROLE_UNSPECIFIED
- ROLE_UNSPECIFIED - Role is unspecified.
- DEFAULT
- DEFAULT - At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- CONTROLLER
- CONTROLLER - Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SPARK_DRIVER
- SPARK_DRIVER - Run work associated with a Spark driver of a job.
- SPARK_EXECUTOR
- SPARK_EXECUTOR - Run work associated with a Spark executor of a job.
- "ROLE_UNSPECIFIED"
- ROLE_UNSPECIFIED - Role is unspecified.
- "DEFAULT"
- DEFAULT - At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- "CONTROLLER"
- CONTROLLER - Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- "SPARK_DRIVER"
- SPARK_DRIVER - Run work associated with a Spark driver of a job.
- "SPARK_EXECUTOR"
- SPARK_EXECUTOR - Run work associated with a Spark executor of a job.
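Assuming the TypeScript SDK exposes this enum under the same name shown above, a role can be supplied either as the generated enum member or as its raw string value, for example:
import { dataproc } from "@pulumi/google-native";

// Both forms should resolve to the same wire value.
const driverRole = dataproc.v1.GkeNodePoolTargetRolesItem.SparkDriver;  // "SPARK_DRIVER"
const executorRole = "SPARK_EXECUTOR";                                  // equivalent raw string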
IdentityConfig, IdentityConfigArgs
- UserServiceAccountMapping Dictionary<string, string>
- Map of user to service account.
- UserServiceAccountMapping map[string]string
- Map of user to service account.
- userServiceAccountMapping Map<String,String>
- Map of user to service account.
- userServiceAccountMapping {[key: string]: string}
- Map of user to service account.
- user_service_account_mapping Mapping[str, str]
- Map of user to service account.
- userServiceAccountMapping Map<String>
- Map of user to service account.
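A hypothetical user-to-service-account mapping in TypeScript (the identities below are placeholders):
const identityConfig = {
    userServiceAccountMapping: {
        "alice@example.com": "alice-sa@my-project.iam.gserviceaccount.com",
        "bob@example.com": "bob-sa@my-project.iam.gserviceaccount.com",
    },
};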
IdentityConfigResponse, IdentityConfigResponseArgs
- UserServiceAccountMapping Dictionary<string, string>
- Map of user to service account.
- UserServiceAccountMapping map[string]string
- Map of user to service account.
- userServiceAccountMapping Map<String,String>
- Map of user to service account.
- userServiceAccountMapping {[key: string]: string}
- Map of user to service account.
- user_service_account_mapping Mapping[str, str]
- Map of user to service account.
- userServiceAccountMapping Map<String>
- Map of user to service account.
InstanceFlexibilityPolicy, InstanceFlexibilityPolicyArgs
- InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelection>
- Optional. List of instance selection options that the group will use when creating new VMs.
- InstanceSelectionList []InstanceSelection
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionList List<InstanceSelection>
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionList InstanceSelection[]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instance_selection_list Sequence[InstanceSelection]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionList List<Property Map>
- Optional. List of instance selection options that the group will use when creating new VMs.
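A sketch of an instanceSelectionList in TypeScript, assuming each InstanceSelection entry carries machineTypes and rank as documented for the InstanceSelection type elsewhere on this page; the machine types are placeholders:
const instanceFlexibilityPolicy = {
    instanceSelectionList: [
        { machineTypes: ["n2-standard-8", "n2d-standard-8"], rank: 1 },  // preferred shapes
        { machineTypes: ["e2-standard-8"], rank: 2 },                    // fallback shape
    ],
};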
InstanceFlexibilityPolicyResponse, InstanceFlexibilityPolicyResponseArgs
- InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResponse>
- Optional. List of instance selection options that the group will use when creating new VMs.
- InstanceSelectionResults List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResultResponse>
- A list of instance selection results in the group.
- InstanceSelectionList []InstanceSelectionResponse
- Optional. List of instance selection options that the group will use when creating new VMs.
- InstanceSelectionResults []InstanceSelectionResultResponse
- A list of instance selection results in the group.
- instanceSelectionList List<InstanceSelectionResponse>
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionResults List<InstanceSelectionResultResponse>
- A list of instance selection results in the group.
- instanceSelectionList InstanceSelectionResponse[]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionResults InstanceSelectionResultResponse[]
- A list of instance selection results in the group.
- instance_selection_list Sequence[InstanceSelectionResponse]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instance_selection_results Sequence[InstanceSelectionResultResponse]
- A list of instance selection results in the group.
- instanceSelectionList List<Property Map>
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionResults List<Property Map>
- A list of instance selection results in the group.
InstanceGroupConfig, InstanceGroupConfigArgs
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfig> - Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfig - Optional. Disk option config settings.
- ImageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicy - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- MachineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- MinCpuPlatform string - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility Pulumi.GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfig - Optional. Configuration to handle the startup of instances during cluster create and update process.
- Accelerators []AcceleratorConfig - Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig DiskConfig - Optional. Disk option config settings.
- ImageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy InstanceFlexibilityPolicy - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- MachineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- MinCpuPlatform string - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility InstanceGroupConfigPreemptibility - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig StartupConfig - Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators List<AcceleratorConfig> - Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfig - Optional. Disk option config settings.
- imageUri String - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicy - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machineTypeUri String - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- minCpuPlatform String - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Integer - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Integer - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility InstanceGroupConfigPreemptibility - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfig - Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators AcceleratorConfig[] - Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfig - Optional. Disk option config settings.
- imageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicy - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- minCpuPlatform string - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances number - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility InstanceGroupConfigPreemptibility - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfig - Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators Sequence[AcceleratorConfig] - Optional. The Compute Engine accelerator configuration for these instances.
- disk_config DiskConfig - Optional. Disk option config settings.
- image_uri str - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_flexibility_policy InstanceFlexibilityPolicy - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machine_type_uri str - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- min_cpu_platform str - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- min_num_instances int - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- num_instances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility InstanceGroupConfigPreemptibility - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startup_config StartupConfig - Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators List<Property Map> - Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig Property Map - Optional. Disk option config settings.
- imageUri String - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy Property Map - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machineTypeUri String - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- minCpuPlatform String - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Number - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE" | "SPOT" - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig Property Map - Optional. Configuration to handle the startup of instances during cluster create and update process.
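To make the shape of these inputs concrete, here is a minimal, hypothetical primary-worker configuration in the same C# style as the constructor example above. It demonstrates numInstances together with minNumInstances (creation succeeds if at least three of five workers come up) and assumes the DiskConfig input type documented elsewhere on this page; all values are placeholders.
var workerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
    NumInstances = 5,
    MinNumInstances = 3,
    // Short machine type name, as required when Auto Zone Placement is used.
    MachineTypeUri = "n1-standard-4",
    DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
    {
        BootDiskType = "pd-standard",
        BootDiskSizeGb = 500,
    },
    // Primary worker groups must remain non-preemptible.
    Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.NonPreemptible,
};
Such a value would typically be supplied as the WorkerConfig of the ClusterConfigArgs shown at the top of this page.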
InstanceGroupConfigPreemptibility, InstanceGroupConfigPreemptibilityArgs
- PreemptibilityUnspecified (PREEMPTIBILITY_UNSPECIFIED) - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible (NON_PREEMPTIBLE) - Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible (PREEMPTIBLE) - Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- Spot (SPOT) - Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- InstanceGroupConfigPreemptibilityPreemptibilityUnspecified (PREEMPTIBILITY_UNSPECIFIED) - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- InstanceGroupConfigPreemptibilityNonPreemptible (NON_PREEMPTIBLE) - Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- InstanceGroupConfigPreemptibilityPreemptible (PREEMPTIBLE) - Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- InstanceGroupConfigPreemptibilitySpot (SPOT) - Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- PreemptibilityUnspecified (PREEMPTIBILITY_UNSPECIFIED) - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible (NON_PREEMPTIBLE) - Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible (PREEMPTIBLE) - Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- Spot (SPOT) - Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- PreemptibilityUnspecified (PREEMPTIBILITY_UNSPECIFIED) - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible (NON_PREEMPTIBLE) - Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible (PREEMPTIBLE) - Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- Spot (SPOT) - Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NON_PREEMPTIBLE - Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- PREEMPTIBLE - Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- SPOT - Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- "PREEMPTIBILITY_UNSPECIFIED" - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- "NON_PREEMPTIBLE" - Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- "PREEMPTIBLE" - Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- "SPOT" - Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
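Because PREEMPTIBLE and SPOT are valid only for secondary worker groups, a Spot-based secondary worker group might be sketched as follows, again in the page's C# style and with placeholder sizes and machine type:
var secondaryWorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
    NumInstances = 10,
    MachineTypeUri = "n1-standard-4",
    // Spot VMs are allowed only for secondary workers; master and primary
    // worker groups keep the NON_PREEMPTIBLE default.
    Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.Spot,
};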
InstanceGroupConfigResponse, InstanceGroupConfigResponseArgs
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigResponse> - Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfigResponse - Optional. Disk option config settings.
- ImageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyResponse - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- InstanceNames List<string> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- InstanceReferences List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceReferenceResponse> - List of references to Compute Engine instances.
- IsPreemptible bool - Specifies that this instance group contains preemptible instances.
- MachineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedGroupConfigResponse - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- MinCpuPlatform string - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfigResponse - Optional. Configuration to handle the startup of instances during cluster create and update process.
- Accelerators []AcceleratorConfigResponse - Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig DiskConfigResponse - Optional. Disk option config settings.
- ImageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy InstanceFlexibilityPolicyResponse - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- InstanceNames []string - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- InstanceReferences []InstanceReferenceResponse - List of references to Compute Engine instances.
- IsPreemptible bool - Specifies that this instance group contains preemptible instances.
- MachineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- ManagedGroupConfig ManagedGroupConfigResponse - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- MinCpuPlatform string - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig StartupConfigResponse - Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators List<AcceleratorConfigResponse> - Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfigResponse - Optional. Disk option config settings.
- imageUri String - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicyResponse - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- instanceNames List<String> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences List<InstanceReferenceResponse> - List of references to Compute Engine instances.
- isPreemptible Boolean - Specifies that this instance group contains preemptible instances.
- machineTypeUri String - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig ManagedGroupConfigResponse - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform String - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Integer - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Integer - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfigResponse - Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators AcceleratorConfigResponse[] - Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfigResponse - Optional. Disk option config settings.
- imageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicyResponse - Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- instanceNames string[] - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences InstanceReferenceResponse[] - List of references to Compute Engine instances.
- isPreemptible boolean - Specifies that this instance group contains preemptible instances.
- machineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig ManagedGroupConfigResponse - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform string - Optional. Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances number - Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.