Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1.WorkflowTemplate
Creates a new workflow template. Auto-naming is currently not supported for this resource.
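Because auto-naming is not supported, the template id is typically supplied explicitly as an input. As a minimal TypeScript sketch of a working declaration (the template id, region, zone, bucket, and query URI below are illustrative placeholders, not values taken from this reference):

```typescript
import * as google_native from "@pulumi/google-native";

// Minimal sketch: a single-step Hive workflow that runs on an ephemeral
// managed cluster. All literal values are illustrative placeholders.
const template = new google_native.dataproc.v1.WorkflowTemplate("example-template", {
    // Auto-naming is not supported, so the template id is set explicitly.
    id: "example-template",
    location: "us-central1",
    placement: {
        managedCluster: {
            clusterName: "ephemeral-cluster",
            config: {
                gceClusterConfig: {
                    zoneUri: "us-central1-a",
                },
            },
        },
    },
    jobs: [{
        stepId: "run-hive-query",
        hiveJob: {
            queryFileUri: "gs://my-bucket/queries/report.hql",
        },
    }],
});
```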
Create WorkflowTemplate Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new WorkflowTemplate(name: string, args: WorkflowTemplateArgs, opts?: CustomResourceOptions);
@overload
def WorkflowTemplate(resource_name: str,
args: WorkflowTemplateArgs,
opts: Optional[ResourceOptions] = None)
@overload
def WorkflowTemplate(resource_name: str,
opts: Optional[ResourceOptions] = None,
jobs: Optional[Sequence[OrderedJobArgs]] = None,
placement: Optional[WorkflowTemplatePlacementArgs] = None,
dag_timeout: Optional[str] = None,
encryption_config: Optional[GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs] = None,
id: Optional[str] = None,
labels: Optional[Mapping[str, str]] = None,
location: Optional[str] = None,
parameters: Optional[Sequence[TemplateParameterArgs]] = None,
project: Optional[str] = None,
version: Optional[int] = None)
func NewWorkflowTemplate(ctx *Context, name string, args WorkflowTemplateArgs, opts ...ResourceOption) (*WorkflowTemplate, error)
public WorkflowTemplate(string name, WorkflowTemplateArgs args, CustomResourceOptions? opts = null)
public WorkflowTemplate(String name, WorkflowTemplateArgs args)
public WorkflowTemplate(String name, WorkflowTemplateArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1:WorkflowTemplate
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
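The options bag is always the final constructor argument and tunes resource behavior independently of the resource's input properties. A minimal TypeScript sketch, assuming a pre-existing cluster carrying the label `goog-dataproc-cluster-name: shared-cluster` (all property values are placeholders); here `protect: true` guards the template against accidental deletion:

```typescript
import * as google_native from "@pulumi/google-native";

// Minimal sketch: the third constructor argument is the options bag.
// `protect: true` prevents the template from being destroyed until the
// option is removed. All property values are illustrative placeholders.
const protectedTemplate = new google_native.dataproc.v1.WorkflowTemplate("protected-template", {
    id: "protected-template",
    location: "us-central1",
    placement: {
        clusterSelector: {
            clusterLabels: {
                "goog-dataproc-cluster-name": "shared-cluster",
            },
        },
    },
    jobs: [{
        stepId: "select-one",
        sparkSqlJob: {
            queryList: {
                queries: ["SELECT 1"],
            },
        },
    }],
}, { protect: true });
```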
Constructor example
The following reference example uses placeholder values for all input properties.
var workflowTemplateResource = new GoogleNative.Dataproc.V1.WorkflowTemplate("workflowTemplateResource", new()
{
Jobs = new[]
{
new GoogleNative.Dataproc.V1.Inputs.OrderedJobArgs
{
StepId = "string",
PrestoJob = new GoogleNative.Dataproc.V1.Inputs.PrestoJobArgs
{
ClientTags = new[]
{
"string",
},
ContinueOnFailure = false,
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
OutputFormat = "string",
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
},
HiveJob = new GoogleNative.Dataproc.V1.Inputs.HiveJobArgs
{
ContinueOnFailure = false,
JarFileUris = new[]
{
"string",
},
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
ScriptVariables =
{
{ "string", "string" },
},
},
Labels =
{
{ "string", "string" },
},
PigJob = new GoogleNative.Dataproc.V1.Inputs.PigJobArgs
{
ContinueOnFailure = false,
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
ScriptVariables =
{
{ "string", "string" },
},
},
PrerequisiteStepIds = new[]
{
"string",
},
FlinkJob = new GoogleNative.Dataproc.V1.Inputs.FlinkJobArgs
{
Args = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
MainClass = "string",
MainJarFileUri = "string",
Properties =
{
{ "string", "string" },
},
SavepointUri = "string",
},
PysparkJob = new GoogleNative.Dataproc.V1.Inputs.PySparkJobArgs
{
MainPythonFileUri = "string",
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
PythonFileUris = new[]
{
"string",
},
},
Scheduling = new GoogleNative.Dataproc.V1.Inputs.JobSchedulingArgs
{
MaxFailuresPerHour = 0,
MaxFailuresTotal = 0,
},
SparkJob = new GoogleNative.Dataproc.V1.Inputs.SparkJobArgs
{
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
MainClass = "string",
MainJarFileUri = "string",
Properties =
{
{ "string", "string" },
},
},
SparkRJob = new GoogleNative.Dataproc.V1.Inputs.SparkRJobArgs
{
MainRFileUri = "string",
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
},
SparkSqlJob = new GoogleNative.Dataproc.V1.Inputs.SparkSqlJobArgs
{
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
ScriptVariables =
{
{ "string", "string" },
},
},
HadoopJob = new GoogleNative.Dataproc.V1.Inputs.HadoopJobArgs
{
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
MainClass = "string",
MainJarFileUri = "string",
Properties =
{
{ "string", "string" },
},
},
TrinoJob = new GoogleNative.Dataproc.V1.Inputs.TrinoJobArgs
{
ClientTags = new[]
{
"string",
},
ContinueOnFailure = false,
LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
OutputFormat = "string",
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
},
},
},
Placement = new GoogleNative.Dataproc.V1.Inputs.WorkflowTemplatePlacementArgs
{
ClusterSelector = new GoogleNative.Dataproc.V1.Inputs.ClusterSelectorArgs
{
ClusterLabels =
{
{ "string", "string" },
},
Zone = "string",
},
ManagedCluster = new GoogleNative.Dataproc.V1.Inputs.ManagedClusterArgs
{
ClusterName = "string",
Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
{
AutoscalingConfig = new GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigArgs
{
PolicyUri = "string",
},
AuxiliaryNodeGroups = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupArgs
{
NodeGroup = new GoogleNative.Dataproc.V1.Inputs.NodeGroupArgs
{
Roles = new[]
{
GoogleNative.Dataproc.V1.NodeGroupRolesItem.RoleUnspecified,
},
Labels =
{
{ "string", "string" },
},
Name = "string",
NodeGroupConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
},
NodeGroupId = "string",
},
},
ConfigBucket = "string",
DataprocMetricConfig = new GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigArgs
{
Metrics = new[]
{
new GoogleNative.Dataproc.V1.Inputs.MetricArgs
{
MetricSource = GoogleNative.Dataproc.V1.MetricMetricSource.MetricSourceUnspecified,
MetricOverrides = new[]
{
"string",
},
},
},
},
EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.EncryptionConfigArgs
{
GcePdKmsKeyName = "string",
KmsKey = "string",
},
EndpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
{
EnableHttpPortAccess = false,
},
GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
ConfidentialInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigArgs
{
EnableConfidentialCompute = false,
},
InternalIpOnly = false,
Metadata =
{
{ "string", "string" },
},
NetworkUri = "string",
NodeGroupAffinity = new GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityArgs
{
NodeGroupUri = "string",
},
PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
ReservationAffinity = new GoogleNative.Dataproc.V1.Inputs.ReservationAffinityArgs
{
ConsumeReservationType = GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType.TypeUnspecified,
Key = "string",
Values = new[]
{
"string",
},
},
ServiceAccount = "string",
ServiceAccountScopes = new[]
{
"string",
},
ShieldedInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigArgs
{
EnableIntegrityMonitoring = false,
EnableSecureBoot = false,
EnableVtpm = false,
},
SubnetworkUri = "string",
Tags = new[]
{
"string",
},
ZoneUri = "string",
},
GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
{
GkeClusterTarget = "string",
NodePoolTarget = new[]
{
new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
{
NodePool = "string",
Roles = new[]
{
GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
},
NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
{
Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
{
MaxNodeCount = 0,
MinNodeCount = 0,
},
Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
{
AcceleratorCount = "string",
AcceleratorType = "string",
GpuPartitionSize = "string",
},
},
BootDiskKmsKey = "string",
LocalSsdCount = 0,
MachineType = "string",
MinCpuPlatform = "string",
Preemptible = false,
Spot = false,
},
Locations = new[]
{
"string",
},
},
},
},
},
InitializationActions = new[]
{
new GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionArgs
{
ExecutableFile = "string",
ExecutionTimeout = "string",
},
},
LifecycleConfig = new GoogleNative.Dataproc.V1.Inputs.LifecycleConfigArgs
{
AutoDeleteTime = "string",
AutoDeleteTtl = "string",
IdleDeleteTtl = "string",
},
MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
{
DataprocMetastoreService = "string",
},
SecondaryWorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
SecurityConfig = new GoogleNative.Dataproc.V1.Inputs.SecurityConfigArgs
{
IdentityConfig = new GoogleNative.Dataproc.V1.Inputs.IdentityConfigArgs
{
UserServiceAccountMapping =
{
{ "string", "string" },
},
},
KerberosConfig = new GoogleNative.Dataproc.V1.Inputs.KerberosConfigArgs
{
CrossRealmTrustAdminServer = "string",
CrossRealmTrustKdc = "string",
CrossRealmTrustRealm = "string",
CrossRealmTrustSharedPasswordUri = "string",
EnableKerberos = false,
KdcDbKeyUri = "string",
KeyPasswordUri = "string",
KeystorePasswordUri = "string",
KeystoreUri = "string",
KmsKeyUri = "string",
Realm = "string",
RootPrincipalPasswordUri = "string",
TgtLifetimeHours = 0,
TruststorePasswordUri = "string",
TruststoreUri = "string",
},
},
SoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.SoftwareConfigArgs
{
ImageVersion = "string",
OptionalComponents = new[]
{
GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
},
Properties =
{
{ "string", "string" },
},
},
TempBucket = "string",
WorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
LocalSsdInterface = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
InstanceSelectionList = new[]
{
new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
{
MachineTypes = new[]
{
"string",
},
Rank = 0,
},
},
},
MachineTypeUri = "string",
MinCpuPlatform = "string",
MinNumInstances = 0,
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
{
RequiredRegistrationFraction = 0,
},
},
},
Labels =
{
{ "string", "string" },
},
},
},
DagTimeout = "string",
EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs
{
KmsKey = "string",
},
Id = "string",
Labels =
{
{ "string", "string" },
},
Location = "string",
Parameters = new[]
{
new GoogleNative.Dataproc.V1.Inputs.TemplateParameterArgs
{
Fields = new[]
{
"string",
},
Name = "string",
Description = "string",
Validation = new GoogleNative.Dataproc.V1.Inputs.ParameterValidationArgs
{
Regex = new GoogleNative.Dataproc.V1.Inputs.RegexValidationArgs
{
Regexes = new[]
{
"string",
},
},
Values = new GoogleNative.Dataproc.V1.Inputs.ValueValidationArgs
{
Values = new[]
{
"string",
},
},
},
},
},
Project = "string",
Version = 0,
});
example, err := dataproc.NewWorkflowTemplate(ctx, "workflowTemplateResource", &dataproc.WorkflowTemplateArgs{
Jobs: dataproc.OrderedJobArray{
&dataproc.OrderedJobArgs{
StepId: pulumi.String("string"),
PrestoJob: &dataproc.PrestoJobArgs{
ClientTags: pulumi.StringArray{
pulumi.String("string"),
},
ContinueOnFailure: pulumi.Bool(false),
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
OutputFormat: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
},
HiveJob: &dataproc.HiveJobArgs{
ContinueOnFailure: pulumi.Bool(false),
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
ScriptVariables: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
PigJob: &dataproc.PigJobArgs{
ContinueOnFailure: pulumi.Bool(false),
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
ScriptVariables: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
PrerequisiteStepIds: pulumi.StringArray{
pulumi.String("string"),
},
FlinkJob: &dataproc.FlinkJobArgs{
Args: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
MainClass: pulumi.String("string"),
MainJarFileUri: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
SavepointUri: pulumi.String("string"),
},
PysparkJob: &dataproc.PySparkJobArgs{
MainPythonFileUri: pulumi.String("string"),
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
PythonFileUris: pulumi.StringArray{
pulumi.String("string"),
},
},
Scheduling: &dataproc.JobSchedulingArgs{
MaxFailuresPerHour: pulumi.Int(0),
MaxFailuresTotal: pulumi.Int(0),
},
SparkJob: &dataproc.SparkJobArgs{
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
MainClass: pulumi.String("string"),
MainJarFileUri: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
SparkRJob: &dataproc.SparkRJobArgs{
MainRFileUri: pulumi.String("string"),
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
SparkSqlJob: &dataproc.SparkSqlJobArgs{
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
ScriptVariables: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
HadoopJob: &dataproc.HadoopJobArgs{
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
MainClass: pulumi.String("string"),
MainJarFileUri: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
TrinoJob: &dataproc.TrinoJobArgs{
ClientTags: pulumi.StringArray{
pulumi.String("string"),
},
ContinueOnFailure: pulumi.Bool(false),
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
OutputFormat: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
},
Placement: &dataproc.WorkflowTemplatePlacementArgs{
ClusterSelector: &dataproc.ClusterSelectorArgs{
ClusterLabels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Zone: pulumi.String("string"),
},
ManagedCluster: &dataproc.ManagedClusterArgs{
ClusterName: pulumi.String("string"),
Config: &dataproc.ClusterConfigArgs{
AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
PolicyUri: pulumi.String("string"),
},
AuxiliaryNodeGroups: dataproc.AuxiliaryNodeGroupArray{
&dataproc.AuxiliaryNodeGroupArgs{
NodeGroup: &dataproc.NodeGroupTypeArgs{
Roles: dataproc.NodeGroupRolesItemArray{
dataproc.NodeGroupRolesItemRoleUnspecified,
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Name: pulumi.String("string"),
NodeGroupConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
},
NodeGroupId: pulumi.String("string"),
},
},
ConfigBucket: pulumi.String("string"),
DataprocMetricConfig: &dataproc.DataprocMetricConfigArgs{
Metrics: dataproc.MetricArray{
&dataproc.MetricArgs{
MetricSource: dataproc.MetricMetricSourceMetricSourceUnspecified,
MetricOverrides: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
EncryptionConfig: &dataproc.EncryptionConfigArgs{
GcePdKmsKeyName: pulumi.String("string"),
KmsKey: pulumi.String("string"),
},
EndpointConfig: &dataproc.EndpointConfigArgs{
EnableHttpPortAccess: pulumi.Bool(false),
},
GceClusterConfig: &dataproc.GceClusterConfigArgs{
ConfidentialInstanceConfig: &dataproc.ConfidentialInstanceConfigArgs{
EnableConfidentialCompute: pulumi.Bool(false),
},
InternalIpOnly: pulumi.Bool(false),
Metadata: pulumi.StringMap{
"string": pulumi.String("string"),
},
NetworkUri: pulumi.String("string"),
NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
NodeGroupUri: pulumi.String("string"),
},
PrivateIpv6GoogleAccess: dataproc.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
ReservationAffinity: &dataproc.ReservationAffinityArgs{
ConsumeReservationType: dataproc.ReservationAffinityConsumeReservationTypeTypeUnspecified,
Key: pulumi.String("string"),
Values: pulumi.StringArray{
pulumi.String("string"),
},
},
ServiceAccount: pulumi.String("string"),
ServiceAccountScopes: pulumi.StringArray{
pulumi.String("string"),
},
ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
EnableIntegrityMonitoring: pulumi.Bool(false),
EnableSecureBoot: pulumi.Bool(false),
EnableVtpm: pulumi.Bool(false),
},
SubnetworkUri: pulumi.String("string"),
Tags: pulumi.StringArray{
pulumi.String("string"),
},
ZoneUri: pulumi.String("string"),
},
GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
GkeClusterTarget: pulumi.String("string"),
NodePoolTarget: dataproc.GkeNodePoolTargetArray{
&dataproc.GkeNodePoolTargetArgs{
NodePool: pulumi.String("string"),
Roles: dataproc.GkeNodePoolTargetRolesItemArray{
dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
},
NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
MaxNodeCount: pulumi.Int(0),
MinNodeCount: pulumi.Int(0),
},
Config: &dataproc.GkeNodeConfigArgs{
Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
&dataproc.GkeNodePoolAcceleratorConfigArgs{
AcceleratorCount: pulumi.String("string"),
AcceleratorType: pulumi.String("string"),
GpuPartitionSize: pulumi.String("string"),
},
},
BootDiskKmsKey: pulumi.String("string"),
LocalSsdCount: pulumi.Int(0),
MachineType: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
Preemptible: pulumi.Bool(false),
Spot: pulumi.Bool(false),
},
Locations: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
},
InitializationActions: dataproc.NodeInitializationActionArray{
&dataproc.NodeInitializationActionArgs{
ExecutableFile: pulumi.String("string"),
ExecutionTimeout: pulumi.String("string"),
},
},
LifecycleConfig: &dataproc.LifecycleConfigArgs{
AutoDeleteTime: pulumi.String("string"),
AutoDeleteTtl: pulumi.String("string"),
IdleDeleteTtl: pulumi.String("string"),
},
MasterConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
MetastoreConfig: &dataproc.MetastoreConfigArgs{
DataprocMetastoreService: pulumi.String("string"),
},
SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
SecurityConfig: &dataproc.SecurityConfigArgs{
IdentityConfig: &dataproc.IdentityConfigArgs{
UserServiceAccountMapping: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
KerberosConfig: &dataproc.KerberosConfigArgs{
CrossRealmTrustAdminServer: pulumi.String("string"),
CrossRealmTrustKdc: pulumi.String("string"),
CrossRealmTrustRealm: pulumi.String("string"),
CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
EnableKerberos: pulumi.Bool(false),
KdcDbKeyUri: pulumi.String("string"),
KeyPasswordUri: pulumi.String("string"),
KeystorePasswordUri: pulumi.String("string"),
KeystoreUri: pulumi.String("string"),
KmsKeyUri: pulumi.String("string"),
Realm: pulumi.String("string"),
RootPrincipalPasswordUri: pulumi.String("string"),
TgtLifetimeHours: pulumi.Int(0),
TruststorePasswordUri: pulumi.String("string"),
TruststoreUri: pulumi.String("string"),
},
},
SoftwareConfig: &dataproc.SoftwareConfigArgs{
ImageVersion: pulumi.String("string"),
OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
dataproc.SoftwareConfigOptionalComponentsItemComponentUnspecified,
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
TempBucket: pulumi.String("string"),
WorkerConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
LocalSsdInterface: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
InstanceSelectionList: dataproc.InstanceSelectionArray{
&dataproc.InstanceSelectionArgs{
MachineTypes: pulumi.StringArray{
pulumi.String("string"),
},
Rank: pulumi.Int(0),
},
},
},
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
MinNumInstances: pulumi.Int(0),
NumInstances: pulumi.Int(0),
Preemptibility: dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
StartupConfig: &dataproc.StartupConfigArgs{
RequiredRegistrationFraction: pulumi.Float64(0),
},
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
},
DagTimeout: pulumi.String("string"),
EncryptionConfig: &dataproc.GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs{
KmsKey: pulumi.String("string"),
},
Id: pulumi.String("string"),
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
Parameters: dataproc.TemplateParameterArray{
&dataproc.TemplateParameterArgs{
Fields: pulumi.StringArray{
pulumi.String("string"),
},
Name: pulumi.String("string"),
Description: pulumi.String("string"),
Validation: &dataproc.ParameterValidationArgs{
Regex: &dataproc.RegexValidationArgs{
Regexes: pulumi.StringArray{
pulumi.String("string"),
},
},
Values: &dataproc.ValueValidationArgs{
Values: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
},
Project: pulumi.String("string"),
Version: pulumi.Int(0),
})
var workflowTemplateResource = new WorkflowTemplate("workflowTemplateResource", WorkflowTemplateArgs.builder()
.jobs(OrderedJobArgs.builder()
.stepId("string")
.prestoJob(PrestoJobArgs.builder()
.clientTags("string")
.continueOnFailure(false)
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.outputFormat("string")
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.build())
.hiveJob(HiveJobArgs.builder()
.continueOnFailure(false)
.jarFileUris("string")
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.scriptVariables(Map.of("string", "string"))
.build())
.labels(Map.of("string", "string"))
.pigJob(PigJobArgs.builder()
.continueOnFailure(false)
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.scriptVariables(Map.of("string", "string"))
.build())
.prerequisiteStepIds("string")
.flinkJob(FlinkJobArgs.builder()
.args("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.mainClass("string")
.mainJarFileUri("string")
.properties(Map.of("string", "string"))
.savepointUri("string")
.build())
.pysparkJob(PySparkJobArgs.builder()
.mainPythonFileUri("string")
.archiveUris("string")
.args("string")
.fileUris("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.pythonFileUris("string")
.build())
.scheduling(JobSchedulingArgs.builder()
.maxFailuresPerHour(0)
.maxFailuresTotal(0)
.build())
.sparkJob(SparkJobArgs.builder()
.archiveUris("string")
.args("string")
.fileUris("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.mainClass("string")
.mainJarFileUri("string")
.properties(Map.of("string", "string"))
.build())
.sparkRJob(SparkRJobArgs.builder()
.mainRFileUri("string")
.archiveUris("string")
.args("string")
.fileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.build())
.sparkSqlJob(SparkSqlJobArgs.builder()
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.scriptVariables(Map.of("string", "string"))
.build())
.hadoopJob(HadoopJobArgs.builder()
.archiveUris("string")
.args("string")
.fileUris("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.mainClass("string")
.mainJarFileUri("string")
.properties(Map.of("string", "string"))
.build())
.trinoJob(TrinoJobArgs.builder()
.clientTags("string")
.continueOnFailure(false)
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.outputFormat("string")
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.build())
.build())
.placement(WorkflowTemplatePlacementArgs.builder()
.clusterSelector(ClusterSelectorArgs.builder()
.clusterLabels(Map.of("string", "string"))
.zone("string")
.build())
.managedCluster(ManagedClusterArgs.builder()
.clusterName("string")
.config(ClusterConfigArgs.builder()
.autoscalingConfig(AutoscalingConfigArgs.builder()
.policyUri("string")
.build())
.auxiliaryNodeGroups(AuxiliaryNodeGroupArgs.builder()
.nodeGroup(NodeGroupArgs.builder()
.roles("ROLE_UNSPECIFIED")
.labels(Map.of("string", "string"))
.name("string")
.nodeGroupConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.build())
.nodeGroupId("string")
.build())
.configBucket("string")
.dataprocMetricConfig(DataprocMetricConfigArgs.builder()
.metrics(MetricArgs.builder()
.metricSource("METRIC_SOURCE_UNSPECIFIED")
.metricOverrides("string")
.build())
.build())
.encryptionConfig(EncryptionConfigArgs.builder()
.gcePdKmsKeyName("string")
.kmsKey("string")
.build())
.endpointConfig(EndpointConfigArgs.builder()
.enableHttpPortAccess(false)
.build())
.gceClusterConfig(GceClusterConfigArgs.builder()
.confidentialInstanceConfig(ConfidentialInstanceConfigArgs.builder()
.enableConfidentialCompute(false)
.build())
.internalIpOnly(false)
.metadata(Map.of("string", "string"))
.networkUri("string")
.nodeGroupAffinity(NodeGroupAffinityArgs.builder()
.nodeGroupUri("string")
.build())
.privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
.reservationAffinity(ReservationAffinityArgs.builder()
.consumeReservationType("TYPE_UNSPECIFIED")
.key("string")
.values("string")
.build())
.serviceAccount("string")
.serviceAccountScopes("string")
.shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
.enableIntegrityMonitoring(false)
.enableSecureBoot(false)
.enableVtpm(false)
.build())
.subnetworkUri("string")
.tags("string")
.zoneUri("string")
.build())
.gkeClusterConfig(GkeClusterConfigArgs.builder()
.gkeClusterTarget("string")
.nodePoolTarget(GkeNodePoolTargetArgs.builder()
.nodePool("string")
.roles("ROLE_UNSPECIFIED")
.nodePoolConfig(GkeNodePoolConfigArgs.builder()
.autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
.maxNodeCount(0)
.minNodeCount(0)
.build())
.config(GkeNodeConfigArgs.builder()
.accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
.acceleratorCount("string")
.acceleratorType("string")
.gpuPartitionSize("string")
.build())
.bootDiskKmsKey("string")
.localSsdCount(0)
.machineType("string")
.minCpuPlatform("string")
.preemptible(false)
.spot(false)
.build())
.locations("string")
.build())
.build())
.build())
.initializationActions(NodeInitializationActionArgs.builder()
.executableFile("string")
.executionTimeout("string")
.build())
.lifecycleConfig(LifecycleConfigArgs.builder()
.autoDeleteTime("string")
.autoDeleteTtl("string")
.idleDeleteTtl("string")
.build())
.masterConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.metastoreConfig(MetastoreConfigArgs.builder()
.dataprocMetastoreService("string")
.build())
.secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.securityConfig(SecurityConfigArgs.builder()
.identityConfig(IdentityConfigArgs.builder()
.userServiceAccountMapping(Map.of("string", "string"))
.build())
.kerberosConfig(KerberosConfigArgs.builder()
.crossRealmTrustAdminServer("string")
.crossRealmTrustKdc("string")
.crossRealmTrustRealm("string")
.crossRealmTrustSharedPasswordUri("string")
.enableKerberos(false)
.kdcDbKeyUri("string")
.keyPasswordUri("string")
.keystorePasswordUri("string")
.keystoreUri("string")
.kmsKeyUri("string")
.realm("string")
.rootPrincipalPasswordUri("string")
.tgtLifetimeHours(0)
.truststorePasswordUri("string")
.truststoreUri("string")
.build())
.build())
.softwareConfig(SoftwareConfigArgs.builder()
.imageVersion("string")
.optionalComponents("COMPONENT_UNSPECIFIED")
.properties(Map.of("string", "string"))
.build())
.tempBucket("string")
.workerConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.localSsdInterface("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
.instanceSelectionList(InstanceSelectionArgs.builder()
.machineTypes("string")
.rank(0)
.build())
.build())
.machineTypeUri("string")
.minCpuPlatform("string")
.minNumInstances(0)
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.startupConfig(StartupConfigArgs.builder()
.requiredRegistrationFraction(0)
.build())
.build())
.build())
.labels(Map.of("string", "string"))
.build())
.build())
.dagTimeout("string")
.encryptionConfig(GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs.builder()
.kmsKey("string")
.build())
.id("string")
.labels(Map.of("string", "string"))
.location("string")
.parameters(TemplateParameterArgs.builder()
.fields("string")
.name("string")
.description("string")
.validation(ParameterValidationArgs.builder()
.regex(RegexValidationArgs.builder()
.regexes("string")
.build())
.values(ValueValidationArgs.builder()
.values("string")
.build())
.build())
.build())
.project("string")
.version(0)
.build());
workflow_template_resource = google_native.dataproc.v1.WorkflowTemplate("workflowTemplateResource",
jobs=[{
"step_id": "string",
"presto_job": {
"client_tags": ["string"],
"continue_on_failure": False,
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"output_format": "string",
"properties": {
"string": "string",
},
"query_file_uri": "string",
"query_list": {
"queries": ["string"],
},
},
"hive_job": {
"continue_on_failure": False,
"jar_file_uris": ["string"],
"properties": {
"string": "string",
},
"query_file_uri": "string",
"query_list": {
"queries": ["string"],
},
"script_variables": {
"string": "string",
},
},
"labels": {
"string": "string",
},
"pig_job": {
"continue_on_failure": False,
"jar_file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"properties": {
"string": "string",
},
"query_file_uri": "string",
"query_list": {
"queries": ["string"],
},
"script_variables": {
"string": "string",
},
},
"prerequisite_step_ids": ["string"],
"flink_job": {
"args": ["string"],
"jar_file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"main_class": "string",
"main_jar_file_uri": "string",
"properties": {
"string": "string",
},
"savepoint_uri": "string",
},
"pyspark_job": {
"main_python_file_uri": "string",
"archive_uris": ["string"],
"args": ["string"],
"file_uris": ["string"],
"jar_file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"properties": {
"string": "string",
},
"python_file_uris": ["string"],
},
"scheduling": {
"max_failures_per_hour": 0,
"max_failures_total": 0,
},
"spark_job": {
"archive_uris": ["string"],
"args": ["string"],
"file_uris": ["string"],
"jar_file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"main_class": "string",
"main_jar_file_uri": "string",
"properties": {
"string": "string",
},
},
"spark_r_job": {
"main_r_file_uri": "string",
"archive_uris": ["string"],
"args": ["string"],
"file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"properties": {
"string": "string",
},
},
"spark_sql_job": {
"jar_file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"properties": {
"string": "string",
},
"query_file_uri": "string",
"query_list": {
"queries": ["string"],
},
"script_variables": {
"string": "string",
},
},
"hadoop_job": {
"archive_uris": ["string"],
"args": ["string"],
"file_uris": ["string"],
"jar_file_uris": ["string"],
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"main_class": "string",
"main_jar_file_uri": "string",
"properties": {
"string": "string",
},
},
"trino_job": {
"client_tags": ["string"],
"continue_on_failure": False,
"logging_config": {
"driver_log_levels": {
"string": "string",
},
},
"output_format": "string",
"properties": {
"string": "string",
},
"query_file_uri": "string",
"query_list": {
"queries": ["string"],
},
},
}],
placement={
"cluster_selector": {
"cluster_labels": {
"string": "string",
},
"zone": "string",
},
"managed_cluster": {
"cluster_name": "string",
"config": {
"autoscaling_config": {
"policy_uri": "string",
},
"auxiliary_node_groups": [{
"node_group": {
"roles": [google_native.dataproc.v1.NodeGroupRolesItem.ROLE_UNSPECIFIED],
"labels": {
"string": "string",
},
"name": "string",
"node_group_config": {
"accelerators": [{
"accelerator_count": 0,
"accelerator_type_uri": "string",
}],
"disk_config": {
"boot_disk_size_gb": 0,
"boot_disk_type": "string",
"local_ssd_interface": "string",
"num_local_ssds": 0,
},
"image_uri": "string",
"instance_flexibility_policy": {
"instance_selection_list": [{
"machine_types": ["string"],
"rank": 0,
}],
},
"machine_type_uri": "string",
"min_cpu_platform": "string",
"min_num_instances": 0,
"num_instances": 0,
"preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
"startup_config": {
"required_registration_fraction": 0,
},
},
},
"node_group_id": "string",
}],
"config_bucket": "string",
"dataproc_metric_config": {
"metrics": [{
"metric_source": google_native.dataproc.v1.MetricMetricSource.METRIC_SOURCE_UNSPECIFIED,
"metric_overrides": ["string"],
}],
},
"encryption_config": {
"gce_pd_kms_key_name": "string",
"kms_key": "string",
},
"endpoint_config": {
"enable_http_port_access": False,
},
"gce_cluster_config": {
"confidential_instance_config": {
"enable_confidential_compute": False,
},
"internal_ip_only": False,
"metadata": {
"string": "string",
},
"network_uri": "string",
"node_group_affinity": {
"node_group_uri": "string",
},
"private_ipv6_google_access": google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
"reservation_affinity": {
"consume_reservation_type": google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
"key": "string",
"values": ["string"],
},
"service_account": "string",
"service_account_scopes": ["string"],
"shielded_instance_config": {
"enable_integrity_monitoring": False,
"enable_secure_boot": False,
"enable_vtpm": False,
},
"subnetwork_uri": "string",
"tags": ["string"],
"zone_uri": "string",
},
"gke_cluster_config": {
"gke_cluster_target": "string",
"node_pool_target": [{
"node_pool": "string",
"roles": [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
"node_pool_config": {
"autoscaling": {
"max_node_count": 0,
"min_node_count": 0,
},
"config": {
"accelerators": [{
"accelerator_count": "string",
"accelerator_type": "string",
"gpu_partition_size": "string",
}],
"boot_disk_kms_key": "string",
"local_ssd_count": 0,
"machine_type": "string",
"min_cpu_platform": "string",
"preemptible": False,
"spot": False,
},
"locations": ["string"],
},
}],
},
"initialization_actions": [{
"executable_file": "string",
"execution_timeout": "string",
}],
"lifecycle_config": {
"auto_delete_time": "string",
"auto_delete_ttl": "string",
"idle_delete_ttl": "string",
},
"master_config": {
"accelerators": [{
"accelerator_count": 0,
"accelerator_type_uri": "string",
}],
"disk_config": {
"boot_disk_size_gb": 0,
"boot_disk_type": "string",
"local_ssd_interface": "string",
"num_local_ssds": 0,
},
"image_uri": "string",
"instance_flexibility_policy": {
"instance_selection_list": [{
"machine_types": ["string"],
"rank": 0,
}],
},
"machine_type_uri": "string",
"min_cpu_platform": "string",
"min_num_instances": 0,
"num_instances": 0,
"preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
"startup_config": {
"required_registration_fraction": 0,
},
},
"metastore_config": {
"dataproc_metastore_service": "string",
},
"secondary_worker_config": {
"accelerators": [{
"accelerator_count": 0,
"accelerator_type_uri": "string",
}],
"disk_config": {
"boot_disk_size_gb": 0,
"boot_disk_type": "string",
"local_ssd_interface": "string",
"num_local_ssds": 0,
},
"image_uri": "string",
"instance_flexibility_policy": {
"instance_selection_list": [{
"machine_types": ["string"],
"rank": 0,
}],
},
"machine_type_uri": "string",
"min_cpu_platform": "string",
"min_num_instances": 0,
"num_instances": 0,
"preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
"startup_config": {
"required_registration_fraction": 0,
},
},
"security_config": {
"identity_config": {
"user_service_account_mapping": {
"string": "string",
},
},
"kerberos_config": {
"cross_realm_trust_admin_server": "string",
"cross_realm_trust_kdc": "string",
"cross_realm_trust_realm": "string",
"cross_realm_trust_shared_password_uri": "string",
"enable_kerberos": False,
"kdc_db_key_uri": "string",
"key_password_uri": "string",
"keystore_password_uri": "string",
"keystore_uri": "string",
"kms_key_uri": "string",
"realm": "string",
"root_principal_password_uri": "string",
"tgt_lifetime_hours": 0,
"truststore_password_uri": "string",
"truststore_uri": "string",
},
},
"software_config": {
"image_version": "string",
"optional_components": [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
"properties": {
"string": "string",
},
},
"temp_bucket": "string",
"worker_config": {
"accelerators": [{
"accelerator_count": 0,
"accelerator_type_uri": "string",
}],
"disk_config": {
"boot_disk_size_gb": 0,
"boot_disk_type": "string",
"local_ssd_interface": "string",
"num_local_ssds": 0,
},
"image_uri": "string",
"instance_flexibility_policy": {
"instance_selection_list": [{
"machine_types": ["string"],
"rank": 0,
}],
},
"machine_type_uri": "string",
"min_cpu_platform": "string",
"min_num_instances": 0,
"num_instances": 0,
"preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
"startup_config": {
"required_registration_fraction": 0,
},
},
},
"labels": {
"string": "string",
},
},
},
dag_timeout="string",
encryption_config={
"kms_key": "string",
},
id="string",
labels={
"string": "string",
},
location="string",
parameters=[{
"fields": ["string"],
"name": "string",
"description": "string",
"validation": {
"regex": {
"regexes": ["string"],
},
"values": {
"values": ["string"],
},
},
}],
project="string",
version=0)
const workflowTemplateResource = new google_native.dataproc.v1.WorkflowTemplate("workflowTemplateResource", {
jobs: [{
stepId: "string",
prestoJob: {
clientTags: ["string"],
continueOnFailure: false,
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
outputFormat: "string",
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
},
hiveJob: {
continueOnFailure: false,
jarFileUris: ["string"],
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
scriptVariables: {
string: "string",
},
},
labels: {
string: "string",
},
pigJob: {
continueOnFailure: false,
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
scriptVariables: {
string: "string",
},
},
prerequisiteStepIds: ["string"],
flinkJob: {
args: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
mainClass: "string",
mainJarFileUri: "string",
properties: {
string: "string",
},
savepointUri: "string",
},
pysparkJob: {
mainPythonFileUri: "string",
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
pythonFileUris: ["string"],
},
scheduling: {
maxFailuresPerHour: 0,
maxFailuresTotal: 0,
},
sparkJob: {
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
mainClass: "string",
mainJarFileUri: "string",
properties: {
string: "string",
},
},
sparkRJob: {
mainRFileUri: "string",
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
},
sparkSqlJob: {
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
scriptVariables: {
string: "string",
},
},
hadoopJob: {
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
mainClass: "string",
mainJarFileUri: "string",
properties: {
string: "string",
},
},
trinoJob: {
clientTags: ["string"],
continueOnFailure: false,
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
outputFormat: "string",
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
},
}],
placement: {
clusterSelector: {
clusterLabels: {
string: "string",
},
zone: "string",
},
managedCluster: {
clusterName: "string",
config: {
autoscalingConfig: {
policyUri: "string",
},
auxiliaryNodeGroups: [{
nodeGroup: {
roles: [google_native.dataproc.v1.NodeGroupRolesItem.RoleUnspecified],
labels: {
string: "string",
},
name: "string",
nodeGroupConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
},
nodeGroupId: "string",
}],
configBucket: "string",
dataprocMetricConfig: {
metrics: [{
metricSource: google_native.dataproc.v1.MetricMetricSource.MetricSourceUnspecified,
metricOverrides: ["string"],
}],
},
encryptionConfig: {
gcePdKmsKeyName: "string",
kmsKey: "string",
},
endpointConfig: {
enableHttpPortAccess: false,
},
gceClusterConfig: {
confidentialInstanceConfig: {
enableConfidentialCompute: false,
},
internalIpOnly: false,
metadata: {
string: "string",
},
networkUri: "string",
nodeGroupAffinity: {
nodeGroupUri: "string",
},
privateIpv6GoogleAccess: google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
reservationAffinity: {
consumeReservationType: google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TypeUnspecified,
key: "string",
values: ["string"],
},
serviceAccount: "string",
serviceAccountScopes: ["string"],
shieldedInstanceConfig: {
enableIntegrityMonitoring: false,
enableSecureBoot: false,
enableVtpm: false,
},
subnetworkUri: "string",
tags: ["string"],
zoneUri: "string",
},
gkeClusterConfig: {
gkeClusterTarget: "string",
nodePoolTarget: [{
nodePool: "string",
roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
nodePoolConfig: {
autoscaling: {
maxNodeCount: 0,
minNodeCount: 0,
},
config: {
accelerators: [{
acceleratorCount: "string",
acceleratorType: "string",
gpuPartitionSize: "string",
}],
bootDiskKmsKey: "string",
localSsdCount: 0,
machineType: "string",
minCpuPlatform: "string",
preemptible: false,
spot: false,
},
locations: ["string"],
},
}],
},
initializationActions: [{
executableFile: "string",
executionTimeout: "string",
}],
lifecycleConfig: {
autoDeleteTime: "string",
autoDeleteTtl: "string",
idleDeleteTtl: "string",
},
masterConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
metastoreConfig: {
dataprocMetastoreService: "string",
},
secondaryWorkerConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
securityConfig: {
identityConfig: {
userServiceAccountMapping: {
string: "string",
},
},
kerberosConfig: {
crossRealmTrustAdminServer: "string",
crossRealmTrustKdc: "string",
crossRealmTrustRealm: "string",
crossRealmTrustSharedPasswordUri: "string",
enableKerberos: false,
kdcDbKeyUri: "string",
keyPasswordUri: "string",
keystorePasswordUri: "string",
keystoreUri: "string",
kmsKeyUri: "string",
realm: "string",
rootPrincipalPasswordUri: "string",
tgtLifetimeHours: 0,
truststorePasswordUri: "string",
truststoreUri: "string",
},
},
softwareConfig: {
imageVersion: "string",
optionalComponents: [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
properties: {
string: "string",
},
},
tempBucket: "string",
workerConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
localSsdInterface: "string",
numLocalSsds: 0,
},
imageUri: "string",
instanceFlexibilityPolicy: {
instanceSelectionList: [{
machineTypes: ["string"],
rank: 0,
}],
},
machineTypeUri: "string",
minCpuPlatform: "string",
minNumInstances: 0,
numInstances: 0,
preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
startupConfig: {
requiredRegistrationFraction: 0,
},
},
},
labels: {
string: "string",
},
},
},
dagTimeout: "string",
encryptionConfig: {
kmsKey: "string",
},
id: "string",
labels: {
string: "string",
},
location: "string",
parameters: [{
fields: ["string"],
name: "string",
description: "string",
validation: {
regex: {
regexes: ["string"],
},
values: {
values: ["string"],
},
},
}],
project: "string",
version: 0,
});
type: google-native:dataproc/v1:WorkflowTemplate
properties:
dagTimeout: string
encryptionConfig:
kmsKey: string
id: string
jobs:
- flinkJob:
args:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainClass: string
mainJarFileUri: string
properties:
string: string
savepointUri: string
hadoopJob:
archiveUris:
- string
args:
- string
fileUris:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainClass: string
mainJarFileUri: string
properties:
string: string
hiveJob:
continueOnFailure: false
jarFileUris:
- string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
scriptVariables:
string: string
labels:
string: string
pigJob:
continueOnFailure: false
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
scriptVariables:
string: string
prerequisiteStepIds:
- string
prestoJob:
clientTags:
- string
continueOnFailure: false
loggingConfig:
driverLogLevels:
string: string
outputFormat: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
pysparkJob:
archiveUris:
- string
args:
- string
fileUris:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainPythonFileUri: string
properties:
string: string
pythonFileUris:
- string
scheduling:
maxFailuresPerHour: 0
maxFailuresTotal: 0
sparkJob:
archiveUris:
- string
args:
- string
fileUris:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainClass: string
mainJarFileUri: string
properties:
string: string
sparkRJob:
archiveUris:
- string
args:
- string
fileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainRFileUri: string
properties:
string: string
sparkSqlJob:
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
scriptVariables:
string: string
stepId: string
trinoJob:
clientTags:
- string
continueOnFailure: false
loggingConfig:
driverLogLevels:
string: string
outputFormat: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
labels:
string: string
location: string
parameters:
- description: string
fields:
- string
name: string
validation:
regex:
regexes:
- string
values:
values:
- string
placement:
clusterSelector:
clusterLabels:
string: string
zone: string
managedCluster:
clusterName: string
config:
autoscalingConfig:
policyUri: string
auxiliaryNodeGroups:
- nodeGroup:
labels:
string: string
name: string
nodeGroupConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
roles:
- ROLE_UNSPECIFIED
nodeGroupId: string
configBucket: string
dataprocMetricConfig:
metrics:
- metricOverrides:
- string
metricSource: METRIC_SOURCE_UNSPECIFIED
encryptionConfig:
gcePdKmsKeyName: string
kmsKey: string
endpointConfig:
enableHttpPortAccess: false
gceClusterConfig:
confidentialInstanceConfig:
enableConfidentialCompute: false
internalIpOnly: false
metadata:
string: string
networkUri: string
nodeGroupAffinity:
nodeGroupUri: string
privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
reservationAffinity:
consumeReservationType: TYPE_UNSPECIFIED
key: string
values:
- string
serviceAccount: string
serviceAccountScopes:
- string
shieldedInstanceConfig:
enableIntegrityMonitoring: false
enableSecureBoot: false
enableVtpm: false
subnetworkUri: string
tags:
- string
zoneUri: string
gkeClusterConfig:
gkeClusterTarget: string
nodePoolTarget:
- nodePool: string
nodePoolConfig:
autoscaling:
maxNodeCount: 0
minNodeCount: 0
config:
accelerators:
- acceleratorCount: string
acceleratorType: string
gpuPartitionSize: string
bootDiskKmsKey: string
localSsdCount: 0
machineType: string
minCpuPlatform: string
preemptible: false
spot: false
locations:
- string
roles:
- ROLE_UNSPECIFIED
initializationActions:
- executableFile: string
executionTimeout: string
lifecycleConfig:
autoDeleteTime: string
autoDeleteTtl: string
idleDeleteTtl: string
masterConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
metastoreConfig:
dataprocMetastoreService: string
secondaryWorkerConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
securityConfig:
identityConfig:
userServiceAccountMapping:
string: string
kerberosConfig:
crossRealmTrustAdminServer: string
crossRealmTrustKdc: string
crossRealmTrustRealm: string
crossRealmTrustSharedPasswordUri: string
enableKerberos: false
kdcDbKeyUri: string
keyPasswordUri: string
keystorePasswordUri: string
keystoreUri: string
kmsKeyUri: string
realm: string
rootPrincipalPasswordUri: string
tgtLifetimeHours: 0
truststorePasswordUri: string
truststoreUri: string
softwareConfig:
imageVersion: string
optionalComponents:
- COMPONENT_UNSPECIFIED
properties:
string: string
tempBucket: string
workerConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
localSsdInterface: string
numLocalSsds: 0
imageUri: string
instanceFlexibilityPolicy:
instanceSelectionList:
- machineTypes:
- string
rank: 0
machineTypeUri: string
minCpuPlatform: string
minNumInstances: 0
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
startupConfig:
requiredRegistrationFraction: 0
labels:
string: string
project: string
version: 0
WorkflowTemplate Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
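For example, the following sketch passes the same inputs once as argument classes and once as dictionary literals. It assumes the pulumi_google_native Python SDK is installed and imported as shown; all IDs, labels, and URIs are illustrative placeholders, and nested argument-class names such as ClusterSelectorArgs and HadoopJobArgs follow the provider's usual <Type>Args naming and should be checked against your SDK version:

import pulumi_google_native as google_native

# Variant 1: typed argument classes (values are placeholders).
with_args = google_native.dataproc.v1.WorkflowTemplate(
    "templateWithArgs",
    id="example-template-args",
    location="us-central1",
    placement=google_native.dataproc.v1.WorkflowTemplatePlacementArgs(
        cluster_selector=google_native.dataproc.v1.ClusterSelectorArgs(
            cluster_labels={"env": "staging"},
        ),
    ),
    jobs=[google_native.dataproc.v1.OrderedJobArgs(
        step_id="teragen",
        hadoop_job=google_native.dataproc.v1.HadoopJobArgs(
            main_jar_file_uri="file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
            args=["teragen", "1000", "hdfs:///gen/"],
        ),
    )],
)

# Variant 2: the same inputs expressed as dictionary literals.
with_dicts = google_native.dataproc.v1.WorkflowTemplate(
    "templateWithDicts",
    id="example-template-dicts",
    location="us-central1",
    placement={"cluster_selector": {"cluster_labels": {"env": "staging"}}},
    jobs=[{
        "step_id": "teragen",
        "hadoop_job": {
            "main_jar_file_uri": "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
            "args": ["teragen", "1000", "hdfs:///gen/"],
        },
    }],
)

Both variants produce the same resource; the dictionary form is convenient for deeply nested inputs, while argument classes give IDE completion and type checking.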
The WorkflowTemplate resource accepts the following input properties:
- Jobs List<Pulumi.GoogleNative.Dataproc.V1.Inputs.OrderedJob>
- The Directed Acyclic Graph of Jobs to submit.
- Placement Pulumi.GoogleNative.Dataproc.V1.Inputs.WorkflowTemplatePlacement
- WorkflowTemplate scheduling information.
- DagTimeout string
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig
- Optional. Encryption settings for encrypting customer core content.
- Id string
- Labels Dictionary<string, string>
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- Location string
- Parameters List<Pulumi.GoogleNative.Dataproc.V1.Inputs.TemplateParameter>
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- Project string
- Version int
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- Jobs []OrderedJobArgs
- The Directed Acyclic Graph of Jobs to submit.
- Placement WorkflowTemplatePlacementArgs
- WorkflowTemplate scheduling information.
- DagTimeout string
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- EncryptionConfig GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs
- Optional. Encryption settings for encrypting customer core content.
- Id string
- Labels map[string]string
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- Location string
- Parameters []TemplateParameterArgs
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- Project string
- Version int
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- jobs List<OrderedJob>
- The Directed Acyclic Graph of Jobs to submit.
- placement WorkflowTemplatePlacement
- WorkflowTemplate scheduling information.
- dagTimeout String
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- encryptionConfig GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig
- Optional. Encryption settings for encrypting customer core content.
- id String
- labels Map<String,String>
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- location String
- parameters List<TemplateParameter>
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project String
- version Integer
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- jobs OrderedJob[]
- The Directed Acyclic Graph of Jobs to submit.
- placement WorkflowTemplatePlacement
- WorkflowTemplate scheduling information.
- dagTimeout string
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- encryptionConfig GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig
- Optional. Encryption settings for encrypting customer core content.
- id string
- labels {[key: string]: string}
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- location string
- parameters TemplateParameter[]
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project string
- version number
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- jobs Sequence[OrderedJobArgs]
- The Directed Acyclic Graph of Jobs to submit.
- placement WorkflowTemplatePlacementArgs
- WorkflowTemplate scheduling information.
- dag_timeout str
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- encryption_config GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs
- Optional. Encryption settings for encrypting customer core content.
- id str
- labels Mapping[str, str]
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- location str
- parameters Sequence[TemplateParameterArgs]
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project str
- version int
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- jobs List<Property Map>
- The Directed Acyclic Graph of Jobs to submit.
- placement Property Map
- WorkflowTemplate scheduling information.
- dagTimeout String
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- encryptionConfig Property Map
- Optional. Encryption settings for encrypting customer core content.
- id String
- labels Map<String>
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- location String
- parameters List<Property Map>
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project String
- version Number
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
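As a worked illustration of the dag_timeout, labels, and parameters inputs described above, the sketch below declares a parameterized template in Python. All names and URIs are placeholders, and the parameter field path is written in the Dataproc parameter field syntax (jobs['step-id'].<jobField>), which should be verified against the Dataproc documentation for your use case:

import pulumi_google_native as google_native

parameterized = google_native.dataproc.v1.WorkflowTemplate(
    "parameterizedTemplate",
    id="nightly-report",           # template ID (auto-naming is not supported)
    location="us-central1",
    dag_timeout="1800s",           # must be between "600s" and "86400s"
    labels={"team": "analytics"},
    placement={"cluster_selector": {"cluster_labels": {"env": "prod"}}},
    jobs=[{
        "step_id": "run-query",
        "hive_job": {"query_file_uri": "gs://example-bucket/queries/report.q"},
    }],
    parameters=[{
        "name": "QUERY_URI",
        "description": "Cloud Storage URI of the Hive query to run.",
        "fields": ["jobs['run-query'].hiveJob.queryFileUri"],
        "validation": {"regex": {"regexes": ["gs://.*"]}},
    }],
)

When the template is instantiated, a value matching the regex must be supplied for QUERY_URI and is substituted into the referenced field.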
Outputs
All input properties are implicitly available as output properties. Additionally, the WorkflowTemplate resource produces the following output properties:
- CreateTime string
- The time template was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- UpdateTime string
- The time template was last updated.
- CreateTime string
- The time template was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- UpdateTime string
- The time template was last updated.
- createTime String
- The time template was created.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- updateTime String
- The time template was last updated.
- createTime string
- The time template was created.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- updateTime string
- The time template was last updated.
- create_time str
- The time template was created.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- update_time str
- The time template was last updated.
- createTime String
- The time template was created.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- updateTime String
- The time template was last updated.
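Because all inputs are also available as outputs, the server-assigned properties above can be exported from a stack like any other Pulumi output. A minimal Python sketch (IDs, labels, and the query are placeholders):

import pulumi
import pulumi_google_native as google_native

template = google_native.dataproc.v1.WorkflowTemplate(
    "exportedTemplate",
    id="exported-template",
    location="us-central1",
    placement={"cluster_selector": {"cluster_labels": {"env": "dev"}}},
    jobs=[{"step_id": "noop", "spark_sql_job": {"query_list": {"queries": ["SELECT 1"]}}}],
)

# The resource name follows the projects/{project}/regions|locations/{location}/workflowTemplates/{id} format.
pulumi.export("templateName", template.name)
pulumi.export("templateCreateTime", template.create_time)
pulumi.export("templateUpdateTime", template.update_time)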
Supporting Types
AcceleratorConfig, AcceleratorConfigArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
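A minimal sketch of this type as it would appear in a managed cluster's worker configuration, written as a Python dictionary literal (instance counts and machine types are placeholders); the short accelerator name is used because Auto Zone Placement requires it:

worker_config = {
    "num_instances": 2,
    "machine_type_uri": "n1-standard-8",
    "accelerators": [{
        "accelerator_count": 1,
        "accelerator_type_uri": "nvidia-tesla-k80",  # short name, per the Auto Zone exception above
    }],
}

This dictionary would be supplied as placement.managed_cluster.config.worker_config on the WorkflowTemplate resource.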
AcceleratorConfigResponse, AcceleratorConfigResponseArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AutoscalingConfig, AutoscalingConfigArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
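A minimal sketch of this type using the short-form resource name, as a Python dictionary literal (project, region, and policy IDs are placeholders); the referenced policy must live in the same project and Dataproc region as the managed cluster:

autoscaling_config = {
    "policy_uri": "projects/example-project/locations/us-central1/autoscalingPolicies/example-policy",
}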
AutoscalingConfigResponse, AutoscalingConfigResponseArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
AuxiliaryNodeGroup, AuxiliaryNodeGroupArgs
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroup
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- NodeGroup NodeGroupType
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroup
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroup
- Node group configuration.
- nodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- node_group NodeGroup
- Node group configuration.
- node_group_id str
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup Property Map
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
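A minimal sketch of an auxiliary node group entry in Python dictionary form (the ID, role, and machine sizes are placeholders; the DRIVER role value is an assumption and should be checked against the NodeGroupRolesItem enum for your SDK version):

auxiliary_node_group = {
    "node_group_id": "driver-pool-1",   # optional; generated if omitted
    "node_group": {
        "roles": ["DRIVER"],
        "node_group_config": {
            "num_instances": 2,
            "machine_type_uri": "n1-standard-4",
        },
    },
}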
AuxiliaryNodeGroupResponse, AuxiliaryNodeGroupResponseArgs
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- NodeGroup NodeGroupResponse
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroupResponse
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup NodeGroupResponse
- Node group configuration.
- nodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- node_group NodeGroupResponse
- Node group configuration.
- node_group_id str
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
- nodeGroup Property Map
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
ClusterConfig, ClusterConfigArgs
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroup>
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfig
- Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfig
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroup
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationAction
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig SecurityConfig
- Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroup>
- Optional. The node group settings.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfig
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfig - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroup[] - Optional. The node group settings.
- configBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfig - Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfig - Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfig - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationAction[] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig - Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfig - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfig - Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfig - Optional. Security settings for the cluster.
- softwareConfig SoftwareConfig - Optional. The config settings for cluster software.
- tempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfig - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroup] - Optional. The node group settings.
- config_bucket str - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfig - Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfig - Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfig - Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config GceClusterConfig - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfig - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationAction] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfig - Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfig - Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfig - Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfig - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- security_config SecurityConfig - Optional. Security settings for the cluster.
- software_config SoftwareConfig - Optional. The config settings for cluster software.
- temp_bucket str - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfig - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map> - Optional. The node group settings.
- configBucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map - Optional. The config for Dataproc metrics.
- encryptionConfig Property Map - Optional. Encryption settings for the cluster.
- endpointConfig Property Map - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map - Optional. Lifecycle setting for the cluster.
- masterConfig Property Map - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map - Optional. Metastore configuration.
- secondaryWorkerConfig Property Map - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map - Optional. Security settings for the cluster.
- softwareConfig Property Map - Optional. The config settings for cluster software.
- tempBucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map - Optional. The Compute Engine config settings for the cluster's worker instances.
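The sketch below is a minimal Pulumi Python example of a workflow template whose placement is a managed cluster built from a ClusterConfig. It is illustrative only: the template id, bucket names, image version, instance counts, and script URI are placeholders, and the nested Args class names follow the usual pulumi_google_native.dataproc.v1 module layout.

import pulumi_google_native.dataproc.v1 as dataproc

# Placeholder values throughout; substitute your own buckets, jobs, and sizes.
template = dataproc.WorkflowTemplate("exampleTemplate",
    id="example-template",
    jobs=[dataproc.OrderedJobArgs(
        step_id="ingest",
        pyspark_job=dataproc.PySparkJobArgs(
            main_python_file_uri="gs://example-bucket/jobs/ingest.py",
        ),
    )],
    placement=dataproc.WorkflowTemplatePlacementArgs(
        managed_cluster=dataproc.ManagedClusterArgs(
            cluster_name="example-managed-cluster",
            config=dataproc.ClusterConfigArgs(
                temp_bucket="example-temp-bucket",  # a bucket name, not a gs:// URI
                software_config=dataproc.SoftwareConfigArgs(image_version="2.1"),
                master_config=dataproc.InstanceGroupConfigArgs(num_instances=1),
                worker_config=dataproc.InstanceGroupConfigArgs(num_instances=2),
                initialization_actions=[dataproc.NodeInitializationActionArgs(
                    executable_file="gs://example-bucket/scripts/setup.sh",
                )],
            ),
        ),
    ))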
ClusterConfigResponse, ClusterConfigResponseArgs
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse> - Optional. The node group settings.
- ConfigBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse - Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse - Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse - Optional. The config settings for cluster software.
- TempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse - Optional. The node group settings.
- ConfigBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationActionResponse - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfigResponse - Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig SecurityConfigResponse - Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfigResponse - Optional. The config settings for cluster software.
- TempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse> - Optional. The node group settings.
- configBucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationActionResponse> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse - Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse - Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse - Optional. The config settings for cluster software.
- tempBucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroupResponse[] - Optional. The node group settings.
- configBucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationActionResponse[] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse - Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse - Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse - Optional. The config settings for cluster software.
- tempBucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse] - Optional. The node group settings.
- config_bucket str - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfigResponse - Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfigResponse - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationActionResponse] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfigResponse - Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfigResponse - Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- security_config SecurityConfigResponse - Optional. Security settings for the cluster.
- software_config SoftwareConfigResponse - Optional. The config settings for cluster software.
- temp_bucket str - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map> - Optional. The node group settings.
- configBucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map - Optional. The config for Dataproc metrics.
- encryptionConfig Property Map - Optional. Encryption settings for the cluster.
- endpointConfig Property Map - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map - Optional. Lifecycle setting for the cluster.
- masterConfig Property Map - Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map - Optional. Metastore configuration.
- secondaryWorkerConfig Property Map - Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map - Optional. Security settings for the cluster.
- softwareConfig Property Map - Optional. The config settings for cluster software.
- tempBucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map - Optional. The Compute Engine config settings for the cluster's worker instances.
ClusterSelector, ClusterSelectorArgs
- ClusterLabels Dictionary<string, string> - The cluster labels. Cluster must have all labels to match.
- Zone string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- ClusterLabels map[string]string - The cluster labels. Cluster must have all labels to match.
- Zone string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels Map<String,String> - The cluster labels. Cluster must have all labels to match.
- zone String - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels {[key: string]: string} - The cluster labels. Cluster must have all labels to match.
- zone string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster_labels Mapping[str, str] - The cluster labels. Cluster must have all labels to match.
- zone str - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels Map<String> - The cluster labels. Cluster must have all labels to match.
- zone String - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
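Instead of a managed cluster, a template's placement can target an existing cluster through a ClusterSelector. A minimal Pulumi Python sketch, assuming placeholder label keys, values, and zone:

import pulumi_google_native.dataproc.v1 as dataproc

# The workflow runs on an existing cluster that carries all of these labels;
# the label values and zone below are placeholders.
placement = dataproc.WorkflowTemplatePlacementArgs(
    cluster_selector=dataproc.ClusterSelectorArgs(
        cluster_labels={"env": "staging", "team": "data"},
        zone="us-central1-a",  # optional; does not affect which cluster is selected
    ),
)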
ClusterSelectorResponse, ClusterSelectorResponseArgs
- ClusterLabels Dictionary<string, string> - The cluster labels. Cluster must have all labels to match.
- Zone string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- ClusterLabels map[string]string - The cluster labels. Cluster must have all labels to match.
- Zone string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels Map<String,String> - The cluster labels. Cluster must have all labels to match.
- zone String - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels {[key: string]: string} - The cluster labels. Cluster must have all labels to match.
- zone string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster_labels Mapping[str, str] - The cluster labels. Cluster must have all labels to match.
- zone str - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels Map<String> - The cluster labels. Cluster must have all labels to match.
- zone String - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
ConfidentialInstanceConfig, ConfidentialInstanceConfigArgs
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
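ConfidentialInstanceConfig is nested under the cluster's shared GceClusterConfig. A minimal Pulumi Python sketch, assuming a zone and machine series that support Confidential VMs (values are placeholders):

import pulumi_google_native.dataproc.v1 as dataproc

# Confidential compute is set on the shared Compute Engine config; Confidential
# VMs generally also require a supported machine series (for example N2D) on
# the cluster's instance groups.
gce_config = dataproc.GceClusterConfigArgs(
    zone_uri="us-central1-a",
    confidential_instance_config=dataproc.ConfidentialInstanceConfigArgs(
        enable_confidential_compute=True,
    ),
)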
ConfidentialInstanceConfigResponse, ConfidentialInstanceConfigResponseArgs
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean - Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool - Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean - Optional. Defines whether the instance should have confidential compute enabled.
DataprocMetricConfig, DataprocMetricConfigArgs
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.Metric> - Metrics sources to enable.
- metrics List<Metric> - Metrics sources to enable.
- metrics Sequence[Metric] - Metrics sources to enable.
- metrics List<Property Map> - Metrics sources to enable.
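A hedged Pulumi Python sketch of a DataprocMetricConfig that enables one metric source. The metric source is written as its string value here (the provider also exposes it as an enum), and the override name is a placeholder:

import pulumi_google_native.dataproc.v1 as dataproc

# Collect Spark metrics; metric_overrides optionally narrows collection to
# specific metric names (placeholder value shown).
metric_config = dataproc.DataprocMetricConfigArgs(
    metrics=[dataproc.MetricArgs(
        metric_source="SPARK",
        metric_overrides=["spark:driver:DAGScheduler:job.allJobs"],
    )],
)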
DataprocMetricConfigResponse, DataprocMetricConfigResponseArgs
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.MetricResponse> - Metrics sources to enable.
- Metrics []MetricResponse - Metrics sources to enable.
- metrics List<MetricResponse> - Metrics sources to enable.
- metrics MetricResponse[] - Metrics sources to enable.
- metrics Sequence[MetricResponse] - Metrics sources to enable.
- metrics List<Property Map> - Metrics sources to enable.
DiskConfig, DiskConfigArgs
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
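For reference, a minimal Pulumi Python sketch that wires a DiskConfig into a worker instance group; the instance count, machine type, and disk sizes are placeholder choices, not defaults:

import pulumi_google_native.dataproc.v1 as dataproc

# Workers with a 200 GB balanced boot disk and one NVMe local SSD each.
worker_config = dataproc.InstanceGroupConfigArgs(
    num_instances=2,
    machine_type_uri="n2-standard-4",
    disk_config=dataproc.DiskConfigArgs(
        boot_disk_size_gb=200,
        boot_disk_type="pd-balanced",
        num_local_ssds=1,
        local_ssd_interface="nvme",
    ),
)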
DiskConfigResponse, DiskConfigResponseArgs
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- BootDiskSizeGb int - Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number - Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String - Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number - Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
EncryptionConfig, EncryptionConfigArgs
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
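A minimal Pulumi Python sketch of a customer-managed encryption key on the cluster's persistent disks; the KMS key resource name below is a placeholder:

import pulumi_google_native.dataproc.v1 as dataproc

# CMEK for PD disks; the referenced key must be usable from the cluster's region.
encryption = dataproc.EncryptionConfigArgs(
    gce_pd_kms_key_name=(
        "projects/example-project/locations/us-central1/"
        "keyRings/example-ring/cryptoKeys/example-key"
    ),
)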
EncryptionConfigResponse, EncryptionConfigResponseArgs
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String - Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
EndpointConfig, EndpointConfigArgs
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable_http_port_access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
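As a sketch, enabling the single EndpointConfig flag above looks like this in C#:
using GoogleNative = Pulumi.GoogleNative;

// Allow HTTP access to specific cluster ports from external sources (off by default).
var endpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
{
    EnableHttpPortAccess = true,
};
The resulting port-to-URL map is reported back on the EndpointConfigResponse type's httpPorts field described below.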
EndpointConfigResponse, EndpointConfigResponseArgs
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts Dictionary<string, string> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- EnableHttpPortAccess bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts map[string]string - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String,String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts {[key: string]: string} - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_ports Mapping[str, str] - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
FlinkJob, FlinkJobArgs
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- JarFileUris List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- SavepointUri string - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- Args []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- JarFileUris []string - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- LoggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties map[string]string - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- SavepointUri string - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- loggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String,String> - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepointUri String - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jarFileUris string[] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- loggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- mainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- mainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- properties {[key: string]: string} - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepointUri string - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jar_file_uris Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- logging_config LoggingConfig - Optional. The runtime log config for job execution.
- main_class str - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- main_jar_file_uri str - The HCFS URI of the jar file that contains the main class.
- properties Mapping[str, str] - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepoint_uri str - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- loggingConfig Property Map - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String> - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepointUri String - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
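For orientation, a hedged C# sketch of a workflow step that runs a Flink job is shown below. The step id, bucket paths, main class, and property values are invented placeholders, and it assumes OrderedJobArgs accepts a FlinkJob input alongside the other job types on this page.
using GoogleNative = Pulumi.GoogleNative;

// One workflow step that runs a Flink job (all names are placeholders).
var flinkStep = new GoogleNative.Dataproc.V1.Inputs.OrderedJobArgs
{
    StepId = "flink-wordcount",
    FlinkJob = new GoogleNative.Dataproc.V1.Inputs.FlinkJobArgs
    {
        MainClass = "org.example.WordCount",                  // must be on the CLASSPATH or in JarFileUris
        JarFileUris = new[] { "gs://my-bucket/jobs/wordcount.jar" },
        Args = new[] { "--input", "gs://my-bucket/data/input.txt" },
        Properties =
        {
            { "parallelism.default", "4" },                   // Flink properties; avoid flags like --conf in Args
        },
        // SavepointUri = "gs://my-bucket/savepoints/latest", // optionally resume from a previous savepoint
    },
};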
FlinkJobResponse, FlinkJobResponseArgs
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- JarFileUris List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- SavepointUri string - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- Args []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- JarFileUris []string - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- LoggingConfig LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties map[string]string - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- SavepointUri string - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- loggingConfig LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String,String> - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepointUri String - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jarFileUris string[] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- loggingConfig LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- mainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- properties {[key: string]: string} - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepointUri string - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jar_file_uris Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- logging_config LoggingConfigResponse - Optional. The runtime log config for job execution.
- main_class str - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- main_jar_file_uri str - The HCFS URI of the jar file that contains the main class.
- properties Mapping[str, str] - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepoint_uri str - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- loggingConfig Property Map - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String> - Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
- savepointUri String - Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
GceClusterConfig, GceClusterConfigArgs
- ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess Pulumi.GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes List<string> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- Tags List<string> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- ConfidentialInstanceConfig ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroupAffinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- ReservationAffinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes []string - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- Tags []string - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string} - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags string[] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential_instance_config ConfidentialInstanceConfig - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str] - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- node_group_affinity NodeGroupAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess - Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- service_account str - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags Sequence[str] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig Property Map - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String> - Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity Property Map - Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL" - Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map - Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
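The sketch below pulls several of the GceClusterConfig fields above into one C# input object. All project, subnetwork, service account, tag, and metadata values are placeholders, and the tag list property is assumed to be named Tags in the .NET SDK.
using GoogleNative = Pulumi.GoogleNative;

// Networking and identity settings shared by all cluster VMs (placeholder values).
var gceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
    ZoneUri = "us-central1-a",
    SubnetworkUri = "projects/my-project/regions/us-central1/subnetworks/sub0",
    InternalIpOnly = true,   // instances get internal IP addresses only
    ServiceAccount = "dataproc-vm@my-project.iam.gserviceaccount.com",
    ServiceAccountScopes = new[] { "https://www.googleapis.com/auth/cloud-platform" },
    Tags = new[] { "dataproc" },          // Compute Engine network tags
    Metadata =
    {
        { "enable-oslogin", "true" },     // arbitrary example metadata entry
    },
};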
GceClusterConfigPrivateIpv6GoogleAccess, GceClusterConfigPrivateIpv6GoogleAccessArgs
- PrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- GceClusterConfigPrivateIpv6GoogleAccessOutbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- GceClusterConfigPrivateIpv6GoogleAccessBidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PrivateIpv6GoogleAccessUnspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED - If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- INHERIT_FROM_SUBNETWORK - Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- OUTBOUND - Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- BIDIRECTIONAL - Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" - If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- "INHERIT_FROM_SUBNETWORK" - Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- "OUTBOUND" - Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- "BIDIRECTIONAL" - Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
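In the .NET SDK the enum above is strongly typed; a minimal sketch of selecting outbound-only access on a GceClusterConfig input:
using GoogleNative = Pulumi.GoogleNative;

// Request outbound-only private IPv6 access to Google services for cluster VMs.
var gceClusterConfigWithIpv6 = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
    PrivateIpv6GoogleAccess =
        GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.Outbound,
};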
GceClusterConfigResponse, GceClusterConfigResponseArgs
- Confidential
Instance Pulumi.Config Google Native. Dataproc. V1. Inputs. Confidential Instance Config Response - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- Node
Group Pulumi.Affinity Google Native. Dataproc. V1. Inputs. Node Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google stringAccess - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity Pulumi.Google Native. Dataproc. V1. Inputs. Reservation Affinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account List<string>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Pulumi.Config Google Native. Dataproc. V1. Inputs. Shielded Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- Confidential
Instance ConfidentialConfig Instance Config Response - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- Node
Group NodeAffinity Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google stringAccess - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account []stringScopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance ShieldedConfig Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential
Instance ConfidentialConfig Instance Config Response - Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal
Ip BooleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfigResponse
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess string
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags string[]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential_instance_config ConfidentialInstanceConfigResponse
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- node_group_affinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access str
- Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming Zonal reservation.
- service_account str
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags Sequence[str]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig Property Map
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity Property Map
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
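Taken together, these fields describe the Compute Engine environment of a managed cluster. The following sketch (C#, in the same style as the constructor example above) shows one plausible way to supply them through the template's placement. The template id, region, subnetwork path, bucket, and JAR are placeholders, and the Id, ManagedClusterArgs, ClusterConfigArgs, and SparkJobArgs input names are assumed to mirror the corresponding Dataproc API messages rather than being taken from this page.
var workflowTemplateWithGceConfig = new GoogleNative.Dataproc.V1.WorkflowTemplate("workflowTemplateWithGceConfig", new()
{
    Id = "my-template",        // placeholder template id
    Location = "us-central1",  // placeholder region
    Jobs = new[]
    {
        new GoogleNative.Dataproc.V1.Inputs.OrderedJobArgs
        {
            StepId = "spark-step",
            SparkJob = new GoogleNative.Dataproc.V1.Inputs.SparkJobArgs
            {
                MainJarFileUri = "gs://my-bucket/jobs/example.jar", // placeholder JAR
            },
        },
    },
    Placement = new GoogleNative.Dataproc.V1.Inputs.WorkflowTemplatePlacementArgs
    {
        ManagedCluster = new GoogleNative.Dataproc.V1.Inputs.ManagedClusterArgs
        {
            ClusterName = "managed-cluster",
            Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
            {
                GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
                {
                    // Internal-IP-only clusters need a subnetwork, and all off-cluster
                    // dependencies must be reachable without external IP addresses.
                    InternalIpOnly = true,
                    SubnetworkUri = "projects/my-project/regions/us-central1/subnetworks/sub0",
                    ZoneUri = "us-central1-a",
                    Tags = new[] { "dataproc-workflow" },
                    ServiceAccountScopes = new[]
                    {
                        "https://www.googleapis.com/auth/cloud-platform",
                    },
                },
            },
        },
    },
});
Leaving serviceAccount unset falls back to the Compute Engine default service account, per the property description above.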
GkeClusterConfig, GkeClusterConfigArgs
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTarget>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTarget
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTarget>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTarget[]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTarget]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
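For Dataproc-on-GKE deployments, the only field that must reference existing infrastructure is the target GKE cluster itself. A minimal C# sketch of building this input follows (same placeholder conventions as above; the project, location, and cluster name are illustrative):
// Minimal sketch: a GkeClusterConfigArgs value that targets an existing GKE cluster.
var gkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
{
    GkeClusterTarget = "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
    // NodePoolTarget is omitted, so Dataproc constructs a DEFAULT GkeNodePoolTarget,
    // as described above. To pin workloads to specific node pools, list one
    // GkeNodePoolTarget per role, with exactly one of them holding the DEFAULT role.
};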
GkeClusterConfigResponse, GkeClusterConfigResponseArgs
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTargetResponse
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTargetResponse>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTargetResponse[]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTargetResponse]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
GkeNodeConfig, GkeNodeConfigArgs
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfig>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Accelerators []GkeNodePoolAcceleratorConfig
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<GkeNodePoolAcceleratorConfig>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Integer
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators GkeNodePoolAcceleratorConfig[]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators Sequence[GkeNodePoolAcceleratorConfig]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- boot_disk_kms_key str
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- local_ssd_count int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machine_type str
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- min_cpu_platform str
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<Property Map>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).