
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.aiplatform/v1beta1.PersistentResource

    Creates a PersistentResource.

    Create PersistentResource Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new PersistentResource(name: string, args: PersistentResourceArgs, opts?: CustomResourceOptions);
    @overload
    def PersistentResource(resource_name: str,
                           args: PersistentResourceArgs,
                           opts: Optional[ResourceOptions] = None)
    
    @overload
    def PersistentResource(resource_name: str,
                           opts: Optional[ResourceOptions] = None,
                           persistent_resource_id: Optional[str] = None,
                           resource_pools: Optional[Sequence[GoogleCloudAiplatformV1beta1ResourcePoolArgs]] = None,
                           display_name: Optional[str] = None,
                           encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
                           labels: Optional[Mapping[str, str]] = None,
                           location: Optional[str] = None,
                           name: Optional[str] = None,
                           network: Optional[str] = None,
                           project: Optional[str] = None,
                           reserved_ip_ranges: Optional[Sequence[str]] = None,
                           resource_runtime_spec: Optional[GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs] = None)
    func NewPersistentResource(ctx *Context, name string, args PersistentResourceArgs, opts ...ResourceOption) (*PersistentResource, error)
    public PersistentResource(string name, PersistentResourceArgs args, CustomResourceOptions? opts = null)
    public PersistentResource(String name, PersistentResourceArgs args)
    public PersistentResource(String name, PersistentResourceArgs args, CustomResourceOptions options)
    
    type: google-native:aiplatform/v1beta1:PersistentResource
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args PersistentResourceArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args PersistentResourceArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args PersistentResourceArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args PersistentResourceArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args PersistentResourceArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Example

    The following reference example uses placeholder values for all input properties; a more concrete, hedged sketch follows the placeholder listings.

    var persistentResourceResource = new GoogleNative.Aiplatform.V1Beta1.PersistentResource("persistentResourceResource", new()
    {
        PersistentResourceId = "string",
        ResourcePools = new[]
        {
            new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolArgs
            {
                MachineSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorType = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
                    MachineType = "string",
                    TpuTopology = "string",
                },
                AutoscalingSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs
                {
                    MaxReplicaCount = "string",
                    MinReplicaCount = "string",
                },
                DiskSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecArgs
                {
                    BootDiskSizeGb = 0,
                    BootDiskType = "string",
                },
                Id = "string",
                ReplicaCount = "string",
            },
        },
        DisplayName = "string",
        EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
        {
            KmsKeyName = "string",
        },
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Name = "string",
        Network = "string",
        Project = "string",
        ReservedIpRanges = new[]
        {
            "string",
        },
        ResourceRuntimeSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs
        {
            RaySpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1RaySpecArgs
            {
                HeadNodeResourcePoolId = "string",
                ImageUri = "string",
                ResourcePoolImages = 
                {
                    { "string", "string" },
                },
            },
            ServiceAccountSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs
            {
                EnableCustomServiceAccount = false,
                ServiceAccount = "string",
            },
        },
    });
    
    example, err := aiplatformv1beta1.NewPersistentResource(ctx, "persistentResourceResource", &aiplatformv1beta1.PersistentResourceArgs{
        PersistentResourceId: pulumi.String("string"),
        ResourcePools: aiplatformv1beta1.GoogleCloudAiplatformV1beta1ResourcePoolArray{
            &aiplatformv1beta1.GoogleCloudAiplatformV1beta1ResourcePoolArgs{
                MachineSpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecArgs{
                    AcceleratorCount: pulumi.Int(0),
                    AcceleratorType:  aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified,
                    MachineType:      pulumi.String("string"),
                    TpuTopology:      pulumi.String("string"),
                },
                AutoscalingSpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs{
                    MaxReplicaCount: pulumi.String("string"),
                    MinReplicaCount: pulumi.String("string"),
                },
                DiskSpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1DiskSpecArgs{
                    BootDiskSizeGb: pulumi.Int(0),
                    BootDiskType:   pulumi.String("string"),
                },
                Id:           pulumi.String("string"),
                ReplicaCount: pulumi.String("string"),
            },
        },
        DisplayName: pulumi.String("string"),
        EncryptionSpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
            KmsKeyName: pulumi.String("string"),
        },
        Labels: pulumi.StringMap{
            "string": pulumi.String("string"),
        },
        Location: pulumi.String("string"),
        Name:     pulumi.String("string"),
        Network:  pulumi.String("string"),
        Project:  pulumi.String("string"),
        ReservedIpRanges: pulumi.StringArray{
            pulumi.String("string"),
        },
        ResourceRuntimeSpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs{
            RaySpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1RaySpecArgs{
                HeadNodeResourcePoolId: pulumi.String("string"),
                ImageUri:               pulumi.String("string"),
                ResourcePoolImages: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            ServiceAccountSpec: &aiplatformv1beta1.GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs{
                EnableCustomServiceAccount: pulumi.Bool(false),
                ServiceAccount:             pulumi.String("string"),
            },
        },
    })
    
    var persistentResourceResource = new PersistentResource("persistentResourceResource", PersistentResourceArgs.builder()        
        .persistentResourceId("string")
        .resourcePools(GoogleCloudAiplatformV1beta1ResourcePoolArgs.builder()
            .machineSpec(GoogleCloudAiplatformV1beta1MachineSpecArgs.builder()
                .acceleratorCount(0)
                .acceleratorType("ACCELERATOR_TYPE_UNSPECIFIED")
                .machineType("string")
                .tpuTopology("string")
                .build())
            .autoscalingSpec(GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs.builder()
                .maxReplicaCount("string")
                .minReplicaCount("string")
                .build())
            .diskSpec(GoogleCloudAiplatformV1beta1DiskSpecArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .build())
            .id("string")
            .replicaCount("string")
            .build())
        .displayName("string")
        .encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
            .kmsKeyName("string")
            .build())
        .labels(Map.of("string", "string"))
        .location("string")
        .name("string")
        .network("string")
        .project("string")
        .reservedIpRanges("string")
        .resourceRuntimeSpec(GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs.builder()
            .raySpec(GoogleCloudAiplatformV1beta1RaySpecArgs.builder()
                .headNodeResourcePoolId("string")
                .imageUri("string")
                .resourcePoolImages(Map.of("string", "string"))
                .build())
            .serviceAccountSpec(GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs.builder()
                .enableCustomServiceAccount(false)
                .serviceAccount("string")
                .build())
            .build())
        .build());
    
    persistent_resource_resource = google_native.aiplatform.v1beta1.PersistentResource("persistentResourceResource",
        persistent_resource_id="string",
        resource_pools=[google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourcePoolArgs(
            machine_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecArgs(
                accelerator_count=0,
                accelerator_type=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
                machine_type="string",
                tpu_topology="string",
            ),
            autoscaling_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs(
                max_replica_count="string",
                min_replica_count="string",
            ),
            disk_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1DiskSpecArgs(
                boot_disk_size_gb=0,
                boot_disk_type="string",
            ),
            id="string",
            replica_count="string",
        )],
        display_name="string",
        encryption_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1EncryptionSpecArgs(
            kms_key_name="string",
        ),
        labels={
            "string": "string",
        },
        location="string",
        name="string",
        network="string",
        project="string",
        reserved_ip_ranges=["string"],
        resource_runtime_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs(
            ray_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1RaySpecArgs(
                head_node_resource_pool_id="string",
                image_uri="string",
                resource_pool_images={
                    "string": "string",
                },
            ),
            service_account_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs(
                enable_custom_service_account=False,
                service_account="string",
            ),
        ))
    
    const persistentResourceResource = new google_native.aiplatform.v1beta1.PersistentResource("persistentResourceResource", {
        persistentResourceId: "string",
        resourcePools: [{
            machineSpec: {
                acceleratorCount: 0,
                acceleratorType: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
                machineType: "string",
                tpuTopology: "string",
            },
            autoscalingSpec: {
                maxReplicaCount: "string",
                minReplicaCount: "string",
            },
            diskSpec: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
            },
            id: "string",
            replicaCount: "string",
        }],
        displayName: "string",
        encryptionSpec: {
            kmsKeyName: "string",
        },
        labels: {
            string: "string",
        },
        location: "string",
        name: "string",
        network: "string",
        project: "string",
        reservedIpRanges: ["string"],
        resourceRuntimeSpec: {
            raySpec: {
                headNodeResourcePoolId: "string",
                imageUri: "string",
                resourcePoolImages: {
                    string: "string",
                },
            },
            serviceAccountSpec: {
                enableCustomServiceAccount: false,
                serviceAccount: "string",
            },
        },
    });
    
    type: google-native:aiplatform/v1beta1:PersistentResource
    properties:
        displayName: string
        encryptionSpec:
            kmsKeyName: string
        labels:
            string: string
        location: string
        name: string
        network: string
        persistentResourceId: string
        project: string
        reservedIpRanges:
            - string
        resourcePools:
            - autoscalingSpec:
                maxReplicaCount: string
                minReplicaCount: string
              diskSpec:
                bootDiskSizeGb: 0
                bootDiskType: string
              id: string
              machineSpec:
                acceleratorCount: 0
                acceleratorType: ACCELERATOR_TYPE_UNSPECIFIED
                machineType: string
                tpuTopology: string
              replicaCount: string
        resourceRuntimeSpec:
            raySpec:
                headNodeResourcePoolId: string
                imageUri: string
                resourcePoolImages:
                    string: string
            serviceAccountSpec:
                enableCustomServiceAccount: false
                serviceAccount: string
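
    As a more concrete illustration than the placeholder listings above, the following TypeScript sketch creates a small Ray-ready persistent resource. The project, region, pool ID, and machine type are assumed example values, not defaults of this API.

    import * as google_native from "@pulumi/google-native";

    // A minimal sketch, assuming project "my-project" and region "us-central1";
    // adjust every value to your own environment.
    const cluster = new google_native.aiplatform.v1beta1.PersistentResource("ray-cluster", {
        persistentResourceId: "my-ray-cluster", // must match /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/
        project: "my-project",
        location: "us-central1",
        displayName: "Ray cluster for experimentation",
        resourcePools: [{
            id: "head-pool",
            replicaCount: "1",                  // int64 values are passed as strings
            machineSpec: { machineType: "n1-standard-8" },
            diskSpec: { bootDiskType: "pd-ssd", bootDiskSizeGb: 100 },
        }],
    });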
    

    PersistentResource Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The PersistentResource resource accepts the following input properties:

    PersistentResourceId string
    Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
    ResourcePools List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePool>
    The spec of the pools of different resources.
    DisplayName string
    Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec
    Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
    Labels Dictionary<string, string>
    Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    Name string
    Immutable. Resource name of a PersistentResource.
    Network string
    Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network. (A short sketch of this format appears after this property list.)
    Project string
    ReservedIpRanges List<string>
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    ResourceRuntimeSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourceRuntimeSpec
    Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
    PersistentResourceId string
    Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
    ResourcePools []GoogleCloudAiplatformV1beta1ResourcePoolArgs
    The spec of the pools of different resources.
    DisplayName string
    Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
    Labels map[string]string
    Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    Name string
    Immutable. Resource name of a PersistentResource.
    Network string
    Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.
    Project string
    ReservedIpRanges []string
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    ResourceRuntimeSpec GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs
    Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
    persistentResourceId String
    Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
    resourcePools List<GoogleCloudAiplatformV1beta1ResourcePool>
    The spec of the pools of different resources.
    displayName String
    Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
    Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
    labels Map<String,String>
    Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    name String
    Immutable. Resource name of a PersistentResource.
    network String
    Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.
    project String
    reservedIpRanges List<String>
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    resourceRuntimeSpec GoogleCloudAiplatformV1beta1ResourceRuntimeSpec
    Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
    persistentResourceId string
    Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
    resourcePools GoogleCloudAiplatformV1beta1ResourcePool[]
    The spec of the pools of different resources.
    displayName string
    Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
    Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
    labels {[key: string]: string}
    Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location string
    name string
    Immutable. Resource name of a PersistentResource.
    network string
    Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.
    project string
    reservedIpRanges string[]
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    resourceRuntimeSpec GoogleCloudAiplatformV1beta1ResourceRuntimeSpec
    Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
    persistent_resource_id str
    Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
    resource_pools Sequence[GoogleCloudAiplatformV1beta1ResourcePoolArgs]
    The spec of the pools of different resources.
    display_name str
    Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
    labels Mapping[str, str]
    Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location str
    name str
    Immutable. Resource name of a PersistentResource.
    network str
    Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.
    project str
    reserved_ip_ranges Sequence[str]
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    resource_runtime_spec GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs
    Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
    persistentResourceId String
    Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
    resourcePools List<Property Map>
    The spec of the pools of different resources.
    displayName String
    Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryptionSpec Property Map
    Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
    labels Map<String>
    Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    name String
    Immutable. Resource name of a PersistentResource.
    network String
    Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.
    project String
    reservedIpRanges List<String>
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    resourceRuntimeSpec Property Map
    Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
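
    For the network and reservedIpRanges inputs above, the documented formats are easier to read with concrete values. The project number, network name, and range name below are hypothetical, shown only to illustrate the shape of the strings.

    // Hypothetical values illustrating the documented formats (TypeScript).
    const networkSettings = {
        // projects/{project-number}/global/networks/{network}; VPC Network Peering
        // with Vertex AI must already be configured for this network.
        network: "projects/12345/global/networks/my-vpc",
        // Names of reserved IP ranges under that VPC network.
        reservedIpRanges: ["vertex-ai-ip-range"],
    };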

    Outputs

    All input properties are implicitly available as output properties. Additionally, the PersistentResource resource produces the following output properties:

    CreateTime string
    Time when the PersistentResource was created.
    Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse
    Only populated when persistent resource's state is STOPPING or ERROR.
    Id string
    The provider-assigned unique ID for this managed resource.
    ResourceRuntime Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1ResourceRuntimeResponse
    Runtime information of the Persistent Resource.
    StartTime string
    Time when the PersistentResource for the first time entered the RUNNING state.
    State string
    The detailed state of the PersistentResource.
    UpdateTime string
    Time when the PersistentResource was most recently updated.
    CreateTime string
    Time when the PersistentResource was created.
    Error GoogleRpcStatusResponse
    Only populated when persistent resource's state is STOPPING or ERROR.
    Id string
    The provider-assigned unique ID for this managed resource.
    ResourceRuntime GoogleCloudAiplatformV1beta1ResourceRuntimeResponse
    Runtime information of the Persistent Resource.
    StartTime string
    Time when the PersistentResource for the first time entered the RUNNING state.
    State string
    The detailed state of the PersistentResource.
    UpdateTime string
    Time when the PersistentResource was most recently updated.
    createTime String
    Time when the PersistentResource was created.
    error GoogleRpcStatusResponse
    Only populated when persistent resource's state is STOPPING or ERROR.
    id String
    The provider-assigned unique ID for this managed resource.
    resourceRuntime GoogleCloudAiplatformV1beta1ResourceRuntimeResponse
    Runtime information of the Persistent Resource.
    startTime String
    Time when the PersistentResource for the first time entered the RUNNING state.
    state String
    The detailed state of the PersistentResource.
    updateTime String
    Time when the PersistentResource was most recently updated.
    createTime string
    Time when the PersistentResource was created.
    error GoogleRpcStatusResponse
    Only populated when persistent resource's state is STOPPING or ERROR.
    id string
    The provider-assigned unique ID for this managed resource.
    resourceRuntime GoogleCloudAiplatformV1beta1ResourceRuntimeResponse
    Runtime information of the Persistent Resource.
    startTime string
    Time when the PersistentResource for the first time entered the RUNNING state.
    state string
    The detailed state of the PersistentResource.
    updateTime string
    Time when the PersistentResource was most recently updated.
    create_time str
    Time when the PersistentResource was created.
    error GoogleRpcStatusResponse
    Only populated when persistent resource's state is STOPPING or ERROR.
    id str
    The provider-assigned unique ID for this managed resource.
    resource_runtime GoogleCloudAiplatformV1beta1ResourceRuntimeResponse
    Runtime information of the Persistent Resource.
    start_time str
    Time when the PersistentResource for the first time entered the RUNNING state.
    state str
    The detailed state of the PersistentResource.
    update_time str
    Time when the PersistentResource was most recently updated.
    createTime String
    Time when the PersistentResource was created.
    error Property Map
    Only populated when persistent resource's state is STOPPING or ERROR.
    id String
    The provider-assigned unique ID for this managed resource.
    resourceRuntime Property Map
    Runtime information of the Persistent Resource.
    startTime String
    Time when the PersistentResource for the first time entered the RUNNING state.
    state String
    The detailed state of the PersistentResource.
    updateTime String
    Time when the PersistentResource was most recently updated.
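
    Because every input is echoed back as an output, the extra output properties listed above can be exported from a stack like any other Pulumi output. A minimal TypeScript sketch, assuming a PersistentResource named cluster as in the earlier example:

    // `cluster` is assumed to be a PersistentResource created elsewhere in the program.
    export const clusterCreateTime = cluster.createTime;   // creation timestamp
    export const clusterState = cluster.state;             // e.g. "RUNNING" once provisioning finishes
    export const clusterRuntime = cluster.resourceRuntime; // runtime information populated by the service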

    Supporting Types

    GoogleCloudAiplatformV1beta1DiskSpec, GoogleCloudAiplatformV1beta1DiskSpecArgs

    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Integer
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    boot_disk_size_gb int
    Size in GB of the boot disk (default is 100GB).
    boot_disk_type str
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

    GoogleCloudAiplatformV1beta1DiskSpecResponse, GoogleCloudAiplatformV1beta1DiskSpecResponseArgs

    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Integer
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    boot_disk_size_gb int
    Size in GB of the boot disk (default is 100GB).
    boot_disk_type str
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

    GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

    GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

    GoogleCloudAiplatformV1beta1MachineSpec, GoogleCloudAiplatformV1beta1MachineSpecArgs

    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType Pulumi.GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. (A hedged machine-spec sketch follows this list.)
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Integer
    The number of accelerators to attach to the machine.
    acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount number
    The number of accelerators to attach to the machine.
    acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    accelerator_count int
    The number of accelerators to attach to the machine.
    accelerator_type GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machine_type str
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpu_topology str
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Number
    The number of accelerators to attach to the machine.
    acceleratorType "ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "NVIDIA_A100_80GB" | "NVIDIA_L4" | "NVIDIA_H100_80GB" | "TPU_V2" | "TPU_V3" | "TPU_V4_POD" | "TPU_V5_LITEPOD"
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
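
    A hedged TypeScript sketch of a GPU-backed resource pool using this machine spec. The a2-highgpu-1g / NVIDIA_TESLA_A100 pairing shown here is a typical combination used only for illustration; check the Vertex AI machine type documentation for currently supported pairings.

    // Assumed machine type, accelerator, and pool ID; not defaults of this API.
    const gpuPool = {
        id: "gpu-pool",
        replicaCount: "2",
        machineSpec: {
            machineType: "a2-highgpu-1g",
            acceleratorType: google_native.aiplatform.v1beta1
                .GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.NvidiaTeslaA100,
            acceleratorCount: 1,
        },
    };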

    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType, GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeArgs

    AcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    NvidiaA10080gb
    NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
    NvidiaL4
    NVIDIA_L4: Nvidia L4 GPU.
    NvidiaH10080gb
    NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
    TpuV2
    TPU_V2: TPU v2.
    TpuV3
    TPU_V3: TPU v3.
    TpuV4Pod
    TPU_V4_POD: TPU v4.
    TpuV5Litepod
    TPU_V5_LITEPOD: TPU v5.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaA10080gb
    NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaL4
    NVIDIA_L4: Nvidia L4 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaH10080gb
    NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV2
    TPU_V2: TPU v2.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV3
    TPU_V3: TPU v3.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV4Pod
    TPU_V4_POD: TPU v4.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV5Litepod
    TPU_V5_LITEPOD: TPU v5.
    AcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    NvidiaA10080gb
    NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
    NvidiaL4
    NVIDIA_L4: Nvidia L4 GPU.
    NvidiaH10080gb
    NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
    TpuV2
    TPU_V2: TPU v2.
    TpuV3
    TPU_V3: TPU v3.
    TpuV4Pod
    TPU_V4_POD: TPU v4.
    TpuV5Litepod
    TPU_V5_LITEPOD: TPU v5.
    AcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    NvidiaA10080gb
    NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
    NvidiaL4
    NVIDIA_L4: Nvidia L4 GPU.
    NvidiaH10080gb
    NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
    TpuV2
    TPU_V2: TPU v2.
    TpuV3
    TPU_V3: TPU v3.
    TpuV4Pod
    TPU_V4_POD: TPU v4.
    TpuV5Litepod
    TPU_V5_LITEPOD: TPU v5.
    ACCELERATOR_TYPE_UNSPECIFIED
    ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
    NVIDIA_TESLA_K80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NVIDIA_TESLA_P100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NVIDIA_TESLA_V100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NVIDIA_TESLA_P4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NVIDIA_TESLA_T4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NVIDIA_TESLA_A100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    NVIDIA_A10080GB
    NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
    NVIDIA_L4
    NVIDIA_L4: Nvidia L4 GPU.
    NVIDIA_H10080GB
    NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
    TPU_V2
    TPU_V2: TPU v2.
    TPU_V3
    TPU_V3: TPU v3.
    TPU_V4_POD
    TPU_V4_POD: TPU v4.
    TPU_V5_LITEPOD
    TPU_V5_LITEPOD: TPU v5.
    "ACCELERATOR_TYPE_UNSPECIFIED"
    ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
    "NVIDIA_TESLA_K80"
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    "NVIDIA_TESLA_P100"
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    "NVIDIA_TESLA_V100"
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    "NVIDIA_TESLA_P4"
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    "NVIDIA_TESLA_T4"
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    "NVIDIA_TESLA_A100"
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    "NVIDIA_A100_80GB"
    NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
    "NVIDIA_L4"
    NVIDIA_L4: Nvidia L4 GPU.
    "NVIDIA_H100_80GB"
    NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
    "TPU_V2"
    TPU_V2: TPU v2.
    "TPU_V3"
    TPU_V3: TPU v3.
    "TPU_V4_POD"
    TPU_V4_POD: TPU v4.
    "TPU_V5_LITEPOD"
    TPU_V5_LITEPOD: TPU v5.

    GoogleCloudAiplatformV1beta1MachineSpecResponse, GoogleCloudAiplatformV1beta1MachineSpecResponseArgs

    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Integer
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount number
    The number of accelerators to attach to the machine.
    acceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    accelerator_count int
    The number of accelerators to attach to the machine.
    accelerator_type str
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machine_type str
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpu_topology str
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Number
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

    GoogleCloudAiplatformV1beta1RaySpec, GoogleCloudAiplatformV1beta1RaySpecArgs

    HeadNodeResourcePoolId string
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    ImageUri string
    Optional. Default image used to select a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
    ResourcePoolImages Dictionary<string, string>
    Optional. Required if image_uri isn't set. A map from resource_pool_id to a prebuilt Ray image, for cases where different head/worker pools need different images. This map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    HeadNodeResourcePoolId string
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    ImageUri string
    Optional. Default image used to select a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
    ResourcePoolImages map[string]string
    Optional. Required if image_uri isn't set. A map from resource_pool_id to a prebuilt Ray image, for cases where different head/worker pools need different images. This map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    headNodeResourcePoolId String
    Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set.
    imageUri String
    Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from Vertex prebuilt images. Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field.
    resourcePoolImages Map<String,String>
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" }
    headNodeResourcePoolId string
    Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set.
    imageUri string
    Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from Vertex prebuilt images. Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field.
    resourcePoolImages {[key: string]: string}
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" }
    head_node_resource_pool_id str
    Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set.
    image_uri str
    Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from Vertex prebuilt images. Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field.
    resource_pool_images Mapping[str, str]
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" }
    headNodeResourcePoolId String
    Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set.
    imageUri String
    Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from Vertex prebuilt images. Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field.
    resourcePoolImages Map<String>
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" }
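
    As a rough sketch (TypeScript), a Ray spec that pins the head node to one pool and assigns per-pool images could look like this; the pool IDs and image URIs are placeholders, not values from this page.

    // Hypothetical Ray spec: the head node comes from "ray_head_node_pool" and
    // each pool gets its own prebuilt Ray image. When resourcePoolImages is used,
    // the map must cover every resource pool ID in the PersistentResource.
    const raySpec = {
        headNodeResourcePoolId: "ray_head_node_pool",
        resourcePoolImages: {
            "ray_head_node_pool": "us-docker.pkg.dev/my-project/ray/head:latest",      // placeholder image
            "ray_worker_node_pool1": "us-docker.pkg.dev/my-project/ray/worker:latest", // placeholder image
        },
    };

    Alternatively, set imageUri alone when every pool should run the same Ray image.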

    GoogleCloudAiplatformV1beta1RaySpecResponse, GoogleCloudAiplatformV1beta1RaySpecResponseArgs

    HeadNodeResourcePoolId string
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    ImageUri string
    Optional. Default image for choosing a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools need the same Ray image; otherwise, use the resource_pool_images field.
    ResourcePoolImages Dictionary<string, string>
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where different head/worker pools need different images. The map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    HeadNodeResourcePoolId string
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    ImageUri string
    Optional. Default image for choosing a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools need the same Ray image; otherwise, use the resource_pool_images field.
    ResourcePoolImages map[string]string
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where different head/worker pools need different images. The map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    headNodeResourcePoolId String
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    imageUri String
    Optional. Default image for choosing a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools need the same Ray image; otherwise, use the resource_pool_images field.
    resourcePoolImages Map<String,String>
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where different head/worker pools need different images. The map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    headNodeResourcePoolId string
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    imageUri string
    Optional. Default image for choosing a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools need the same Ray image; otherwise, use the resource_pool_images field.
    resourcePoolImages {[key: string]: string}
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where different head/worker pools need different images. The map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    head_node_resource_pool_id str
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    image_uri str
    Optional. Default image for choosing a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools need the same Ray image; otherwise, use the resource_pool_images field.
    resource_pool_images Mapping[str, str]
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where different head/worker pools need different images. The map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
    headNodeResourcePoolId String
    Optional. Indicates which resource pool serves as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
    imageUri String
    Optional. Default image for choosing a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools need the same Ray image; otherwise, use the resource_pool_images field.
    resourcePoolImages Map<String>
    Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where different head/worker pools need different images. The map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }

    GoogleCloudAiplatformV1beta1ResourcePool, GoogleCloudAiplatformV1beta1ResourcePoolArgs

    MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpec
    Immutable. The specification of a single machine.
    AutoscalingSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
    Optional. Spec to configure GKE autoscaling.
    DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpec
    Optional. Disk spec for the machine in this node pool.
    Id string
    Immutable. The unique ID in a PersistentResource for referring to this resource pool. You can specify it if necessary; otherwise, it's generated automatically.
    ReplicaCount string
    Optional. The total number of machines to use for this resource pool.
    MachineSpec GoogleCloudAiplatformV1beta1MachineSpec
    Immutable. The specification of a single machine.
    AutoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
    Optional. Spec to configure GKE autoscaling.
    DiskSpec GoogleCloudAiplatformV1beta1DiskSpec
    Optional. Disk spec for the machine in this node pool.
    Id string
    Immutable. The unique ID in a PersistentResource for referring to this resource pool. You can specify it if necessary; otherwise, it's generated automatically.
    ReplicaCount string
    Optional. The total number of machines to use for this resource pool.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpec
    Immutable. The specification of a single machine.
    autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
    Optional. Spec to configure GKE autoscaling.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpec
    Optional. Disk spec for the machine in this node pool.
    id String
    Immutable. The unique ID in a PersistentResource for referring to this resource pool. You can specify it if necessary; otherwise, it's generated automatically.
    replicaCount String
    Optional. The total number of machines to use for this resource pool.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpec
    Immutable. The specification of a single machine.
    autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
    Optional. Spec to configure GKE autoscaling.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpec
    Optional. Disk spec for the machine in this node pool.
    id string
    Immutable. The unique ID in a PersistentResource for referring to this resource pool. You can specify it if necessary; otherwise, it's generated automatically.
    replicaCount string
    Optional. The total number of machines to use for this resource pool.
    machine_spec GoogleCloudAiplatformV1beta1MachineSpec
    Immutable. The specification of a single machine.
    autoscaling_spec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
    Optional. Spec to configure GKE autoscaling.
    disk_spec GoogleCloudAiplatformV1beta1DiskSpec
    Optional. Disk spec for the machine in this node pool.
    id str
    Immutable. The unique ID in a PersistentResource for referring to this resource pool. You can specify it if necessary; otherwise, it's generated automatically.
    replica_count str
    Optional. The total number of machines to use for this resource pool.
    machineSpec Property Map
    Immutable. The specification of a single machine.
    autoscalingSpec Property Map
    Optional. Spec to configure GKE autoscaling.
    diskSpec Property Map
    Optional. Disk spec for the machine in this node pool.
    id String
    Immutable. The unique ID in a PersistentResource for referring to this resource pool. You can specify it if necessary; otherwise, it's generated automatically.
    replicaCount String
    Optional. The total number of machines to use for this resource pool.
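
    For orientation, a single resource pool input might be written as follows in TypeScript; the ID, machine type, disk settings, and replica count are illustrative values only, and the disk fields are the DiskSpec fields documented earlier on this page. Note that replicaCount is a string-typed field in this API.

    // Hypothetical fixed-size worker pool: four n1-standard-16 VMs, each with a
    // 200 GB pd-ssd boot disk. This object would be passed in the resourcePools
    // array of the PersistentResource.
    const workerPool = {
        id: "ray_worker_node_pool1",
        machineSpec: { machineType: "n1-standard-16" },
        diskSpec: { bootDiskType: "pd-ssd", bootDiskSizeGb: 200 },
        replicaCount: "4", // string-typed int64
    };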

    GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec, GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs

    MaxReplicaCount string
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    MinReplicaCount string
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    MaxReplicaCount string
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    MinReplicaCount string
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    maxReplicaCount String
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    minReplicaCount String
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    maxReplicaCount string
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    minReplicaCount string
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    max_replica_count str
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    min_replica_count str
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    maxReplicaCount String
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    minReplicaCount String
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
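
    To make the replica-count constraints concrete, here is a TypeScript sketch of a pool that starts at two replicas and is allowed to scale between one and five; all values are placeholders.

    // Hypothetical autoscaled pool. The values respect the rules above:
    // minReplicaCount (1) <= replicaCount (2), maxReplicaCount (5) >= replicaCount (2),
    // and minReplicaCount < maxReplicaCount.
    const autoscaledPool = {
        id: "ray_worker_node_pool2",
        machineSpec: { machineType: "n1-highmem-8" },
        replicaCount: "2",
        autoscalingSpec: {
            minReplicaCount: "1",
            maxReplicaCount: "5",
        },
    };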

    GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse, GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponseArgs

    MaxReplicaCount string
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    MinReplicaCount string
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    MaxReplicaCount string
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    MinReplicaCount string
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    maxReplicaCount String
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    minReplicaCount String
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    maxReplicaCount string
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    minReplicaCount string
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    max_replica_count str
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    min_replica_count str
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.
    maxReplicaCount String
    Optional. The maximum number of replicas in the node pool. Must be greater than or equal to replica_count and greater than min_replica_count, or an error is thrown.
    minReplicaCount String
    Optional. The minimum number of replicas in the node pool. Must be less than or equal to replica_count and less than max_replica_count, or an error is thrown.

    GoogleCloudAiplatformV1beta1ResourcePoolResponse, GoogleCloudAiplatformV1beta1ResourcePoolResponseArgs

    AutoscalingSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
    Optional. Spec to configure GKE autoscaling.
    DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecResponse
    Optional. Disk spec for the machine in this node pool.
    MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecResponse
    Immutable. The specification of a single machine.
    ReplicaCount string
    Optional. The total number of machines to use for this resource pool.
    UsedReplicaCount string
    The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.
    AutoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
    Optional. Spec to configure GKE autoscaling.
    DiskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Optional. Disk spec for the machine in this node pool.
    MachineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Immutable. The specification of a single machine.
    ReplicaCount string
    Optional. The total number of machines to use for this resource pool.
    UsedReplicaCount string
    The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.
    autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
    Optional. Spec to configure GKE autoscaling.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Optional. Disk spec for the machine in this node pool.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Immutable. The specification of a single machine.
    replicaCount String
    Optional. The total number of machines to use for this resource pool.
    usedReplicaCount String
    The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.
    autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
    Optional. Spec to configure GKE autoscaling.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Optional. Disk spec for the machine in this node pool.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Immutable. The specification of a single machine.
    replicaCount string
    Optional. The total number of machines to use for this resource pool.
    usedReplicaCount string
    The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.
    autoscaling_spec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
    Optional. Spec to configure GKE autoscaling.
    disk_spec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Optional. Disk spec for the machine in this node pool.
    machine_spec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Immutable. The specification of a single machine.
    replica_count str
    Optional. The total number of machines to use for this resource pool.
    used_replica_count str
    The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.
    autoscalingSpec Property Map
    Optional. Spec to configure GKE autoscaling.
    diskSpec Property Map
    Optional. Disk spec for the machine in this node pool.
    machineSpec Property Map
    Immutable. The specification of a single machine.
    replicaCount String
    Optional. The total number of machines to use for this resource pool.
    usedReplicaCount String
    The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.

    GoogleCloudAiplatformV1beta1ResourceRuntimeResponse, GoogleCloudAiplatformV1beta1ResourceRuntimeResponseArgs

    AccessUris Dictionary<string, string>
    URIs for connecting to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
    NotebookRuntimeTemplate string
    The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
    AccessUris map[string]string
    URIs for connecting to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
    NotebookRuntimeTemplate string
    The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
    accessUris Map<String,String>
    URIs for connecting to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
    notebookRuntimeTemplate String
    The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
    accessUris {[key: string]: string}
    URIs for connecting to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
    notebookRuntimeTemplate string
    The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
    access_uris Mapping[str, str]
    URIs for connecting to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
    notebook_runtime_template str
    The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
    accessUris Map<String>
    URIs for connecting to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
    notebookRuntimeTemplate String
    The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
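
    These are output-only values. As a TypeScript sketch, assuming the provider surfaces the output-only resource_runtime field as resourceRuntime on a created PersistentResource, and assuming cluster holds such a resource (for example, as sketched under GoogleCloudAiplatformV1beta1ResourceRuntimeSpec below), the connection URIs could be read like this; the URI key name follows the example above.

    // Export the Ray dashboard address from the runtime outputs once the cluster exists.
    // `cluster` is assumed to be a PersistentResource instance created elsewhere in the program.
    export const rayDashboardUri = cluster.resourceRuntime.apply(
        runtime => runtime.accessUris["RAY_DASHBOARD_URI"]);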

    GoogleCloudAiplatformV1beta1ResourceRuntimeSpec, GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs

    RaySpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1RaySpec
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    ServiceAccountSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ServiceAccountSpec
    Optional. Configure the use of workload identity on the PersistentResource
    RaySpec GoogleCloudAiplatformV1beta1RaySpec
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    ServiceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpec
    Optional. Configure the use of workload identity on the PersistentResource
    raySpec GoogleCloudAiplatformV1beta1RaySpec
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpec
    Optional. Configure the use of workload identity on the PersistentResource
    raySpec GoogleCloudAiplatformV1beta1RaySpec
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpec
    Optional. Configure the use of workload identity on the PersistentResource
    ray_spec GoogleCloudAiplatformV1beta1RaySpec
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    service_account_spec GoogleCloudAiplatformV1beta1ServiceAccountSpec
    Optional. Configure the use of workload identity on the PersistentResource
    raySpec Property Map
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    serviceAccountSpec Property Map
    Optional. Configure the use of workload identity on the PersistentResource
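
    Putting the pieces together, a minimal TypeScript sketch of a PersistentResource that wires a Ray spec and a service account spec into resourceRuntimeSpec might look like the following; the resource ID, region, pool IDs, image URI, and service account are placeholders, not values from this page.

    import * as googleNative from "@pulumi/google-native";

    // Hypothetical Ray-enabled persistent resource with one head pool and one worker pool.
    const cluster = new googleNative.aiplatform.v1beta1.PersistentResource("rayCluster", {
        persistentResourceId: "my-ray-cluster",   // placeholder ID
        location: "us-central1",                  // placeholder region
        displayName: "example Ray cluster",
        resourcePools: [
            { id: "ray_head_node_pool", machineSpec: { machineType: "n1-standard-8" }, replicaCount: "1" },
            { id: "ray_worker_node_pool1", machineSpec: { machineType: "n1-standard-16" }, replicaCount: "2" },
        ],
        resourceRuntimeSpec: {
            raySpec: {
                headNodeResourcePoolId: "ray_head_node_pool",
                imageUri: "us-docker.pkg.dev/my-project/ray/image:latest", // placeholder prebuilt Ray image
            },
            serviceAccountSpec: {
                enableCustomServiceAccount: true,
                serviceAccount: "ray-runner@my-project.iam.gserviceaccount.com", // placeholder account
            },
        },
    });

    The sketch omits optional arguments such as network, encryptionSpec, and labels, which are passed as top-level arguments in the same way.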

    GoogleCloudAiplatformV1beta1ResourceRuntimeSpecResponse, GoogleCloudAiplatformV1beta1ResourceRuntimeSpecResponseArgs

    RaySpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1RaySpecResponse
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    ServiceAccountSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
    Optional. Configure the use of workload identity on the PersistentResource
    RaySpec GoogleCloudAiplatformV1beta1RaySpecResponse
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    ServiceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
    Optional. Configure the use of workload identity on the PersistentResource
    raySpec GoogleCloudAiplatformV1beta1RaySpecResponse
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
    Optional. Configure the use of workload identity on the PersistentResource
    raySpec GoogleCloudAiplatformV1beta1RaySpecResponse
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
    Optional. Configure the use of workload identity on the PersistentResource
    ray_spec GoogleCloudAiplatformV1beta1RaySpecResponse
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    service_account_spec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
    Optional. Configure the use of workload identity on the PersistentResource
    raySpec Property Map
    Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
    serviceAccountSpec Property Map
    Optional. Configure the use of workload identity on the PersistentResource

    GoogleCloudAiplatformV1beta1ServiceAccountSpec, GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs

    EnableCustomServiceAccount bool
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    ServiceAccount string
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    EnableCustomServiceAccount bool
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    ServiceAccount string
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enableCustomServiceAccount Boolean
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    serviceAccount String
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enableCustomServiceAccount boolean
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    serviceAccount string
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enable_custom_service_account bool
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    service_account str
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enableCustomServiceAccount Boolean
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    serviceAccount String
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
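
    As a small sketch, a service account spec that runs workloads as a user-managed account could be written as follows in TypeScript; the account email is a placeholder, and the deploying principal is assumed to hold iam.serviceAccounts.actAs on it.

    // Hypothetical service-account spec for ResourceRuntimeSpec.serviceAccountSpec.
    // With enableCustomServiceAccount set, workloads on the resource run as this
    // account instead of the Vertex AI Custom Code Service Agent.
    const serviceAccountSpec = {
        enableCustomServiceAccount: true,
        serviceAccount: "vertex-workloads@my-project.iam.gserviceaccount.com", // placeholder account
    };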

    GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse, GoogleCloudAiplatformV1beta1ServiceAccountSpecResponseArgs

    EnableCustomServiceAccount bool
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    ServiceAccount string
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    EnableCustomServiceAccount bool
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    ServiceAccount string
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enableCustomServiceAccount Boolean
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    serviceAccount String
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enableCustomServiceAccount boolean
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    serviceAccount string
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enable_custom_service_account bool
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    service_account str
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
    enableCustomServiceAccount Boolean
    If true, a custom user-managed service account is required to run any workloads (for example, Vertex Jobs) on the resource; otherwise, the Vertex AI Custom Code Service Agent is used.
    serviceAccount String
    Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. This only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.

    GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs

    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details List<ImmutableDictionary<string, string>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details []map[string]string
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Integer
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String,String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code number
    The status code, which should be an enum value of google.rpc.Code.
    details {[key: string]: string}[]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code int
    The status code, which should be an enum value of google.rpc.Code.
    details Sequence[Mapping[str, str]]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message str
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Number
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0