Viewing docs for Databricks v0.4.0 (Older version)
published on Monday, Mar 9, 2026 by Pulumi

    Import

    The instance pool resource can be imported using its ID:

     $ pulumi import databricks:index/instancePool:InstancePool this <instance-pool-id>
    

    Create InstancePool Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new InstancePool(name: string, args: InstancePoolArgs, opts?: CustomResourceOptions);
    @overload
    def InstancePool(resource_name: str,
                     args: InstancePoolArgs,
                     opts: Optional[ResourceOptions] = None)
    
    @overload
    def InstancePool(resource_name: str,
                     opts: Optional[ResourceOptions] = None,
                     idle_instance_autotermination_minutes: Optional[int] = None,
                     node_type_id: Optional[str] = None,
                     instance_pool_name: Optional[str] = None,
                     disk_spec: Optional[InstancePoolDiskSpecArgs] = None,
                     enable_elastic_disk: Optional[bool] = None,
                     gcp_attributes: Optional[InstancePoolGcpAttributesArgs] = None,
                     aws_attributes: Optional[InstancePoolAwsAttributesArgs] = None,
                     instance_pool_id: Optional[str] = None,
                     custom_tags: Optional[Mapping[str, Any]] = None,
                     max_capacity: Optional[int] = None,
                     min_idle_instances: Optional[int] = None,
                     azure_attributes: Optional[InstancePoolAzureAttributesArgs] = None,
                     preloaded_docker_images: Optional[Sequence[InstancePoolPreloadedDockerImageArgs]] = None,
                     preloaded_spark_versions: Optional[Sequence[str]] = None)
    func NewInstancePool(ctx *Context, name string, args InstancePoolArgs, opts ...ResourceOption) (*InstancePool, error)
    public InstancePool(string name, InstancePoolArgs args, CustomResourceOptions? opts = null)
    public InstancePool(String name, InstancePoolArgs args)
    public InstancePool(String name, InstancePoolArgs args, CustomResourceOptions options)
    
    type: databricks:InstancePool
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args InstancePoolArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args InstancePoolArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args InstancePoolArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args InstancePoolArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args InstancePoolArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var instancePoolResource = new Databricks.InstancePool("instancePoolResource", new()
    {
        IdleInstanceAutoterminationMinutes = 0,
        NodeTypeId = "string",
        InstancePoolName = "string",
        DiskSpec = new Databricks.Inputs.InstancePoolDiskSpecArgs
        {
            DiskCount = 0,
            DiskSize = 0,
            DiskType = new Databricks.Inputs.InstancePoolDiskSpecDiskTypeArgs
            {
                AzureDiskVolumeType = "string",
                EbsVolumeType = "string",
            },
        },
        EnableElasticDisk = false,
        GcpAttributes = new Databricks.Inputs.InstancePoolGcpAttributesArgs
        {
            Availability = "string",
        },
        AwsAttributes = new Databricks.Inputs.InstancePoolAwsAttributesArgs
        {
            Availability = "string",
            SpotBidPricePercent = 0,
            ZoneId = "string",
        },
        InstancePoolId = "string",
        CustomTags = 
        {
            { "string", "any" },
        },
        MaxCapacity = 0,
        MinIdleInstances = 0,
        AzureAttributes = new Databricks.Inputs.InstancePoolAzureAttributesArgs
        {
            Availability = "string",
            SpotBidMaxPrice = 0,
        },
        PreloadedDockerImages = new[]
        {
            new Databricks.Inputs.InstancePoolPreloadedDockerImageArgs
            {
                Url = "string",
                BasicAuth = new Databricks.Inputs.InstancePoolPreloadedDockerImageBasicAuthArgs
                {
                    Password = "string",
                    Username = "string",
                },
            },
        },
        PreloadedSparkVersions = new[]
        {
            "string",
        },
    });
    
    example, err := databricks.NewInstancePool(ctx, "instancePoolResource", &databricks.InstancePoolArgs{
    	IdleInstanceAutoterminationMinutes: pulumi.Int(0),
    	NodeTypeId:                         pulumi.String("string"),
    	InstancePoolName:                   pulumi.String("string"),
    	DiskSpec: &databricks.InstancePoolDiskSpecArgs{
    		DiskCount: pulumi.Int(0),
    		DiskSize:  pulumi.Int(0),
    		DiskType: &databricks.InstancePoolDiskSpecDiskTypeArgs{
    			AzureDiskVolumeType: pulumi.String("string"),
    			EbsVolumeType:       pulumi.String("string"),
    		},
    	},
    	EnableElasticDisk: pulumi.Bool(false),
    	GcpAttributes: &databricks.InstancePoolGcpAttributesArgs{
    		Availability: pulumi.String("string"),
    	},
    	AwsAttributes: &databricks.InstancePoolAwsAttributesArgs{
    		Availability:        pulumi.String("string"),
    		SpotBidPricePercent: pulumi.Int(0),
    		ZoneId:              pulumi.String("string"),
    	},
    	InstancePoolId: pulumi.String("string"),
    	CustomTags: pulumi.Map{
    		"string": pulumi.Any("any"),
    	},
    	MaxCapacity:      pulumi.Int(0),
    	MinIdleInstances: pulumi.Int(0),
    	AzureAttributes: &databricks.InstancePoolAzureAttributesArgs{
    		Availability:    pulumi.String("string"),
    		SpotBidMaxPrice: pulumi.Float64(0),
    	},
    	PreloadedDockerImages: databricks.InstancePoolPreloadedDockerImageArray{
    		&databricks.InstancePoolPreloadedDockerImageArgs{
    			Url: pulumi.String("string"),
    			BasicAuth: &databricks.InstancePoolPreloadedDockerImageBasicAuthArgs{
    				Password: pulumi.String("string"),
    				Username: pulumi.String("string"),
    			},
    		},
    	},
    	PreloadedSparkVersions: pulumi.StringArray{
    		pulumi.String("string"),
    	},
    })
    
    var instancePoolResource = new InstancePool("instancePoolResource", InstancePoolArgs.builder()
        .idleInstanceAutoterminationMinutes(0)
        .nodeTypeId("string")
        .instancePoolName("string")
        .diskSpec(InstancePoolDiskSpecArgs.builder()
            .diskCount(0)
            .diskSize(0)
            .diskType(InstancePoolDiskSpecDiskTypeArgs.builder()
                .azureDiskVolumeType("string")
                .ebsVolumeType("string")
                .build())
            .build())
        .enableElasticDisk(false)
        .gcpAttributes(InstancePoolGcpAttributesArgs.builder()
            .availability("string")
            .build())
        .awsAttributes(InstancePoolAwsAttributesArgs.builder()
            .availability("string")
            .spotBidPricePercent(0)
            .zoneId("string")
            .build())
        .instancePoolId("string")
        .customTags(Map.of("string", "any"))
        .maxCapacity(0)
        .minIdleInstances(0)
        .azureAttributes(InstancePoolAzureAttributesArgs.builder()
            .availability("string")
            .spotBidMaxPrice(0.0)
            .build())
        .preloadedDockerImages(InstancePoolPreloadedDockerImageArgs.builder()
            .url("string")
            .basicAuth(InstancePoolPreloadedDockerImageBasicAuthArgs.builder()
                .password("string")
                .username("string")
                .build())
            .build())
        .preloadedSparkVersions("string")
        .build());
    
    instance_pool_resource = databricks.InstancePool("instancePoolResource",
        idle_instance_autotermination_minutes=0,
        node_type_id="string",
        instance_pool_name="string",
        disk_spec={
            "disk_count": 0,
            "disk_size": 0,
            "disk_type": {
                "azure_disk_volume_type": "string",
                "ebs_volume_type": "string",
            },
        },
        enable_elastic_disk=False,
        gcp_attributes={
            "availability": "string",
        },
        aws_attributes={
            "availability": "string",
            "spot_bid_price_percent": 0,
            "zone_id": "string",
        },
        instance_pool_id="string",
        custom_tags={
            "string": "any",
        },
        max_capacity=0,
        min_idle_instances=0,
        azure_attributes={
            "availability": "string",
            "spot_bid_max_price": 0,
        },
        preloaded_docker_images=[{
            "url": "string",
            "basic_auth": {
                "password": "string",
                "username": "string",
            },
        }],
        preloaded_spark_versions=["string"])
    
    const instancePoolResource = new databricks.InstancePool("instancePoolResource", {
        idleInstanceAutoterminationMinutes: 0,
        nodeTypeId: "string",
        instancePoolName: "string",
        diskSpec: {
            diskCount: 0,
            diskSize: 0,
            diskType: {
                azureDiskVolumeType: "string",
                ebsVolumeType: "string",
            },
        },
        enableElasticDisk: false,
        gcpAttributes: {
            availability: "string",
        },
        awsAttributes: {
            availability: "string",
            spotBidPricePercent: 0,
            zoneId: "string",
        },
        instancePoolId: "string",
        customTags: {
            string: "any",
        },
        maxCapacity: 0,
        minIdleInstances: 0,
        azureAttributes: {
            availability: "string",
            spotBidMaxPrice: 0,
        },
        preloadedDockerImages: [{
            url: "string",
            basicAuth: {
                password: "string",
                username: "string",
            },
        }],
        preloadedSparkVersions: ["string"],
    });
    
    type: databricks:InstancePool
    properties:
        awsAttributes:
            availability: string
            spotBidPricePercent: 0
            zoneId: string
        azureAttributes:
            availability: string
            spotBidMaxPrice: 0
        customTags:
            string: any
        diskSpec:
            diskCount: 0
            diskSize: 0
            diskType:
                azureDiskVolumeType: string
                ebsVolumeType: string
        enableElasticDisk: false
        gcpAttributes:
            availability: string
        idleInstanceAutoterminationMinutes: 0
        instancePoolId: string
        instancePoolName: string
        maxCapacity: 0
        minIdleInstances: 0
        nodeTypeId: string
        preloadedDockerImages:
            - basicAuth:
                password: string
                username: string
              url: string
        preloadedSparkVersions:
            - string
    
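    For contrast with the placeholder reference above, a minimal realistic configuration might look like the following sketch. The pool name, node type, and capacity values are illustrative assumptions, not values taken from this page.

    ```yaml
    # Hypothetical example: a small AWS-backed pool.
    # All concrete values (name, node type, capacities) are illustrative.
    resources:
      smallPool:
        type: databricks:InstancePool
        properties:
          instancePoolName: smallest-nodes       # must be unique, non-empty, under 100 characters
          nodeTypeId: i3.xlarge                  # assumed node type; retrieve real ones via the List Node Types API
          minIdleInstances: 0
          maxCapacity: 10
          idleInstanceAutoterminationMinutes: 10 # remove excess idle instances after 10 minutes
    ```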

    InstancePool Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

    The InstancePool resource accepts the following input properties:

    IdleInstanceAutoterminationMinutes int
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    InstancePoolName string
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    NodeTypeId string
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    AwsAttributes InstancePoolAwsAttributes
    AzureAttributes InstancePoolAzureAttributes
    CustomTags Dictionary<string, object>
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    DiskSpec InstancePoolDiskSpec
    EnableElasticDisk bool
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    GcpAttributes InstancePoolGcpAttributes
    InstancePoolId string
    MaxCapacity int
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    MinIdleInstances int
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    PreloadedDockerImages List<InstancePoolPreloadedDockerImage>
    PreloadedSparkVersions List<string>
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    IdleInstanceAutoterminationMinutes int
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    InstancePoolName string
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    NodeTypeId string
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    AwsAttributes InstancePoolAwsAttributesArgs
    AzureAttributes InstancePoolAzureAttributesArgs
    CustomTags map[string]interface{}
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    DiskSpec InstancePoolDiskSpecArgs
    EnableElasticDisk bool
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    GcpAttributes InstancePoolGcpAttributesArgs
    InstancePoolId string
    MaxCapacity int
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    MinIdleInstances int
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    PreloadedDockerImages []InstancePoolPreloadedDockerImageArgs
    PreloadedSparkVersions []string
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    idleInstanceAutoterminationMinutes Integer
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instancePoolName String
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    nodeTypeId String
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    awsAttributes InstancePoolAwsAttributes
    azureAttributes InstancePoolAzureAttributes
    customTags Map<String,Object>
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    diskSpec InstancePoolDiskSpec
    enableElasticDisk Boolean
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcpAttributes InstancePoolGcpAttributes
    instancePoolId String
    maxCapacity Integer
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    minIdleInstances Integer
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    preloadedDockerImages List<InstancePoolPreloadedDockerImage>
    preloadedSparkVersions List<String>
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    idleInstanceAutoterminationMinutes number
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instancePoolName string
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    nodeTypeId string
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    awsAttributes InstancePoolAwsAttributes
    azureAttributes InstancePoolAzureAttributes
    customTags {[key: string]: any}
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    diskSpec InstancePoolDiskSpec
    enableElasticDisk boolean
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcpAttributes InstancePoolGcpAttributes
    instancePoolId string
    maxCapacity number
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    minIdleInstances number
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    preloadedDockerImages InstancePoolPreloadedDockerImage[]
    preloadedSparkVersions string[]
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    idle_instance_autotermination_minutes int
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instance_pool_name str
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    node_type_id str
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    aws_attributes InstancePoolAwsAttributesArgs
    azure_attributes InstancePoolAzureAttributesArgs
    custom_tags Mapping[str, Any]
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    disk_spec InstancePoolDiskSpecArgs
    enable_elastic_disk bool
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcp_attributes InstancePoolGcpAttributesArgs
    instance_pool_id str
    max_capacity int
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    min_idle_instances int
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    preloaded_docker_images Sequence[InstancePoolPreloadedDockerImageArgs]
    preloaded_spark_versions Sequence[str]
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    idleInstanceAutoterminationMinutes Number
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instancePoolName String
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    nodeTypeId String
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    awsAttributes Property Map
    azureAttributes Property Map
    customTags Map<Any>
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    diskSpec Property Map
    enableElasticDisk Boolean
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcpAttributes Property Map
    instancePoolId String
    maxCapacity Number
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    minIdleInstances Number
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    preloadedDockerImages List<Property Map>
    preloadedSparkVersions List<String>
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the InstancePool resource produces the following output properties:

    Id string
    The provider-assigned unique ID for this managed resource.
    Id string
    The provider-assigned unique ID for this managed resource.
    id String
    The provider-assigned unique ID for this managed resource.
    id string
    The provider-assigned unique ID for this managed resource.
    id str
    The provider-assigned unique ID for this managed resource.
    id String
    The provider-assigned unique ID for this managed resource.

    Look up Existing InstancePool Resource

    Get an existing InstancePool resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

    public static get(name: string, id: Input<ID>, state?: InstancePoolState, opts?: CustomResourceOptions): InstancePool
    @staticmethod
    def get(resource_name: str,
            id: str,
            opts: Optional[ResourceOptions] = None,
            aws_attributes: Optional[InstancePoolAwsAttributesArgs] = None,
            azure_attributes: Optional[InstancePoolAzureAttributesArgs] = None,
            custom_tags: Optional[Mapping[str, Any]] = None,
            disk_spec: Optional[InstancePoolDiskSpecArgs] = None,
            enable_elastic_disk: Optional[bool] = None,
            gcp_attributes: Optional[InstancePoolGcpAttributesArgs] = None,
            idle_instance_autotermination_minutes: Optional[int] = None,
            instance_pool_id: Optional[str] = None,
            instance_pool_name: Optional[str] = None,
            max_capacity: Optional[int] = None,
            min_idle_instances: Optional[int] = None,
            node_type_id: Optional[str] = None,
            preloaded_docker_images: Optional[Sequence[InstancePoolPreloadedDockerImageArgs]] = None,
            preloaded_spark_versions: Optional[Sequence[str]] = None) -> InstancePool
    func GetInstancePool(ctx *Context, name string, id IDInput, state *InstancePoolState, opts ...ResourceOption) (*InstancePool, error)
    public static InstancePool Get(string name, Input<string> id, InstancePoolState? state, CustomResourceOptions? opts = null)
    public static InstancePool get(String name, Output<String> id, InstancePoolState state, CustomResourceOptions options)
    resources:
      _:
        type: databricks:InstancePool
        get:
          id: ${id}
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    resource_name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    The following state arguments are supported:
    AwsAttributes InstancePoolAwsAttributes
    AzureAttributes InstancePoolAzureAttributes
    CustomTags Dictionary<string, object>
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    DiskSpec InstancePoolDiskSpec
    EnableElasticDisk bool
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    GcpAttributes InstancePoolGcpAttributes
    IdleInstanceAutoterminationMinutes int
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    InstancePoolId string
    InstancePoolName string
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    MaxCapacity int
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    MinIdleInstances int
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    NodeTypeId string
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    PreloadedDockerImages List<InstancePoolPreloadedDockerImage>
    PreloadedSparkVersions List<string>
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    AwsAttributes InstancePoolAwsAttributesArgs
    AzureAttributes InstancePoolAzureAttributesArgs
    CustomTags map[string]interface{}
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    DiskSpec InstancePoolDiskSpecArgs
    EnableElasticDisk bool
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    GcpAttributes InstancePoolGcpAttributesArgs
    IdleInstanceAutoterminationMinutes int
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    InstancePoolId string
    InstancePoolName string
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    MaxCapacity int
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    MinIdleInstances int
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    NodeTypeId string
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    PreloadedDockerImages []InstancePoolPreloadedDockerImageArgs
    PreloadedSparkVersions []string
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    awsAttributes InstancePoolAwsAttributes
    azureAttributes InstancePoolAzureAttributes
    customTags Map<String,Object>
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    diskSpec InstancePoolDiskSpec
    enableElasticDisk Boolean
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcpAttributes InstancePoolGcpAttributes
    idleInstanceAutoterminationMinutes Integer
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instancePoolId String
    instancePoolName String
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    maxCapacity Integer
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    minIdleInstances Integer
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    nodeTypeId String
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    preloadedDockerImages List<InstancePoolPreloadedDockerImage>
    preloadedSparkVersions List<String>
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    awsAttributes InstancePoolAwsAttributes
    azureAttributes InstancePoolAzureAttributes
    customTags {[key: string]: any}
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    diskSpec InstancePoolDiskSpec
    enableElasticDisk boolean
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcpAttributes InstancePoolGcpAttributes
    idleInstanceAutoterminationMinutes number
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instancePoolId string
    instancePoolName string
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    maxCapacity number
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    minIdleInstances number
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    nodeTypeId string
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    preloadedDockerImages InstancePoolPreloadedDockerImage[]
    preloadedSparkVersions string[]
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    aws_attributes InstancePoolAwsAttributesArgs
    azure_attributes InstancePoolAzureAttributesArgs
    custom_tags Mapping[str, Any]
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    disk_spec InstancePoolDiskSpecArgs
    enable_elastic_disk bool
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcp_attributes InstancePoolGcpAttributesArgs
    idle_instance_autotermination_minutes int
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instance_pool_id str
    instance_pool_name str
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    max_capacity int
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    min_idle_instances int
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    node_type_id str
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    preloaded_docker_images Sequence[InstancePoolPreloadedDockerImageArgs]
    preloaded_spark_versions Sequence[str]
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
    awsAttributes Property Map
    azureAttributes Property Map
    customTags Map<Any>
    (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). Databricks allows at most 43 custom tags.
    diskSpec Property Map
    enableElasticDisk Boolean
    (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.
    gcpAttributes Property Map
    idleInstanceAutoterminationMinutes Number
    (Integer) The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.
    instancePoolId String
    instancePoolName String
    (String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.
    maxCapacity Number
    (Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.
    minIdleInstances Number
    (Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.
    nodeTypeId String
    (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.
    preloadedDockerImages List<Property Map>
    preloadedSparkVersions List<String>
    (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via databricks.getSparkVersion data source or via Runtime Versions API call.
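
    Several of the arguments above carry hard limits: the pool name must be non-empty and under 100 characters, the idle timeout (if set) must be between 0 and 10000 minutes, and at most 43 custom tags are allowed. A minimal client-side check of those documented limits might look like the following; the function is illustrative only and not part of the provider API.

```python
def check_pool_args(instance_pool_name, idle_instance_autotermination_minutes=None,
                    custom_tags=None):
    """Return a list of violations of the documented InstancePool constraints."""
    errors = []
    # instance_pool_name must be unique, non-empty, and less than 100 characters.
    if not instance_pool_name or len(instance_pool_name) >= 100:
        errors.append("instance_pool_name must be non-empty and under 100 characters")
    # If set, the idle timeout must be between 0 and 10000 minutes.
    if idle_instance_autotermination_minutes is not None:
        if not 0 <= idle_instance_autotermination_minutes <= 10000:
            errors.append("idle_instance_autotermination_minutes must be in [0, 10000]")
    # Databricks allows at most 43 custom tags.
    if custom_tags and len(custom_tags) > 43:
        errors.append("at most 43 custom tags are allowed")
    return errors
```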

    Supporting Types

    InstancePoolAwsAttributes, InstancePoolAwsAttributesArgs

    Availability string
    Availability type used for all instances in the pool. Valid values are SPOT and ON_DEMAND.
    SpotBidPricePercent int
    (Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the default value is 100. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. For safety, this field cannot be greater than 10000.
    ZoneId string
    (String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form "us-west-2a". The provided availability zone must be in the same region as the Databricks deployment. For example, "us-west-2a" is not a valid zone ID if the Databricks deployment resides in the "us-east-1" region. This field is optional; if not specified, a default zone is used. You can find the list of available zones as well as the default value by using the List Zones API.
    Availability string
    Availability type used for all instances in the pool. Valid values are SPOT and ON_DEMAND.
    SpotBidPricePercent int
    (Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the default value is 100. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. For safety, this field cannot be greater than 10000.
    ZoneId string
    (String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form "us-west-2a". The provided availability zone must be in the same region as the Databricks deployment. For example, "us-west-2a" is not a valid zone ID if the Databricks deployment resides in the "us-east-1" region. This field is optional; if not specified, a default zone is used. You can find the list of available zones as well as the default value by using the List Zones API.
    availability String
    Availability type used for all instances in the pool. Valid values are SPOT and ON_DEMAND.
    spotBidPricePercent Integer
    (Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the default value is 100. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. For safety, this field cannot be greater than 10000.
    zoneId String
    (String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form "us-west-2a". The provided availability zone must be in the same region as the Databricks deployment. For example, "us-west-2a" is not a valid zone ID if the Databricks deployment resides in the "us-east-1" region. This field is optional; if not specified, a default zone is used. You can find the list of available zones as well as the default value by using the List Zones API.
    availability string
    Availability type used for all instances in the pool. Valid values are SPOT and ON_DEMAND.
    spotBidPricePercent number
    (Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the default value is 100. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. For safety, this field cannot be greater than 10000.
    zoneId string
    (String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form "us-west-2a". The provided availability zone must be in the same region as the Databricks deployment. For example, "us-west-2a" is not a valid zone ID if the Databricks deployment resides in the "us-east-1" region. This field is optional; if not specified, a default zone is used. You can find the list of available zones as well as the default value by using the List Zones API.
    availability str
    Availability type used for all instances in the pool. Valid values are SPOT and ON_DEMAND.
    spot_bid_price_percent int
    (Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the default value is 100. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. For safety, this field cannot be greater than 10000.
    zone_id str
    (String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form "us-west-2a". The provided availability zone must be in the same region as the Databricks deployment. For example, "us-west-2a" is not a valid zone ID if the Databricks deployment resides in the "us-east-1" region. This field is optional; if not specified, a default zone is used. You can find the list of available zones as well as the default value by using the List Zones API.
    availability String
    Availability type used for all instances in the pool. Valid values are SPOT and ON_DEMAND.
    spotBidPricePercent Number
    (Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the default value is 100. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. For safety, this field cannot be greater than 10000.
    zoneId String
    (String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form "us-west-2a". The provided availability zone must be in the same region as the Databricks deployment. For example, "us-west-2a" is not a valid zone ID if the Databricks deployment resides in the "us-east-1" region. This field is optional; if not specified, a default zone is used. You can find the list of available zones as well as the default value by using the List Zones API.
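
    The spot_bid_price_percent arithmetic above is easy to get backwards, so here is a minimal sketch of the documented semantics: the bid is a percentage of the instance type's on-demand price, defaulting to 100 and capped at 10000. The function name is invented for illustration and is not part of the provider API.

```python
def max_spot_bid(on_demand_price: float, spot_bid_price_percent: int = 100) -> float:
    """AWS spot bid for the pool: spot_bid_price_percent percent of the
    on-demand price (50 -> half, 200 -> double, default 100 -> equal)."""
    if spot_bid_price_percent > 10000:
        # Per the docs, for safety this field cannot be greater than 10000.
        raise ValueError("spot_bid_price_percent cannot exceed 10000")
    return on_demand_price * spot_bid_price_percent / 100
```

    For an i3.xlarge priced at $0.312/hour on demand, a setting of 50 bids roughly $0.156/hour.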

    InstancePoolAzureAttributes, InstancePoolAzureAttributesArgs

    Availability string
    Availability type used for all nodes. Valid values are SPOT_AZURE and ON_DEMAND_AZURE.
    SpotBidMaxPrice double
    The max price for Azure spot instances. Use -1 to specify the lowest price.
    Availability string
    Availability type used for all nodes. Valid values are SPOT_AZURE and ON_DEMAND_AZURE.
    SpotBidMaxPrice float64
    The max price for Azure spot instances. Use -1 to specify the lowest price.
    availability String
    Availability type used for all nodes. Valid values are SPOT_AZURE and ON_DEMAND_AZURE.
    spotBidMaxPrice Double
    The max price for Azure spot instances. Use -1 to specify the lowest price.
    availability string
    Availability type used for all nodes. Valid values are SPOT_AZURE and ON_DEMAND_AZURE.
    spotBidMaxPrice number
    The max price for Azure spot instances. Use -1 to specify the lowest price.
    availability str
    Availability type used for all nodes. Valid values are SPOT_AZURE and ON_DEMAND_AZURE.
    spot_bid_max_price float
    The max price for Azure spot instances. Use -1 to specify the lowest price.
    availability String
    Availability type used for all nodes. Valid values are SPOT_AZURE and ON_DEMAND_AZURE.
    spotBidMaxPrice Number
    The max price for Azure spot instances. Use -1 to specify the lowest price.
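
    As a sketch of the -1 sentinel above (the helper name is invented for illustration, not a provider function):

```python
def describe_azure_bid(spot_bid_max_price: float) -> str:
    """Interpret spot_bid_max_price per the docs: -1 asks for the lowest
    available spot price instead of a fixed hourly cap."""
    if spot_bid_max_price == -1:
        return "bid at the lowest available spot price"
    return f"bid capped at {spot_bid_max_price} per hour"
```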

    InstancePoolDiskSpec, InstancePoolDiskSpecArgs

    DiskCount int
    (Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. For node types with no local disk, at least one disk needs to be specified.
    DiskSize int
    (Integer) The size of each disk (in GiB) to attach.
    DiskType InstancePoolDiskSpecDiskType
    DiskCount int
    (Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. For node types with no local disk, at least one disk needs to be specified.
    DiskSize int
    (Integer) The size of each disk (in GiB) to attach.
    DiskType InstancePoolDiskSpecDiskType
    diskCount Integer
    (Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. For node types with no local disk, at least one disk needs to be specified.
    diskSize Integer
    (Integer) The size of each disk (in GiB) to attach.
    diskType InstancePoolDiskSpecDiskType
    diskCount number
    (Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. For node types with no local disk, at least one disk needs to be specified.
    diskSize number
    (Integer) The size of each disk (in GiB) to attach.
    diskType InstancePoolDiskSpecDiskType
    disk_count int
    (Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. For node types with no local disk, at least one disk needs to be specified.
    disk_size int
    (Integer) The size of each disk (in GiB) to attach.
    disk_type InstancePoolDiskSpecDiskType
    diskCount Number
    (Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. For node types with no local disk, at least one disk needs to be specified.
    diskSize Number
    (Integer) The size of each disk (in GiB) to attach.
    diskType Property Map
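
    To make the disk_count/disk_size interaction concrete: every instance in the pool gets disk_count disks of disk_size GiB each, so the attached storage the pool can provision at max_capacity is just the product below (illustrative arithmetic, not part of the provider).

```python
def pool_attached_storage_gib(disk_count: int, disk_size: int, max_capacity: int) -> int:
    """Attached storage (GiB) if all max_capacity instances are provisioned,
    each with disk_count disks of disk_size GiB."""
    return disk_count * disk_size * max_capacity
```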

    InstancePoolDiskSpecDiskType, InstancePoolDiskSpecDiskTypeArgs

    InstancePoolGcpAttributes, InstancePoolGcpAttributesArgs

    Availability string
    Availability type used for all nodes. Valid values are PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP and ON_DEMAND_GCP, default: ON_DEMAND_GCP.
    Availability string
    Availability type used for all nodes. Valid values are PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP and ON_DEMAND_GCP, default: ON_DEMAND_GCP.
    availability String
    Availability type used for all nodes. Valid values are PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP and ON_DEMAND_GCP, default: ON_DEMAND_GCP.
    availability string
    Availability type used for all nodes. Valid values are PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP and ON_DEMAND_GCP, default: ON_DEMAND_GCP.
    availability str
    Availability type used for all nodes. Valid values are PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP and ON_DEMAND_GCP, default: ON_DEMAND_GCP.
    availability String
    Availability type used for all nodes. Valid values are PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP and ON_DEMAND_GCP, default: ON_DEMAND_GCP.

    InstancePoolPreloadedDockerImage, InstancePoolPreloadedDockerImageArgs

    InstancePoolPreloadedDockerImageBasicAuth, InstancePoolPreloadedDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String

    Package Details

    Repository
    databricks pulumi/pulumi-databricks
    License
    Apache-2.0
    Notes
    This Pulumi package is based on the databricks Terraform Provider.