Viewing docs for Databricks v0.4.0 (Older version)
published on Monday, Mar 9, 2026 by Pulumi
Related Resources
The following resources are used in the same context:
- End-to-end workspace management guide
- databricks.Cluster to create Databricks Clusters.
- databricks.ClusterPolicy to create a cluster policy, which limits the ability to create clusters based on a set of rules.
- databricks.InstancePool to manage instance pools, which reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
- databricks.Job to manage Databricks Jobs that run non-interactive code in a databricks.Cluster.
Example Usage
using Pulumi;
using Databricks = Pulumi.Databricks;

class MyStack : Stack
{
    public MyStack()
    {
        var withGpu = Output.Create(Databricks.GetNodeType.InvokeAsync(new Databricks.GetNodeTypeArgs
        {
            LocalDisk = true,
            MinCores = 16,
            GbPerCore = 1,
            MinGpus = 1,
        }));
        var gpuMl = Output.Create(Databricks.GetSparkVersion.InvokeAsync(new Databricks.GetSparkVersionArgs
        {
            Gpu = true,
            Ml = true,
        }));
        var research = new Databricks.Cluster("research", new Databricks.ClusterArgs
        {
            ClusterName = "Research Cluster",
            SparkVersion = gpuMl.Apply(gpuMl => gpuMl.Id),
            NodeTypeId = withGpu.Apply(withGpu => withGpu.Id),
            AutoterminationMinutes = 20,
            Autoscale = new Databricks.Inputs.ClusterAutoscaleArgs
            {
                MinWorkers = 1,
                MaxWorkers = 50,
            },
        });
    }
}
package main

import (
    "github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        withGpu, err := databricks.GetNodeType(ctx, &databricks.GetNodeTypeArgs{
            LocalDisk: pulumi.BoolRef(true),
            MinCores:  pulumi.IntRef(16),
            GbPerCore: pulumi.IntRef(1),
            MinGpus:   pulumi.IntRef(1),
        }, nil)
        if err != nil {
            return err
        }
        gpuMl, err := databricks.GetSparkVersion(ctx, &databricks.GetSparkVersionArgs{
            Gpu: pulumi.BoolRef(true),
            Ml:  pulumi.BoolRef(true),
        }, nil)
        if err != nil {
            return err
        }
        _, err = databricks.NewCluster(ctx, "research", &databricks.ClusterArgs{
            ClusterName:            pulumi.String("Research Cluster"),
            SparkVersion:           pulumi.String(gpuMl.Id),
            NodeTypeId:             pulumi.String(withGpu.Id),
            AutoterminationMinutes: pulumi.Int(20),
            Autoscale: &databricks.ClusterAutoscaleArgs{
                MinWorkers: pulumi.Int(1),
                MaxWorkers: pulumi.Int(50),
            },
        })
        if err != nil {
            return err
        }
        return nil
    })
}
import * as pulumi from "@pulumi/pulumi";
import * as databricks from "@pulumi/databricks";

const withGpu = databricks.getNodeType({
    localDisk: true,
    minCores: 16,
    gbPerCore: 1,
    minGpus: 1,
});
const gpuMl = databricks.getSparkVersion({
    gpu: true,
    ml: true,
});
const research = new databricks.Cluster("research", {
    clusterName: "Research Cluster",
    sparkVersion: gpuMl.then(gpuMl => gpuMl.id),
    nodeTypeId: withGpu.then(withGpu => withGpu.id),
    autoterminationMinutes: 20,
    autoscale: {
        minWorkers: 1,
        maxWorkers: 50,
    },
});
import pulumi
import pulumi_databricks as databricks

with_gpu = databricks.get_node_type(local_disk=True,
    min_cores=16,
    gb_per_core=1,
    min_gpus=1)
gpu_ml = databricks.get_spark_version(gpu=True,
    ml=True)
research = databricks.Cluster("research",
    cluster_name="Research Cluster",
    spark_version=gpu_ml.id,
    node_type_id=with_gpu.id,
    autotermination_minutes=20,
    autoscale=databricks.ClusterAutoscaleArgs(
        min_workers=1,
        max_workers=50,
    ))
Using getSparkVersion
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
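The difference between the two forms can be sketched with a toy stand-in (plain Python with hypothetical names and data; the real Pulumi SDK's Output type is considerably richer):

```python
# Toy illustration of the direct vs. output invocation forms.
# All names and version strings here are hypothetical stand-ins,
# not the real Pulumi or Databricks SDK.

class Output:
    """Minimal stand-in for a Pulumi Output: a value plus apply()."""
    def __init__(self, value):
        self.value = value

    def apply(self, fn):
        # Transform the wrapped value, staying inside the Output wrapper.
        return Output(fn(self.value))

def get_spark_version(gpu=False, ml=False):
    # Direct form: blocks and returns a plain result immediately.
    return {"id": "10.4.x-gpu-ml-scala2.12" if gpu and ml else "10.4.x-scala2.12"}

def get_spark_version_output(gpu=False, ml=False):
    # Output form: the same result wrapped in an Output, so it can be
    # composed with values that are not yet known at program start.
    return Output(get_spark_version(gpu=gpu, ml=ml))

plain = get_spark_version(gpu=True, ml=True)           # dict, usable at once
wrapped = get_spark_version_output(gpu=True, ml=True)  # Output, use .apply()
spark_id = wrapped.apply(lambda r: r["id"])
```

In a real program the output form is what lets the result feed directly into resource arguments that accept Inputs.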
function getSparkVersion(args: GetSparkVersionArgs, opts?: InvokeOptions): Promise<GetSparkVersionResult>
function getSparkVersionOutput(args: GetSparkVersionOutputArgs, opts?: InvokeOptions): Output<GetSparkVersionResult>

def get_spark_version(beta: Optional[bool] = None,
genomics: Optional[bool] = None,
gpu: Optional[bool] = None,
graviton: Optional[bool] = None,
latest: Optional[bool] = None,
long_term_support: Optional[bool] = None,
ml: Optional[bool] = None,
photon: Optional[bool] = None,
scala: Optional[str] = None,
spark_version: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetSparkVersionResult
def get_spark_version_output(beta: Optional[pulumi.Input[bool]] = None,
genomics: Optional[pulumi.Input[bool]] = None,
gpu: Optional[pulumi.Input[bool]] = None,
graviton: Optional[pulumi.Input[bool]] = None,
latest: Optional[pulumi.Input[bool]] = None,
long_term_support: Optional[pulumi.Input[bool]] = None,
ml: Optional[pulumi.Input[bool]] = None,
photon: Optional[pulumi.Input[bool]] = None,
scala: Optional[pulumi.Input[str]] = None,
spark_version: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetSparkVersionResult]

func GetSparkVersion(ctx *Context, args *GetSparkVersionArgs, opts ...InvokeOption) (*GetSparkVersionResult, error)
func GetSparkVersionOutput(ctx *Context, args *GetSparkVersionOutputArgs, opts ...InvokeOption) GetSparkVersionResultOutput

> Note: This function is named GetSparkVersion in the Go SDK.
public static class GetSparkVersion
{
public static Task<GetSparkVersionResult> InvokeAsync(GetSparkVersionArgs args, InvokeOptions? opts = null)
public static Output<GetSparkVersionResult> Invoke(GetSparkVersionInvokeArgs args, InvokeOptions? opts = null)
}

public static CompletableFuture<GetSparkVersionResult> getSparkVersion(GetSparkVersionArgs args, InvokeOptions options)
public static Output<GetSparkVersionResult> getSparkVersion(GetSparkVersionArgs args, InvokeOptions options)
fn::invoke:
function: databricks:index/getSparkVersion:getSparkVersion
arguments:
# arguments dictionary

The following arguments are supported:
- Beta bool - Limit the search to runtimes in the Beta stage. Defaults to false.
- Genomics bool - Limit the search to Genomics (HLS) runtimes. Defaults to false.
- Gpu bool - Limit the search to runtimes that support GPUs. Defaults to false.
- Graviton bool - Limit the search to runtimes that support AWS Graviton CPUs. Defaults to false.
- Latest bool - Return only the latest version if there is more than one result. Defaults to true. If set to false and multiple versions match, an error is thrown.
- LongTermSupport bool - Limit the search to LTS (long term support) and ESR (extended support) versions. Defaults to false.
- Ml bool - Limit the search to ML runtimes. Defaults to false.
- Photon bool - Limit the search to Photon runtimes. Defaults to false.
- Scala string - Limit the search to runtimes based on a specific Scala version. Defaults to 2.12.
- SparkVersion string - Limit the search to runtimes based on a specific Spark version. Defaults to an empty string. It can be specified as 3, 3.0, or a full version like 3.0.1.
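To make the matching and latest-selection semantics above concrete, here is a small self-contained Python sketch of the selection logic. It is an illustration only, using hypothetical runtime-key strings and a simplified prefix match on the runtime key, not the provider's actual implementation:

```python
# Illustrative sketch of getSparkVersion-style selection semantics.
# Hypothetical data and helper names; not the provider's real code.

def select_spark_version(versions, spark_version="", latest=True):
    """Filter runtime keys by a version prefix, then either return the
    newest match (latest=True) or require exactly one match."""
    def version_key(v):
        # Keys look like "9.1.x-scala2.12"; the leading numbers are the
        # runtime version used for ordering here ("x" parts are skipped).
        return tuple(int(p) for p in v.split("-")[0].split(".") if p.isdigit())

    matches = [v for v in versions if v.split("-")[0].startswith(spark_version)]
    if not matches:
        raise ValueError("no matching runtime versions")
    if latest:
        return max(matches, key=version_key)
    if len(matches) > 1:
        raise ValueError("multiple runtime versions match; narrow the search")
    return matches[0]

runtimes = ["9.1.x-scala2.12", "10.4.x-scala2.12", "7.3.x-scala2.12"]
print(select_spark_version(runtimes))         # newest: 10.4.x-scala2.12
print(select_spark_version(runtimes, "9.1"))  # prefix match: 9.1.x-scala2.12
```

Note how, with latest left at its default of true, an ambiguous search still succeeds by picking the newest match, while latest=False turns the same ambiguity into an error.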
getSparkVersion Result
The following output properties are available:
Package Details
- Repository
- databricks pulumi/pulumi-databricks
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the databricks Terraform Provider.
