databricks.Job

    Import

    The resource job can be imported using the ID of the job:

     $ pulumi import databricks:index/job:Job this <job-id>
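
    The ID is the numeric job ID shown in the Databricks Jobs UI. As an alternative to the CLI, an existing job can also be adopted from code with the import resource option. The sketch below is illustrative only: the job ID, name, cluster ID, and notebook path are hypothetical, and the declared inputs must match the live job's configuration for the import to succeed.

    import * as databricks from "@pulumi/databricks";

    // Adopt an existing job (hypothetical ID "123") instead of creating a new one.
    // All property values below are illustrative; they must match the job's
    // current settings in the Databricks workspace.
    const thisJob = new databricks.Job("this", {
        name: "nightly-report",                      // hypothetical job name
        existingClusterId: "0123-456789-abcdefgh",   // hypothetical cluster ID
        notebookTask: {
            notebookPath: "/Shared/nightly-report",  // hypothetical notebook path
        },
    }, { import: "123" });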
    

    Create Job Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new Job(name: string, args?: JobArgs, opts?: CustomResourceOptions);
    @overload
    def Job(resource_name: str,
            args: Optional[JobArgs] = None,
            opts: Optional[ResourceOptions] = None)
    
    @overload
    def Job(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            always_running: Optional[bool] = None,
            email_notifications: Optional[JobEmailNotificationsArgs] = None,
            existing_cluster_id: Optional[str] = None,
            format: Optional[str] = None,
            git_source: Optional[JobGitSourceArgs] = None,
            job_clusters: Optional[Sequence[JobJobClusterArgs]] = None,
            libraries: Optional[Sequence[JobLibraryArgs]] = None,
            max_concurrent_runs: Optional[int] = None,
            max_retries: Optional[int] = None,
            min_retry_interval_millis: Optional[int] = None,
            name: Optional[str] = None,
            new_cluster: Optional[JobNewClusterArgs] = None,
            notebook_task: Optional[JobNotebookTaskArgs] = None,
            pipeline_task: Optional[JobPipelineTaskArgs] = None,
            python_wheel_task: Optional[JobPythonWheelTaskArgs] = None,
            retry_on_timeout: Optional[bool] = None,
            schedule: Optional[JobScheduleArgs] = None,
            spark_jar_task: Optional[JobSparkJarTaskArgs] = None,
            spark_python_task: Optional[JobSparkPythonTaskArgs] = None,
            spark_submit_task: Optional[JobSparkSubmitTaskArgs] = None,
            tasks: Optional[Sequence[JobTaskArgs]] = None,
            timeout_seconds: Optional[int] = None)
    func NewJob(ctx *Context, name string, args *JobArgs, opts ...ResourceOption) (*Job, error)
    public Job(string name, JobArgs? args = null, CustomResourceOptions? opts = null)
    public Job(String name, JobArgs args)
    public Job(String name, JobArgs args, CustomResourceOptions options)
    
    type: databricks:Job
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
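
    As a concrete illustration of the constructor, a minimal TypeScript program that declares a scheduled notebook job on a small new cluster might look like the following sketch. The Spark version, node type, notebook path, and cron expression are illustrative values, not provider defaults.

    import * as databricks from "@pulumi/databricks";

    // A minimal sketch: run a notebook every day at 06:00 UTC on a one-worker cluster.
    const notebookJob = new databricks.Job("notebookJob", {
        newCluster: {
            numWorkers: 1,
            sparkVersion: "10.4.x-scala2.12",  // illustrative runtime version
            nodeTypeId: "i3.xlarge",           // illustrative node type
        },
        notebookTask: {
            notebookPath: "/Shared/example-notebook",  // illustrative notebook path
        },
        schedule: {
            quartzCronExpression: "0 0 6 * * ?",
            timezoneId: "UTC",
        },
        maxConcurrentRuns: 1,
    });

    // Export the job ID so it can be looked up in the Databricks UI or reused elsewhere.
    export const jobId = notebookJob.id;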
    
    

    Parameters

    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var jobResource = new Databricks.Job("jobResource", new()
    {
        AlwaysRunning = false,
        EmailNotifications = new Databricks.Inputs.JobEmailNotificationsArgs
        {
            NoAlertForSkippedRuns = false,
            OnFailures = new[]
            {
                "string",
            },
            OnStarts = new[]
            {
                "string",
            },
            OnSuccesses = new[]
            {
                "string",
            },
        },
        ExistingClusterId = "string",
        Format = "string",
        GitSource = new Databricks.Inputs.JobGitSourceArgs
        {
            Url = "string",
            Branch = "string",
            Commit = "string",
            Provider = "string",
            Tag = "string",
        },
        JobClusters = new[]
        {
            new Databricks.Inputs.JobJobClusterArgs
            {
                JobClusterKey = "string",
                NewCluster = new Databricks.Inputs.JobJobClusterNewClusterArgs
                {
                    NumWorkers = 0,
                    SparkVersion = "string",
                    EnableElasticDisk = false,
                    DataSecurityMode = "string",
                    ClusterId = "string",
                    ClusterLogConf = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfArgs
                    {
                        Dbfs = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfDbfsArgs
                        {
                            Destination = "string",
                        },
                        S3 = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfS3Args
                        {
                            Destination = "string",
                            CannedAcl = "string",
                            EnableEncryption = false,
                            EncryptionType = "string",
                            Endpoint = "string",
                            KmsKey = "string",
                            Region = "string",
                        },
                    },
                    ClusterName = "string",
                    CustomTags = 
                    {
                        { "string", "any" },
                    },
                    EnableLocalDiskEncryption = false,
                    DockerImage = new Databricks.Inputs.JobJobClusterNewClusterDockerImageArgs
                    {
                        Url = "string",
                        BasicAuth = new Databricks.Inputs.JobJobClusterNewClusterDockerImageBasicAuthArgs
                        {
                            Password = "string",
                            Username = "string",
                        },
                    },
                    DriverInstancePoolId = "string",
                    DriverNodeTypeId = "string",
                    AzureAttributes = new Databricks.Inputs.JobJobClusterNewClusterAzureAttributesArgs
                    {
                        Availability = "string",
                        FirstOnDemand = 0,
                        SpotBidMaxPrice = 0,
                    },
                    Autoscale = new Databricks.Inputs.JobJobClusterNewClusterAutoscaleArgs
                    {
                        MaxWorkers = 0,
                        MinWorkers = 0,
                    },
                    NodeTypeId = "string",
                    IdempotencyToken = "string",
                    InitScripts = new[]
                    {
                        new Databricks.Inputs.JobJobClusterNewClusterInitScriptArgs
                        {
                            Dbfs = new Databricks.Inputs.JobJobClusterNewClusterInitScriptDbfsArgs
                            {
                                Destination = "string",
                            },
                            File = new Databricks.Inputs.JobJobClusterNewClusterInitScriptFileArgs
                            {
                                Destination = "string",
                            },
                            S3 = new Databricks.Inputs.JobJobClusterNewClusterInitScriptS3Args
                            {
                                Destination = "string",
                                CannedAcl = "string",
                                EnableEncryption = false,
                                EncryptionType = "string",
                                Endpoint = "string",
                                KmsKey = "string",
                                Region = "string",
                            },
                        },
                    },
                    InstancePoolId = "string",
                    GcpAttributes = new Databricks.Inputs.JobJobClusterNewClusterGcpAttributesArgs
                    {
                        Availability = "string",
                        BootDiskSize = 0,
                        GoogleServiceAccount = "string",
                        UsePreemptibleExecutors = false,
                        ZoneId = "string",
                    },
                    AwsAttributes = new Databricks.Inputs.JobJobClusterNewClusterAwsAttributesArgs
                    {
                        Availability = "string",
                        EbsVolumeCount = 0,
                        EbsVolumeSize = 0,
                        EbsVolumeType = "string",
                        FirstOnDemand = 0,
                        InstanceProfileArn = "string",
                        SpotBidPricePercent = 0,
                        ZoneId = "string",
                    },
                    PolicyId = "string",
                    SingleUserName = "string",
                    SparkConf = 
                    {
                        { "string", "any" },
                    },
                    SparkEnvVars = 
                    {
                        { "string", "any" },
                    },
                    AutoterminationMinutes = 0,
                    SshPublicKeys = new[]
                    {
                        "string",
                    },
                },
            },
        },
        Libraries = new[]
        {
            new Databricks.Inputs.JobLibraryArgs
            {
                Cran = new Databricks.Inputs.JobLibraryCranArgs
                {
                    Package = "string",
                    Repo = "string",
                },
                Egg = "string",
                Jar = "string",
                Maven = new Databricks.Inputs.JobLibraryMavenArgs
                {
                    Coordinates = "string",
                    Exclusions = new[]
                    {
                        "string",
                    },
                    Repo = "string",
                },
                Pypi = new Databricks.Inputs.JobLibraryPypiArgs
                {
                    Package = "string",
                    Repo = "string",
                },
                Whl = "string",
            },
        },
        MaxConcurrentRuns = 0,
        MaxRetries = 0,
        MinRetryIntervalMillis = 0,
        Name = "string",
        NewCluster = new Databricks.Inputs.JobNewClusterArgs
        {
            SparkVersion = "string",
            EnableElasticDisk = false,
            SparkConf = 
            {
                { "string", "any" },
            },
            AzureAttributes = new Databricks.Inputs.JobNewClusterAzureAttributesArgs
            {
                Availability = "string",
                FirstOnDemand = 0,
                SpotBidMaxPrice = 0,
            },
            ClusterId = "string",
            ClusterLogConf = new Databricks.Inputs.JobNewClusterClusterLogConfArgs
            {
                Dbfs = new Databricks.Inputs.JobNewClusterClusterLogConfDbfsArgs
                {
                    Destination = "string",
                },
                S3 = new Databricks.Inputs.JobNewClusterClusterLogConfS3Args
                {
                    Destination = "string",
                    CannedAcl = "string",
                    EnableEncryption = false,
                    EncryptionType = "string",
                    Endpoint = "string",
                    KmsKey = "string",
                    Region = "string",
                },
            },
            ClusterName = "string",
            CustomTags = 
            {
                { "string", "any" },
            },
            DataSecurityMode = "string",
            DockerImage = new Databricks.Inputs.JobNewClusterDockerImageArgs
            {
                Url = "string",
                BasicAuth = new Databricks.Inputs.JobNewClusterDockerImageBasicAuthArgs
                {
                    Password = "string",
                    Username = "string",
                },
            },
            DriverInstancePoolId = "string",
            DriverNodeTypeId = "string",
            Autoscale = new Databricks.Inputs.JobNewClusterAutoscaleArgs
            {
                MaxWorkers = 0,
                MinWorkers = 0,
            },
            AwsAttributes = new Databricks.Inputs.JobNewClusterAwsAttributesArgs
            {
                Availability = "string",
                EbsVolumeCount = 0,
                EbsVolumeSize = 0,
                EbsVolumeType = "string",
                FirstOnDemand = 0,
                InstanceProfileArn = "string",
                SpotBidPricePercent = 0,
                ZoneId = "string",
            },
            IdempotencyToken = "string",
            EnableLocalDiskEncryption = false,
            InitScripts = new[]
            {
                new Databricks.Inputs.JobNewClusterInitScriptArgs
                {
                    Dbfs = new Databricks.Inputs.JobNewClusterInitScriptDbfsArgs
                    {
                        Destination = "string",
                    },
                    File = new Databricks.Inputs.JobNewClusterInitScriptFileArgs
                    {
                        Destination = "string",
                    },
                    S3 = new Databricks.Inputs.JobNewClusterInitScriptS3Args
                    {
                        Destination = "string",
                        CannedAcl = "string",
                        EnableEncryption = false,
                        EncryptionType = "string",
                        Endpoint = "string",
                        KmsKey = "string",
                        Region = "string",
                    },
                },
            },
            InstancePoolId = "string",
            NodeTypeId = "string",
            NumWorkers = 0,
            PolicyId = "string",
            SingleUserName = "string",
            GcpAttributes = new Databricks.Inputs.JobNewClusterGcpAttributesArgs
            {
                Availability = "string",
                BootDiskSize = 0,
                GoogleServiceAccount = "string",
                UsePreemptibleExecutors = false,
                ZoneId = "string",
            },
            SparkEnvVars = 
            {
                { "string", "any" },
            },
            AutoterminationMinutes = 0,
            SshPublicKeys = new[]
            {
                "string",
            },
        },
        NotebookTask = new Databricks.Inputs.JobNotebookTaskArgs
        {
            NotebookPath = "string",
            BaseParameters = 
            {
                { "string", "any" },
            },
        },
        PipelineTask = new Databricks.Inputs.JobPipelineTaskArgs
        {
            PipelineId = "string",
        },
        PythonWheelTask = new Databricks.Inputs.JobPythonWheelTaskArgs
        {
            EntryPoint = "string",
            NamedParameters = 
            {
                { "string", "any" },
            },
            PackageName = "string",
            Parameters = new[]
            {
                "string",
            },
        },
        RetryOnTimeout = false,
        Schedule = new Databricks.Inputs.JobScheduleArgs
        {
            QuartzCronExpression = "string",
            TimezoneId = "string",
            PauseStatus = "string",
        },
        SparkJarTask = new Databricks.Inputs.JobSparkJarTaskArgs
        {
            JarUri = "string",
            MainClassName = "string",
            Parameters = new[]
            {
                "string",
            },
        },
        SparkPythonTask = new Databricks.Inputs.JobSparkPythonTaskArgs
        {
            PythonFile = "string",
            Parameters = new[]
            {
                "string",
            },
        },
        SparkSubmitTask = new Databricks.Inputs.JobSparkSubmitTaskArgs
        {
            Parameters = new[]
            {
                "string",
            },
        },
        Tasks = new[]
        {
            new Databricks.Inputs.JobTaskArgs
            {
                DependsOns = new[]
                {
                    new Databricks.Inputs.JobTaskDependsOnArgs
                    {
                        TaskKey = "string",
                    },
                },
                Description = "string",
                EmailNotifications = new Databricks.Inputs.JobTaskEmailNotificationsArgs
                {
                    NoAlertForSkippedRuns = false,
                    OnFailures = new[]
                    {
                        "string",
                    },
                    OnStarts = new[]
                    {
                        "string",
                    },
                    OnSuccesses = new[]
                    {
                        "string",
                    },
                },
                ExistingClusterId = "string",
                JobClusterKey = "string",
                Libraries = new[]
                {
                    new Databricks.Inputs.JobTaskLibraryArgs
                    {
                        Cran = new Databricks.Inputs.JobTaskLibraryCranArgs
                        {
                            Package = "string",
                            Repo = "string",
                        },
                        Egg = "string",
                        Jar = "string",
                        Maven = new Databricks.Inputs.JobTaskLibraryMavenArgs
                        {
                            Coordinates = "string",
                            Exclusions = new[]
                            {
                                "string",
                            },
                            Repo = "string",
                        },
                        Pypi = new Databricks.Inputs.JobTaskLibraryPypiArgs
                        {
                            Package = "string",
                            Repo = "string",
                        },
                        Whl = "string",
                    },
                },
                MaxRetries = 0,
                MinRetryIntervalMillis = 0,
                NewCluster = new Databricks.Inputs.JobTaskNewClusterArgs
                {
                    SparkVersion = "string",
                    EnableElasticDisk = false,
                    SparkConf = 
                    {
                        { "string", "any" },
                    },
                    AzureAttributes = new Databricks.Inputs.JobTaskNewClusterAzureAttributesArgs
                    {
                        Availability = "string",
                        FirstOnDemand = 0,
                        SpotBidMaxPrice = 0,
                    },
                    ClusterId = "string",
                    ClusterLogConf = new Databricks.Inputs.JobTaskNewClusterClusterLogConfArgs
                    {
                        Dbfs = new Databricks.Inputs.JobTaskNewClusterClusterLogConfDbfsArgs
                        {
                            Destination = "string",
                        },
                        S3 = new Databricks.Inputs.JobTaskNewClusterClusterLogConfS3Args
                        {
                            Destination = "string",
                            CannedAcl = "string",
                            EnableEncryption = false,
                            EncryptionType = "string",
                            Endpoint = "string",
                            KmsKey = "string",
                            Region = "string",
                        },
                    },
                    ClusterName = "string",
                    CustomTags = 
                    {
                        { "string", "any" },
                    },
                    DataSecurityMode = "string",
                    DockerImage = new Databricks.Inputs.JobTaskNewClusterDockerImageArgs
                    {
                        Url = "string",
                        BasicAuth = new Databricks.Inputs.JobTaskNewClusterDockerImageBasicAuthArgs
                        {
                            Password = "string",
                            Username = "string",
                        },
                    },
                    DriverInstancePoolId = "string",
                    DriverNodeTypeId = "string",
                    Autoscale = new Databricks.Inputs.JobTaskNewClusterAutoscaleArgs
                    {
                        MaxWorkers = 0,
                        MinWorkers = 0,
                    },
                    AwsAttributes = new Databricks.Inputs.JobTaskNewClusterAwsAttributesArgs
                    {
                        Availability = "string",
                        EbsVolumeCount = 0,
                        EbsVolumeSize = 0,
                        EbsVolumeType = "string",
                        FirstOnDemand = 0,
                        InstanceProfileArn = "string",
                        SpotBidPricePercent = 0,
                        ZoneId = "string",
                    },
                    IdempotencyToken = "string",
                    EnableLocalDiskEncryption = false,
                    InitScripts = new[]
                    {
                        new Databricks.Inputs.JobTaskNewClusterInitScriptArgs
                        {
                            Dbfs = new Databricks.Inputs.JobTaskNewClusterInitScriptDbfsArgs
                            {
                                Destination = "string",
                            },
                            File = new Databricks.Inputs.JobTaskNewClusterInitScriptFileArgs
                            {
                                Destination = "string",
                            },
                            S3 = new Databricks.Inputs.JobTaskNewClusterInitScriptS3Args
                            {
                                Destination = "string",
                                CannedAcl = "string",
                                EnableEncryption = false,
                                EncryptionType = "string",
                                Endpoint = "string",
                                KmsKey = "string",
                                Region = "string",
                            },
                        },
                    },
                    InstancePoolId = "string",
                    NodeTypeId = "string",
                    NumWorkers = 0,
                    PolicyId = "string",
                    SingleUserName = "string",
                    GcpAttributes = new Databricks.Inputs.JobTaskNewClusterGcpAttributesArgs
                    {
                        Availability = "string",
                        BootDiskSize = 0,
                        GoogleServiceAccount = "string",
                        UsePreemptibleExecutors = false,
                        ZoneId = "string",
                    },
                    SparkEnvVars = 
                    {
                        { "string", "any" },
                    },
                    AutoterminationMinutes = 0,
                    SshPublicKeys = new[]
                    {
                        "string",
                    },
                },
                NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs
                {
                    NotebookPath = "string",
                    BaseParameters = 
                    {
                        { "string", "any" },
                    },
                },
                PipelineTask = new Databricks.Inputs.JobTaskPipelineTaskArgs
                {
                    PipelineId = "string",
                },
                PythonWheelTask = new Databricks.Inputs.JobTaskPythonWheelTaskArgs
                {
                    EntryPoint = "string",
                    NamedParameters = 
                    {
                        { "string", "any" },
                    },
                    PackageName = "string",
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                RetryOnTimeout = false,
                SparkJarTask = new Databricks.Inputs.JobTaskSparkJarTaskArgs
                {
                    JarUri = "string",
                    MainClassName = "string",
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                SparkPythonTask = new Databricks.Inputs.JobTaskSparkPythonTaskArgs
                {
                    PythonFile = "string",
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                SparkSubmitTask = new Databricks.Inputs.JobTaskSparkSubmitTaskArgs
                {
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                TaskKey = "string",
                TimeoutSeconds = 0,
            },
        },
        TimeoutSeconds = 0,
    });
    
    example, err := databricks.NewJob(ctx, "jobResource", &databricks.JobArgs{
    	AlwaysRunning: pulumi.Bool(false),
    	EmailNotifications: &databricks.JobEmailNotificationsArgs{
    		NoAlertForSkippedRuns: pulumi.Bool(false),
    		OnFailures: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		OnStarts: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		OnSuccesses: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	ExistingClusterId: pulumi.String("string"),
    	Format:            pulumi.String("string"),
    	GitSource: &databricks.JobGitSourceArgs{
    		Url:      pulumi.String("string"),
    		Branch:   pulumi.String("string"),
    		Commit:   pulumi.String("string"),
    		Provider: pulumi.String("string"),
    		Tag:      pulumi.String("string"),
    	},
    	JobClusters: databricks.JobJobClusterArray{
    		&databricks.JobJobClusterArgs{
    			JobClusterKey: pulumi.String("string"),
    			NewCluster: &databricks.JobJobClusterNewClusterArgs{
    				NumWorkers:        pulumi.Int(0),
    				SparkVersion:      pulumi.String("string"),
    				EnableElasticDisk: pulumi.Bool(false),
    				DataSecurityMode:  pulumi.String("string"),
    				ClusterId:         pulumi.String("string"),
    				ClusterLogConf: &databricks.JobJobClusterNewClusterClusterLogConfArgs{
    					Dbfs: &databricks.JobJobClusterNewClusterClusterLogConfDbfsArgs{
    						Destination: pulumi.String("string"),
    					},
    					S3: &databricks.JobJobClusterNewClusterClusterLogConfS3Args{
    						Destination:      pulumi.String("string"),
    						CannedAcl:        pulumi.String("string"),
    						EnableEncryption: pulumi.Bool(false),
    						EncryptionType:   pulumi.String("string"),
    						Endpoint:         pulumi.String("string"),
    						KmsKey:           pulumi.String("string"),
    						Region:           pulumi.String("string"),
    					},
    				},
    				ClusterName: pulumi.String("string"),
    				CustomTags: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				EnableLocalDiskEncryption: pulumi.Bool(false),
    				DockerImage: &databricks.JobJobClusterNewClusterDockerImageArgs{
    					Url: pulumi.String("string"),
    					BasicAuth: &databricks.JobJobClusterNewClusterDockerImageBasicAuthArgs{
    						Password: pulumi.String("string"),
    						Username: pulumi.String("string"),
    					},
    				},
    				DriverInstancePoolId: pulumi.String("string"),
    				DriverNodeTypeId:     pulumi.String("string"),
    				AzureAttributes: &databricks.JobJobClusterNewClusterAzureAttributesArgs{
    					Availability:    pulumi.String("string"),
    					FirstOnDemand:   pulumi.Int(0),
    					SpotBidMaxPrice: pulumi.Float64(0),
    				},
    				Autoscale: &databricks.JobJobClusterNewClusterAutoscaleArgs{
    					MaxWorkers: pulumi.Int(0),
    					MinWorkers: pulumi.Int(0),
    				},
    				NodeTypeId:       pulumi.String("string"),
    				IdempotencyToken: pulumi.String("string"),
    				InitScripts: databricks.JobJobClusterNewClusterInitScriptArray{
    					&databricks.JobJobClusterNewClusterInitScriptArgs{
    						Dbfs: &databricks.JobJobClusterNewClusterInitScriptDbfsArgs{
    							Destination: pulumi.String("string"),
    						},
    						File: &databricks.JobJobClusterNewClusterInitScriptFileArgs{
    							Destination: pulumi.String("string"),
    						},
    						S3: &databricks.JobJobClusterNewClusterInitScriptS3Args{
    							Destination:      pulumi.String("string"),
    							CannedAcl:        pulumi.String("string"),
    							EnableEncryption: pulumi.Bool(false),
    							EncryptionType:   pulumi.String("string"),
    							Endpoint:         pulumi.String("string"),
    							KmsKey:           pulumi.String("string"),
    							Region:           pulumi.String("string"),
    						},
    					},
    				},
    				InstancePoolId: pulumi.String("string"),
    				GcpAttributes: &databricks.JobJobClusterNewClusterGcpAttributesArgs{
    					Availability:            pulumi.String("string"),
    					BootDiskSize:            pulumi.Int(0),
    					GoogleServiceAccount:    pulumi.String("string"),
    					UsePreemptibleExecutors: pulumi.Bool(false),
    					ZoneId:                  pulumi.String("string"),
    				},
    				AwsAttributes: &databricks.JobJobClusterNewClusterAwsAttributesArgs{
    					Availability:        pulumi.String("string"),
    					EbsVolumeCount:      pulumi.Int(0),
    					EbsVolumeSize:       pulumi.Int(0),
    					EbsVolumeType:       pulumi.String("string"),
    					FirstOnDemand:       pulumi.Int(0),
    					InstanceProfileArn:  pulumi.String("string"),
    					SpotBidPricePercent: pulumi.Int(0),
    					ZoneId:              pulumi.String("string"),
    				},
    				PolicyId:       pulumi.String("string"),
    				SingleUserName: pulumi.String("string"),
    				SparkConf: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				SparkEnvVars: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				AutoterminationMinutes: pulumi.Int(0),
    				SshPublicKeys: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    		},
    	},
    	Libraries: databricks.JobLibraryArray{
    		&databricks.JobLibraryArgs{
    			Cran: &databricks.JobLibraryCranArgs{
    				Package: pulumi.String("string"),
    				Repo:    pulumi.String("string"),
    			},
    			Egg: pulumi.String("string"),
    			Jar: pulumi.String("string"),
    			Maven: &databricks.JobLibraryMavenArgs{
    				Coordinates: pulumi.String("string"),
    				Exclusions: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Repo: pulumi.String("string"),
    			},
    			Pypi: &databricks.JobLibraryPypiArgs{
    				Package: pulumi.String("string"),
    				Repo:    pulumi.String("string"),
    			},
    			Whl: pulumi.String("string"),
    		},
    	},
    	MaxConcurrentRuns:      pulumi.Int(0),
    	MaxRetries:             pulumi.Int(0),
    	MinRetryIntervalMillis: pulumi.Int(0),
    	Name:                   pulumi.String("string"),
    	NewCluster: &databricks.JobNewClusterArgs{
    		SparkVersion:      pulumi.String("string"),
    		EnableElasticDisk: pulumi.Bool(false),
    		SparkConf: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		AzureAttributes: &databricks.JobNewClusterAzureAttributesArgs{
    			Availability:    pulumi.String("string"),
    			FirstOnDemand:   pulumi.Int(0),
    			SpotBidMaxPrice: pulumi.Float64(0),
    		},
    		ClusterId: pulumi.String("string"),
    		ClusterLogConf: &databricks.JobNewClusterClusterLogConfArgs{
    			Dbfs: &databricks.JobNewClusterClusterLogConfDbfsArgs{
    				Destination: pulumi.String("string"),
    			},
    			S3: &databricks.JobNewClusterClusterLogConfS3Args{
    				Destination:      pulumi.String("string"),
    				CannedAcl:        pulumi.String("string"),
    				EnableEncryption: pulumi.Bool(false),
    				EncryptionType:   pulumi.String("string"),
    				Endpoint:         pulumi.String("string"),
    				KmsKey:           pulumi.String("string"),
    				Region:           pulumi.String("string"),
    			},
    		},
    		ClusterName: pulumi.String("string"),
    		CustomTags: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		DataSecurityMode: pulumi.String("string"),
    		DockerImage: &databricks.JobNewClusterDockerImageArgs{
    			Url: pulumi.String("string"),
    			BasicAuth: &databricks.JobNewClusterDockerImageBasicAuthArgs{
    				Password: pulumi.String("string"),
    				Username: pulumi.String("string"),
    			},
    		},
    		DriverInstancePoolId: pulumi.String("string"),
    		DriverNodeTypeId:     pulumi.String("string"),
    		Autoscale: &databricks.JobNewClusterAutoscaleArgs{
    			MaxWorkers: pulumi.Int(0),
    			MinWorkers: pulumi.Int(0),
    		},
    		AwsAttributes: &databricks.JobNewClusterAwsAttributesArgs{
    			Availability:        pulumi.String("string"),
    			EbsVolumeCount:      pulumi.Int(0),
    			EbsVolumeSize:       pulumi.Int(0),
    			EbsVolumeType:       pulumi.String("string"),
    			FirstOnDemand:       pulumi.Int(0),
    			InstanceProfileArn:  pulumi.String("string"),
    			SpotBidPricePercent: pulumi.Int(0),
    			ZoneId:              pulumi.String("string"),
    		},
    		IdempotencyToken:          pulumi.String("string"),
    		EnableLocalDiskEncryption: pulumi.Bool(false),
    		InitScripts: databricks.JobNewClusterInitScriptArray{
    			&databricks.JobNewClusterInitScriptArgs{
    				Dbfs: &databricks.JobNewClusterInitScriptDbfsArgs{
    					Destination: pulumi.String("string"),
    				},
    				File: &databricks.JobNewClusterInitScriptFileArgs{
    					Destination: pulumi.String("string"),
    				},
    				S3: &databricks.JobNewClusterInitScriptS3Args{
    					Destination:      pulumi.String("string"),
    					CannedAcl:        pulumi.String("string"),
    					EnableEncryption: pulumi.Bool(false),
    					EncryptionType:   pulumi.String("string"),
    					Endpoint:         pulumi.String("string"),
    					KmsKey:           pulumi.String("string"),
    					Region:           pulumi.String("string"),
    				},
    			},
    		},
    		InstancePoolId: pulumi.String("string"),
    		NodeTypeId:     pulumi.String("string"),
    		NumWorkers:     pulumi.Int(0),
    		PolicyId:       pulumi.String("string"),
    		SingleUserName: pulumi.String("string"),
    		GcpAttributes: &databricks.JobNewClusterGcpAttributesArgs{
    			Availability:            pulumi.String("string"),
    			BootDiskSize:            pulumi.Int(0),
    			GoogleServiceAccount:    pulumi.String("string"),
    			UsePreemptibleExecutors: pulumi.Bool(false),
    			ZoneId:                  pulumi.String("string"),
    		},
    		SparkEnvVars: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		AutoterminationMinutes: pulumi.Int(0),
    		SshPublicKeys: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	NotebookTask: &databricks.JobNotebookTaskArgs{
    		NotebookPath: pulumi.String("string"),
    		BaseParameters: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    	},
    	PipelineTask: &databricks.JobPipelineTaskArgs{
    		PipelineId: pulumi.String("string"),
    	},
    	PythonWheelTask: &databricks.JobPythonWheelTaskArgs{
    		EntryPoint: pulumi.String("string"),
    		NamedParameters: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		PackageName: pulumi.String("string"),
    		Parameters: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	RetryOnTimeout: pulumi.Bool(false),
    	Schedule: &databricks.JobScheduleArgs{
    		QuartzCronExpression: pulumi.String("string"),
    		TimezoneId:           pulumi.String("string"),
    		PauseStatus:          pulumi.String("string"),
    	},
    	SparkJarTask: &databricks.JobSparkJarTaskArgs{
    		JarUri:        pulumi.String("string"),
    		MainClassName: pulumi.String("string"),
    		Parameters: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	SparkPythonTask: &databricks.JobSparkPythonTaskArgs{
    		PythonFile: pulumi.String("string"),
    		Parameters: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	SparkSubmitTask: &databricks.JobSparkSubmitTaskArgs{
    		Parameters: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	Tasks: databricks.JobTaskArray{
    		&databricks.JobTaskArgs{
    			DependsOns: databricks.JobTaskDependsOnArray{
    				&databricks.JobTaskDependsOnArgs{
    					TaskKey: pulumi.String("string"),
    				},
    			},
    			Description: pulumi.String("string"),
    			EmailNotifications: &databricks.JobTaskEmailNotificationsArgs{
    				NoAlertForSkippedRuns: pulumi.Bool(false),
    				OnFailures: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				OnStarts: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				OnSuccesses: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			ExistingClusterId: pulumi.String("string"),
    			JobClusterKey:     pulumi.String("string"),
    			Libraries: databricks.JobTaskLibraryArray{
    				&databricks.JobTaskLibraryArgs{
    					Cran: &databricks.JobTaskLibraryCranArgs{
    						Package: pulumi.String("string"),
    						Repo:    pulumi.String("string"),
    					},
    					Egg: pulumi.String("string"),
    					Jar: pulumi.String("string"),
    					Maven: &databricks.JobTaskLibraryMavenArgs{
    						Coordinates: pulumi.String("string"),
    						Exclusions: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						Repo: pulumi.String("string"),
    					},
    					Pypi: &databricks.JobTaskLibraryPypiArgs{
    						Package: pulumi.String("string"),
    						Repo:    pulumi.String("string"),
    					},
    					Whl: pulumi.String("string"),
    				},
    			},
    			MaxRetries:             pulumi.Int(0),
    			MinRetryIntervalMillis: pulumi.Int(0),
    			NewCluster: &databricks.JobTaskNewClusterArgs{
    				SparkVersion:      pulumi.String("string"),
    				EnableElasticDisk: pulumi.Bool(false),
    				SparkConf: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				AzureAttributes: &databricks.JobTaskNewClusterAzureAttributesArgs{
    					Availability:    pulumi.String("string"),
    					FirstOnDemand:   pulumi.Int(0),
    					SpotBidMaxPrice: pulumi.Float64(0),
    				},
    				ClusterId: pulumi.String("string"),
    				ClusterLogConf: &databricks.JobTaskNewClusterClusterLogConfArgs{
    					Dbfs: &databricks.JobTaskNewClusterClusterLogConfDbfsArgs{
    						Destination: pulumi.String("string"),
    					},
    					S3: &databricks.JobTaskNewClusterClusterLogConfS3Args{
    						Destination:      pulumi.String("string"),
    						CannedAcl:        pulumi.String("string"),
    						EnableEncryption: pulumi.Bool(false),
    						EncryptionType:   pulumi.String("string"),
    						Endpoint:         pulumi.String("string"),
    						KmsKey:           pulumi.String("string"),
    						Region:           pulumi.String("string"),
    					},
    				},
    				ClusterName: pulumi.String("string"),
    				CustomTags: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				DataSecurityMode: pulumi.String("string"),
    				DockerImage: &databricks.JobTaskNewClusterDockerImageArgs{
    					Url: pulumi.String("string"),
    					BasicAuth: &databricks.JobTaskNewClusterDockerImageBasicAuthArgs{
    						Password: pulumi.String("string"),
    						Username: pulumi.String("string"),
    					},
    				},
    				DriverInstancePoolId: pulumi.String("string"),
    				DriverNodeTypeId:     pulumi.String("string"),
    				Autoscale: &databricks.JobTaskNewClusterAutoscaleArgs{
    					MaxWorkers: pulumi.Int(0),
    					MinWorkers: pulumi.Int(0),
    				},
    				AwsAttributes: &databricks.JobTaskNewClusterAwsAttributesArgs{
    					Availability:        pulumi.String("string"),
    					EbsVolumeCount:      pulumi.Int(0),
    					EbsVolumeSize:       pulumi.Int(0),
    					EbsVolumeType:       pulumi.String("string"),
    					FirstOnDemand:       pulumi.Int(0),
    					InstanceProfileArn:  pulumi.String("string"),
    					SpotBidPricePercent: pulumi.Int(0),
    					ZoneId:              pulumi.String("string"),
    				},
    				IdempotencyToken:          pulumi.String("string"),
    				EnableLocalDiskEncryption: pulumi.Bool(false),
    				InitScripts: databricks.JobTaskNewClusterInitScriptArray{
    					&databricks.JobTaskNewClusterInitScriptArgs{
    						Dbfs: &databricks.JobTaskNewClusterInitScriptDbfsArgs{
    							Destination: pulumi.String("string"),
    						},
    						File: &databricks.JobTaskNewClusterInitScriptFileArgs{
    							Destination: pulumi.String("string"),
    						},
    						S3: &databricks.JobTaskNewClusterInitScriptS3Args{
    							Destination:      pulumi.String("string"),
    							CannedAcl:        pulumi.String("string"),
    							EnableEncryption: pulumi.Bool(false),
    							EncryptionType:   pulumi.String("string"),
    							Endpoint:         pulumi.String("string"),
    							KmsKey:           pulumi.String("string"),
    							Region:           pulumi.String("string"),
    						},
    					},
    				},
    				InstancePoolId: pulumi.String("string"),
    				NodeTypeId:     pulumi.String("string"),
    				NumWorkers:     pulumi.Int(0),
    				PolicyId:       pulumi.String("string"),
    				SingleUserName: pulumi.String("string"),
    				GcpAttributes: &databricks.JobTaskNewClusterGcpAttributesArgs{
    					Availability:            pulumi.String("string"),
    					BootDiskSize:            pulumi.Int(0),
    					GoogleServiceAccount:    pulumi.String("string"),
    					UsePreemptibleExecutors: pulumi.Bool(false),
    					ZoneId:                  pulumi.String("string"),
    				},
    				SparkEnvVars: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				AutoterminationMinutes: pulumi.Int(0),
    				SshPublicKeys: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			NotebookTask: &databricks.JobTaskNotebookTaskArgs{
    				NotebookPath: pulumi.String("string"),
    				BaseParameters: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    			},
    			PipelineTask: &databricks.JobTaskPipelineTaskArgs{
    				PipelineId: pulumi.String("string"),
    			},
    			PythonWheelTask: &databricks.JobTaskPythonWheelTaskArgs{
    				EntryPoint: pulumi.String("string"),
    				NamedParameters: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				PackageName: pulumi.String("string"),
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			RetryOnTimeout: pulumi.Bool(false),
    			SparkJarTask: &databricks.JobTaskSparkJarTaskArgs{
    				JarUri:        pulumi.String("string"),
    				MainClassName: pulumi.String("string"),
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			SparkPythonTask: &databricks.JobTaskSparkPythonTaskArgs{
    				PythonFile: pulumi.String("string"),
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			SparkSubmitTask: &databricks.JobTaskSparkSubmitTaskArgs{
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			TaskKey:        pulumi.String("string"),
    			TimeoutSeconds: pulumi.Int(0),
    		},
    	},
    	TimeoutSeconds: pulumi.Int(0),
    })
    
    var jobResource = new Job("jobResource", JobArgs.builder()
        .alwaysRunning(false)
        .emailNotifications(JobEmailNotificationsArgs.builder()
            .noAlertForSkippedRuns(false)
            .onFailures("string")
            .onStarts("string")
            .onSuccesses("string")
            .build())
        .existingClusterId("string")
        .format("string")
        .gitSource(JobGitSourceArgs.builder()
            .url("string")
            .branch("string")
            .commit("string")
            .provider("string")
            .tag("string")
            .build())
        .jobClusters(JobJobClusterArgs.builder()
            .jobClusterKey("string")
            .newCluster(JobJobClusterNewClusterArgs.builder()
                .numWorkers(0)
                .sparkVersion("string")
                .enableElasticDisk(false)
                .dataSecurityMode("string")
                .clusterId("string")
                .clusterLogConf(JobJobClusterNewClusterClusterLogConfArgs.builder()
                    .dbfs(JobJobClusterNewClusterClusterLogConfDbfsArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobJobClusterNewClusterClusterLogConfS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .build())
                .clusterName("string")
                .customTags(Map.of("string", "any"))
                .enableLocalDiskEncryption(false)
                .dockerImage(JobJobClusterNewClusterDockerImageArgs.builder()
                    .url("string")
                    .basicAuth(JobJobClusterNewClusterDockerImageBasicAuthArgs.builder()
                        .password("string")
                        .username("string")
                        .build())
                    .build())
                .driverInstancePoolId("string")
                .driverNodeTypeId("string")
                .azureAttributes(JobJobClusterNewClusterAzureAttributesArgs.builder()
                    .availability("string")
                    .firstOnDemand(0)
                    .spotBidMaxPrice(0.0)
                    .build())
                .autoscale(JobJobClusterNewClusterAutoscaleArgs.builder()
                    .maxWorkers(0)
                    .minWorkers(0)
                    .build())
                .nodeTypeId("string")
                .idempotencyToken("string")
                .initScripts(JobJobClusterNewClusterInitScriptArgs.builder()
                    .dbfs(JobJobClusterNewClusterInitScriptDbfsArgs.builder()
                        .destination("string")
                        .build())
                    .file(JobJobClusterNewClusterInitScriptFileArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobJobClusterNewClusterInitScriptS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .build())
                .instancePoolId("string")
                .gcpAttributes(JobJobClusterNewClusterGcpAttributesArgs.builder()
                    .availability("string")
                    .bootDiskSize(0)
                    .googleServiceAccount("string")
                    .usePreemptibleExecutors(false)
                    .zoneId("string")
                    .build())
                .awsAttributes(JobJobClusterNewClusterAwsAttributesArgs.builder()
                    .availability("string")
                    .ebsVolumeCount(0)
                    .ebsVolumeSize(0)
                    .ebsVolumeType("string")
                    .firstOnDemand(0)
                    .instanceProfileArn("string")
                    .spotBidPricePercent(0)
                    .zoneId("string")
                    .build())
                .policyId("string")
                .singleUserName("string")
                .sparkConf(Map.of("string", "any"))
                .sparkEnvVars(Map.of("string", "any"))
                .autoterminationMinutes(0)
                .sshPublicKeys("string")
                .build())
            .build())
        .libraries(JobLibraryArgs.builder()
            .cran(JobLibraryCranArgs.builder()
                .package_("string")
                .repo("string")
                .build())
            .egg("string")
            .jar("string")
            .maven(JobLibraryMavenArgs.builder()
                .coordinates("string")
                .exclusions("string")
                .repo("string")
                .build())
            .pypi(JobLibraryPypiArgs.builder()
                .package_("string")
                .repo("string")
                .build())
            .whl("string")
            .build())
        .maxConcurrentRuns(0)
        .maxRetries(0)
        .minRetryIntervalMillis(0)
        .name("string")
        .newCluster(JobNewClusterArgs.builder()
            .sparkVersion("string")
            .enableElasticDisk(false)
            .sparkConf(Map.of("string", "any"))
            .azureAttributes(JobNewClusterAzureAttributesArgs.builder()
                .availability("string")
                .firstOnDemand(0)
                .spotBidMaxPrice(0.0)
                .build())
            .clusterId("string")
            .clusterLogConf(JobNewClusterClusterLogConfArgs.builder()
                .dbfs(JobNewClusterClusterLogConfDbfsArgs.builder()
                    .destination("string")
                    .build())
                .s3(JobNewClusterClusterLogConfS3Args.builder()
                    .destination("string")
                    .cannedAcl("string")
                    .enableEncryption(false)
                    .encryptionType("string")
                    .endpoint("string")
                    .kmsKey("string")
                    .region("string")
                    .build())
                .build())
            .clusterName("string")
            .customTags(Map.of("string", "any"))
            .dataSecurityMode("string")
            .dockerImage(JobNewClusterDockerImageArgs.builder()
                .url("string")
                .basicAuth(JobNewClusterDockerImageBasicAuthArgs.builder()
                    .password("string")
                    .username("string")
                    .build())
                .build())
            .driverInstancePoolId("string")
            .driverNodeTypeId("string")
            .autoscale(JobNewClusterAutoscaleArgs.builder()
                .maxWorkers(0)
                .minWorkers(0)
                .build())
            .awsAttributes(JobNewClusterAwsAttributesArgs.builder()
                .availability("string")
                .ebsVolumeCount(0)
                .ebsVolumeSize(0)
                .ebsVolumeType("string")
                .firstOnDemand(0)
                .instanceProfileArn("string")
                .spotBidPricePercent(0)
                .zoneId("string")
                .build())
            .idempotencyToken("string")
            .enableLocalDiskEncryption(false)
            .initScripts(JobNewClusterInitScriptArgs.builder()
                .dbfs(JobNewClusterInitScriptDbfsArgs.builder()
                    .destination("string")
                    .build())
                .file(JobNewClusterInitScriptFileArgs.builder()
                    .destination("string")
                    .build())
                .s3(JobNewClusterInitScriptS3Args.builder()
                    .destination("string")
                    .cannedAcl("string")
                    .enableEncryption(false)
                    .encryptionType("string")
                    .endpoint("string")
                    .kmsKey("string")
                    .region("string")
                    .build())
                .build())
            .instancePoolId("string")
            .nodeTypeId("string")
            .numWorkers(0)
            .policyId("string")
            .singleUserName("string")
            .gcpAttributes(JobNewClusterGcpAttributesArgs.builder()
                .availability("string")
                .bootDiskSize(0)
                .googleServiceAccount("string")
                .usePreemptibleExecutors(false)
                .zoneId("string")
                .build())
            .sparkEnvVars(Map.of("string", "any"))
            .autoterminationMinutes(0)
            .sshPublicKeys("string")
            .build())
        .notebookTask(JobNotebookTaskArgs.builder()
            .notebookPath("string")
            .baseParameters(Map.of("string", "any"))
            .build())
        .pipelineTask(JobPipelineTaskArgs.builder()
            .pipelineId("string")
            .build())
        .pythonWheelTask(JobPythonWheelTaskArgs.builder()
            .entryPoint("string")
            .namedParameters(Map.of("string", "any"))
            .packageName("string")
            .parameters("string")
            .build())
        .retryOnTimeout(false)
        .schedule(JobScheduleArgs.builder()
            .quartzCronExpression("string")
            .timezoneId("string")
            .pauseStatus("string")
            .build())
        .sparkJarTask(JobSparkJarTaskArgs.builder()
            .jarUri("string")
            .mainClassName("string")
            .parameters("string")
            .build())
        .sparkPythonTask(JobSparkPythonTaskArgs.builder()
            .pythonFile("string")
            .parameters("string")
            .build())
        .sparkSubmitTask(JobSparkSubmitTaskArgs.builder()
            .parameters("string")
            .build())
        .tasks(JobTaskArgs.builder()
            .dependsOns(JobTaskDependsOnArgs.builder()
                .taskKey("string")
                .build())
            .description("string")
            .emailNotifications(JobTaskEmailNotificationsArgs.builder()
                .noAlertForSkippedRuns(false)
                .onFailures("string")
                .onStarts("string")
                .onSuccesses("string")
                .build())
            .existingClusterId("string")
            .jobClusterKey("string")
            .libraries(JobTaskLibraryArgs.builder()
                .cran(JobTaskLibraryCranArgs.builder()
                    .package_("string")
                    .repo("string")
                    .build())
                .egg("string")
                .jar("string")
                .maven(JobTaskLibraryMavenArgs.builder()
                    .coordinates("string")
                    .exclusions("string")
                    .repo("string")
                    .build())
                .pypi(JobTaskLibraryPypiArgs.builder()
                    .package_("string")
                    .repo("string")
                    .build())
                .whl("string")
                .build())
            .maxRetries(0)
            .minRetryIntervalMillis(0)
            .newCluster(JobTaskNewClusterArgs.builder()
                .sparkVersion("string")
                .enableElasticDisk(false)
                .sparkConf(Map.of("string", "any"))
                .azureAttributes(JobTaskNewClusterAzureAttributesArgs.builder()
                    .availability("string")
                    .firstOnDemand(0)
                    .spotBidMaxPrice(0.0)
                    .build())
                .clusterId("string")
                .clusterLogConf(JobTaskNewClusterClusterLogConfArgs.builder()
                    .dbfs(JobTaskNewClusterClusterLogConfDbfsArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobTaskNewClusterClusterLogConfS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .build())
                .clusterName("string")
                .customTags(Map.of("string", "any"))
                .dataSecurityMode("string")
                .dockerImage(JobTaskNewClusterDockerImageArgs.builder()
                    .url("string")
                    .basicAuth(JobTaskNewClusterDockerImageBasicAuthArgs.builder()
                        .password("string")
                        .username("string")
                        .build())
                    .build())
                .driverInstancePoolId("string")
                .driverNodeTypeId("string")
                .autoscale(JobTaskNewClusterAutoscaleArgs.builder()
                    .maxWorkers(0)
                    .minWorkers(0)
                    .build())
                .awsAttributes(JobTaskNewClusterAwsAttributesArgs.builder()
                    .availability("string")
                    .ebsVolumeCount(0)
                    .ebsVolumeSize(0)
                    .ebsVolumeType("string")
                    .firstOnDemand(0)
                    .instanceProfileArn("string")
                    .spotBidPricePercent(0)
                    .zoneId("string")
                    .build())
                .idempotencyToken("string")
                .enableLocalDiskEncryption(false)
                .initScripts(JobTaskNewClusterInitScriptArgs.builder()
                    .dbfs(JobTaskNewClusterInitScriptDbfsArgs.builder()
                        .destination("string")
                        .build())
                    .file(JobTaskNewClusterInitScriptFileArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobTaskNewClusterInitScriptS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .build())
                .instancePoolId("string")
                .nodeTypeId("string")
                .numWorkers(0)
                .policyId("string")
                .singleUserName("string")
                .gcpAttributes(JobTaskNewClusterGcpAttributesArgs.builder()
                    .availability("string")
                    .bootDiskSize(0)
                    .googleServiceAccount("string")
                    .usePreemptibleExecutors(false)
                    .zoneId("string")
                    .build())
                .sparkEnvVars(Map.of("string", "any"))
                .autoterminationMinutes(0)
                .sshPublicKeys("string")
                .build())
            .notebookTask(JobTaskNotebookTaskArgs.builder()
                .notebookPath("string")
                .baseParameters(Map.of("string", "any"))
                .build())
            .pipelineTask(JobTaskPipelineTaskArgs.builder()
                .pipelineId("string")
                .build())
            .pythonWheelTask(JobTaskPythonWheelTaskArgs.builder()
                .entryPoint("string")
                .namedParameters(Map.of("string", "any"))
                .packageName("string")
                .parameters("string")
                .build())
            .retryOnTimeout(false)
            .sparkJarTask(JobTaskSparkJarTaskArgs.builder()
                .jarUri("string")
                .mainClassName("string")
                .parameters("string")
                .build())
            .sparkPythonTask(JobTaskSparkPythonTaskArgs.builder()
                .pythonFile("string")
                .parameters("string")
                .build())
            .sparkSubmitTask(JobTaskSparkSubmitTaskArgs.builder()
                .parameters("string")
                .build())
            .taskKey("string")
            .timeoutSeconds(0)
            .build())
        .timeoutSeconds(0)
        .build());
    
    job_resource = databricks.Job("jobResource",
        always_running=False,
        email_notifications={
            "no_alert_for_skipped_runs": False,
            "on_failures": ["string"],
            "on_starts": ["string"],
            "on_successes": ["string"],
        },
        existing_cluster_id="string",
        format="string",
        git_source={
            "url": "string",
            "branch": "string",
            "commit": "string",
            "provider": "string",
            "tag": "string",
        },
        job_clusters=[{
            "job_cluster_key": "string",
            "new_cluster": {
                "num_workers": 0,
                "spark_version": "string",
                "enable_elastic_disk": False,
                "data_security_mode": "string",
                "cluster_id": "string",
                "cluster_log_conf": {
                    "dbfs": {
                        "destination": "string",
                    },
                    "s3": {
                        "destination": "string",
                        "canned_acl": "string",
                        "enable_encryption": False,
                        "encryption_type": "string",
                        "endpoint": "string",
                        "kms_key": "string",
                        "region": "string",
                    },
                },
                "cluster_name": "string",
                "custom_tags": {
                    "string": "any",
                },
                "enable_local_disk_encryption": False,
                "docker_image": {
                    "url": "string",
                    "basic_auth": {
                        "password": "string",
                        "username": "string",
                    },
                },
                "driver_instance_pool_id": "string",
                "driver_node_type_id": "string",
                "azure_attributes": {
                    "availability": "string",
                    "first_on_demand": 0,
                    "spot_bid_max_price": 0,
                },
                "autoscale": {
                    "max_workers": 0,
                    "min_workers": 0,
                },
                "node_type_id": "string",
                "idempotency_token": "string",
                "init_scripts": [{
                    "dbfs": {
                        "destination": "string",
                    },
                    "file": {
                        "destination": "string",
                    },
                    "s3": {
                        "destination": "string",
                        "canned_acl": "string",
                        "enable_encryption": False,
                        "encryption_type": "string",
                        "endpoint": "string",
                        "kms_key": "string",
                        "region": "string",
                    },
                }],
                "instance_pool_id": "string",
                "gcp_attributes": {
                    "availability": "string",
                    "boot_disk_size": 0,
                    "google_service_account": "string",
                    "use_preemptible_executors": False,
                    "zone_id": "string",
                },
                "aws_attributes": {
                    "availability": "string",
                    "ebs_volume_count": 0,
                    "ebs_volume_size": 0,
                    "ebs_volume_type": "string",
                    "first_on_demand": 0,
                    "instance_profile_arn": "string",
                    "spot_bid_price_percent": 0,
                    "zone_id": "string",
                },
                "policy_id": "string",
                "single_user_name": "string",
                "spark_conf": {
                    "string": "any",
                },
                "spark_env_vars": {
                    "string": "any",
                },
                "autotermination_minutes": 0,
                "ssh_public_keys": ["string"],
            },
        }],
        libraries=[{
            "cran": {
                "package": "string",
                "repo": "string",
            },
            "egg": "string",
            "jar": "string",
            "maven": {
                "coordinates": "string",
                "exclusions": ["string"],
                "repo": "string",
            },
            "pypi": {
                "package": "string",
                "repo": "string",
            },
            "whl": "string",
        }],
        max_concurrent_runs=0,
        max_retries=0,
        min_retry_interval_millis=0,
        name="string",
        new_cluster={
            "spark_version": "string",
            "enable_elastic_disk": False,
            "spark_conf": {
                "string": "any",
            },
            "azure_attributes": {
                "availability": "string",
                "first_on_demand": 0,
                "spot_bid_max_price": 0,
            },
            "cluster_id": "string",
            "cluster_log_conf": {
                "dbfs": {
                    "destination": "string",
                },
                "s3": {
                    "destination": "string",
                    "canned_acl": "string",
                    "enable_encryption": False,
                    "encryption_type": "string",
                    "endpoint": "string",
                    "kms_key": "string",
                    "region": "string",
                },
            },
            "cluster_name": "string",
            "custom_tags": {
                "string": "any",
            },
            "data_security_mode": "string",
            "docker_image": {
                "url": "string",
                "basic_auth": {
                    "password": "string",
                    "username": "string",
                },
            },
            "driver_instance_pool_id": "string",
            "driver_node_type_id": "string",
            "autoscale": {
                "max_workers": 0,
                "min_workers": 0,
            },
            "aws_attributes": {
                "availability": "string",
                "ebs_volume_count": 0,
                "ebs_volume_size": 0,
                "ebs_volume_type": "string",
                "first_on_demand": 0,
                "instance_profile_arn": "string",
                "spot_bid_price_percent": 0,
                "zone_id": "string",
            },
            "idempotency_token": "string",
            "enable_local_disk_encryption": False,
            "init_scripts": [{
                "dbfs": {
                    "destination": "string",
                },
                "file": {
                    "destination": "string",
                },
                "s3": {
                    "destination": "string",
                    "canned_acl": "string",
                    "enable_encryption": False,
                    "encryption_type": "string",
                    "endpoint": "string",
                    "kms_key": "string",
                    "region": "string",
                },
            }],
            "instance_pool_id": "string",
            "node_type_id": "string",
            "num_workers": 0,
            "policy_id": "string",
            "single_user_name": "string",
            "gcp_attributes": {
                "availability": "string",
                "boot_disk_size": 0,
                "google_service_account": "string",
                "use_preemptible_executors": False,
                "zone_id": "string",
            },
            "spark_env_vars": {
                "string": "any",
            },
            "autotermination_minutes": 0,
            "ssh_public_keys": ["string"],
        },
        notebook_task={
            "notebook_path": "string",
            "base_parameters": {
                "string": "any",
            },
        },
        pipeline_task={
            "pipeline_id": "string",
        },
        python_wheel_task={
            "entry_point": "string",
            "named_parameters": {
                "string": "any",
            },
            "package_name": "string",
            "parameters": ["string"],
        },
        retry_on_timeout=False,
        schedule={
            "quartz_cron_expression": "string",
            "timezone_id": "string",
            "pause_status": "string",
        },
        spark_jar_task={
            "jar_uri": "string",
            "main_class_name": "string",
            "parameters": ["string"],
        },
        spark_python_task={
            "python_file": "string",
            "parameters": ["string"],
        },
        spark_submit_task={
            "parameters": ["string"],
        },
        tasks=[{
            "depends_ons": [{
                "task_key": "string",
            }],
            "description": "string",
            "email_notifications": {
                "no_alert_for_skipped_runs": False,
                "on_failures": ["string"],
                "on_starts": ["string"],
                "on_successes": ["string"],
            },
            "existing_cluster_id": "string",
            "job_cluster_key": "string",
            "libraries": [{
                "cran": {
                    "package": "string",
                    "repo": "string",
                },
                "egg": "string",
                "jar": "string",
                "maven": {
                    "coordinates": "string",
                    "exclusions": ["string"],
                    "repo": "string",
                },
                "pypi": {
                    "package": "string",
                    "repo": "string",
                },
                "whl": "string",
            }],
            "max_retries": 0,
            "min_retry_interval_millis": 0,
            "new_cluster": {
                "spark_version": "string",
                "enable_elastic_disk": False,
                "spark_conf": {
                    "string": "any",
                },
                "azure_attributes": {
                    "availability": "string",
                    "first_on_demand": 0,
                    "spot_bid_max_price": 0,
                },
                "cluster_id": "string",
                "cluster_log_conf": {
                    "dbfs": {
                        "destination": "string",
                    },
                    "s3": {
                        "destination": "string",
                        "canned_acl": "string",
                        "enable_encryption": False,
                        "encryption_type": "string",
                        "endpoint": "string",
                        "kms_key": "string",
                        "region": "string",
                    },
                },
                "cluster_name": "string",
                "custom_tags": {
                    "string": "any",
                },
                "data_security_mode": "string",
                "docker_image": {
                    "url": "string",
                    "basic_auth": {
                        "password": "string",
                        "username": "string",
                    },
                },
                "driver_instance_pool_id": "string",
                "driver_node_type_id": "string",
                "autoscale": {
                    "max_workers": 0,
                    "min_workers": 0,
                },
                "aws_attributes": {
                    "availability": "string",
                    "ebs_volume_count": 0,
                    "ebs_volume_size": 0,
                    "ebs_volume_type": "string",
                    "first_on_demand": 0,
                    "instance_profile_arn": "string",
                    "spot_bid_price_percent": 0,
                    "zone_id": "string",
                },
                "idempotency_token": "string",
                "enable_local_disk_encryption": False,
                "init_scripts": [{
                    "dbfs": {
                        "destination": "string",
                    },
                    "file": {
                        "destination": "string",
                    },
                    "s3": {
                        "destination": "string",
                        "canned_acl": "string",
                        "enable_encryption": False,
                        "encryption_type": "string",
                        "endpoint": "string",
                        "kms_key": "string",
                        "region": "string",
                    },
                }],
                "instance_pool_id": "string",
                "node_type_id": "string",
                "num_workers": 0,
                "policy_id": "string",
                "single_user_name": "string",
                "gcp_attributes": {
                    "availability": "string",
                    "boot_disk_size": 0,
                    "google_service_account": "string",
                    "use_preemptible_executors": False,
                    "zone_id": "string",
                },
                "spark_env_vars": {
                    "string": "any",
                },
                "autotermination_minutes": 0,
                "ssh_public_keys": ["string"],
            },
            "notebook_task": {
                "notebook_path": "string",
                "base_parameters": {
                    "string": "any",
                },
            },
            "pipeline_task": {
                "pipeline_id": "string",
            },
            "python_wheel_task": {
                "entry_point": "string",
                "named_parameters": {
                    "string": "any",
                },
                "package_name": "string",
                "parameters": ["string"],
            },
            "retry_on_timeout": False,
            "spark_jar_task": {
                "jar_uri": "string",
                "main_class_name": "string",
                "parameters": ["string"],
            },
            "spark_python_task": {
                "python_file": "string",
                "parameters": ["string"],
            },
            "spark_submit_task": {
                "parameters": ["string"],
            },
            "task_key": "string",
            "timeout_seconds": 0,
        }],
        timeout_seconds=0)
    
    const jobResource = new databricks.Job("jobResource", {
        alwaysRunning: false,
        emailNotifications: {
            noAlertForSkippedRuns: false,
            onFailures: ["string"],
            onStarts: ["string"],
            onSuccesses: ["string"],
        },
        existingClusterId: "string",
        format: "string",
        gitSource: {
            url: "string",
            branch: "string",
            commit: "string",
            provider: "string",
            tag: "string",
        },
        jobClusters: [{
            jobClusterKey: "string",
            newCluster: {
                numWorkers: 0,
                sparkVersion: "string",
                enableElasticDisk: false,
                dataSecurityMode: "string",
                clusterId: "string",
                clusterLogConf: {
                    dbfs: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                },
                clusterName: "string",
                customTags: {
                    string: "any",
                },
                enableLocalDiskEncryption: false,
                dockerImage: {
                    url: "string",
                    basicAuth: {
                        password: "string",
                        username: "string",
                    },
                },
                driverInstancePoolId: "string",
                driverNodeTypeId: "string",
                azureAttributes: {
                    availability: "string",
                    firstOnDemand: 0,
                    spotBidMaxPrice: 0,
                },
                autoscale: {
                    maxWorkers: 0,
                    minWorkers: 0,
                },
                nodeTypeId: "string",
                idempotencyToken: "string",
                initScripts: [{
                    dbfs: {
                        destination: "string",
                    },
                    file: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                }],
                instancePoolId: "string",
                gcpAttributes: {
                    availability: "string",
                    bootDiskSize: 0,
                    googleServiceAccount: "string",
                    usePreemptibleExecutors: false,
                    zoneId: "string",
                },
                awsAttributes: {
                    availability: "string",
                    ebsVolumeCount: 0,
                    ebsVolumeSize: 0,
                    ebsVolumeType: "string",
                    firstOnDemand: 0,
                    instanceProfileArn: "string",
                    spotBidPricePercent: 0,
                    zoneId: "string",
                },
                policyId: "string",
                singleUserName: "string",
                sparkConf: {
                    string: "any",
                },
                sparkEnvVars: {
                    string: "any",
                },
                autoterminationMinutes: 0,
                sshPublicKeys: ["string"],
            },
        }],
        libraries: [{
            cran: {
                "package": "string",
                repo: "string",
            },
            egg: "string",
            jar: "string",
            maven: {
                coordinates: "string",
                exclusions: ["string"],
                repo: "string",
            },
            pypi: {
                "package": "string",
                repo: "string",
            },
            whl: "string",
        }],
        maxConcurrentRuns: 0,
        maxRetries: 0,
        minRetryIntervalMillis: 0,
        name: "string",
        newCluster: {
            sparkVersion: "string",
            enableElasticDisk: false,
            sparkConf: {
                string: "any",
            },
            azureAttributes: {
                availability: "string",
                firstOnDemand: 0,
                spotBidMaxPrice: 0,
            },
            clusterId: "string",
            clusterLogConf: {
                dbfs: {
                    destination: "string",
                },
                s3: {
                    destination: "string",
                    cannedAcl: "string",
                    enableEncryption: false,
                    encryptionType: "string",
                    endpoint: "string",
                    kmsKey: "string",
                    region: "string",
                },
            },
            clusterName: "string",
            customTags: {
                string: "any",
            },
            dataSecurityMode: "string",
            dockerImage: {
                url: "string",
                basicAuth: {
                    password: "string",
                    username: "string",
                },
            },
            driverInstancePoolId: "string",
            driverNodeTypeId: "string",
            autoscale: {
                maxWorkers: 0,
                minWorkers: 0,
            },
            awsAttributes: {
                availability: "string",
                ebsVolumeCount: 0,
                ebsVolumeSize: 0,
                ebsVolumeType: "string",
                firstOnDemand: 0,
                instanceProfileArn: "string",
                spotBidPricePercent: 0,
                zoneId: "string",
            },
            idempotencyToken: "string",
            enableLocalDiskEncryption: false,
            initScripts: [{
                dbfs: {
                    destination: "string",
                },
                file: {
                    destination: "string",
                },
                s3: {
                    destination: "string",
                    cannedAcl: "string",
                    enableEncryption: false,
                    encryptionType: "string",
                    endpoint: "string",
                    kmsKey: "string",
                    region: "string",
                },
            }],
            instancePoolId: "string",
            nodeTypeId: "string",
            numWorkers: 0,
            policyId: "string",
            singleUserName: "string",
            gcpAttributes: {
                availability: "string",
                bootDiskSize: 0,
                googleServiceAccount: "string",
                usePreemptibleExecutors: false,
                zoneId: "string",
            },
            sparkEnvVars: {
                string: "any",
            },
            autoterminationMinutes: 0,
            sshPublicKeys: ["string"],
        },
        notebookTask: {
            notebookPath: "string",
            baseParameters: {
                string: "any",
            },
        },
        pipelineTask: {
            pipelineId: "string",
        },
        pythonWheelTask: {
            entryPoint: "string",
            namedParameters: {
                string: "any",
            },
            packageName: "string",
            parameters: ["string"],
        },
        retryOnTimeout: false,
        schedule: {
            quartzCronExpression: "string",
            timezoneId: "string",
            pauseStatus: "string",
        },
        sparkJarTask: {
            jarUri: "string",
            mainClassName: "string",
            parameters: ["string"],
        },
        sparkPythonTask: {
            pythonFile: "string",
            parameters: ["string"],
        },
        sparkSubmitTask: {
            parameters: ["string"],
        },
        tasks: [{
            dependsOns: [{
                taskKey: "string",
            }],
            description: "string",
            emailNotifications: {
                noAlertForSkippedRuns: false,
                onFailures: ["string"],
                onStarts: ["string"],
                onSuccesses: ["string"],
            },
            existingClusterId: "string",
            jobClusterKey: "string",
            libraries: [{
                cran: {
                    "package": "string",
                    repo: "string",
                },
                egg: "string",
                jar: "string",
                maven: {
                    coordinates: "string",
                    exclusions: ["string"],
                    repo: "string",
                },
                pypi: {
                    "package": "string",
                    repo: "string",
                },
                whl: "string",
            }],
            maxRetries: 0,
            minRetryIntervalMillis: 0,
            newCluster: {
                sparkVersion: "string",
                enableElasticDisk: false,
                sparkConf: {
                    string: "any",
                },
                azureAttributes: {
                    availability: "string",
                    firstOnDemand: 0,
                    spotBidMaxPrice: 0,
                },
                clusterId: "string",
                clusterLogConf: {
                    dbfs: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                },
                clusterName: "string",
                customTags: {
                    string: "any",
                },
                dataSecurityMode: "string",
                dockerImage: {
                    url: "string",
                    basicAuth: {
                        password: "string",
                        username: "string",
                    },
                },
                driverInstancePoolId: "string",
                driverNodeTypeId: "string",
                autoscale: {
                    maxWorkers: 0,
                    minWorkers: 0,
                },
                awsAttributes: {
                    availability: "string",
                    ebsVolumeCount: 0,
                    ebsVolumeSize: 0,
                    ebsVolumeType: "string",
                    firstOnDemand: 0,
                    instanceProfileArn: "string",
                    spotBidPricePercent: 0,
                    zoneId: "string",
                },
                idempotencyToken: "string",
                enableLocalDiskEncryption: false,
                initScripts: [{
                    dbfs: {
                        destination: "string",
                    },
                    file: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                }],
                instancePoolId: "string",
                nodeTypeId: "string",
                numWorkers: 0,
                policyId: "string",
                singleUserName: "string",
                gcpAttributes: {
                    availability: "string",
                    bootDiskSize: 0,
                    googleServiceAccount: "string",
                    usePreemptibleExecutors: false,
                    zoneId: "string",
                },
                sparkEnvVars: {
                    string: "any",
                },
                autoterminationMinutes: 0,
                sshPublicKeys: ["string"],
            },
            notebookTask: {
                notebookPath: "string",
                baseParameters: {
                    string: "any",
                },
            },
            pipelineTask: {
                pipelineId: "string",
            },
            pythonWheelTask: {
                entryPoint: "string",
                namedParameters: {
                    string: "any",
                },
                packageName: "string",
                parameters: ["string"],
            },
            retryOnTimeout: false,
            sparkJarTask: {
                jarUri: "string",
                mainClassName: "string",
                parameters: ["string"],
            },
            sparkPythonTask: {
                pythonFile: "string",
                parameters: ["string"],
            },
            sparkSubmitTask: {
                parameters: ["string"],
            },
            taskKey: "string",
            timeoutSeconds: 0,
        }],
        timeoutSeconds: 0,
    });
    
    type: databricks:Job
    properties:
        alwaysRunning: false
        emailNotifications:
            noAlertForSkippedRuns: false
            onFailures:
                - string
            onStarts:
                - string
            onSuccesses:
                - string
        existingClusterId: string
        format: string
        gitSource:
            branch: string
            commit: string
            provider: string
            tag: string
            url: string
        jobClusters:
            - jobClusterKey: string
              newCluster:
                autoscale:
                    maxWorkers: 0
                    minWorkers: 0
                autoterminationMinutes: 0
                awsAttributes:
                    availability: string
                    ebsVolumeCount: 0
                    ebsVolumeSize: 0
                    ebsVolumeType: string
                    firstOnDemand: 0
                    instanceProfileArn: string
                    spotBidPricePercent: 0
                    zoneId: string
                azureAttributes:
                    availability: string
                    firstOnDemand: 0
                    spotBidMaxPrice: 0
                clusterId: string
                clusterLogConf:
                    dbfs:
                        destination: string
                    s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                clusterName: string
                customTags:
                    string: any
                dataSecurityMode: string
                dockerImage:
                    basicAuth:
                        password: string
                        username: string
                    url: string
                driverInstancePoolId: string
                driverNodeTypeId: string
                enableElasticDisk: false
                enableLocalDiskEncryption: false
                gcpAttributes:
                    availability: string
                    bootDiskSize: 0
                    googleServiceAccount: string
                    usePreemptibleExecutors: false
                    zoneId: string
                idempotencyToken: string
                initScripts:
                    - dbfs:
                        destination: string
                      file:
                        destination: string
                      s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                instancePoolId: string
                nodeTypeId: string
                numWorkers: 0
                policyId: string
                singleUserName: string
                sparkConf:
                    string: any
                sparkEnvVars:
                    string: any
                sparkVersion: string
                sshPublicKeys:
                    - string
        libraries:
            - cran:
                package: string
                repo: string
              egg: string
              jar: string
              maven:
                coordinates: string
                exclusions:
                    - string
                repo: string
              pypi:
                package: string
                repo: string
              whl: string
        maxConcurrentRuns: 0
        maxRetries: 0
        minRetryIntervalMillis: 0
        name: string
        newCluster:
            autoscale:
                maxWorkers: 0
                minWorkers: 0
            autoterminationMinutes: 0
            awsAttributes:
                availability: string
                ebsVolumeCount: 0
                ebsVolumeSize: 0
                ebsVolumeType: string
                firstOnDemand: 0
                instanceProfileArn: string
                spotBidPricePercent: 0
                zoneId: string
            azureAttributes:
                availability: string
                firstOnDemand: 0
                spotBidMaxPrice: 0
            clusterId: string
            clusterLogConf:
                dbfs:
                    destination: string
                s3:
                    cannedAcl: string
                    destination: string
                    enableEncryption: false
                    encryptionType: string
                    endpoint: string
                    kmsKey: string
                    region: string
            clusterName: string
            customTags:
                string: any
            dataSecurityMode: string
            dockerImage:
                basicAuth:
                    password: string
                    username: string
                url: string
            driverInstancePoolId: string
            driverNodeTypeId: string
            enableElasticDisk: false
            enableLocalDiskEncryption: false
            gcpAttributes:
                availability: string
                bootDiskSize: 0
                googleServiceAccount: string
                usePreemptibleExecutors: false
                zoneId: string
            idempotencyToken: string
            initScripts:
                - dbfs:
                    destination: string
                  file:
                    destination: string
                  s3:
                    cannedAcl: string
                    destination: string
                    enableEncryption: false
                    encryptionType: string
                    endpoint: string
                    kmsKey: string
                    region: string
            instancePoolId: string
            nodeTypeId: string
            numWorkers: 0
            policyId: string
            singleUserName: string
            sparkConf:
                string: any
            sparkEnvVars:
                string: any
            sparkVersion: string
            sshPublicKeys:
                - string
        notebookTask:
            baseParameters:
                string: any
            notebookPath: string
        pipelineTask:
            pipelineId: string
        pythonWheelTask:
            entryPoint: string
            namedParameters:
                string: any
            packageName: string
            parameters:
                - string
        retryOnTimeout: false
        schedule:
            pauseStatus: string
            quartzCronExpression: string
            timezoneId: string
        sparkJarTask:
            jarUri: string
            mainClassName: string
            parameters:
                - string
        sparkPythonTask:
            parameters:
                - string
            pythonFile: string
        sparkSubmitTask:
            parameters:
                - string
        tasks:
            - dependsOns:
                - taskKey: string
              description: string
              emailNotifications:
                noAlertForSkippedRuns: false
                onFailures:
                    - string
                onStarts:
                    - string
                onSuccesses:
                    - string
              existingClusterId: string
              jobClusterKey: string
              libraries:
                - cran:
                    package: string
                    repo: string
                  egg: string
                  jar: string
                  maven:
                    coordinates: string
                    exclusions:
                        - string
                    repo: string
                  pypi:
                    package: string
                    repo: string
                  whl: string
              maxRetries: 0
              minRetryIntervalMillis: 0
              newCluster:
                autoscale:
                    maxWorkers: 0
                    minWorkers: 0
                autoterminationMinutes: 0
                awsAttributes:
                    availability: string
                    ebsVolumeCount: 0
                    ebsVolumeSize: 0
                    ebsVolumeType: string
                    firstOnDemand: 0
                    instanceProfileArn: string
                    spotBidPricePercent: 0
                    zoneId: string
                azureAttributes:
                    availability: string
                    firstOnDemand: 0
                    spotBidMaxPrice: 0
                clusterId: string
                clusterLogConf:
                    dbfs:
                        destination: string
                    s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                clusterName: string
                customTags:
                    string: any
                dataSecurityMode: string
                dockerImage:
                    basicAuth:
                        password: string
                        username: string
                    url: string
                driverInstancePoolId: string
                driverNodeTypeId: string
                enableElasticDisk: false
                enableLocalDiskEncryption: false
                gcpAttributes:
                    availability: string
                    bootDiskSize: 0
                    googleServiceAccount: string
                    usePreemptibleExecutors: false
                    zoneId: string
                idempotencyToken: string
                initScripts:
                    - dbfs:
                        destination: string
                      file:
                        destination: string
                      s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                instancePoolId: string
                nodeTypeId: string
                numWorkers: 0
                policyId: string
                singleUserName: string
                sparkConf:
                    string: any
                sparkEnvVars:
                    string: any
                sparkVersion: string
                sshPublicKeys:
                    - string
              notebookTask:
                baseParameters:
                    string: any
                notebookPath: string
              pipelineTask:
                pipelineId: string
              pythonWheelTask:
                entryPoint: string
                namedParameters:
                    string: any
                packageName: string
                parameters:
                    - string
              retryOnTimeout: false
              sparkJarTask:
                jarUri: string
                mainClassName: string
                parameters:
                    - string
              sparkPythonTask:
                parameters:
                    - string
                pythonFile: string
              sparkSubmitTask:
                parameters:
                    - string
              taskKey: string
              timeoutSeconds: 0
        timeoutSeconds: 0
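
    The listings above enumerate every supported input with placeholder values; a real job typically sets only a handful of them. As a minimal, hypothetical sketch (not a configuration taken from this page), the following Python program runs a single notebook nightly on a short-lived cluster. The notebook path, Spark runtime version, node type, and email address are illustrative placeholders you would replace with values valid in your workspace.

    import pulumi_databricks as databricks

    # Minimal sketch: one notebook, run nightly at 02:00 UTC on a fresh cluster.
    # All literal values below are illustrative placeholders.
    nightly_job = databricks.Job("nightlyReport",
        name="Nightly report",
        new_cluster={
            "spark_version": "10.4.x-scala2.12",  # choose a runtime available in your workspace
            "node_type_id": "i3.xlarge",          # choose a node type available in your cloud
            "num_workers": 2,
        },
        notebook_task={
            "notebook_path": "/Shared/nightly_report",
            "base_parameters": {"env": "prod"},
        },
        schedule={
            "quartz_cron_expression": "0 0 2 * * ?",  # Quartz syntax: 02:00 every day
            "timezone_id": "UTC",
        },
        email_notifications={
            "on_failures": ["ops@example.com"],
        },
        max_concurrent_runs=1)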
    

    Job Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
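
    For example, the notebook_task input below is passed once as the JobNotebookTaskArgs class and once as an equivalent dictionary literal; both sketches assume a hypothetical notebook path.

    import pulumi_databricks as databricks

    # Passing an object input as a typed argument class...
    job_a = databricks.Job("jobA",
        notebook_task=databricks.JobNotebookTaskArgs(notebook_path="/Shared/example"))

    # ...or as a plain dictionary literal with the same keys.
    job_b = databricks.Job("jobB",
        notebook_task={"notebook_path": "/Shared/example"})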

    The Job resource accepts the following input properties:

    AlwaysRunning bool
    (Bool) Whether this job should always be running, as with a Spark Streaming application. When true, every update restarts the currently active run, or starts a new run if none is running. False by default. Job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task block.
    EmailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    The ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    Format string
    GitSource JobGitSource
    JobClusters List<JobJobCluster>
    Libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource documentation.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    MinRetryIntervalMillis int
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.
    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTask
    PipelineTask JobPipelineTask
    PythonWheelTask JobPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    Schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTask
    SparkPythonTask JobSparkPythonTask
    SparkSubmitTask JobSparkSubmitTask
    Tasks List<JobTask>
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    AlwaysRunning bool
    (Bool) Whether this job should always be running, as with a Spark Streaming application. When true, every update restarts the currently active run, or starts a new run if none is running. False by default. Job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task block.
    EmailNotifications JobEmailNotificationsArgs
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    The ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    Format string
    GitSource JobGitSourceArgs
    JobClusters []JobJobClusterArgs
    Libraries []JobLibraryArgs
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource documentation.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    MinRetryIntervalMillis int
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.
    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTaskArgs
    PipelineTask JobPipelineTaskArgs
    PythonWheelTask JobPythonWheelTaskArgs
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    Schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTaskArgs
    SparkPythonTask JobSparkPythonTaskArgs
    SparkSubmitTask JobSparkSubmitTaskArgs
    Tasks []JobTaskArgs
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    alwaysRunning Boolean
    (Bool) Whether this job should always be running, as with a Spark Streaming application. When true, every update restarts the currently active run, or starts a new run if none is running. False by default. Job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task block.
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    The ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format String
    gitSource JobGitSource
    jobClusters List<JobJobCluster>
    libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource documentation.
    maxConcurrentRuns Integer
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis Integer
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.
    name String
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask
    pipelineTask JobPipelineTask
    pythonWheelTask JobPythonWheelTask
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask
    sparkPythonTask JobSparkPythonTask
    sparkSubmitTask JobSparkSubmitTask
    tasks List<JobTask>
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    alwaysRunning boolean
    (Bool) Whether this job should always be running, as with a Spark Streaming application. When true, every update restarts the currently active run, or starts a new run if none is running. False by default. Job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task block.
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    The ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format string
    gitSource JobGitSource
    jobClusters JobJobCluster[]
    libraries JobLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource documentation.
    maxConcurrentRuns number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis number
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.
    name string
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask
    pipelineTask JobPipelineTask
    pythonWheelTask JobPythonWheelTask
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask
    sparkPythonTask JobSparkPythonTask
    sparkSubmitTask JobSparkSubmitTask
    tasks JobTask[]
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    always_running bool
    (Bool) Whether this job should always be running, as with a Spark Streaming application. When true, every update restarts the currently active run, or starts a new run if none is running. False by default. Job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task block.
    email_notifications JobEmailNotificationsArgs
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    The ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format str
    git_source JobGitSourceArgs
    job_clusters Sequence[JobJobClusterArgs]
    libraries Sequence[JobLibraryArgs]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    max_concurrent_runs int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    min_retry_interval_millis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    name str
    An optional name for the job. The default value is Untitled.
    new_cluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobNotebookTaskArgs
    pipeline_task JobPipelineTaskArgs
    python_wheel_task JobPythonWheelTaskArgs
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    spark_jar_task JobSparkJarTaskArgs
    spark_python_task JobSparkPythonTaskArgs
    spark_submit_task JobSparkSubmitTaskArgs
    tasks Sequence[JobTaskArgs]
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    alwaysRunning Boolean
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    emailNotifications Property Map
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format String
    gitSource Property Map
    jobClusters List<Property Map>
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    maxConcurrentRuns Number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis Number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    name String
    An optional name for the job. The default value is Untitled.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map
    pipelineTask Property Map
    pythonWheelTask Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule Property Map
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask Property Map
    sparkPythonTask Property Map
    sparkSubmitTask Property Map
    tasks List<Property Map>
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:

    Id string
    The provider-assigned unique ID for this managed resource.
    Url string
    URL of the job on the given workspace
    Id string
    The provider-assigned unique ID for this managed resource.
    Url string
    URL of the job on the given workspace
    id String
    The provider-assigned unique ID for this managed resource.
    url String
    URL of the job on the given workspace
    id string
    The provider-assigned unique ID for this managed resource.
    url string
    URL of the job on the given workspace
    id str
    The provider-assigned unique ID for this managed resource.
    url str
    URL of the job on the given workspace
    id String
    The provider-assigned unique ID for this managed resource.
    url String
    URL of the job on the given workspace
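    For example, both outputs can be exported from a program once the job has been created. This is a minimal TypeScript sketch; the cluster ID and notebook path are placeholders, not values from this documentation.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("this", {
        name: "Example job",
        existingClusterId: "1234-567890-abcde123",         // hypothetical cluster ID
        notebookTask: { notebookPath: "/Shared/example" }, // hypothetical notebook path
    });

    // Both outputs become available after the job has been created.
    export const jobId = job.id;   // provider-assigned job ID
    export const jobUrl = job.url; // URL of the job in the workspace UI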

    Look up Existing Job Resource

    Get an existing Job resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

    public static get(name: string, id: Input<ID>, state?: JobState, opts?: CustomResourceOptions): Job
    @staticmethod
    def get(resource_name: str,
            id: str,
            opts: Optional[ResourceOptions] = None,
            always_running: Optional[bool] = None,
            email_notifications: Optional[JobEmailNotificationsArgs] = None,
            existing_cluster_id: Optional[str] = None,
            format: Optional[str] = None,
            git_source: Optional[JobGitSourceArgs] = None,
            job_clusters: Optional[Sequence[JobJobClusterArgs]] = None,
            libraries: Optional[Sequence[JobLibraryArgs]] = None,
            max_concurrent_runs: Optional[int] = None,
            max_retries: Optional[int] = None,
            min_retry_interval_millis: Optional[int] = None,
            name: Optional[str] = None,
            new_cluster: Optional[JobNewClusterArgs] = None,
            notebook_task: Optional[JobNotebookTaskArgs] = None,
            pipeline_task: Optional[JobPipelineTaskArgs] = None,
            python_wheel_task: Optional[JobPythonWheelTaskArgs] = None,
            retry_on_timeout: Optional[bool] = None,
            schedule: Optional[JobScheduleArgs] = None,
            spark_jar_task: Optional[JobSparkJarTaskArgs] = None,
            spark_python_task: Optional[JobSparkPythonTaskArgs] = None,
            spark_submit_task: Optional[JobSparkSubmitTaskArgs] = None,
            tasks: Optional[Sequence[JobTaskArgs]] = None,
            timeout_seconds: Optional[int] = None,
            url: Optional[str] = None) -> Job
    func GetJob(ctx *Context, name string, id IDInput, state *JobState, opts ...ResourceOption) (*Job, error)
    public static Job Get(string name, Input<string> id, JobState? state, CustomResourceOptions? opts = null)
    public static Job get(String name, Output<String> id, JobState state, CustomResourceOptions options)
    resources:
      _:
        type: databricks:Job
        get:
          id: ${id}
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    resource_name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
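    For example, an existing job can be adopted by its numeric ID and its outputs read back. This is a minimal TypeScript sketch; the job ID shown is a placeholder.

    import * as databricks from "@pulumi/databricks";

    // "123456" stands in for a real job ID from the workspace.
    const existing = databricks.Job.get("existing-job", "123456");

    // The looked-up resource exposes the same outputs as a newly created one.
    export const existingJobUrl = existing.url;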
    The following state arguments are supported:
    AlwaysRunning bool
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    EmailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    Format string
    GitSource JobGitSource
    JobClusters List<JobJobCluster>
    Libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTask
    PipelineTask JobPipelineTask
    PythonWheelTask JobPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    Schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTask
    SparkPythonTask JobSparkPythonTask
    SparkSubmitTask JobSparkSubmitTask
    Tasks List<JobTask>
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    Url string
    URL of the job on the given workspace
    AlwaysRunning bool
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    EmailNotifications JobEmailNotificationsArgs
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    Format string
    GitSource JobGitSourceArgs
    JobClusters []JobJobClusterArgs
    Libraries []JobLibraryArgs
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTaskArgs
    PipelineTask JobPipelineTaskArgs
    PythonWheelTask JobPythonWheelTaskArgs
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    Schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTaskArgs
    SparkPythonTask JobSparkPythonTaskArgs
    SparkSubmitTask JobSparkSubmitTaskArgs
    Tasks []JobTaskArgs
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    Url string
    URL of the job on the given workspace
    alwaysRunning Boolean
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format String
    gitSource JobGitSource
    jobClusters List<JobJobCluster>
    libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    maxConcurrentRuns Integer
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis Integer
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    name String
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask
    pipelineTask JobPipelineTask
    pythonWheelTask JobPythonWheelTask
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask
    sparkPythonTask JobSparkPythonTask
    sparkSubmitTask JobSparkSubmitTask
    tasks List<JobTask>
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    url String
    URL of the job on the given workspace
    alwaysRunning boolean
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format string
    gitSource JobGitSource
    jobClusters JobJobCluster[]
    libraries JobLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    maxConcurrentRuns number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    name string
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask
    pipelineTask JobPipelineTask
    pythonWheelTask JobPythonWheelTask
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask
    sparkPythonTask JobSparkPythonTask
    sparkSubmitTask JobSparkSubmitTask
    tasks JobTask[]
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    url string
    URL of the job on the given workspace
    always_running bool
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    email_notifications JobEmailNotificationsArgs
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format str
    git_source JobGitSourceArgs
    job_clusters Sequence[JobJobClusterArgs]
    libraries Sequence[JobLibraryArgs]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    max_concurrent_runs int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    min_retry_interval_millis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    name str
    An optional name for the job. The default value is Untitled.
    new_cluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobNotebookTaskArgs
    pipeline_task JobPipelineTaskArgs
    python_wheel_task JobPythonWheelTaskArgs
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    spark_jar_task JobSparkJarTaskArgs
    spark_python_task JobSparkPythonTaskArgs
    spark_submit_task JobSparkSubmitTaskArgs
    tasks Sequence[JobTaskArgs]
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    url str
    URL of the job on the given workspace
    alwaysRunning Boolean
    (Bool) Whether the job should always be running, such as a Spark Streaming application: on every update the current active run is restarted, or a new run is started if none is currently running. False by default. Any job runs are started with parameters specified in the spark_jar_task or spark_submit_task or spark_python_task or notebook_task blocks.
    emailNotifications Property Map
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    format String
    gitSource Property Map
    jobClusters List<Property Map>
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    maxConcurrentRuns Number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis Number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    name String
    An optional name for the job. The default value is Untitled.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map
    pipelineTask Property Map
    pythonWheelTask Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    schedule Property Map
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask Property Map
    sparkPythonTask Property Map
    sparkSubmitTask Property Map
    tasks List<Property Map>
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    url String
    URL of the job on the given workspace

    Supporting Types

    JobEmailNotifications, JobEmailNotificationsArgs

    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs
    OnFailures List<string>
    (List) list of emails to notify on failure
    OnStarts List<string>
    (List) list of emails to notify when the run starts
    OnSuccesses List<string>
    (List) list of emails to notify when the run completes successfully
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs
    OnFailures []string
    (List) list of emails to notify on failure
    OnStarts []string
    (List) list of emails to notify when the run starts
    OnSuccesses []string
    (List) list of emails to notify when the run completes successfully
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs
    onFailures List<String>
    (List) list of emails to notify on failure
    onStarts List<String>
    (List) list of emails to notify when the run starts
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully
    noAlertForSkippedRuns boolean
    (Bool) don't send alert for skipped runs
    onFailures string[]
    (List) list of emails to notify on failure
    onStarts string[]
    (List) list of emails to notify when the run starts
    onSuccesses string[]
    (List) list of emails to notify when the run completes successfully
    no_alert_for_skipped_runs bool
    (Bool) don't send alert for skipped runs
    on_failures Sequence[str]
    (List) list of emails to notify on failure
    on_starts Sequence[str]
    (List) list of emails to notify when the run starts
    on_successes Sequence[str]
    (List) list of emails to notify when the run completes successfully
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs
    onFailures List<String>
    (List) list of emails to notify on failure
    onStarts List<String>
    (List) list of emails to notify when the run starts
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully
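    A minimal TypeScript sketch of the email notifications block; the addresses, cluster ID, and notebook path are placeholders.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("notified", {
        existingClusterId: "1234-567890-abcde123",          // hypothetical cluster ID
        notebookTask: { notebookPath: "/Shared/example" },  // hypothetical notebook path
        emailNotifications: {
            onStarts: ["oncall@example.com"],
            onSuccesses: ["team@example.com"],
            onFailures: ["oncall@example.com", "team@example.com"],
            noAlertForSkippedRuns: true,
        },
    });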

    JobGitSource, JobGitSourceArgs

    Url string
    URL of the Git repository to use.
    Branch string
    Commit string
    Provider string
    Tag string
    Url string
    URL of the Git repository to use.
    Branch string
    Commit string
    Provider string
    Tag string
    url String
    URL of the Git repository to use.
    branch String
    commit String
    provider String
    tag String
    url string
    URL of the Git repository to use.
    branch string
    commit string
    provider string
    tag string
    url str
    URL of the Git repository to use.
    branch str
    commit str
    provider str
    tag str
    url String
    URL of the Git repository to use.
    branch String
    commit String
    provider String
    tag String
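    A minimal TypeScript sketch of a job whose notebook is checked out from Git; the repository URL, provider value, and notebook path are assumptions for illustration.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("from-git", {
        gitSource: {
            url: "https://github.com/example/jobs-repo", // hypothetical repository
            provider: "gitHub",                          // assumed provider identifier
            branch: "main",                              // alternatively pin a tag or commit
        },
        // Assumed: with a Git source the notebook path is resolved inside the repository.
        notebookTask: { notebookPath: "notebooks/etl" },
        existingClusterId: "1234-567890-abcde123",       // hypothetical cluster ID
    });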

    JobJobCluster, JobJobClusterArgs

    JobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    NewCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    JobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    NewCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    jobClusterKey String
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    newCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    jobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    newCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    job_cluster_key str
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    new_cluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    jobClusterKey String
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
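    A minimal TypeScript sketch of a job cluster shared between two tasks via job_cluster_key; the new_cluster fields mirror the databricks.Cluster resource, and the concrete values here are assumptions.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("multi-task", {
        jobClusters: [{
            jobClusterKey: "shared",
            newCluster: {
                sparkVersion: "10.4.x-scala2.12", // assumed runtime version
                nodeTypeId: "i3.xlarge",          // assumed node type
                numWorkers: 2,
            },
        }],
        tasks: [
            {
                taskKey: "ingest",
                jobClusterKey: "shared", // reuse the cluster defined above
                notebookTask: { notebookPath: "/Shared/ingest" },
            },
            {
                taskKey: "transform",
                jobClusterKey: "shared",
                notebookTask: { notebookPath: "/Shared/transform" },
            },
        ],
    });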

    JobJobClusterNewCluster, JobJobClusterNewClusterArgs

    JobJobClusterNewClusterAutoscale, JobJobClusterNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number

    JobJobClusterNewClusterAwsAttributes, JobJobClusterNewClusterAwsAttributesArgs

    JobJobClusterNewClusterAzureAttributes, JobJobClusterNewClusterAzureAttributesArgs

    JobJobClusterNewClusterClusterLogConf, JobJobClusterNewClusterClusterLogConfArgs

    JobJobClusterNewClusterClusterLogConfDbfs, JobJobClusterNewClusterClusterLogConfDbfsArgs

    JobJobClusterNewClusterClusterLogConfS3, JobJobClusterNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobJobClusterNewClusterDockerImage, JobJobClusterNewClusterDockerImageArgs

    Url string
    URL of the Docker image.
    BasicAuth JobJobClusterNewClusterDockerImageBasicAuth
    Url string
    URL of the Docker image.
    BasicAuth JobJobClusterNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image.
    basicAuth JobJobClusterNewClusterDockerImageBasicAuth
    url string
    URL of the Docker image.
    basicAuth JobJobClusterNewClusterDockerImageBasicAuth
    url str
    URL of the Docker image.
    basic_auth JobJobClusterNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image.
    basicAuth Property Map

    JobJobClusterNewClusterDockerImageBasicAuth, JobJobClusterNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String

    JobJobClusterNewClusterGcpAttributes, JobJobClusterNewClusterGcpAttributesArgs

    JobJobClusterNewClusterInitScript, JobJobClusterNewClusterInitScriptArgs

    JobJobClusterNewClusterInitScriptDbfs, JobJobClusterNewClusterInitScriptDbfsArgs

    JobJobClusterNewClusterInitScriptFile, JobJobClusterNewClusterInitScriptFileArgs

    JobJobClusterNewClusterInitScriptS3, JobJobClusterNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobLibrary, JobLibraryArgs

    JobLibraryCran, JobLibraryCranArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String

    JobLibraryMaven, JobLibraryMavenArgs

    Coordinates string
    Exclusions List<string>
    Repo string
    Coordinates string
    Exclusions []string
    Repo string
    coordinates String
    exclusions List<String>
    repo String
    coordinates string
    exclusions string[]
    repo string
    coordinates str
    exclusions Sequence[str]
    repo str
    coordinates String
    exclusions List<String>
    repo String

    JobLibraryPypi, JobLibraryPypiArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String

    JobNewCluster, JobNewClusterArgs

    JobNewClusterAutoscale, JobNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number

    JobNewClusterAwsAttributes, JobNewClusterAwsAttributesArgs

    JobNewClusterAzureAttributes, JobNewClusterAzureAttributesArgs

    JobNewClusterClusterLogConf, JobNewClusterClusterLogConfArgs

    JobNewClusterClusterLogConfDbfs, JobNewClusterClusterLogConfDbfsArgs

    JobNewClusterClusterLogConfS3, JobNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobNewClusterDockerImage, JobNewClusterDockerImageArgs

    Url string
    URL of the Docker image.
    BasicAuth JobNewClusterDockerImageBasicAuth
    Url string
    URL of the Docker image.
    BasicAuth JobNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image.
    basicAuth JobNewClusterDockerImageBasicAuth
    url string
    URL of the Docker image.
    basicAuth JobNewClusterDockerImageBasicAuth
    url str
    URL of the Docker image.
    basic_auth JobNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image.
    basicAuth Property Map

    JobNewClusterDockerImageBasicAuth, JobNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String

    JobNewClusterGcpAttributes, JobNewClusterGcpAttributesArgs

    JobNewClusterInitScript, JobNewClusterInitScriptArgs

    JobNewClusterInitScriptDbfs, JobNewClusterInitScriptDbfsArgs

    JobNewClusterInitScriptFile, JobNewClusterInitScriptFileArgs

    JobNewClusterInitScriptS3, JobNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobNotebookTask, JobNotebookTaskArgs

    NotebookPath string
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    BaseParameters Dictionary<string, object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    NotebookPath string
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    BaseParameters map[string]interface{}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebookPath String
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    baseParameters Map<String,Object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebookPath string
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    baseParameters {[key: string]: any}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebook_path str
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    base_parameters Mapping[str, Any]
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebookPath String
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    baseParameters Map<Any>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
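    A minimal TypeScript sketch of a notebook task with base parameters; the notebook would read them with dbutils.widgets.get("environment"). Paths and values are placeholders.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("notebook-job", {
        existingClusterId: "1234-567890-abcde123", // hypothetical cluster ID
        notebookTask: {
            notebookPath: "/Shared/reports/daily",
            baseParameters: {
                environment: "production",
                lookback_days: "7",
            },
        },
    });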

    JobPipelineTask, JobPipelineTaskArgs

    PipelineId string
    The pipeline's unique ID.
    PipelineId string
    The pipeline's unique ID.
    pipelineId String
    The pipeline's unique ID.
    pipelineId string
    The pipeline's unique ID.
    pipeline_id str
    The pipeline's unique ID.
    pipelineId String
    The pipeline's unique ID.
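    A minimal TypeScript sketch of a pipeline task; the pipeline ID shown is a placeholder and would normally come from a databricks.Pipeline resource's id output.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("pipeline-job", {
        pipelineTask: {
            pipelineId: "hypothetical-pipeline-id", // e.g. myPipeline.id
        },
    });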

    JobPythonWheelTask, JobPythonWheelTaskArgs

    EntryPoint string
    Python function as entry point for the task
    NamedParameters Dictionary<string, object>
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters List<string>
    Parameters for the task
    EntryPoint string
    Python function as entry point for the task
    NamedParameters map[string]interface{}
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters []string
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<String,Object>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
    entryPoint string
    Python function as entry point for the task
    namedParameters {[key: string]: any}
    Named parameters for the task
    packageName string
    Name of Python package
    parameters string[]
    Parameters for the task
    entry_point str
    Python function as entry point for the task
    named_parameters Mapping[str, Any]
    Named parameters for the task
    package_name str
    Name of Python package
    parameters Sequence[str]
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<Any>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
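    A minimal TypeScript sketch of a Python wheel task; the library's whl field, the wheel location, package name, entry point, and cluster shape are assumptions.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("wheel-job", {
        newCluster: {
            sparkVersion: "10.4.x-scala2.12", // assumed runtime version
            nodeTypeId: "i3.xlarge",          // assumed node type
            numWorkers: 1,
        },
        libraries: [{ whl: "dbfs:/wheels/my_package-0.1.0-py3-none-any.whl" }], // hypothetical wheel
        pythonWheelTask: {
            packageName: "my_package",
            entryPoint: "main",                      // function exposed by my_package
            namedParameters: { environment: "dev" },
        },
    });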

    JobSchedule, JobScheduleArgs

    QuartzCronExpression string
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    TimezoneId string
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    PauseStatus string
    Indicate whether this schedule is paused or not. Either “PAUSED” or “UNPAUSED”. When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
    QuartzCronExpression string
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    TimezoneId string
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    PauseStatus string
    Indicate whether this schedule is paused or not. Either “PAUSED” or “UNPAUSED”. When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
    quartzCronExpression String
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezoneId String
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pauseStatus String
    Indicate whether this schedule is paused or not. Either “PAUSED” or “UNPAUSED”. When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
    quartzCronExpression string
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezoneId string
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pauseStatus string
    Indicate whether this schedule is paused or not. Either “PAUSED” or “UNPAUSED”. When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
    quartz_cron_expression str
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezone_id str
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pause_status str
    Indicate whether this schedule is paused or not. Either “PAUSED” or “UNPAUSED”. When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
    quartzCronExpression String
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezoneId String
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pauseStatus String
    Indicate whether this schedule is paused or not. Either “PAUSED” or “UNPAUSED”. When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
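    A minimal TypeScript sketch of a daily schedule; the cron expression runs the job at 02:30 in the given timezone, and the cluster ID and notebook path are placeholders.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("scheduled", {
        existingClusterId: "1234-567890-abcde123",
        notebookTask: { notebookPath: "/Shared/nightly" },
        schedule: {
            quartzCronExpression: "0 30 2 * * ?", // Quartz order: seconds minutes hours day-of-month month day-of-week
            timezoneId: "Europe/Amsterdam",
            pauseStatus: "UNPAUSED",
        },
    });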

    JobSparkJarTask, JobSparkJarTaskArgs

    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters List<string>
    Parameters for the task
    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters []string
    Parameters for the task
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    Parameters for the task
    jarUri string
    mainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters string[]
    Parameters for the task
    jar_uri str
    main_class_name str
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters Sequence[str]
    Parameters for the task
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    Parameters for the task
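    A minimal TypeScript sketch of a Spark JAR task; the JAR path, main class, and cluster shape are assumptions.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("jar-job", {
        newCluster: {
            sparkVersion: "10.4.x-scala2.12", // assumed runtime version
            nodeTypeId: "i3.xlarge",          // assumed node type
            numWorkers: 2,
        },
        libraries: [{ jar: "dbfs:/jars/etl-assembly.jar" }], // the main class must live in this JAR
        sparkJarTask: {
            mainClassName: "com.example.etl.Main", // should use SparkContext.getOrCreate internally
            parameters: ["--date", "2023-01-01"],
        },
    });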

    JobSparkPythonTask, JobSparkPythonTaskArgs

    PythonFile string
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    Parameters List<string>
    Parameters for the task
    PythonFile string
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    Parameters []string
    Parameters for the task
    pythonFile String
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters List<String>
    Parameters for the task
    pythonFile string
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters string[]
    Parameters for the task
    python_file str
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters Sequence[str]
    Parameters for the task
    pythonFile String
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters List<String>
    Parameters for the task
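    A minimal TypeScript sketch of a Spark Python task; the DBFS path and cluster shape are assumptions.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("pyspark-job", {
        newCluster: {
            sparkVersion: "10.4.x-scala2.12", // assumed runtime version
            nodeTypeId: "i3.xlarge",          // assumed node type
            numWorkers: 2,
        },
        sparkPythonTask: {
            pythonFile: "dbfs:/scripts/etl.py", // an S3 URI would also work
            parameters: ["--env", "dev"],
        },
    });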

    JobSparkSubmitTask, JobSparkSubmitTaskArgs

    Parameters List<string>
    Parameters for the task
    Parameters []string
    Parameters for the task
    parameters List<String>
    Parameters for the task
    parameters string[]
    Parameters for the task
    parameters Sequence[str]
    Parameters for the task
    parameters List<String>
    Parameters for the task
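    A minimal TypeScript sketch of a Spark submit task, where the parameters list is passed through to spark-submit; the class name, JAR path, and cluster shape are assumptions.

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("spark-submit-job", {
        newCluster: {
            sparkVersion: "10.4.x-scala2.12", // assumed runtime version
            nodeTypeId: "i3.xlarge",          // assumed node type
            numWorkers: 2,
        },
        sparkSubmitTask: {
            parameters: [
                "--class", "org.apache.spark.examples.SparkPi",
                "dbfs:/jars/spark-examples.jar",
                "10",
            ],
        },
    });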

    JobTask, JobTaskArgs

    DependsOns List<JobTaskDependsOn>
    Description string
    EmailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    JobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    Libraries List<JobTaskLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    NewCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobTaskNotebookTask
    PipelineTask JobTaskPipelineTask
    PythonWheelTask JobTaskPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    SparkJarTask JobTaskSparkJarTask
    SparkPythonTask JobTaskSparkPythonTask
    SparkSubmitTask JobTaskSparkSubmitTask
    TaskKey string
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    DependsOns []JobTaskDependsOn
    Description string
    EmailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    JobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    Libraries []JobTaskLibrary
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    NewCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobTaskNotebookTask
    PipelineTask JobTaskPipelineTask
    PythonWheelTask JobTaskPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    SparkJarTask JobTaskSparkJarTask
    SparkPythonTask JobTaskSparkPythonTask
    SparkSubmitTask JobTaskSparkSubmitTask
    TaskKey string
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    dependsOns List<JobTaskDependsOn>
    description String
    emailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    If set, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    jobClusterKey String
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    libraries List<JobTaskLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section for databricks.Cluster resource.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis Integer
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobTaskNotebookTask
    pipelineTask JobTaskPipelineTask
    pythonWheelTask JobTaskPythonWheelTask
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    sparkJarTask JobTaskSparkJarTask
    sparkPythonTask JobTaskSparkPythonTask
    sparkSubmitTask JobTaskSparkSubmitTask
    taskKey String
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    dependsOns JobTaskDependsOn[]
    description string
    emailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    If existing_cluster_id is specified, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    jobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries JobTaskLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis number
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobTaskNotebookTask
    pipelineTask JobTaskPipelineTask
    pythonWheelTask JobTaskPythonWheelTask
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    sparkJarTask JobTaskSparkJarTask
    sparkPythonTask JobTaskSparkPythonTask
    sparkSubmitTask JobTaskSparkSubmitTask
    taskKey string
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    depends_ons Sequence[JobTaskDependsOn]
    description str
    email_notifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    If existing_cluster_id is specified, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    job_cluster_key str
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries Sequence[JobTaskLibrary]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    min_retry_interval_millis int
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    new_cluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobTaskNotebookTask
    pipeline_task JobTaskPipelineTask
    python_wheel_task JobTaskPythonWheelTask
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    spark_jar_task JobTaskSparkJarTask
    spark_python_task JobTaskSparkPythonTask
    spark_submit_task JobTaskSparkSubmitTask
    task_key str
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    dependsOns List<Property Map>
    description String
    emailNotifications Property Map
    (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    If existing_cluster_id is specified, the ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
    jobClusterKey String
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
    minRetryIntervalMillis Number
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map
    pipelineTask Property Map
    pythonWheelTask Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    sparkJarTask Property Map
    sparkPythonTask Property Map
    sparkSubmitTask Property Map
    taskKey String
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
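
    As a rough illustration of how these task fields fit together, the TypeScript sketch below declares a hypothetical two-task job; the notebook paths, cluster settings, cluster ID, and email address are placeholders rather than values taken from this page, and the new_cluster fields mirror the databricks.Cluster resource as noted above.

    import * as databricks from "@pulumi/databricks";

    // Hypothetical two-task job; "transform" runs only after "ingest" finishes.
    const etl = new databricks.Job("etl", {
        tasks: [
            {
                taskKey: "ingest",
                newCluster: {
                    numWorkers: 1,                     // fields mirror databricks.Cluster
                    sparkVersion: "10.4.x-scala2.12",  // placeholder runtime version
                    nodeTypeId: "i3.xlarge",           // placeholder node type
                },
                notebookTask: { notebookPath: "/Repos/demo/ingest" },  // placeholder path
                maxRetries: 1,
                timeoutSeconds: 3600,
            },
            {
                taskKey: "transform",
                dependsOns: [{ taskKey: "ingest" }],
                existingClusterId: "1234-567890-abcde123",  // placeholder cluster ID
                notebookTask: { notebookPath: "/Repos/demo/transform" },
                emailNotifications: { onFailures: ["oncall@example.com"] },
            },
        ],
    });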

    JobTaskDependsOn, JobTaskDependsOnArgs

    TaskKey string
    TaskKey string
    taskKey String
    taskKey string
    taskKey String

    JobTaskEmailNotifications, JobTaskEmailNotificationsArgs

    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs
    OnFailures List<string>
    (List) list of emails to notify on failure
    OnStarts List<string>
    (List) list of emails to notify when the run starts
    OnSuccesses List<string>
    (List) list of emails to notify when the run completes successfully
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs
    OnFailures []string
    (List) list of emails to notify on failure
    OnStarts []string
    (List) list of emails to notify when the run starts
    OnSuccesses []string
    (List) list of emails to notify when the run completes successfully
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs
    onFailures List<String>
    (List) list of emails to notify on failure
    onStarts List<String>
    (List) list of emails to notify when the run starts
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully
    noAlertForSkippedRuns boolean
    (Bool) don't send alert for skipped runs
    onFailures string[]
    (List) list of emails to notify on failure
    onStarts string[]
    (List) list of emails to notify when the run starts
    onSuccesses string[]
    (List) list of emails to notify when the run completes successfully
    no_alert_for_skipped_runs bool
    (Bool) don't send alert for skipped runs
    on_failures Sequence[str]
    (List) list of emails to notify on failure
    on_starts Sequence[str]
    (List) list of emails to notify when the run starts
    on_successes Sequence[str]
    (List) list of emails to notify when the run completes successfully
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs
    onFailures List<String>
    (List) list of emails to notify on failure
    onStarts List<String>
    (List) list of emails to notify when the run starts
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully
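
    A minimal TypeScript sketch of the task-level email notification block; the addresses and the cluster ID are placeholders.

    import * as databricks from "@pulumi/databricks";

    const notified = new databricks.Job("notified", {
        tasks: [{
            taskKey: "main",
            existingClusterId: "1234-567890-abcde123",            // placeholder cluster ID
            notebookTask: { notebookPath: "/Repos/demo/main" },   // placeholder path
            emailNotifications: {
                onStarts: ["team@example.com"],
                onSuccesses: ["team@example.com"],
                onFailures: ["oncall@example.com"],
                noAlertForSkippedRuns: true,  // skipped runs will not trigger alerts
            },
        }],
    });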

    JobTaskLibrary, JobTaskLibraryArgs

    JobTaskLibraryCran, JobTaskLibraryCranArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String

    JobTaskLibraryMaven, JobTaskLibraryMavenArgs

    Coordinates string
    Exclusions List<string>
    Repo string
    Coordinates string
    Exclusions []string
    Repo string
    coordinates String
    exclusions List<String>
    repo String
    coordinates string
    exclusions string[]
    repo string
    coordinates str
    exclusions Sequence[str]
    repo str
    coordinates String
    exclusions List<String>
    repo String

    JobTaskLibraryPypi, JobTaskLibraryPypiArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String
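
    The sketch below (TypeScript, with placeholder package coordinates, repos, and paths) attaches PyPI, Maven, and CRAN libraries to a task's cluster; the cran/maven/pypi wrapper fields on the library block are assumed to mirror the databricks.Cluster library block.

    import * as databricks from "@pulumi/databricks";

    const withLibs = new databricks.Job("with-libs", {
        tasks: [{
            taskKey: "analyze",
            newCluster: { numWorkers: 1, sparkVersion: "10.4.x-scala2.12", nodeTypeId: "i3.xlarge" },
            notebookTask: { notebookPath: "/Repos/demo/analyze" },
            libraries: [
                { pypi: { package: "scikit-learn==1.0.2" } },
                { maven: {
                    coordinates: "com.amazon.deequ:deequ:1.2.2-spark-3.0",
                    exclusions: ["org.apache.spark:spark-core_2.12"],
                } },
                { cran: { package: "data.table", repo: "https://cloud.r-project.org" } },
            ],
        }],
    });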

    JobTaskNewCluster, JobTaskNewClusterArgs

    JobTaskNewClusterAutoscale, JobTaskNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number
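
    For example, a task cluster that autoscales between one and eight workers might be declared as follows; the runtime version and node type are placeholders, and the autoscale field name is assumed to mirror the databricks.Cluster resource.

    import * as databricks from "@pulumi/databricks";

    const autoscaled = new databricks.Job("autoscaled", {
        tasks: [{
            taskKey: "main",
            newCluster: {
                sparkVersion: "10.4.x-scala2.12",  // placeholder runtime version
                nodeTypeId: "i3.xlarge",           // placeholder node type
                autoscale: { minWorkers: 1, maxWorkers: 8 },
            },
            notebookTask: { notebookPath: "/Repos/demo/main" },
        }],
    });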

    JobTaskNewClusterAwsAttributes, JobTaskNewClusterAwsAttributesArgs

    JobTaskNewClusterAzureAttributes, JobTaskNewClusterAzureAttributesArgs

    JobTaskNewClusterClusterLogConf, JobTaskNewClusterClusterLogConfArgs

    JobTaskNewClusterClusterLogConfDbfs, JobTaskNewClusterClusterLogConfDbfsArgs

    JobTaskNewClusterClusterLogConfS3, JobTaskNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
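
    A hedged sketch of shipping task-cluster logs to S3; the bucket, region, and ACL below are placeholders, the clusterLogConf/s3 wrapper fields are assumed to mirror the databricks.Cluster resource, and the cluster must separately have permission to write to the bucket.

    import * as databricks from "@pulumi/databricks";

    const logged = new databricks.Job("logged", {
        tasks: [{
            taskKey: "main",
            newCluster: {
                numWorkers: 2,
                sparkVersion: "10.4.x-scala2.12",
                nodeTypeId: "i3.xlarge",
                clusterLogConf: {
                    s3: {
                        destination: "s3://my-bucket/databricks/logs",  // placeholder bucket
                        region: "us-east-1",
                        cannedAcl: "bucket-owner-full-control",
                        enableEncryption: true,
                    },
                },
            },
            notebookTask: { notebookPath: "/Repos/demo/main" },
        }],
    });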

    JobTaskNewClusterDockerImage, JobTaskNewClusterDockerImageArgs

    Url string
    URL of the Docker image
    BasicAuth JobTaskNewClusterDockerImageBasicAuth
    Url string
    URL of the Docker image
    BasicAuth JobTaskNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image
    basicAuth JobTaskNewClusterDockerImageBasicAuth
    url string
    URL of the Docker image
    basicAuth JobTaskNewClusterDockerImageBasicAuth
    url str
    URL of the Docker image
    basic_auth JobTaskNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image
    basicAuth Property Map

    JobTaskNewClusterDockerImageBasicAuth, JobTaskNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String
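
    A sketch of running the task cluster from a custom container image; the registry URL and script path are placeholders, the dockerImage wrapper field is assumed to mirror the databricks.Cluster resource, and the credentials are read from Pulumi config so they are not hard-coded.

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const cfg = new pulumi.Config();

    const containerized = new databricks.Job("containerized", {
        tasks: [{
            taskKey: "main",
            newCluster: {
                numWorkers: 1,
                sparkVersion: "10.4.x-scala2.12",
                nodeTypeId: "i3.xlarge",
                dockerImage: {
                    url: "myregistry.example.com/runtime:latest",     // placeholder image URL
                    basicAuth: {
                        username: cfg.require("registryUser"),        // hypothetical config keys
                        password: cfg.requireSecret("registryPassword"),
                    },
                },
            },
            sparkPythonTask: { pythonFile: "dbfs:/scripts/main.py" },  // placeholder path
        }],
    });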

    JobTaskNewClusterGcpAttributes, JobTaskNewClusterGcpAttributesArgs

    JobTaskNewClusterInitScript, JobTaskNewClusterInitScriptArgs

    JobTaskNewClusterInitScriptDbfs, JobTaskNewClusterInitScriptDbfsArgs

    JobTaskNewClusterInitScriptFile, JobTaskNewClusterInitScriptFileArgs

    JobTaskNewClusterInitScriptS3, JobTaskNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
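
    For example, init scripts can be pulled from DBFS and S3 when the task cluster starts; the script paths are placeholders, and the initScripts/dbfs wrapper fields are assumed to mirror the databricks.Cluster resource.

    import * as databricks from "@pulumi/databricks";

    const withInit = new databricks.Job("with-init", {
        tasks: [{
            taskKey: "main",
            newCluster: {
                numWorkers: 1,
                sparkVersion: "10.4.x-scala2.12",
                nodeTypeId: "i3.xlarge",
                initScripts: [
                    { dbfs: { destination: "dbfs:/databricks/init/install-deps.sh" } },  // placeholder
                    { s3: { destination: "s3://my-bucket/init/configure.sh", region: "us-east-1" } },
                ],
            },
            notebookTask: { notebookPath: "/Repos/demo/main" },
        }],
    });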

    JobTaskNotebookTask, JobTaskNotebookTaskArgs

    NotebookPath string
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    BaseParameters Dictionary<string, object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    NotebookPath string
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    BaseParameters map[string]interface{}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebookPath String
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    baseParameters Map<String,Object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebookPath string
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    baseParameters {[key: string]: any}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebook_path str
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    base_parameters Mapping[str, Any]
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    notebookPath String
    The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
    baseParameters Map<Any>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
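
    A short TypeScript sketch of a notebook task with base parameters; inside the notebook each value is read with dbutils.widgets.get, and a run-now call that passes the same key overrides the value set here. The cluster ID, path, and parameter values are placeholders.

    import * as databricks from "@pulumi/databricks";

    const report = new databricks.Job("notebook-job", {
        tasks: [{
            taskKey: "report",
            existingClusterId: "1234-567890-abcde123",     // placeholder cluster ID
            notebookTask: {
                notebookPath: "/Repos/demo/daily-report",  // must begin with a slash
                baseParameters: { run_date: "2022-01-01", env: "staging" },
            },
        }],
    });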

    JobTaskPipelineTask, JobTaskPipelineTaskArgs

    PipelineId string
    The pipeline's unique ID.
    PipelineId string
    The pipeline's unique ID.
    pipelineId String
    The pipeline's unique ID.
    pipelineId string
    The pipeline's unique ID.
    pipeline_id str
    The pipeline's unique ID.
    pipelineId String
    The pipeline's unique ID.
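
    The sketch below assumes the provider's databricks.Pipeline resource and wires its ID into a pipeline task; the notebook path is a placeholder.

    import * as databricks from "@pulumi/databricks";

    // Hypothetical Delta Live Tables pipeline defined by a notebook.
    const pipeline = new databricks.Pipeline("dlt", {
        libraries: [{ notebook: { path: "/Repos/demo/dlt-notebook" } }],  // placeholder path
    });

    // Job task that triggers a refresh of that pipeline.
    const refresh = new databricks.Job("refresh-pipeline", {
        tasks: [{
            taskKey: "refresh",
            pipelineTask: { pipelineId: pipeline.id },
        }],
    });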

    JobTaskPythonWheelTask, JobTaskPythonWheelTaskArgs

    EntryPoint string
    Python function as entry point for the task
    NamedParameters Dictionary<string, object>
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters List<string>
    Parameters for the task
    EntryPoint string
    Python function as entry point for the task
    NamedParameters map[string]interface{}
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters []string
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<String,Object>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
    entryPoint string
    Python function as entry point for the task
    namedParameters {[key: string]: any}
    Named parameters for the task
    packageName string
    Name of Python package
    parameters string[]
    Parameters for the task
    entry_point str
    Python function as entry point for the task
    named_parameters Mapping[str, Any]
    Named parameters for the task
    package_name str
    Name of Python package
    parameters Sequence[str]
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<Any>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
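
    A sketch of a Python wheel task; the wheel path, package name, entry point, and parameter values are placeholders, and the wheel is assumed to be attached through the task's libraries block (whl field assumed to mirror the databricks.Cluster library block).

    import * as databricks from "@pulumi/databricks";

    const wheelJob = new databricks.Job("wheel-job", {
        tasks: [{
            taskKey: "score",
            newCluster: { numWorkers: 1, sparkVersion: "10.4.x-scala2.12", nodeTypeId: "i3.xlarge" },
            libraries: [{ whl: "dbfs:/wheels/my_pkg-0.1.0-py3-none-any.whl" }],  // assumed `whl` field
            pythonWheelTask: {
                packageName: "my_pkg",    // placeholder package
                entryPoint: "main",       // placeholder entry point function
                namedParameters: { env: "staging", "max-rows": "1000" },
            },
        }],
    });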

    JobTaskSparkJarTask, JobTaskSparkJarTaskArgs

    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters List<string>
    Parameters for the task
    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters []string
    Parameters for the task
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    Parameters for the task
    jarUri string
    mainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters string[]
    Parameters for the task
    jar_uri str
    main_class_name str
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters Sequence[str]
    Parameters for the task
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    Parameters for the task
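
    A sketch of a JAR task; the JAR path, class name, and parameters are placeholders, and the JAR is attached through the task's libraries as the description above requires (jar library field assumed to mirror the databricks.Cluster library block).

    import * as databricks from "@pulumi/databricks";

    const jarJob = new databricks.Job("jar-job", {
        tasks: [{
            taskKey: "aggregate",
            newCluster: { numWorkers: 2, sparkVersion: "10.4.x-scala2.12", nodeTypeId: "i3.xlarge" },
            libraries: [{ jar: "dbfs:/jars/aggregations-assembly-0.1.0.jar" }],  // assumed `jar` field
            sparkJarTask: {
                mainClassName: "com.example.Aggregate",  // should call SparkContext.getOrCreate
                parameters: ["--date", "2022-01-01"],
            },
        }],
    });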

    JobTaskSparkPythonTask, JobTaskSparkPythonTaskArgs

    PythonFile string
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    Parameters List<string>
    Parameters for the task
    PythonFile string
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    Parameters []string
    Parameters for the task
    pythonFile String
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters List<String>
    Parameters for the task
    pythonFile string
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters string[]
    Parameters for the task
    python_file str
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters Sequence[str]
    Parameters for the task
    pythonFile String
    The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
    parameters List<String>
    Parameters for the task
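
    A sketch of a Spark Python task pointing at a DBFS script; the file URI and parameters are placeholders.

    import * as databricks from "@pulumi/databricks";

    const pyJob = new databricks.Job("py-job", {
        tasks: [{
            taskKey: "clean",
            newCluster: { numWorkers: 1, sparkVersion: "10.4.x-scala2.12", nodeTypeId: "i3.xlarge" },
            sparkPythonTask: {
                pythonFile: "dbfs:/scripts/clean.py",  // placeholder DBFS URI
                parameters: ["--input", "dbfs:/raw", "--output", "dbfs:/clean"],
            },
        }],
    });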

    JobTaskSparkSubmitTask, JobTaskSparkSubmitTaskArgs

    Parameters List<string>
    Parameters for the task
    Parameters []string
    Parameters for the task
    parameters List<String>
    Parameters for the task
    parameters string[]
    Parameters for the task
    parameters Sequence[str]
    Parameters for the task
    parameters List<String>
    Parameters for the task
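
    A sketch of a spark-submit style task; every flag and path in the parameters list is a placeholder.

    import * as databricks from "@pulumi/databricks";

    const submitJob = new databricks.Job("submit-job", {
        tasks: [{
            taskKey: "pi",
            newCluster: { numWorkers: 2, sparkVersion: "10.4.x-scala2.12", nodeTypeId: "i3.xlarge" },
            sparkSubmitTask: {
                parameters: [
                    "--class", "org.apache.spark.examples.SparkPi",
                    "dbfs:/jars/spark-examples.jar",  // placeholder JAR path
                    "10",
                ],
            },
        }],
    });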

    Package Details

    Repository
    databricks pulumi/pulumi-databricks
    License
    Apache-2.0
    Notes
    This Pulumi package is based on the databricks Terraform Provider.