databricks.Job

Databricks v1.35.0 published on Friday, Mar 29, 2024 by Pulumi

    Import

    The job resource can be imported using the ID of the job:

    bash

    $ pulumi import databricks:index/job:Job this <job-id>
    

    Create Job Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new Job(name: string, args?: JobArgs, opts?: CustomResourceOptions);
    @overload
    def Job(resource_name: str,
            args: Optional[JobArgs] = None,
            opts: Optional[ResourceOptions] = None)
    
    @overload
    def Job(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            always_running: Optional[bool] = None,
            computes: Optional[Sequence[JobComputeArgs]] = None,
            continuous: Optional[JobContinuousArgs] = None,
            control_run_state: Optional[bool] = None,
            dbt_task: Optional[JobDbtTaskArgs] = None,
            deployment: Optional[JobDeploymentArgs] = None,
            description: Optional[str] = None,
            edit_mode: Optional[str] = None,
            email_notifications: Optional[JobEmailNotificationsArgs] = None,
            existing_cluster_id: Optional[str] = None,
            format: Optional[str] = None,
            git_source: Optional[JobGitSourceArgs] = None,
            health: Optional[JobHealthArgs] = None,
            job_clusters: Optional[Sequence[JobJobClusterArgs]] = None,
            libraries: Optional[Sequence[JobLibraryArgs]] = None,
            max_concurrent_runs: Optional[int] = None,
            max_retries: Optional[int] = None,
            min_retry_interval_millis: Optional[int] = None,
            name: Optional[str] = None,
            new_cluster: Optional[JobNewClusterArgs] = None,
            notebook_task: Optional[JobNotebookTaskArgs] = None,
            notification_settings: Optional[JobNotificationSettingsArgs] = None,
            parameters: Optional[Sequence[JobParameterArgs]] = None,
            pipeline_task: Optional[JobPipelineTaskArgs] = None,
            python_wheel_task: Optional[JobPythonWheelTaskArgs] = None,
            queue: Optional[JobQueueArgs] = None,
            retry_on_timeout: Optional[bool] = None,
            run_as: Optional[JobRunAsArgs] = None,
            run_job_task: Optional[JobRunJobTaskArgs] = None,
            schedule: Optional[JobScheduleArgs] = None,
            spark_jar_task: Optional[JobSparkJarTaskArgs] = None,
            spark_python_task: Optional[JobSparkPythonTaskArgs] = None,
            spark_submit_task: Optional[JobSparkSubmitTaskArgs] = None,
            tags: Optional[Mapping[str, Any]] = None,
            tasks: Optional[Sequence[JobTaskArgs]] = None,
            timeout_seconds: Optional[int] = None,
            trigger: Optional[JobTriggerArgs] = None,
            webhook_notifications: Optional[JobWebhookNotificationsArgs] = None)
    func NewJob(ctx *Context, name string, args *JobArgs, opts ...ResourceOption) (*Job, error)
    public Job(string name, JobArgs? args = null, CustomResourceOptions? opts = null)
    public Job(String name, JobArgs args)
    public Job(String name, JobArgs args, CustomResourceOptions options)
    
    type: databricks:Job
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    TypeScript
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.

    Python
    resource_name str
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.

    Go
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.

    C#
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.

    Java
    name String
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
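
    The reference example below exercises every input with placeholder values. For orientation, here is a minimal sketch in Python of a single-task notebook job; the notebook path, node type, Spark runtime version, and cron schedule are illustrative assumptions, not provider defaults, and should be adapted to your workspace.

    python

    import pulumi_databricks as databricks

    # Minimal sketch: one notebook task on a small job cluster.
    # All literal values below are illustrative assumptions.
    etl_job = databricks.Job(
        "etl-job",
        name="Nightly ETL",
        max_concurrent_runs=1,
        tasks=[
            databricks.JobTaskArgs(
                task_key="run_notebook",
                notebook_task=databricks.JobTaskNotebookTaskArgs(
                    notebook_path="/Shared/etl",  # assumed notebook location
                ),
                new_cluster=databricks.JobTaskNewClusterArgs(
                    spark_version="14.3.x-scala2.12",  # assumed runtime version
                    node_type_id="i3.xlarge",          # assumed node type
                    num_workers=1,
                ),
            ),
        ],
        schedule=databricks.JobScheduleArgs(
            quartz_cron_expression="0 0 2 * * ?",  # 02:00 daily
            timezone_id="UTC",
        ),
    )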

    Example

    The following reference example uses placeholder values for all input properties.

    var jobResource = new Databricks.Job("jobResource", new()
    {
        Computes = new[]
        {
            new Databricks.Inputs.JobComputeArgs
            {
                ComputeKey = "string",
                Spec = new Databricks.Inputs.JobComputeSpecArgs
                {
                    Kind = "string",
                },
            },
        },
        Continuous = new Databricks.Inputs.JobContinuousArgs
        {
            PauseStatus = "string",
        },
        ControlRunState = false,
        Deployment = new Databricks.Inputs.JobDeploymentArgs
        {
            Kind = "string",
            MetadataFilePath = "string",
        },
        Description = "string",
        EditMode = "string",
        EmailNotifications = new Databricks.Inputs.JobEmailNotificationsArgs
        {
            NoAlertForSkippedRuns = false,
            OnDurationWarningThresholdExceededs = new[]
            {
                "string",
            },
            OnFailures = new[]
            {
                "string",
            },
            OnStarts = new[]
            {
                "string",
            },
            OnSuccesses = new[]
            {
                "string",
            },
        },
        ExistingClusterId = "string",
        Format = "string",
        GitSource = new Databricks.Inputs.JobGitSourceArgs
        {
            Url = "string",
            Branch = "string",
            Commit = "string",
            JobSource = new Databricks.Inputs.JobGitSourceJobSourceArgs
            {
                ImportFromGitBranch = "string",
                JobConfigPath = "string",
                DirtyState = "string",
            },
            Provider = "string",
            Tag = "string",
        },
        Health = new Databricks.Inputs.JobHealthArgs
        {
            Rules = new[]
            {
                new Databricks.Inputs.JobHealthRuleArgs
                {
                    Metric = "string",
                    Op = "string",
                    Value = 0,
                },
            },
        },
        JobClusters = new[]
        {
            new Databricks.Inputs.JobJobClusterArgs
            {
                JobClusterKey = "string",
                NewCluster = new Databricks.Inputs.JobJobClusterNewClusterArgs
                {
                    SparkVersion = "string",
                    EnableElasticDisk = false,
                    ClusterId = "string",
                    EnableLocalDiskEncryption = false,
                    AzureAttributes = new Databricks.Inputs.JobJobClusterNewClusterAzureAttributesArgs
                    {
                        Availability = "string",
                        FirstOnDemand = 0,
                        SpotBidMaxPrice = 0,
                    },
                    GcpAttributes = new Databricks.Inputs.JobJobClusterNewClusterGcpAttributesArgs
                    {
                        Availability = "string",
                        BootDiskSize = 0,
                        GoogleServiceAccount = "string",
                        LocalSsdCount = 0,
                        UsePreemptibleExecutors = false,
                        ZoneId = "string",
                    },
                    ClusterLogConf = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfArgs
                    {
                        Dbfs = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfDbfsArgs
                        {
                            Destination = "string",
                        },
                        S3 = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfS3Args
                        {
                            Destination = "string",
                            CannedAcl = "string",
                            EnableEncryption = false,
                            EncryptionType = "string",
                            Endpoint = "string",
                            KmsKey = "string",
                            Region = "string",
                        },
                    },
                    ClusterMountInfos = new[]
                    {
                        new Databricks.Inputs.JobJobClusterNewClusterClusterMountInfoArgs
                        {
                            LocalMountDirPath = "string",
                            NetworkFilesystemInfo = new Databricks.Inputs.JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfoArgs
                            {
                                ServerAddress = "string",
                                MountOptions = "string",
                            },
                            RemoteMountDirPath = "string",
                        },
                    },
                    ClusterName = "string",
                    CustomTags = 
                    {
                        { "string", "any" },
                    },
                    DataSecurityMode = "string",
                    DockerImage = new Databricks.Inputs.JobJobClusterNewClusterDockerImageArgs
                    {
                        Url = "string",
                        BasicAuth = new Databricks.Inputs.JobJobClusterNewClusterDockerImageBasicAuthArgs
                        {
                            Password = "string",
                            Username = "string",
                        },
                    },
                    IdempotencyToken = "string",
                    DriverNodeTypeId = "string",
                    ApplyPolicyDefaultValues = false,
                    AwsAttributes = new Databricks.Inputs.JobJobClusterNewClusterAwsAttributesArgs
                    {
                        Availability = "string",
                        EbsVolumeCount = 0,
                        EbsVolumeSize = 0,
                        EbsVolumeType = "string",
                        FirstOnDemand = 0,
                        InstanceProfileArn = "string",
                        SpotBidPricePercent = 0,
                        ZoneId = "string",
                    },
                    AutoterminationMinutes = 0,
                    DriverInstancePoolId = "string",
                    InitScripts = new[]
                    {
                        new Databricks.Inputs.JobJobClusterNewClusterInitScriptArgs
                        {
                            Abfss = new Databricks.Inputs.JobJobClusterNewClusterInitScriptAbfssArgs
                            {
                                Destination = "string",
                            },
                            File = new Databricks.Inputs.JobJobClusterNewClusterInitScriptFileArgs
                            {
                                Destination = "string",
                            },
                            Gcs = new Databricks.Inputs.JobJobClusterNewClusterInitScriptGcsArgs
                            {
                                Destination = "string",
                            },
                            S3 = new Databricks.Inputs.JobJobClusterNewClusterInitScriptS3Args
                            {
                                Destination = "string",
                                CannedAcl = "string",
                                EnableEncryption = false,
                                EncryptionType = "string",
                                Endpoint = "string",
                                KmsKey = "string",
                                Region = "string",
                            },
                            Volumes = new Databricks.Inputs.JobJobClusterNewClusterInitScriptVolumesArgs
                            {
                                Destination = "string",
                            },
                            Workspace = new Databricks.Inputs.JobJobClusterNewClusterInitScriptWorkspaceArgs
                            {
                                Destination = "string",
                            },
                        },
                    },
                    InstancePoolId = "string",
                    NodeTypeId = "string",
                    NumWorkers = 0,
                    PolicyId = "string",
                    RuntimeEngine = "string",
                    SingleUserName = "string",
                    SparkConf = 
                    {
                        { "string", "any" },
                    },
                    SparkEnvVars = 
                    {
                        { "string", "any" },
                    },
                    Autoscale = new Databricks.Inputs.JobJobClusterNewClusterAutoscaleArgs
                    {
                        MaxWorkers = 0,
                        MinWorkers = 0,
                    },
                    SshPublicKeys = new[]
                    {
                        "string",
                    },
                    WorkloadType = new Databricks.Inputs.JobJobClusterNewClusterWorkloadTypeArgs
                    {
                        Clients = new Databricks.Inputs.JobJobClusterNewClusterWorkloadTypeClientsArgs
                        {
                            Jobs = false,
                            Notebooks = false,
                        },
                    },
                },
            },
        },
        Libraries = new[]
        {
            new Databricks.Inputs.JobLibraryArgs
            {
                Cran = new Databricks.Inputs.JobLibraryCranArgs
                {
                    Package = "string",
                    Repo = "string",
                },
                Egg = "string",
                Jar = "string",
                Maven = new Databricks.Inputs.JobLibraryMavenArgs
                {
                    Coordinates = "string",
                    Exclusions = new[]
                    {
                        "string",
                    },
                    Repo = "string",
                },
                Pypi = new Databricks.Inputs.JobLibraryPypiArgs
                {
                    Package = "string",
                    Repo = "string",
                },
                Whl = "string",
            },
        },
        MaxConcurrentRuns = 0,
        Name = "string",
        NewCluster = new Databricks.Inputs.JobNewClusterArgs
        {
            SparkVersion = "string",
            EnableElasticDisk = false,
            ClusterId = "string",
            EnableLocalDiskEncryption = false,
            AzureAttributes = new Databricks.Inputs.JobNewClusterAzureAttributesArgs
            {
                Availability = "string",
                FirstOnDemand = 0,
                SpotBidMaxPrice = 0,
            },
            GcpAttributes = new Databricks.Inputs.JobNewClusterGcpAttributesArgs
            {
                Availability = "string",
                BootDiskSize = 0,
                GoogleServiceAccount = "string",
                LocalSsdCount = 0,
                UsePreemptibleExecutors = false,
                ZoneId = "string",
            },
            ClusterLogConf = new Databricks.Inputs.JobNewClusterClusterLogConfArgs
            {
                Dbfs = new Databricks.Inputs.JobNewClusterClusterLogConfDbfsArgs
                {
                    Destination = "string",
                },
                S3 = new Databricks.Inputs.JobNewClusterClusterLogConfS3Args
                {
                    Destination = "string",
                    CannedAcl = "string",
                    EnableEncryption = false,
                    EncryptionType = "string",
                    Endpoint = "string",
                    KmsKey = "string",
                    Region = "string",
                },
            },
            ClusterMountInfos = new[]
            {
                new Databricks.Inputs.JobNewClusterClusterMountInfoArgs
                {
                    LocalMountDirPath = "string",
                    NetworkFilesystemInfo = new Databricks.Inputs.JobNewClusterClusterMountInfoNetworkFilesystemInfoArgs
                    {
                        ServerAddress = "string",
                        MountOptions = "string",
                    },
                    RemoteMountDirPath = "string",
                },
            },
            ClusterName = "string",
            CustomTags = 
            {
                { "string", "any" },
            },
            DataSecurityMode = "string",
            DockerImage = new Databricks.Inputs.JobNewClusterDockerImageArgs
            {
                Url = "string",
                BasicAuth = new Databricks.Inputs.JobNewClusterDockerImageBasicAuthArgs
                {
                    Password = "string",
                    Username = "string",
                },
            },
            IdempotencyToken = "string",
            DriverNodeTypeId = "string",
            ApplyPolicyDefaultValues = false,
            AwsAttributes = new Databricks.Inputs.JobNewClusterAwsAttributesArgs
            {
                Availability = "string",
                EbsVolumeCount = 0,
                EbsVolumeSize = 0,
                EbsVolumeType = "string",
                FirstOnDemand = 0,
                InstanceProfileArn = "string",
                SpotBidPricePercent = 0,
                ZoneId = "string",
            },
            AutoterminationMinutes = 0,
            DriverInstancePoolId = "string",
            InitScripts = new[]
            {
                new Databricks.Inputs.JobNewClusterInitScriptArgs
                {
                    Abfss = new Databricks.Inputs.JobNewClusterInitScriptAbfssArgs
                    {
                        Destination = "string",
                    },
                    File = new Databricks.Inputs.JobNewClusterInitScriptFileArgs
                    {
                        Destination = "string",
                    },
                    Gcs = new Databricks.Inputs.JobNewClusterInitScriptGcsArgs
                    {
                        Destination = "string",
                    },
                    S3 = new Databricks.Inputs.JobNewClusterInitScriptS3Args
                    {
                        Destination = "string",
                        CannedAcl = "string",
                        EnableEncryption = false,
                        EncryptionType = "string",
                        Endpoint = "string",
                        KmsKey = "string",
                        Region = "string",
                    },
                    Volumes = new Databricks.Inputs.JobNewClusterInitScriptVolumesArgs
                    {
                        Destination = "string",
                    },
                    Workspace = new Databricks.Inputs.JobNewClusterInitScriptWorkspaceArgs
                    {
                        Destination = "string",
                    },
                },
            },
            InstancePoolId = "string",
            NodeTypeId = "string",
            NumWorkers = 0,
            PolicyId = "string",
            RuntimeEngine = "string",
            SingleUserName = "string",
            SparkConf = 
            {
                { "string", "any" },
            },
            SparkEnvVars = 
            {
                { "string", "any" },
            },
            Autoscale = new Databricks.Inputs.JobNewClusterAutoscaleArgs
            {
                MaxWorkers = 0,
                MinWorkers = 0,
            },
            SshPublicKeys = new[]
            {
                "string",
            },
            WorkloadType = new Databricks.Inputs.JobNewClusterWorkloadTypeArgs
            {
                Clients = new Databricks.Inputs.JobNewClusterWorkloadTypeClientsArgs
                {
                    Jobs = false,
                    Notebooks = false,
                },
            },
        },
        NotificationSettings = new Databricks.Inputs.JobNotificationSettingsArgs
        {
            NoAlertForCanceledRuns = false,
            NoAlertForSkippedRuns = false,
        },
        Parameters = new[]
        {
            new Databricks.Inputs.JobParameterArgs
            {
                Default = "string",
                Name = "string",
            },
        },
        Queue = new Databricks.Inputs.JobQueueArgs
        {
            Enabled = false,
        },
        RunAs = new Databricks.Inputs.JobRunAsArgs
        {
            ServicePrincipalName = "string",
            UserName = "string",
        },
        Schedule = new Databricks.Inputs.JobScheduleArgs
        {
            QuartzCronExpression = "string",
            TimezoneId = "string",
            PauseStatus = "string",
        },
        Tags = 
        {
            { "string", "any" },
        },
        Tasks = new[]
        {
            new Databricks.Inputs.JobTaskArgs
            {
                ComputeKey = "string",
                ConditionTask = new Databricks.Inputs.JobTaskConditionTaskArgs
                {
                    Left = "string",
                    Op = "string",
                    Right = "string",
                },
                DbtTask = new Databricks.Inputs.JobTaskDbtTaskArgs
                {
                    Commands = new[]
                    {
                        "string",
                    },
                    Catalog = "string",
                    ProfilesDirectory = "string",
                    ProjectDirectory = "string",
                    Schema = "string",
                    Source = "string",
                    WarehouseId = "string",
                },
                DependsOns = new[]
                {
                    new Databricks.Inputs.JobTaskDependsOnArgs
                    {
                        TaskKey = "string",
                        Outcome = "string",
                    },
                },
                Description = "string",
                EmailNotifications = new Databricks.Inputs.JobTaskEmailNotificationsArgs
                {
                    NoAlertForSkippedRuns = false,
                    OnDurationWarningThresholdExceededs = new[]
                    {
                        "string",
                    },
                    OnFailures = new[]
                    {
                        "string",
                    },
                    OnStarts = new[]
                    {
                        "string",
                    },
                    OnSuccesses = new[]
                    {
                        "string",
                    },
                },
                ExistingClusterId = "string",
                ForEachTask = new Databricks.Inputs.JobTaskForEachTaskArgs
                {
                    Inputs = "string",
                    Task = new Databricks.Inputs.JobTaskForEachTaskTaskArgs
                    {
                        ComputeKey = "string",
                        ConditionTask = new Databricks.Inputs.JobTaskForEachTaskTaskConditionTaskArgs
                        {
                            Left = "string",
                            Op = "string",
                            Right = "string",
                        },
                        DbtTask = new Databricks.Inputs.JobTaskForEachTaskTaskDbtTaskArgs
                        {
                            Commands = new[]
                            {
                                "string",
                            },
                            Catalog = "string",
                            ProfilesDirectory = "string",
                            ProjectDirectory = "string",
                            Schema = "string",
                            Source = "string",
                            WarehouseId = "string",
                        },
                        DependsOns = new[]
                        {
                            new Databricks.Inputs.JobTaskForEachTaskTaskDependsOnArgs
                            {
                                TaskKey = "string",
                                Outcome = "string",
                            },
                        },
                        Description = "string",
                        EmailNotifications = new Databricks.Inputs.JobTaskForEachTaskTaskEmailNotificationsArgs
                        {
                            NoAlertForSkippedRuns = false,
                            OnDurationWarningThresholdExceededs = new[]
                            {
                                "string",
                            },
                            OnFailures = new[]
                            {
                                "string",
                            },
                            OnStarts = new[]
                            {
                                "string",
                            },
                            OnSuccesses = new[]
                            {
                                "string",
                            },
                        },
                        ExistingClusterId = "string",
                        Health = new Databricks.Inputs.JobTaskForEachTaskTaskHealthArgs
                        {
                            Rules = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskHealthRuleArgs
                                {
                                    Metric = "string",
                                    Op = "string",
                                    Value = 0,
                                },
                            },
                        },
                        JobClusterKey = "string",
                        Libraries = new[]
                        {
                            new Databricks.Inputs.JobTaskForEachTaskTaskLibraryArgs
                            {
                                Cran = new Databricks.Inputs.JobTaskForEachTaskTaskLibraryCranArgs
                                {
                                    Package = "string",
                                    Repo = "string",
                                },
                                Egg = "string",
                                Jar = "string",
                                Maven = new Databricks.Inputs.JobTaskForEachTaskTaskLibraryMavenArgs
                                {
                                    Coordinates = "string",
                                    Exclusions = new[]
                                    {
                                        "string",
                                    },
                                    Repo = "string",
                                },
                                Pypi = new Databricks.Inputs.JobTaskForEachTaskTaskLibraryPypiArgs
                                {
                                    Package = "string",
                                    Repo = "string",
                                },
                                Whl = "string",
                            },
                        },
                        MaxRetries = 0,
                        MinRetryIntervalMillis = 0,
                        NewCluster = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterArgs
                        {
                            NumWorkers = 0,
                            SparkVersion = "string",
                            ClusterMountInfos = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterClusterMountInfoArgs
                                {
                                    LocalMountDirPath = "string",
                                    NetworkFilesystemInfo = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs
                                    {
                                        ServerAddress = "string",
                                        MountOptions = "string",
                                    },
                                    RemoteMountDirPath = "string",
                                },
                            },
                            Autoscale = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterAutoscaleArgs
                            {
                                MaxWorkers = 0,
                                MinWorkers = 0,
                            },
                            AzureAttributes = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterAzureAttributesArgs
                            {
                                Availability = "string",
                                FirstOnDemand = 0,
                                SpotBidMaxPrice = 0,
                            },
                            ClusterId = "string",
                            ClusterLogConf = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterClusterLogConfArgs
                            {
                                Dbfs = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterClusterLogConfDbfsArgs
                                {
                                    Destination = "string",
                                },
                                S3 = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterClusterLogConfS3Args
                                {
                                    Destination = "string",
                                    CannedAcl = "string",
                                    EnableEncryption = false,
                                    EncryptionType = "string",
                                    Endpoint = "string",
                                    KmsKey = "string",
                                    Region = "string",
                                },
                            },
                            ApplyPolicyDefaultValues = false,
                            ClusterName = "string",
                            CustomTags = 
                            {
                                { "string", "any" },
                            },
                            DataSecurityMode = "string",
                            DockerImage = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterDockerImageArgs
                            {
                                Url = "string",
                                BasicAuth = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterDockerImageBasicAuthArgs
                                {
                                    Password = "string",
                                    Username = "string",
                                },
                            },
                            DriverInstancePoolId = "string",
                            DriverNodeTypeId = "string",
                            WorkloadType = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterWorkloadTypeArgs
                            {
                                Clients = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterWorkloadTypeClientsArgs
                                {
                                    Jobs = false,
                                    Notebooks = false,
                                },
                            },
                            AwsAttributes = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterAwsAttributesArgs
                            {
                                Availability = "string",
                                EbsVolumeCount = 0,
                                EbsVolumeSize = 0,
                                EbsVolumeType = "string",
                                FirstOnDemand = 0,
                                InstanceProfileArn = "string",
                                SpotBidPricePercent = 0,
                                ZoneId = "string",
                            },
                            NodeTypeId = "string",
                            IdempotencyToken = "string",
                            InitScripts = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptArgs
                                {
                                    Abfss = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptAbfssArgs
                                    {
                                        Destination = "string",
                                    },
                                    Dbfs = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptDbfsArgs
                                    {
                                        Destination = "string",
                                    },
                                    File = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptFileArgs
                                    {
                                        Destination = "string",
                                    },
                                    Gcs = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptGcsArgs
                                    {
                                        Destination = "string",
                                    },
                                    S3 = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptS3Args
                                    {
                                        Destination = "string",
                                        CannedAcl = "string",
                                        EnableEncryption = false,
                                        EncryptionType = "string",
                                        Endpoint = "string",
                                        KmsKey = "string",
                                        Region = "string",
                                    },
                                    Volumes = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptVolumesArgs
                                    {
                                        Destination = "string",
                                    },
                                    Workspace = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterInitScriptWorkspaceArgs
                                    {
                                        Destination = "string",
                                    },
                                },
                            },
                            InstancePoolId = "string",
                            GcpAttributes = new Databricks.Inputs.JobTaskForEachTaskTaskNewClusterGcpAttributesArgs
                            {
                                Availability = "string",
                                BootDiskSize = 0,
                                GoogleServiceAccount = "string",
                                LocalSsdCount = 0,
                                UsePreemptibleExecutors = false,
                                ZoneId = "string",
                            },
                            AutoterminationMinutes = 0,
                            PolicyId = "string",
                            RuntimeEngine = "string",
                            SingleUserName = "string",
                            SparkConf = 
                            {
                                { "string", "any" },
                            },
                            SparkEnvVars = 
                            {
                                { "string", "any" },
                            },
                            EnableLocalDiskEncryption = false,
                            SshPublicKeys = new[]
                            {
                                "string",
                            },
                            EnableElasticDisk = false,
                        },
                        NotebookTask = new Databricks.Inputs.JobTaskForEachTaskTaskNotebookTaskArgs
                        {
                            NotebookPath = "string",
                            BaseParameters = 
                            {
                                { "string", "any" },
                            },
                            Source = "string",
                        },
                        NotificationSettings = new Databricks.Inputs.JobTaskForEachTaskTaskNotificationSettingsArgs
                        {
                            AlertOnLastAttempt = false,
                            NoAlertForCanceledRuns = false,
                            NoAlertForSkippedRuns = false,
                        },
                        PipelineTask = new Databricks.Inputs.JobTaskForEachTaskTaskPipelineTaskArgs
                        {
                            PipelineId = "string",
                            FullRefresh = false,
                        },
                        PythonWheelTask = new Databricks.Inputs.JobTaskForEachTaskTaskPythonWheelTaskArgs
                        {
                            EntryPoint = "string",
                            NamedParameters = 
                            {
                                { "string", "any" },
                            },
                            PackageName = "string",
                            Parameters = new[]
                            {
                                "string",
                            },
                        },
                        RetryOnTimeout = false,
                        RunIf = "string",
                        RunJobTask = new Databricks.Inputs.JobTaskForEachTaskTaskRunJobTaskArgs
                        {
                            JobId = 0,
                            JobParameters = 
                            {
                                { "string", "any" },
                            },
                        },
                        SparkJarTask = new Databricks.Inputs.JobTaskForEachTaskTaskSparkJarTaskArgs
                        {
                            JarUri = "string",
                            MainClassName = "string",
                            Parameters = new[]
                            {
                                "string",
                            },
                        },
                        SparkPythonTask = new Databricks.Inputs.JobTaskForEachTaskTaskSparkPythonTaskArgs
                        {
                            PythonFile = "string",
                            Parameters = new[]
                            {
                                "string",
                            },
                            Source = "string",
                        },
                        SparkSubmitTask = new Databricks.Inputs.JobTaskForEachTaskTaskSparkSubmitTaskArgs
                        {
                            Parameters = new[]
                            {
                                "string",
                            },
                        },
                        SqlTask = new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskArgs
                        {
                            Alert = new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskAlertArgs
                            {
                                AlertId = "string",
                                Subscriptions = new[]
                                {
                                    new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskAlertSubscriptionArgs
                                    {
                                        DestinationId = "string",
                                        UserName = "string",
                                    },
                                },
                                PauseSubscriptions = false,
                            },
                            Dashboard = new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskDashboardArgs
                            {
                                DashboardId = "string",
                                CustomSubject = "string",
                                PauseSubscriptions = false,
                                Subscriptions = new[]
                                {
                                    new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskDashboardSubscriptionArgs
                                    {
                                        DestinationId = "string",
                                        UserName = "string",
                                    },
                                },
                            },
                            File = new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskFileArgs
                            {
                                Path = "string",
                                Source = "string",
                            },
                            Parameters = 
                            {
                                { "string", "any" },
                            },
                            Query = new Databricks.Inputs.JobTaskForEachTaskTaskSqlTaskQueryArgs
                            {
                                QueryId = "string",
                            },
                            WarehouseId = "string",
                        },
                        TaskKey = "string",
                        TimeoutSeconds = 0,
                        WebhookNotifications = new Databricks.Inputs.JobTaskForEachTaskTaskWebhookNotificationsArgs
                        {
                            OnDurationWarningThresholdExceededs = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs
                                {
                                    Id = "string",
                                },
                            },
                            OnFailures = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskWebhookNotificationsOnFailureArgs
                                {
                                    Id = "string",
                                },
                            },
                            OnStarts = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskWebhookNotificationsOnStartArgs
                                {
                                    Id = "string",
                                },
                            },
                            OnSuccesses = new[]
                            {
                                new Databricks.Inputs.JobTaskForEachTaskTaskWebhookNotificationsOnSuccessArgs
                                {
                                    Id = "string",
                                },
                            },
                        },
                    },
                    Concurrency = 0,
                },
                Health = new Databricks.Inputs.JobTaskHealthArgs
                {
                    Rules = new[]
                    {
                        new Databricks.Inputs.JobTaskHealthRuleArgs
                        {
                            Metric = "string",
                            Op = "string",
                            Value = 0,
                        },
                    },
                },
                JobClusterKey = "string",
                Libraries = new[]
                {
                    new Databricks.Inputs.JobTaskLibraryArgs
                    {
                        Cran = new Databricks.Inputs.JobTaskLibraryCranArgs
                        {
                            Package = "string",
                            Repo = "string",
                        },
                        Egg = "string",
                        Jar = "string",
                        Maven = new Databricks.Inputs.JobTaskLibraryMavenArgs
                        {
                            Coordinates = "string",
                            Exclusions = new[]
                            {
                                "string",
                            },
                            Repo = "string",
                        },
                        Pypi = new Databricks.Inputs.JobTaskLibraryPypiArgs
                        {
                            Package = "string",
                            Repo = "string",
                        },
                        Whl = "string",
                    },
                },
                MaxRetries = 0,
                MinRetryIntervalMillis = 0,
                NewCluster = new Databricks.Inputs.JobTaskNewClusterArgs
                {
                    SparkVersion = "string",
                    EnableElasticDisk = false,
                    ClusterId = "string",
                    EnableLocalDiskEncryption = false,
                    AzureAttributes = new Databricks.Inputs.JobTaskNewClusterAzureAttributesArgs
                    {
                        Availability = "string",
                        FirstOnDemand = 0,
                        SpotBidMaxPrice = 0,
                    },
                    GcpAttributes = new Databricks.Inputs.JobTaskNewClusterGcpAttributesArgs
                    {
                        Availability = "string",
                        BootDiskSize = 0,
                        GoogleServiceAccount = "string",
                        LocalSsdCount = 0,
                        UsePreemptibleExecutors = false,
                        ZoneId = "string",
                    },
                    ClusterLogConf = new Databricks.Inputs.JobTaskNewClusterClusterLogConfArgs
                    {
                        Dbfs = new Databricks.Inputs.JobTaskNewClusterClusterLogConfDbfsArgs
                        {
                            Destination = "string",
                        },
                        S3 = new Databricks.Inputs.JobTaskNewClusterClusterLogConfS3Args
                        {
                            Destination = "string",
                            CannedAcl = "string",
                            EnableEncryption = false,
                            EncryptionType = "string",
                            Endpoint = "string",
                            KmsKey = "string",
                            Region = "string",
                        },
                    },
                    ClusterMountInfos = new[]
                    {
                        new Databricks.Inputs.JobTaskNewClusterClusterMountInfoArgs
                        {
                            LocalMountDirPath = "string",
                            NetworkFilesystemInfo = new Databricks.Inputs.JobTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs
                            {
                                ServerAddress = "string",
                                MountOptions = "string",
                            },
                            RemoteMountDirPath = "string",
                        },
                    },
                    ClusterName = "string",
                    CustomTags = 
                    {
                        { "string", "any" },
                    },
                    DataSecurityMode = "string",
                    DockerImage = new Databricks.Inputs.JobTaskNewClusterDockerImageArgs
                    {
                        Url = "string",
                        BasicAuth = new Databricks.Inputs.JobTaskNewClusterDockerImageBasicAuthArgs
                        {
                            Password = "string",
                            Username = "string",
                        },
                    },
                    IdempotencyToken = "string",
                    DriverNodeTypeId = "string",
                    ApplyPolicyDefaultValues = false,
                    AwsAttributes = new Databricks.Inputs.JobTaskNewClusterAwsAttributesArgs
                    {
                        Availability = "string",
                        EbsVolumeCount = 0,
                        EbsVolumeSize = 0,
                        EbsVolumeType = "string",
                        FirstOnDemand = 0,
                        InstanceProfileArn = "string",
                        SpotBidPricePercent = 0,
                        ZoneId = "string",
                    },
                    AutoterminationMinutes = 0,
                    DriverInstancePoolId = "string",
                    InitScripts = new[]
                    {
                        new Databricks.Inputs.JobTaskNewClusterInitScriptArgs
                        {
                            Abfss = new Databricks.Inputs.JobTaskNewClusterInitScriptAbfssArgs
                            {
                                Destination = "string",
                            },
                            File = new Databricks.Inputs.JobTaskNewClusterInitScriptFileArgs
                            {
                                Destination = "string",
                            },
                            Gcs = new Databricks.Inputs.JobTaskNewClusterInitScriptGcsArgs
                            {
                                Destination = "string",
                            },
                            S3 = new Databricks.Inputs.JobTaskNewClusterInitScriptS3Args
                            {
                                Destination = "string",
                                CannedAcl = "string",
                                EnableEncryption = false,
                                EncryptionType = "string",
                                Endpoint = "string",
                                KmsKey = "string",
                                Region = "string",
                            },
                            Volumes = new Databricks.Inputs.JobTaskNewClusterInitScriptVolumesArgs
                            {
                                Destination = "string",
                            },
                            Workspace = new Databricks.Inputs.JobTaskNewClusterInitScriptWorkspaceArgs
                            {
                                Destination = "string",
                            },
                        },
                    },
                    InstancePoolId = "string",
                    NodeTypeId = "string",
                    NumWorkers = 0,
                    PolicyId = "string",
                    RuntimeEngine = "string",
                    SingleUserName = "string",
                    SparkConf = 
                    {
                        { "string", "any" },
                    },
                    SparkEnvVars = 
                    {
                        { "string", "any" },
                    },
                    Autoscale = new Databricks.Inputs.JobTaskNewClusterAutoscaleArgs
                    {
                        MaxWorkers = 0,
                        MinWorkers = 0,
                    },
                    SshPublicKeys = new[]
                    {
                        "string",
                    },
                    WorkloadType = new Databricks.Inputs.JobTaskNewClusterWorkloadTypeArgs
                    {
                        Clients = new Databricks.Inputs.JobTaskNewClusterWorkloadTypeClientsArgs
                        {
                            Jobs = false,
                            Notebooks = false,
                        },
                    },
                },
                NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs
                {
                    NotebookPath = "string",
                    BaseParameters = 
                    {
                        { "string", "any" },
                    },
                    Source = "string",
                },
                NotificationSettings = new Databricks.Inputs.JobTaskNotificationSettingsArgs
                {
                    AlertOnLastAttempt = false,
                    NoAlertForCanceledRuns = false,
                    NoAlertForSkippedRuns = false,
                },
                PipelineTask = new Databricks.Inputs.JobTaskPipelineTaskArgs
                {
                    PipelineId = "string",
                    FullRefresh = false,
                },
                PythonWheelTask = new Databricks.Inputs.JobTaskPythonWheelTaskArgs
                {
                    EntryPoint = "string",
                    NamedParameters = 
                    {
                        { "string", "any" },
                    },
                    PackageName = "string",
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                RetryOnTimeout = false,
                RunIf = "string",
                RunJobTask = new Databricks.Inputs.JobTaskRunJobTaskArgs
                {
                    JobId = 0,
                    JobParameters = 
                    {
                        { "string", "any" },
                    },
                },
                SparkJarTask = new Databricks.Inputs.JobTaskSparkJarTaskArgs
                {
                    JarUri = "string",
                    MainClassName = "string",
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                SparkPythonTask = new Databricks.Inputs.JobTaskSparkPythonTaskArgs
                {
                    PythonFile = "string",
                    Parameters = new[]
                    {
                        "string",
                    },
                    Source = "string",
                },
                SparkSubmitTask = new Databricks.Inputs.JobTaskSparkSubmitTaskArgs
                {
                    Parameters = new[]
                    {
                        "string",
                    },
                },
                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                {
                    Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                    {
                        AlertId = "string",
                        Subscriptions = new[]
                        {
                            new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                            {
                                DestinationId = "string",
                                UserName = "string",
                            },
                        },
                        PauseSubscriptions = false,
                    },
                    Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                    {
                        DashboardId = "string",
                        CustomSubject = "string",
                        PauseSubscriptions = false,
                        Subscriptions = new[]
                        {
                            new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                            {
                                DestinationId = "string",
                                UserName = "string",
                            },
                        },
                    },
                    File = new Databricks.Inputs.JobTaskSqlTaskFileArgs
                    {
                        Path = "string",
                        Source = "string",
                    },
                    Parameters = 
                    {
                        { "string", "any" },
                    },
                    Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                    {
                        QueryId = "string",
                    },
                    WarehouseId = "string",
                },
                TaskKey = "string",
                TimeoutSeconds = 0,
                WebhookNotifications = new Databricks.Inputs.JobTaskWebhookNotificationsArgs
                {
                    OnDurationWarningThresholdExceededs = new[]
                    {
                        new Databricks.Inputs.JobTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs
                        {
                            Id = "string",
                        },
                    },
                    OnFailures = new[]
                    {
                        new Databricks.Inputs.JobTaskWebhookNotificationsOnFailureArgs
                        {
                            Id = "string",
                        },
                    },
                    OnStarts = new[]
                    {
                        new Databricks.Inputs.JobTaskWebhookNotificationsOnStartArgs
                        {
                            Id = "string",
                        },
                    },
                    OnSuccesses = new[]
                    {
                        new Databricks.Inputs.JobTaskWebhookNotificationsOnSuccessArgs
                        {
                            Id = "string",
                        },
                    },
                },
            },
        },
        TimeoutSeconds = 0,
        Trigger = new Databricks.Inputs.JobTriggerArgs
        {
            FileArrival = new Databricks.Inputs.JobTriggerFileArrivalArgs
            {
                Url = "string",
                MinTimeBetweenTriggersSeconds = 0,
                WaitAfterLastChangeSeconds = 0,
            },
            PauseStatus = "string",
            TableUpdate = new Databricks.Inputs.JobTriggerTableUpdateArgs
            {
                TableNames = new[]
                {
                    "string",
                },
                Condition = "string",
                MinTimeBetweenTriggersSeconds = 0,
                WaitAfterLastChangeSeconds = 0,
            },
        },
        WebhookNotifications = new Databricks.Inputs.JobWebhookNotificationsArgs
        {
            OnDurationWarningThresholdExceededs = new[]
            {
                new Databricks.Inputs.JobWebhookNotificationsOnDurationWarningThresholdExceededArgs
                {
                    Id = "string",
                },
            },
            OnFailures = new[]
            {
                new Databricks.Inputs.JobWebhookNotificationsOnFailureArgs
                {
                    Id = "string",
                },
            },
            OnStarts = new[]
            {
                new Databricks.Inputs.JobWebhookNotificationsOnStartArgs
                {
                    Id = "string",
                },
            },
            OnSuccesses = new[]
            {
                new Databricks.Inputs.JobWebhookNotificationsOnSuccessArgs
                {
                    Id = "string",
                },
            },
        },
    });
    
    example, err := databricks.NewJob(ctx, "jobResource", &databricks.JobArgs{
    	Computes: databricks.JobComputeArray{
    		&databricks.JobComputeArgs{
    			ComputeKey: pulumi.String("string"),
    			Spec: &databricks.JobComputeSpecArgs{
    				Kind: pulumi.String("string"),
    			},
    		},
    	},
    	Continuous: &databricks.JobContinuousArgs{
    		PauseStatus: pulumi.String("string"),
    	},
    	ControlRunState: pulumi.Bool(false),
    	Deployment: &databricks.JobDeploymentArgs{
    		Kind:             pulumi.String("string"),
    		MetadataFilePath: pulumi.String("string"),
    	},
    	Description: pulumi.String("string"),
    	EditMode:    pulumi.String("string"),
    	EmailNotifications: &databricks.JobEmailNotificationsArgs{
    		NoAlertForSkippedRuns: pulumi.Bool(false),
    		OnDurationWarningThresholdExceededs: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		OnFailures: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		OnStarts: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		OnSuccesses: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	ExistingClusterId: pulumi.String("string"),
    	Format:            pulumi.String("string"),
    	GitSource: &databricks.JobGitSourceArgs{
    		Url:    pulumi.String("string"),
    		Branch: pulumi.String("string"),
    		Commit: pulumi.String("string"),
    		JobSource: &databricks.JobGitSourceJobSourceArgs{
    			ImportFromGitBranch: pulumi.String("string"),
    			JobConfigPath:       pulumi.String("string"),
    			DirtyState:          pulumi.String("string"),
    		},
    		Provider: pulumi.String("string"),
    		Tag:      pulumi.String("string"),
    	},
    	Health: &databricks.JobHealthArgs{
    		Rules: databricks.JobHealthRuleArray{
    			&databricks.JobHealthRuleArgs{
    				Metric: pulumi.String("string"),
    				Op:     pulumi.String("string"),
    				Value:  pulumi.Int(0),
    			},
    		},
    	},
    	JobClusters: databricks.JobJobClusterArray{
    		&databricks.JobJobClusterArgs{
    			JobClusterKey: pulumi.String("string"),
    			NewCluster: &databricks.JobJobClusterNewClusterArgs{
    				SparkVersion:              pulumi.String("string"),
    				EnableElasticDisk:         pulumi.Bool(false),
    				ClusterId:                 pulumi.String("string"),
    				EnableLocalDiskEncryption: pulumi.Bool(false),
    				AzureAttributes: &databricks.JobJobClusterNewClusterAzureAttributesArgs{
    					Availability:    pulumi.String("string"),
    					FirstOnDemand:   pulumi.Int(0),
    					SpotBidMaxPrice: pulumi.Float64(0),
    				},
    				GcpAttributes: &databricks.JobJobClusterNewClusterGcpAttributesArgs{
    					Availability:            pulumi.String("string"),
    					BootDiskSize:            pulumi.Int(0),
    					GoogleServiceAccount:    pulumi.String("string"),
    					LocalSsdCount:           pulumi.Int(0),
    					UsePreemptibleExecutors: pulumi.Bool(false),
    					ZoneId:                  pulumi.String("string"),
    				},
    				ClusterLogConf: &databricks.JobJobClusterNewClusterClusterLogConfArgs{
    					Dbfs: &databricks.JobJobClusterNewClusterClusterLogConfDbfsArgs{
    						Destination: pulumi.String("string"),
    					},
    					S3: &databricks.JobJobClusterNewClusterClusterLogConfS3Args{
    						Destination:      pulumi.String("string"),
    						CannedAcl:        pulumi.String("string"),
    						EnableEncryption: pulumi.Bool(false),
    						EncryptionType:   pulumi.String("string"),
    						Endpoint:         pulumi.String("string"),
    						KmsKey:           pulumi.String("string"),
    						Region:           pulumi.String("string"),
    					},
    				},
    				ClusterMountInfos: databricks.JobJobClusterNewClusterClusterMountInfoArray{
    					&databricks.JobJobClusterNewClusterClusterMountInfoArgs{
    						LocalMountDirPath: pulumi.String("string"),
    						NetworkFilesystemInfo: &databricks.JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfoArgs{
    							ServerAddress: pulumi.String("string"),
    							MountOptions:  pulumi.String("string"),
    						},
    						RemoteMountDirPath: pulumi.String("string"),
    					},
    				},
    				ClusterName: pulumi.String("string"),
    				CustomTags: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				DataSecurityMode: pulumi.String("string"),
    				DockerImage: &databricks.JobJobClusterNewClusterDockerImageArgs{
    					Url: pulumi.String("string"),
    					BasicAuth: &databricks.JobJobClusterNewClusterDockerImageBasicAuthArgs{
    						Password: pulumi.String("string"),
    						Username: pulumi.String("string"),
    					},
    				},
    				IdempotencyToken:         pulumi.String("string"),
    				DriverNodeTypeId:         pulumi.String("string"),
    				ApplyPolicyDefaultValues: pulumi.Bool(false),
    				AwsAttributes: &databricks.JobJobClusterNewClusterAwsAttributesArgs{
    					Availability:        pulumi.String("string"),
    					EbsVolumeCount:      pulumi.Int(0),
    					EbsVolumeSize:       pulumi.Int(0),
    					EbsVolumeType:       pulumi.String("string"),
    					FirstOnDemand:       pulumi.Int(0),
    					InstanceProfileArn:  pulumi.String("string"),
    					SpotBidPricePercent: pulumi.Int(0),
    					ZoneId:              pulumi.String("string"),
    				},
    				AutoterminationMinutes: pulumi.Int(0),
    				DriverInstancePoolId:   pulumi.String("string"),
    				InitScripts: databricks.JobJobClusterNewClusterInitScriptArray{
    					&databricks.JobJobClusterNewClusterInitScriptArgs{
    						Abfss: &databricks.JobJobClusterNewClusterInitScriptAbfssArgs{
    							Destination: pulumi.String("string"),
    						},
    						File: &databricks.JobJobClusterNewClusterInitScriptFileArgs{
    							Destination: pulumi.String("string"),
    						},
    						Gcs: &databricks.JobJobClusterNewClusterInitScriptGcsArgs{
    							Destination: pulumi.String("string"),
    						},
    						S3: &databricks.JobJobClusterNewClusterInitScriptS3Args{
    							Destination:      pulumi.String("string"),
    							CannedAcl:        pulumi.String("string"),
    							EnableEncryption: pulumi.Bool(false),
    							EncryptionType:   pulumi.String("string"),
    							Endpoint:         pulumi.String("string"),
    							KmsKey:           pulumi.String("string"),
    							Region:           pulumi.String("string"),
    						},
    						Volumes: &databricks.JobJobClusterNewClusterInitScriptVolumesArgs{
    							Destination: pulumi.String("string"),
    						},
    						Workspace: &databricks.JobJobClusterNewClusterInitScriptWorkspaceArgs{
    							Destination: pulumi.String("string"),
    						},
    					},
    				},
    				InstancePoolId: pulumi.String("string"),
    				NodeTypeId:     pulumi.String("string"),
    				NumWorkers:     pulumi.Int(0),
    				PolicyId:       pulumi.String("string"),
    				RuntimeEngine:  pulumi.String("string"),
    				SingleUserName: pulumi.String("string"),
    				SparkConf: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				SparkEnvVars: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				Autoscale: &databricks.JobJobClusterNewClusterAutoscaleArgs{
    					MaxWorkers: pulumi.Int(0),
    					MinWorkers: pulumi.Int(0),
    				},
    				SshPublicKeys: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				WorkloadType: &databricks.JobJobClusterNewClusterWorkloadTypeArgs{
    					Clients: &databricks.JobJobClusterNewClusterWorkloadTypeClientsArgs{
    						Jobs:      pulumi.Bool(false),
    						Notebooks: pulumi.Bool(false),
    					},
    				},
    			},
    		},
    	},
    	Libraries: databricks.JobLibraryArray{
    		&databricks.JobLibraryArgs{
    			Cran: &databricks.JobLibraryCranArgs{
    				Package: pulumi.String("string"),
    				Repo:    pulumi.String("string"),
    			},
    			Egg: pulumi.String("string"),
    			Jar: pulumi.String("string"),
    			Maven: &databricks.JobLibraryMavenArgs{
    				Coordinates: pulumi.String("string"),
    				Exclusions: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Repo: pulumi.String("string"),
    			},
    			Pypi: &databricks.JobLibraryPypiArgs{
    				Package: pulumi.String("string"),
    				Repo:    pulumi.String("string"),
    			},
    			Whl: pulumi.String("string"),
    		},
    	},
    	MaxConcurrentRuns: pulumi.Int(0),
    	Name:              pulumi.String("string"),
    	NewCluster: &databricks.JobNewClusterArgs{
    		SparkVersion:              pulumi.String("string"),
    		EnableElasticDisk:         pulumi.Bool(false),
    		ClusterId:                 pulumi.String("string"),
    		EnableLocalDiskEncryption: pulumi.Bool(false),
    		AzureAttributes: &databricks.JobNewClusterAzureAttributesArgs{
    			Availability:    pulumi.String("string"),
    			FirstOnDemand:   pulumi.Int(0),
    			SpotBidMaxPrice: pulumi.Float64(0),
    		},
    		GcpAttributes: &databricks.JobNewClusterGcpAttributesArgs{
    			Availability:            pulumi.String("string"),
    			BootDiskSize:            pulumi.Int(0),
    			GoogleServiceAccount:    pulumi.String("string"),
    			LocalSsdCount:           pulumi.Int(0),
    			UsePreemptibleExecutors: pulumi.Bool(false),
    			ZoneId:                  pulumi.String("string"),
    		},
    		ClusterLogConf: &databricks.JobNewClusterClusterLogConfArgs{
    			Dbfs: &databricks.JobNewClusterClusterLogConfDbfsArgs{
    				Destination: pulumi.String("string"),
    			},
    			S3: &databricks.JobNewClusterClusterLogConfS3Args{
    				Destination:      pulumi.String("string"),
    				CannedAcl:        pulumi.String("string"),
    				EnableEncryption: pulumi.Bool(false),
    				EncryptionType:   pulumi.String("string"),
    				Endpoint:         pulumi.String("string"),
    				KmsKey:           pulumi.String("string"),
    				Region:           pulumi.String("string"),
    			},
    		},
    		ClusterMountInfos: databricks.JobNewClusterClusterMountInfoArray{
    			&databricks.JobNewClusterClusterMountInfoArgs{
    				LocalMountDirPath: pulumi.String("string"),
    				NetworkFilesystemInfo: &databricks.JobNewClusterClusterMountInfoNetworkFilesystemInfoArgs{
    					ServerAddress: pulumi.String("string"),
    					MountOptions:  pulumi.String("string"),
    				},
    				RemoteMountDirPath: pulumi.String("string"),
    			},
    		},
    		ClusterName: pulumi.String("string"),
    		CustomTags: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		DataSecurityMode: pulumi.String("string"),
    		DockerImage: &databricks.JobNewClusterDockerImageArgs{
    			Url: pulumi.String("string"),
    			BasicAuth: &databricks.JobNewClusterDockerImageBasicAuthArgs{
    				Password: pulumi.String("string"),
    				Username: pulumi.String("string"),
    			},
    		},
    		IdempotencyToken:         pulumi.String("string"),
    		DriverNodeTypeId:         pulumi.String("string"),
    		ApplyPolicyDefaultValues: pulumi.Bool(false),
    		AwsAttributes: &databricks.JobNewClusterAwsAttributesArgs{
    			Availability:        pulumi.String("string"),
    			EbsVolumeCount:      pulumi.Int(0),
    			EbsVolumeSize:       pulumi.Int(0),
    			EbsVolumeType:       pulumi.String("string"),
    			FirstOnDemand:       pulumi.Int(0),
    			InstanceProfileArn:  pulumi.String("string"),
    			SpotBidPricePercent: pulumi.Int(0),
    			ZoneId:              pulumi.String("string"),
    		},
    		AutoterminationMinutes: pulumi.Int(0),
    		DriverInstancePoolId:   pulumi.String("string"),
    		InitScripts: databricks.JobNewClusterInitScriptArray{
    			&databricks.JobNewClusterInitScriptArgs{
    				Abfss: &databricks.JobNewClusterInitScriptAbfssArgs{
    					Destination: pulumi.String("string"),
    				},
    				File: &databricks.JobNewClusterInitScriptFileArgs{
    					Destination: pulumi.String("string"),
    				},
    				Gcs: &databricks.JobNewClusterInitScriptGcsArgs{
    					Destination: pulumi.String("string"),
    				},
    				S3: &databricks.JobNewClusterInitScriptS3Args{
    					Destination:      pulumi.String("string"),
    					CannedAcl:        pulumi.String("string"),
    					EnableEncryption: pulumi.Bool(false),
    					EncryptionType:   pulumi.String("string"),
    					Endpoint:         pulumi.String("string"),
    					KmsKey:           pulumi.String("string"),
    					Region:           pulumi.String("string"),
    				},
    				Volumes: &databricks.JobNewClusterInitScriptVolumesArgs{
    					Destination: pulumi.String("string"),
    				},
    				Workspace: &databricks.JobNewClusterInitScriptWorkspaceArgs{
    					Destination: pulumi.String("string"),
    				},
    			},
    		},
    		InstancePoolId: pulumi.String("string"),
    		NodeTypeId:     pulumi.String("string"),
    		NumWorkers:     pulumi.Int(0),
    		PolicyId:       pulumi.String("string"),
    		RuntimeEngine:  pulumi.String("string"),
    		SingleUserName: pulumi.String("string"),
    		SparkConf: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		SparkEnvVars: pulumi.Map{
    			"string": pulumi.Any("any"),
    		},
    		Autoscale: &databricks.JobNewClusterAutoscaleArgs{
    			MaxWorkers: pulumi.Int(0),
    			MinWorkers: pulumi.Int(0),
    		},
    		SshPublicKeys: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		WorkloadType: &databricks.JobNewClusterWorkloadTypeArgs{
    			Clients: &databricks.JobNewClusterWorkloadTypeClientsArgs{
    				Jobs:      pulumi.Bool(false),
    				Notebooks: pulumi.Bool(false),
    			},
    		},
    	},
    	NotificationSettings: &databricks.JobNotificationSettingsArgs{
    		NoAlertForCanceledRuns: pulumi.Bool(false),
    		NoAlertForSkippedRuns:  pulumi.Bool(false),
    	},
    	Parameters: databricks.JobParameterArray{
    		&databricks.JobParameterArgs{
    			Default: pulumi.String("string"),
    			Name:    pulumi.String("string"),
    		},
    	},
    	Queue: &databricks.JobQueueArgs{
    		Enabled: pulumi.Bool(false),
    	},
    	RunAs: &databricks.JobRunAsArgs{
    		ServicePrincipalName: pulumi.String("string"),
    		UserName:             pulumi.String("string"),
    	},
    	Schedule: &databricks.JobScheduleArgs{
    		QuartzCronExpression: pulumi.String("string"),
    		TimezoneId:           pulumi.String("string"),
    		PauseStatus:          pulumi.String("string"),
    	},
    	Tags: pulumi.Map{
    		"string": pulumi.Any("any"),
    	},
    	Tasks: databricks.JobTaskArray{
    		&databricks.JobTaskArgs{
    			ComputeKey: pulumi.String("string"),
    			ConditionTask: &databricks.JobTaskConditionTaskArgs{
    				Left:  pulumi.String("string"),
    				Op:    pulumi.String("string"),
    				Right: pulumi.String("string"),
    			},
    			DbtTask: &databricks.JobTaskDbtTaskArgs{
    				Commands: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Catalog:           pulumi.String("string"),
    				ProfilesDirectory: pulumi.String("string"),
    				ProjectDirectory:  pulumi.String("string"),
    				Schema:            pulumi.String("string"),
    				Source:            pulumi.String("string"),
    				WarehouseId:       pulumi.String("string"),
    			},
    			DependsOns: databricks.JobTaskDependsOnArray{
    				&databricks.JobTaskDependsOnArgs{
    					TaskKey: pulumi.String("string"),
    					Outcome: pulumi.String("string"),
    				},
    			},
    			Description: pulumi.String("string"),
    			EmailNotifications: &databricks.JobTaskEmailNotificationsArgs{
    				NoAlertForSkippedRuns: pulumi.Bool(false),
    				OnDurationWarningThresholdExceededs: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				OnFailures: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				OnStarts: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				OnSuccesses: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			ExistingClusterId: pulumi.String("string"),
    			ForEachTask: &databricks.JobTaskForEachTaskArgs{
    				Inputs: pulumi.String("string"),
    				Task: &databricks.JobTaskForEachTaskTaskArgs{
    					ComputeKey: pulumi.String("string"),
    					ConditionTask: &databricks.JobTaskForEachTaskTaskConditionTaskArgs{
    						Left:  pulumi.String("string"),
    						Op:    pulumi.String("string"),
    						Right: pulumi.String("string"),
    					},
    					DbtTask: &databricks.JobTaskForEachTaskTaskDbtTaskArgs{
    						Commands: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						Catalog:           pulumi.String("string"),
    						ProfilesDirectory: pulumi.String("string"),
    						ProjectDirectory:  pulumi.String("string"),
    						Schema:            pulumi.String("string"),
    						Source:            pulumi.String("string"),
    						WarehouseId:       pulumi.String("string"),
    					},
    					DependsOns: databricks.JobTaskForEachTaskTaskDependsOnArray{
    						&databricks.JobTaskForEachTaskTaskDependsOnArgs{
    							TaskKey: pulumi.String("string"),
    							Outcome: pulumi.String("string"),
    						},
    					},
    					Description: pulumi.String("string"),
    					EmailNotifications: &databricks.JobTaskForEachTaskTaskEmailNotificationsArgs{
    						NoAlertForSkippedRuns: pulumi.Bool(false),
    						OnDurationWarningThresholdExceededs: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						OnFailures: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						OnStarts: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						OnSuccesses: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    					},
    					ExistingClusterId: pulumi.String("string"),
    					Health: &databricks.JobTaskForEachTaskTaskHealthArgs{
    						Rules: databricks.JobTaskForEachTaskTaskHealthRuleArray{
    							&databricks.JobTaskForEachTaskTaskHealthRuleArgs{
    								Metric: pulumi.String("string"),
    								Op:     pulumi.String("string"),
    								Value:  pulumi.Int(0),
    							},
    						},
    					},
    					JobClusterKey: pulumi.String("string"),
    					Libraries: databricks.JobTaskForEachTaskTaskLibraryArray{
    						&databricks.JobTaskForEachTaskTaskLibraryArgs{
    							Cran: &databricks.JobTaskForEachTaskTaskLibraryCranArgs{
    								Package: pulumi.String("string"),
    								Repo:    pulumi.String("string"),
    							},
    							Egg: pulumi.String("string"),
    							Jar: pulumi.String("string"),
    							Maven: &databricks.JobTaskForEachTaskTaskLibraryMavenArgs{
    								Coordinates: pulumi.String("string"),
    								Exclusions: pulumi.StringArray{
    									pulumi.String("string"),
    								},
    								Repo: pulumi.String("string"),
    							},
    							Pypi: &databricks.JobTaskForEachTaskTaskLibraryPypiArgs{
    								Package: pulumi.String("string"),
    								Repo:    pulumi.String("string"),
    							},
    							Whl: pulumi.String("string"),
    						},
    					},
    					MaxRetries:             pulumi.Int(0),
    					MinRetryIntervalMillis: pulumi.Int(0),
    					NewCluster: &databricks.JobTaskForEachTaskTaskNewClusterArgs{
    						NumWorkers:   pulumi.Int(0),
    						SparkVersion: pulumi.String("string"),
    						ClusterMountInfos: databricks.JobTaskForEachTaskTaskNewClusterClusterMountInfoArray{
    							&databricks.JobTaskForEachTaskTaskNewClusterClusterMountInfoArgs{
    								LocalMountDirPath: pulumi.String("string"),
    								NetworkFilesystemInfo: &databricks.JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs{
    									ServerAddress: pulumi.String("string"),
    									MountOptions:  pulumi.String("string"),
    								},
    								RemoteMountDirPath: pulumi.String("string"),
    							},
    						},
    						Autoscale: &databricks.JobTaskForEachTaskTaskNewClusterAutoscaleArgs{
    							MaxWorkers: pulumi.Int(0),
    							MinWorkers: pulumi.Int(0),
    						},
    						AzureAttributes: &databricks.JobTaskForEachTaskTaskNewClusterAzureAttributesArgs{
    							Availability:    pulumi.String("string"),
    							FirstOnDemand:   pulumi.Int(0),
    							SpotBidMaxPrice: pulumi.Float64(0),
    						},
    						ClusterId: pulumi.String("string"),
    						ClusterLogConf: &databricks.JobTaskForEachTaskTaskNewClusterClusterLogConfArgs{
    							Dbfs: &databricks.JobTaskForEachTaskTaskNewClusterClusterLogConfDbfsArgs{
    								Destination: pulumi.String("string"),
    							},
    							S3: &databricks.JobTaskForEachTaskTaskNewClusterClusterLogConfS3Args{
    								Destination:      pulumi.String("string"),
    								CannedAcl:        pulumi.String("string"),
    								EnableEncryption: pulumi.Bool(false),
    								EncryptionType:   pulumi.String("string"),
    								Endpoint:         pulumi.String("string"),
    								KmsKey:           pulumi.String("string"),
    								Region:           pulumi.String("string"),
    							},
    						},
    						ApplyPolicyDefaultValues: pulumi.Bool(false),
    						ClusterName:              pulumi.String("string"),
    						CustomTags: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    						DataSecurityMode: pulumi.String("string"),
    						DockerImage: &databricks.JobTaskForEachTaskTaskNewClusterDockerImageArgs{
    							Url: pulumi.String("string"),
    							BasicAuth: &databricks.JobTaskForEachTaskTaskNewClusterDockerImageBasicAuthArgs{
    								Password: pulumi.String("string"),
    								Username: pulumi.String("string"),
    							},
    						},
    						DriverInstancePoolId: pulumi.String("string"),
    						DriverNodeTypeId:     pulumi.String("string"),
    						WorkloadType: &databricks.JobTaskForEachTaskTaskNewClusterWorkloadTypeArgs{
    							Clients: &databricks.JobTaskForEachTaskTaskNewClusterWorkloadTypeClientsArgs{
    								Jobs:      pulumi.Bool(false),
    								Notebooks: pulumi.Bool(false),
    							},
    						},
    						AwsAttributes: &databricks.JobTaskForEachTaskTaskNewClusterAwsAttributesArgs{
    							Availability:        pulumi.String("string"),
    							EbsVolumeCount:      pulumi.Int(0),
    							EbsVolumeSize:       pulumi.Int(0),
    							EbsVolumeType:       pulumi.String("string"),
    							FirstOnDemand:       pulumi.Int(0),
    							InstanceProfileArn:  pulumi.String("string"),
    							SpotBidPricePercent: pulumi.Int(0),
    							ZoneId:              pulumi.String("string"),
    						},
    						NodeTypeId:       pulumi.String("string"),
    						IdempotencyToken: pulumi.String("string"),
    						InitScripts: databricks.JobTaskForEachTaskTaskNewClusterInitScriptArray{
    							&databricks.JobTaskForEachTaskTaskNewClusterInitScriptArgs{
    								Abfss: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptAbfssArgs{
    									Destination: pulumi.String("string"),
    								},
    								Dbfs: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptDbfsArgs{
    									Destination: pulumi.String("string"),
    								},
    								File: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptFileArgs{
    									Destination: pulumi.String("string"),
    								},
    								Gcs: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptGcsArgs{
    									Destination: pulumi.String("string"),
    								},
    								S3: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptS3Args{
    									Destination:      pulumi.String("string"),
    									CannedAcl:        pulumi.String("string"),
    									EnableEncryption: pulumi.Bool(false),
    									EncryptionType:   pulumi.String("string"),
    									Endpoint:         pulumi.String("string"),
    									KmsKey:           pulumi.String("string"),
    									Region:           pulumi.String("string"),
    								},
    								Volumes: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptVolumesArgs{
    									Destination: pulumi.String("string"),
    								},
    								Workspace: &databricks.JobTaskForEachTaskTaskNewClusterInitScriptWorkspaceArgs{
    									Destination: pulumi.String("string"),
    								},
    							},
    						},
    						InstancePoolId: pulumi.String("string"),
    						GcpAttributes: &databricks.JobTaskForEachTaskTaskNewClusterGcpAttributesArgs{
    							Availability:            pulumi.String("string"),
    							BootDiskSize:            pulumi.Int(0),
    							GoogleServiceAccount:    pulumi.String("string"),
    							LocalSsdCount:           pulumi.Int(0),
    							UsePreemptibleExecutors: pulumi.Bool(false),
    							ZoneId:                  pulumi.String("string"),
    						},
    						AutoterminationMinutes: pulumi.Int(0),
    						PolicyId:               pulumi.String("string"),
    						RuntimeEngine:          pulumi.String("string"),
    						SingleUserName:         pulumi.String("string"),
    						SparkConf: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    						SparkEnvVars: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    						EnableLocalDiskEncryption: pulumi.Bool(false),
    						SshPublicKeys: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						EnableElasticDisk: pulumi.Bool(false),
    					},
    					NotebookTask: &databricks.JobTaskForEachTaskTaskNotebookTaskArgs{
    						NotebookPath: pulumi.String("string"),
    						BaseParameters: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    						Source: pulumi.String("string"),
    					},
    					NotificationSettings: &databricks.JobTaskForEachTaskTaskNotificationSettingsArgs{
    						AlertOnLastAttempt:     pulumi.Bool(false),
    						NoAlertForCanceledRuns: pulumi.Bool(false),
    						NoAlertForSkippedRuns:  pulumi.Bool(false),
    					},
    					PipelineTask: &databricks.JobTaskForEachTaskTaskPipelineTaskArgs{
    						PipelineId:  pulumi.String("string"),
    						FullRefresh: pulumi.Bool(false),
    					},
    					PythonWheelTask: &databricks.JobTaskForEachTaskTaskPythonWheelTaskArgs{
    						EntryPoint: pulumi.String("string"),
    						NamedParameters: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    						PackageName: pulumi.String("string"),
    						Parameters: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    					},
    					RetryOnTimeout: pulumi.Bool(false),
    					RunIf:          pulumi.String("string"),
    					RunJobTask: &databricks.JobTaskForEachTaskTaskRunJobTaskArgs{
    						JobId: pulumi.Int(0),
    						JobParameters: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    					},
    					SparkJarTask: &databricks.JobTaskForEachTaskTaskSparkJarTaskArgs{
    						JarUri:        pulumi.String("string"),
    						MainClassName: pulumi.String("string"),
    						Parameters: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    					},
    					SparkPythonTask: &databricks.JobTaskForEachTaskTaskSparkPythonTaskArgs{
    						PythonFile: pulumi.String("string"),
    						Parameters: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						Source: pulumi.String("string"),
    					},
    					SparkSubmitTask: &databricks.JobTaskForEachTaskTaskSparkSubmitTaskArgs{
    						Parameters: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    					},
    					SqlTask: &databricks.JobTaskForEachTaskTaskSqlTaskArgs{
    						Alert: &databricks.JobTaskForEachTaskTaskSqlTaskAlertArgs{
    							AlertId: pulumi.String("string"),
    							Subscriptions: databricks.JobTaskForEachTaskTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskForEachTaskTaskSqlTaskAlertSubscriptionArgs{
    									DestinationId: pulumi.String("string"),
    									UserName:      pulumi.String("string"),
    								},
    							},
    							PauseSubscriptions: pulumi.Bool(false),
    						},
    						Dashboard: &databricks.JobTaskForEachTaskTaskSqlTaskDashboardArgs{
    							DashboardId:        pulumi.String("string"),
    							CustomSubject:      pulumi.String("string"),
    							PauseSubscriptions: pulumi.Bool(false),
    							Subscriptions: databricks.JobTaskForEachTaskTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskForEachTaskTaskSqlTaskDashboardSubscriptionArgs{
    									DestinationId: pulumi.String("string"),
    									UserName:      pulumi.String("string"),
    								},
    							},
    						},
    						File: &databricks.JobTaskForEachTaskTaskSqlTaskFileArgs{
    							Path:   pulumi.String("string"),
    							Source: pulumi.String("string"),
    						},
    						Parameters: pulumi.Map{
    							"string": pulumi.Any("any"),
    						},
    						Query: &databricks.JobTaskForEachTaskTaskSqlTaskQueryArgs{
    							QueryId: pulumi.String("string"),
    						},
    						WarehouseId: pulumi.String("string"),
    					},
    					TaskKey:        pulumi.String("string"),
    					TimeoutSeconds: pulumi.Int(0),
    					WebhookNotifications: &databricks.JobTaskForEachTaskTaskWebhookNotificationsArgs{
    						OnDurationWarningThresholdExceededs: databricks.JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceededArray{
    							&databricks.JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs{
    								Id: pulumi.String("string"),
    							},
    						},
    						OnFailures: databricks.JobTaskForEachTaskTaskWebhookNotificationsOnFailureArray{
    							&databricks.JobTaskForEachTaskTaskWebhookNotificationsOnFailureArgs{
    								Id: pulumi.String("string"),
    							},
    						},
    						OnStarts: databricks.JobTaskForEachTaskTaskWebhookNotificationsOnStartArray{
    							&databricks.JobTaskForEachTaskTaskWebhookNotificationsOnStartArgs{
    								Id: pulumi.String("string"),
    							},
    						},
    						OnSuccesses: databricks.JobTaskForEachTaskTaskWebhookNotificationsOnSuccessArray{
    							&databricks.JobTaskForEachTaskTaskWebhookNotificationsOnSuccessArgs{
    								Id: pulumi.String("string"),
    							},
    						},
    					},
    				},
    				Concurrency: pulumi.Int(0),
    			},
    			Health: &databricks.JobTaskHealthArgs{
    				Rules: databricks.JobTaskHealthRuleArray{
    					&databricks.JobTaskHealthRuleArgs{
    						Metric: pulumi.String("string"),
    						Op:     pulumi.String("string"),
    						Value:  pulumi.Int(0),
    					},
    				},
    			},
    			JobClusterKey: pulumi.String("string"),
    			Libraries: databricks.JobTaskLibraryArray{
    				&databricks.JobTaskLibraryArgs{
    					Cran: &databricks.JobTaskLibraryCranArgs{
    						Package: pulumi.String("string"),
    						Repo:    pulumi.String("string"),
    					},
    					Egg: pulumi.String("string"),
    					Jar: pulumi.String("string"),
    					Maven: &databricks.JobTaskLibraryMavenArgs{
    						Coordinates: pulumi.String("string"),
    						Exclusions: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						Repo: pulumi.String("string"),
    					},
    					Pypi: &databricks.JobTaskLibraryPypiArgs{
    						Package: pulumi.String("string"),
    						Repo:    pulumi.String("string"),
    					},
    					Whl: pulumi.String("string"),
    				},
    			},
    			MaxRetries:             pulumi.Int(0),
    			MinRetryIntervalMillis: pulumi.Int(0),
    			NewCluster: &databricks.JobTaskNewClusterArgs{
    				SparkVersion:              pulumi.String("string"),
    				EnableElasticDisk:         pulumi.Bool(false),
    				ClusterId:                 pulumi.String("string"),
    				EnableLocalDiskEncryption: pulumi.Bool(false),
    				AzureAttributes: &databricks.JobTaskNewClusterAzureAttributesArgs{
    					Availability:    pulumi.String("string"),
    					FirstOnDemand:   pulumi.Int(0),
    					SpotBidMaxPrice: pulumi.Float64(0),
    				},
    				GcpAttributes: &databricks.JobTaskNewClusterGcpAttributesArgs{
    					Availability:            pulumi.String("string"),
    					BootDiskSize:            pulumi.Int(0),
    					GoogleServiceAccount:    pulumi.String("string"),
    					LocalSsdCount:           pulumi.Int(0),
    					UsePreemptibleExecutors: pulumi.Bool(false),
    					ZoneId:                  pulumi.String("string"),
    				},
    				ClusterLogConf: &databricks.JobTaskNewClusterClusterLogConfArgs{
    					Dbfs: &databricks.JobTaskNewClusterClusterLogConfDbfsArgs{
    						Destination: pulumi.String("string"),
    					},
    					S3: &databricks.JobTaskNewClusterClusterLogConfS3Args{
    						Destination:      pulumi.String("string"),
    						CannedAcl:        pulumi.String("string"),
    						EnableEncryption: pulumi.Bool(false),
    						EncryptionType:   pulumi.String("string"),
    						Endpoint:         pulumi.String("string"),
    						KmsKey:           pulumi.String("string"),
    						Region:           pulumi.String("string"),
    					},
    				},
    				ClusterMountInfos: databricks.JobTaskNewClusterClusterMountInfoArray{
    					&databricks.JobTaskNewClusterClusterMountInfoArgs{
    						LocalMountDirPath: pulumi.String("string"),
    						NetworkFilesystemInfo: &databricks.JobTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs{
    							ServerAddress: pulumi.String("string"),
    							MountOptions:  pulumi.String("string"),
    						},
    						RemoteMountDirPath: pulumi.String("string"),
    					},
    				},
    				ClusterName: pulumi.String("string"),
    				CustomTags: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				DataSecurityMode: pulumi.String("string"),
    				DockerImage: &databricks.JobTaskNewClusterDockerImageArgs{
    					Url: pulumi.String("string"),
    					BasicAuth: &databricks.JobTaskNewClusterDockerImageBasicAuthArgs{
    						Password: pulumi.String("string"),
    						Username: pulumi.String("string"),
    					},
    				},
    				IdempotencyToken:         pulumi.String("string"),
    				DriverNodeTypeId:         pulumi.String("string"),
    				ApplyPolicyDefaultValues: pulumi.Bool(false),
    				AwsAttributes: &databricks.JobTaskNewClusterAwsAttributesArgs{
    					Availability:        pulumi.String("string"),
    					EbsVolumeCount:      pulumi.Int(0),
    					EbsVolumeSize:       pulumi.Int(0),
    					EbsVolumeType:       pulumi.String("string"),
    					FirstOnDemand:       pulumi.Int(0),
    					InstanceProfileArn:  pulumi.String("string"),
    					SpotBidPricePercent: pulumi.Int(0),
    					ZoneId:              pulumi.String("string"),
    				},
    				AutoterminationMinutes: pulumi.Int(0),
    				DriverInstancePoolId:   pulumi.String("string"),
    				InitScripts: databricks.JobTaskNewClusterInitScriptArray{
    					&databricks.JobTaskNewClusterInitScriptArgs{
    						Abfss: &databricks.JobTaskNewClusterInitScriptAbfssArgs{
    							Destination: pulumi.String("string"),
    						},
    						File: &databricks.JobTaskNewClusterInitScriptFileArgs{
    							Destination: pulumi.String("string"),
    						},
    						Gcs: &databricks.JobTaskNewClusterInitScriptGcsArgs{
    							Destination: pulumi.String("string"),
    						},
    						S3: &databricks.JobTaskNewClusterInitScriptS3Args{
    							Destination:      pulumi.String("string"),
    							CannedAcl:        pulumi.String("string"),
    							EnableEncryption: pulumi.Bool(false),
    							EncryptionType:   pulumi.String("string"),
    							Endpoint:         pulumi.String("string"),
    							KmsKey:           pulumi.String("string"),
    							Region:           pulumi.String("string"),
    						},
    						Volumes: &databricks.JobTaskNewClusterInitScriptVolumesArgs{
    							Destination: pulumi.String("string"),
    						},
    						Workspace: &databricks.JobTaskNewClusterInitScriptWorkspaceArgs{
    							Destination: pulumi.String("string"),
    						},
    					},
    				},
    				InstancePoolId: pulumi.String("string"),
    				NodeTypeId:     pulumi.String("string"),
    				NumWorkers:     pulumi.Int(0),
    				PolicyId:       pulumi.String("string"),
    				RuntimeEngine:  pulumi.String("string"),
    				SingleUserName: pulumi.String("string"),
    				SparkConf: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				SparkEnvVars: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				Autoscale: &databricks.JobTaskNewClusterAutoscaleArgs{
    					MaxWorkers: pulumi.Int(0),
    					MinWorkers: pulumi.Int(0),
    				},
    				SshPublicKeys: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				WorkloadType: &databricks.JobTaskNewClusterWorkloadTypeArgs{
    					Clients: &databricks.JobTaskNewClusterWorkloadTypeClientsArgs{
    						Jobs:      pulumi.Bool(false),
    						Notebooks: pulumi.Bool(false),
    					},
    				},
    			},
    			NotebookTask: &databricks.JobTaskNotebookTaskArgs{
    				NotebookPath: pulumi.String("string"),
    				BaseParameters: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				Source: pulumi.String("string"),
    			},
    			NotificationSettings: &databricks.JobTaskNotificationSettingsArgs{
    				AlertOnLastAttempt:     pulumi.Bool(false),
    				NoAlertForCanceledRuns: pulumi.Bool(false),
    				NoAlertForSkippedRuns:  pulumi.Bool(false),
    			},
    			PipelineTask: &databricks.JobTaskPipelineTaskArgs{
    				PipelineId:  pulumi.String("string"),
    				FullRefresh: pulumi.Bool(false),
    			},
    			PythonWheelTask: &databricks.JobTaskPythonWheelTaskArgs{
    				EntryPoint: pulumi.String("string"),
    				NamedParameters: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				PackageName: pulumi.String("string"),
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			RetryOnTimeout: pulumi.Bool(false),
    			RunIf:          pulumi.String("string"),
    			RunJobTask: &databricks.JobTaskRunJobTaskArgs{
    				JobId: pulumi.Int(0),
    				JobParameters: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    			},
    			SparkJarTask: &databricks.JobTaskSparkJarTaskArgs{
    				JarUri:        pulumi.String("string"),
    				MainClassName: pulumi.String("string"),
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			SparkPythonTask: &databricks.JobTaskSparkPythonTaskArgs{
    				PythonFile: pulumi.String("string"),
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Source: pulumi.String("string"),
    			},
    			SparkSubmitTask: &databricks.JobTaskSparkSubmitTaskArgs{
    				Parameters: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			SqlTask: &databricks.JobTaskSqlTaskArgs{
    				Alert: &databricks.JobTaskSqlTaskAlertArgs{
    					AlertId: pulumi.String("string"),
    					Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    						&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    							DestinationId: pulumi.String("string"),
    							UserName:      pulumi.String("string"),
    						},
    					},
    					PauseSubscriptions: pulumi.Bool(false),
    				},
    				Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    					DashboardId:        pulumi.String("string"),
    					CustomSubject:      pulumi.String("string"),
    					PauseSubscriptions: pulumi.Bool(false),
    					Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    						&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    							DestinationId: pulumi.String("string"),
    							UserName:      pulumi.String("string"),
    						},
    					},
    				},
    				File: &databricks.JobTaskSqlTaskFileArgs{
    					Path:   pulumi.String("string"),
    					Source: pulumi.String("string"),
    				},
    				Parameters: pulumi.Map{
    					"string": pulumi.Any("any"),
    				},
    				Query: &databricks.JobTaskSqlTaskQueryArgs{
    					QueryId: pulumi.String("string"),
    				},
    				WarehouseId: pulumi.String("string"),
    			},
    			TaskKey:        pulumi.String("string"),
    			TimeoutSeconds: pulumi.Int(0),
    			WebhookNotifications: &databricks.JobTaskWebhookNotificationsArgs{
    				OnDurationWarningThresholdExceededs: databricks.JobTaskWebhookNotificationsOnDurationWarningThresholdExceededArray{
    					&databricks.JobTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs{
    						Id: pulumi.String("string"),
    					},
    				},
    				OnFailures: databricks.JobTaskWebhookNotificationsOnFailureArray{
    					&databricks.JobTaskWebhookNotificationsOnFailureArgs{
    						Id: pulumi.String("string"),
    					},
    				},
    				OnStarts: databricks.JobTaskWebhookNotificationsOnStartArray{
    					&databricks.JobTaskWebhookNotificationsOnStartArgs{
    						Id: pulumi.String("string"),
    					},
    				},
    				OnSuccesses: databricks.JobTaskWebhookNotificationsOnSuccessArray{
    					&databricks.JobTaskWebhookNotificationsOnSuccessArgs{
    						Id: pulumi.String("string"),
    					},
    				},
    			},
    		},
    	},
    	TimeoutSeconds: pulumi.Int(0),
    	Trigger: &databricks.JobTriggerArgs{
    		FileArrival: &databricks.JobTriggerFileArrivalArgs{
    			Url:                           pulumi.String("string"),
    			MinTimeBetweenTriggersSeconds: pulumi.Int(0),
    			WaitAfterLastChangeSeconds:    pulumi.Int(0),
    		},
    		PauseStatus: pulumi.String("string"),
    		TableUpdate: &databricks.JobTriggerTableUpdateArgs{
    			TableNames: pulumi.StringArray{
    				pulumi.String("string"),
    			},
    			Condition:                     pulumi.String("string"),
    			MinTimeBetweenTriggersSeconds: pulumi.Int(0),
    			WaitAfterLastChangeSeconds:    pulumi.Int(0),
    		},
    	},
    	WebhookNotifications: &databricks.JobWebhookNotificationsArgs{
    		OnDurationWarningThresholdExceededs: databricks.JobWebhookNotificationsOnDurationWarningThresholdExceededArray{
    			&databricks.JobWebhookNotificationsOnDurationWarningThresholdExceededArgs{
    				Id: pulumi.String("string"),
    			},
    		},
    		OnFailures: databricks.JobWebhookNotificationsOnFailureArray{
    			&databricks.JobWebhookNotificationsOnFailureArgs{
    				Id: pulumi.String("string"),
    			},
    		},
    		OnStarts: databricks.JobWebhookNotificationsOnStartArray{
    			&databricks.JobWebhookNotificationsOnStartArgs{
    				Id: pulumi.String("string"),
    			},
    		},
    		OnSuccesses: databricks.JobWebhookNotificationsOnSuccessArray{
    			&databricks.JobWebhookNotificationsOnSuccessArgs{
    				Id: pulumi.String("string"),
    			},
    		},
    	},
    })
    
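    The constructor listings above enumerate every available input with placeholder values. For orientation, the following is a minimal sketch in Go of a more realistic invocation: a single-task job that runs a workspace notebook on a small dedicated cluster on a nightly schedule. It uses only the types and fields shown in the Go listing above; the notebook path, node type, Spark runtime version, and cron expression are illustrative assumptions, not values taken from this reference.

    go

    package main

    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )

    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		// One task ("ingest") running a workspace notebook on a new two-worker cluster.
    		// All concrete values below are illustrative placeholders for a real workspace.
    		_, err := databricks.NewJob(ctx, "nightlyEtl", &databricks.JobArgs{
    			Name: pulumi.String("nightly-etl"),
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("ingest"),
    					NewCluster: &databricks.JobTaskNewClusterArgs{
    						SparkVersion: pulumi.String("14.3.x-scala2.12"), // assumed runtime label
    						NodeTypeId:   pulumi.String("i3.xlarge"),        // assumed node type
    						NumWorkers:   pulumi.Int(2),
    					},
    					NotebookTask: &databricks.JobTaskNotebookTaskArgs{
    						NotebookPath: pulumi.String("/Shared/etl/ingest"), // hypothetical notebook
    					},
    				},
    			},
    			// Run every night at 02:00 UTC; omit Schedule for a manually triggered job.
    			Schedule: &databricks.JobScheduleArgs{
    				QuartzCronExpression: pulumi.String("0 0 2 * * ?"),
    				TimezoneId:           pulumi.String("UTC"),
    			},
    		})
    		return err
    	})
    }

    The same shape carries over to the other languages: only the properties you set are sent, and everything not listed falls back to the provider defaults.
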
    var jobResource = new Job("jobResource", JobArgs.builder()        
        .computes(JobComputeArgs.builder()
            .computeKey("string")
            .spec(JobComputeSpecArgs.builder()
                .kind("string")
                .build())
            .build())
        .continuous(JobContinuousArgs.builder()
            .pauseStatus("string")
            .build())
        .controlRunState(false)
        .deployment(JobDeploymentArgs.builder()
            .kind("string")
            .metadataFilePath("string")
            .build())
        .description("string")
        .editMode("string")
        .emailNotifications(JobEmailNotificationsArgs.builder()
            .noAlertForSkippedRuns(false)
            .onDurationWarningThresholdExceededs("string")
            .onFailures("string")
            .onStarts("string")
            .onSuccesses("string")
            .build())
        .existingClusterId("string")
        .format("string")
        .gitSource(JobGitSourceArgs.builder()
            .url("string")
            .branch("string")
            .commit("string")
            .jobSource(JobGitSourceJobSourceArgs.builder()
                .importFromGitBranch("string")
                .jobConfigPath("string")
                .dirtyState("string")
                .build())
            .provider("string")
            .tag("string")
            .build())
        .health(JobHealthArgs.builder()
            .rules(JobHealthRuleArgs.builder()
                .metric("string")
                .op("string")
                .value(0)
                .build())
            .build())
        .jobClusters(JobJobClusterArgs.builder()
            .jobClusterKey("string")
            .newCluster(JobJobClusterNewClusterArgs.builder()
                .sparkVersion("string")
                .enableElasticDisk(false)
                .clusterId("string")
                .enableLocalDiskEncryption(false)
                .azureAttributes(JobJobClusterNewClusterAzureAttributesArgs.builder()
                    .availability("string")
                    .firstOnDemand(0)
                    .spotBidMaxPrice(0)
                    .build())
                .gcpAttributes(JobJobClusterNewClusterGcpAttributesArgs.builder()
                    .availability("string")
                    .bootDiskSize(0)
                    .googleServiceAccount("string")
                    .localSsdCount(0)
                    .usePreemptibleExecutors(false)
                    .zoneId("string")
                    .build())
                .clusterLogConf(JobJobClusterNewClusterClusterLogConfArgs.builder()
                    .dbfs(JobJobClusterNewClusterClusterLogConfDbfsArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobJobClusterNewClusterClusterLogConfS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .build())
                .clusterMountInfos(JobJobClusterNewClusterClusterMountInfoArgs.builder()
                    .localMountDirPath("string")
                    .networkFilesystemInfo(JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfoArgs.builder()
                        .serverAddress("string")
                        .mountOptions("string")
                        .build())
                    .remoteMountDirPath("string")
                    .build())
                .clusterName("string")
                .customTags(Map.of("string", "any"))
                .dataSecurityMode("string")
                .dockerImage(JobJobClusterNewClusterDockerImageArgs.builder()
                    .url("string")
                    .basicAuth(JobJobClusterNewClusterDockerImageBasicAuthArgs.builder()
                        .password("string")
                        .username("string")
                        .build())
                    .build())
                .idempotencyToken("string")
                .driverNodeTypeId("string")
                .applyPolicyDefaultValues(false)
                .awsAttributes(JobJobClusterNewClusterAwsAttributesArgs.builder()
                    .availability("string")
                    .ebsVolumeCount(0)
                    .ebsVolumeSize(0)
                    .ebsVolumeType("string")
                    .firstOnDemand(0)
                    .instanceProfileArn("string")
                    .spotBidPricePercent(0)
                    .zoneId("string")
                    .build())
                .autoterminationMinutes(0)
                .driverInstancePoolId("string")
                .initScripts(JobJobClusterNewClusterInitScriptArgs.builder()
                    .abfss(JobJobClusterNewClusterInitScriptAbfssArgs.builder()
                        .destination("string")
                        .build())
                    .file(JobJobClusterNewClusterInitScriptFileArgs.builder()
                        .destination("string")
                        .build())
                    .gcs(JobJobClusterNewClusterInitScriptGcsArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobJobClusterNewClusterInitScriptS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .volumes(JobJobClusterNewClusterInitScriptVolumesArgs.builder()
                        .destination("string")
                        .build())
                    .workspace(JobJobClusterNewClusterInitScriptWorkspaceArgs.builder()
                        .destination("string")
                        .build())
                    .build())
                .instancePoolId("string")
                .nodeTypeId("string")
                .numWorkers(0)
                .policyId("string")
                .runtimeEngine("string")
                .singleUserName("string")
                .sparkConf(Map.of("string", "any"))
                .sparkEnvVars(Map.of("string", "any"))
                .autoscale(JobJobClusterNewClusterAutoscaleArgs.builder()
                    .maxWorkers(0)
                    .minWorkers(0)
                    .build())
                .sshPublicKeys("string")
                .workloadType(JobJobClusterNewClusterWorkloadTypeArgs.builder()
                    .clients(JobJobClusterNewClusterWorkloadTypeClientsArgs.builder()
                        .jobs(false)
                        .notebooks(false)
                        .build())
                    .build())
                .build())
            .build())
        .libraries(JobLibraryArgs.builder()
            .cran(JobLibraryCranArgs.builder()
                .package_("string")
                .repo("string")
                .build())
            .egg("string")
            .jar("string")
            .maven(JobLibraryMavenArgs.builder()
                .coordinates("string")
                .exclusions("string")
                .repo("string")
                .build())
            .pypi(JobLibraryPypiArgs.builder()
                .package_("string")
                .repo("string")
                .build())
            .whl("string")
            .build())
        .maxConcurrentRuns(0)
        .name("string")
        .newCluster(JobNewClusterArgs.builder()
            .sparkVersion("string")
            .enableElasticDisk(false)
            .clusterId("string")
            .enableLocalDiskEncryption(false)
            .azureAttributes(JobNewClusterAzureAttributesArgs.builder()
                .availability("string")
                .firstOnDemand(0)
                .spotBidMaxPrice(0)
                .build())
            .gcpAttributes(JobNewClusterGcpAttributesArgs.builder()
                .availability("string")
                .bootDiskSize(0)
                .googleServiceAccount("string")
                .localSsdCount(0)
                .usePreemptibleExecutors(false)
                .zoneId("string")
                .build())
            .clusterLogConf(JobNewClusterClusterLogConfArgs.builder()
                .dbfs(JobNewClusterClusterLogConfDbfsArgs.builder()
                    .destination("string")
                    .build())
                .s3(JobNewClusterClusterLogConfS3Args.builder()
                    .destination("string")
                    .cannedAcl("string")
                    .enableEncryption(false)
                    .encryptionType("string")
                    .endpoint("string")
                    .kmsKey("string")
                    .region("string")
                    .build())
                .build())
            .clusterMountInfos(JobNewClusterClusterMountInfoArgs.builder()
                .localMountDirPath("string")
                .networkFilesystemInfo(JobNewClusterClusterMountInfoNetworkFilesystemInfoArgs.builder()
                    .serverAddress("string")
                    .mountOptions("string")
                    .build())
                .remoteMountDirPath("string")
                .build())
            .clusterName("string")
            .customTags(Map.of("string", "any"))
            .dataSecurityMode("string")
            .dockerImage(JobNewClusterDockerImageArgs.builder()
                .url("string")
                .basicAuth(JobNewClusterDockerImageBasicAuthArgs.builder()
                    .password("string")
                    .username("string")
                    .build())
                .build())
            .idempotencyToken("string")
            .driverNodeTypeId("string")
            .applyPolicyDefaultValues(false)
            .awsAttributes(JobNewClusterAwsAttributesArgs.builder()
                .availability("string")
                .ebsVolumeCount(0)
                .ebsVolumeSize(0)
                .ebsVolumeType("string")
                .firstOnDemand(0)
                .instanceProfileArn("string")
                .spotBidPricePercent(0)
                .zoneId("string")
                .build())
            .autoterminationMinutes(0)
            .driverInstancePoolId("string")
            .initScripts(JobNewClusterInitScriptArgs.builder()
                .abfss(JobNewClusterInitScriptAbfssArgs.builder()
                    .destination("string")
                    .build())
                .file(JobNewClusterInitScriptFileArgs.builder()
                    .destination("string")
                    .build())
                .gcs(JobNewClusterInitScriptGcsArgs.builder()
                    .destination("string")
                    .build())
                .s3(JobNewClusterInitScriptS3Args.builder()
                    .destination("string")
                    .cannedAcl("string")
                    .enableEncryption(false)
                    .encryptionType("string")
                    .endpoint("string")
                    .kmsKey("string")
                    .region("string")
                    .build())
                .volumes(JobNewClusterInitScriptVolumesArgs.builder()
                    .destination("string")
                    .build())
                .workspace(JobNewClusterInitScriptWorkspaceArgs.builder()
                    .destination("string")
                    .build())
                .build())
            .instancePoolId("string")
            .nodeTypeId("string")
            .numWorkers(0)
            .policyId("string")
            .runtimeEngine("string")
            .singleUserName("string")
            .sparkConf(Map.of("string", "any"))
            .sparkEnvVars(Map.of("string", "any"))
            .autoscale(JobNewClusterAutoscaleArgs.builder()
                .maxWorkers(0)
                .minWorkers(0)
                .build())
            .sshPublicKeys("string")
            .workloadType(JobNewClusterWorkloadTypeArgs.builder()
                .clients(JobNewClusterWorkloadTypeClientsArgs.builder()
                    .jobs(false)
                    .notebooks(false)
                    .build())
                .build())
            .build())
        .notificationSettings(JobNotificationSettingsArgs.builder()
            .noAlertForCanceledRuns(false)
            .noAlertForSkippedRuns(false)
            .build())
        .parameters(JobParameterArgs.builder()
            .default_("string")
            .name("string")
            .build())
        .queue(JobQueueArgs.builder()
            .enabled(false)
            .build())
        .runAs(JobRunAsArgs.builder()
            .servicePrincipalName("string")
            .userName("string")
            .build())
        .schedule(JobScheduleArgs.builder()
            .quartzCronExpression("string")
            .timezoneId("string")
            .pauseStatus("string")
            .build())
        .tags(Map.of("string", "any"))
        .tasks(JobTaskArgs.builder()
            .computeKey("string")
            .conditionTask(JobTaskConditionTaskArgs.builder()
                .left("string")
                .op("string")
                .right("string")
                .build())
            .dbtTask(JobTaskDbtTaskArgs.builder()
                .commands("string")
                .catalog("string")
                .profilesDirectory("string")
                .projectDirectory("string")
                .schema("string")
                .source("string")
                .warehouseId("string")
                .build())
            .dependsOns(JobTaskDependsOnArgs.builder()
                .taskKey("string")
                .outcome("string")
                .build())
            .description("string")
            .emailNotifications(JobTaskEmailNotificationsArgs.builder()
                .noAlertForSkippedRuns(false)
                .onDurationWarningThresholdExceededs("string")
                .onFailures("string")
                .onStarts("string")
                .onSuccesses("string")
                .build())
            .existingClusterId("string")
            .forEachTask(JobTaskForEachTaskArgs.builder()
                .inputs("string")
                .task(JobTaskForEachTaskTaskArgs.builder()
                    .computeKey("string")
                    .conditionTask(JobTaskForEachTaskTaskConditionTaskArgs.builder()
                        .left("string")
                        .op("string")
                        .right("string")
                        .build())
                    .dbtTask(JobTaskForEachTaskTaskDbtTaskArgs.builder()
                        .commands("string")
                        .catalog("string")
                        .profilesDirectory("string")
                        .projectDirectory("string")
                        .schema("string")
                        .source("string")
                        .warehouseId("string")
                        .build())
                    .dependsOns(JobTaskForEachTaskTaskDependsOnArgs.builder()
                        .taskKey("string")
                        .outcome("string")
                        .build())
                    .description("string")
                    .emailNotifications(JobTaskForEachTaskTaskEmailNotificationsArgs.builder()
                        .noAlertForSkippedRuns(false)
                        .onDurationWarningThresholdExceededs("string")
                        .onFailures("string")
                        .onStarts("string")
                        .onSuccesses("string")
                        .build())
                    .existingClusterId("string")
                    .health(JobTaskForEachTaskTaskHealthArgs.builder()
                        .rules(JobTaskForEachTaskTaskHealthRuleArgs.builder()
                            .metric("string")
                            .op("string")
                            .value(0)
                            .build())
                        .build())
                    .jobClusterKey("string")
                    .libraries(JobTaskForEachTaskTaskLibraryArgs.builder()
                        .cran(JobTaskForEachTaskTaskLibraryCranArgs.builder()
                            .package_("string")
                            .repo("string")
                            .build())
                        .egg("string")
                        .jar("string")
                        .maven(JobTaskForEachTaskTaskLibraryMavenArgs.builder()
                            .coordinates("string")
                            .exclusions("string")
                            .repo("string")
                            .build())
                        .pypi(JobTaskForEachTaskTaskLibraryPypiArgs.builder()
                            .package_("string")
                            .repo("string")
                            .build())
                        .whl("string")
                        .build())
                    .maxRetries(0)
                    .minRetryIntervalMillis(0)
                    .newCluster(JobTaskForEachTaskTaskNewClusterArgs.builder()
                        .numWorkers(0)
                        .sparkVersion("string")
                        .clusterMountInfos(JobTaskForEachTaskTaskNewClusterClusterMountInfoArgs.builder()
                            .localMountDirPath("string")
                            .networkFilesystemInfo(JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs.builder()
                                .serverAddress("string")
                                .mountOptions("string")
                                .build())
                            .remoteMountDirPath("string")
                            .build())
                        .autoscale(JobTaskForEachTaskTaskNewClusterAutoscaleArgs.builder()
                            .maxWorkers(0)
                            .minWorkers(0)
                            .build())
                        .azureAttributes(JobTaskForEachTaskTaskNewClusterAzureAttributesArgs.builder()
                            .availability("string")
                            .firstOnDemand(0)
                            .spotBidMaxPrice(0)
                            .build())
                        .clusterId("string")
                        .clusterLogConf(JobTaskForEachTaskTaskNewClusterClusterLogConfArgs.builder()
                            .dbfs(JobTaskForEachTaskTaskNewClusterClusterLogConfDbfsArgs.builder()
                                .destination("string")
                                .build())
                            .s3(JobTaskForEachTaskTaskNewClusterClusterLogConfS3Args.builder()
                                .destination("string")
                                .cannedAcl("string")
                                .enableEncryption(false)
                                .encryptionType("string")
                                .endpoint("string")
                                .kmsKey("string")
                                .region("string")
                                .build())
                            .build())
                        .applyPolicyDefaultValues(false)
                        .clusterName("string")
                        .customTags(Map.of("string", "any"))
                        .dataSecurityMode("string")
                        .dockerImage(JobTaskForEachTaskTaskNewClusterDockerImageArgs.builder()
                            .url("string")
                            .basicAuth(JobTaskForEachTaskTaskNewClusterDockerImageBasicAuthArgs.builder()
                                .password("string")
                                .username("string")
                                .build())
                            .build())
                        .driverInstancePoolId("string")
                        .driverNodeTypeId("string")
                        .workloadType(JobTaskForEachTaskTaskNewClusterWorkloadTypeArgs.builder()
                            .clients(JobTaskForEachTaskTaskNewClusterWorkloadTypeClientsArgs.builder()
                                .jobs(false)
                                .notebooks(false)
                                .build())
                            .build())
                        .awsAttributes(JobTaskForEachTaskTaskNewClusterAwsAttributesArgs.builder()
                            .availability("string")
                            .ebsVolumeCount(0)
                            .ebsVolumeSize(0)
                            .ebsVolumeType("string")
                            .firstOnDemand(0)
                            .instanceProfileArn("string")
                            .spotBidPricePercent(0)
                            .zoneId("string")
                            .build())
                        .nodeTypeId("string")
                        .idempotencyToken("string")
                        .initScripts(JobTaskForEachTaskTaskNewClusterInitScriptArgs.builder()
                            .abfss(JobTaskForEachTaskTaskNewClusterInitScriptAbfssArgs.builder()
                                .destination("string")
                                .build())
                            .dbfs(JobTaskForEachTaskTaskNewClusterInitScriptDbfsArgs.builder()
                                .destination("string")
                                .build())
                            .file(JobTaskForEachTaskTaskNewClusterInitScriptFileArgs.builder()
                                .destination("string")
                                .build())
                            .gcs(JobTaskForEachTaskTaskNewClusterInitScriptGcsArgs.builder()
                                .destination("string")
                                .build())
                            .s3(JobTaskForEachTaskTaskNewClusterInitScriptS3Args.builder()
                                .destination("string")
                                .cannedAcl("string")
                                .enableEncryption(false)
                                .encryptionType("string")
                                .endpoint("string")
                                .kmsKey("string")
                                .region("string")
                                .build())
                            .volumes(JobTaskForEachTaskTaskNewClusterInitScriptVolumesArgs.builder()
                                .destination("string")
                                .build())
                            .workspace(JobTaskForEachTaskTaskNewClusterInitScriptWorkspaceArgs.builder()
                                .destination("string")
                                .build())
                            .build())
                        .instancePoolId("string")
                        .gcpAttributes(JobTaskForEachTaskTaskNewClusterGcpAttributesArgs.builder()
                            .availability("string")
                            .bootDiskSize(0)
                            .googleServiceAccount("string")
                            .localSsdCount(0)
                            .usePreemptibleExecutors(false)
                            .zoneId("string")
                            .build())
                        .autoterminationMinutes(0)
                        .policyId("string")
                        .runtimeEngine("string")
                        .singleUserName("string")
                        .sparkConf(Map.of("string", "any"))
                        .sparkEnvVars(Map.of("string", "any"))
                        .enableLocalDiskEncryption(false)
                        .sshPublicKeys("string")
                        .enableElasticDisk(false)
                        .build())
                    .notebookTask(JobTaskForEachTaskTaskNotebookTaskArgs.builder()
                        .notebookPath("string")
                        .baseParameters(Map.of("string", "any"))
                        .source("string")
                        .build())
                    .notificationSettings(JobTaskForEachTaskTaskNotificationSettingsArgs.builder()
                        .alertOnLastAttempt(false)
                        .noAlertForCanceledRuns(false)
                        .noAlertForSkippedRuns(false)
                        .build())
                    .pipelineTask(JobTaskForEachTaskTaskPipelineTaskArgs.builder()
                        .pipelineId("string")
                        .fullRefresh(false)
                        .build())
                    .pythonWheelTask(JobTaskForEachTaskTaskPythonWheelTaskArgs.builder()
                        .entryPoint("string")
                        .namedParameters(Map.of("string", "any"))
                        .packageName("string")
                        .parameters("string")
                        .build())
                    .retryOnTimeout(false)
                    .runIf("string")
                    .runJobTask(JobTaskForEachTaskTaskRunJobTaskArgs.builder()
                        .jobId(0)
                        .jobParameters(Map.of("string", "any"))
                        .build())
                    .sparkJarTask(JobTaskForEachTaskTaskSparkJarTaskArgs.builder()
                        .jarUri("string")
                        .mainClassName("string")
                        .parameters("string")
                        .build())
                    .sparkPythonTask(JobTaskForEachTaskTaskSparkPythonTaskArgs.builder()
                        .pythonFile("string")
                        .parameters("string")
                        .source("string")
                        .build())
                    .sparkSubmitTask(JobTaskForEachTaskTaskSparkSubmitTaskArgs.builder()
                        .parameters("string")
                        .build())
                    .sqlTask(JobTaskForEachTaskTaskSqlTaskArgs.builder()
                        .alert(JobTaskForEachTaskTaskSqlTaskAlertArgs.builder()
                            .alertId("string")
                            .subscriptions(JobTaskForEachTaskTaskSqlTaskAlertSubscriptionArgs.builder()
                                .destinationId("string")
                                .userName("string")
                                .build())
                            .pauseSubscriptions(false)
                            .build())
                        .dashboard(JobTaskForEachTaskTaskSqlTaskDashboardArgs.builder()
                            .dashboardId("string")
                            .customSubject("string")
                            .pauseSubscriptions(false)
                            .subscriptions(JobTaskForEachTaskTaskSqlTaskDashboardSubscriptionArgs.builder()
                                .destinationId("string")
                                .userName("string")
                                .build())
                            .build())
                        .file(JobTaskForEachTaskTaskSqlTaskFileArgs.builder()
                            .path("string")
                            .source("string")
                            .build())
                        .parameters(Map.of("string", "any"))
                        .query(JobTaskForEachTaskTaskSqlTaskQueryArgs.builder()
                            .queryId("string")
                            .build())
                        .warehouseId("string")
                        .build())
                    .taskKey("string")
                    .timeoutSeconds(0)
                    .webhookNotifications(JobTaskForEachTaskTaskWebhookNotificationsArgs.builder()
                        .onDurationWarningThresholdExceededs(JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs.builder()
                            .id("string")
                            .build())
                        .onFailures(JobTaskForEachTaskTaskWebhookNotificationsOnFailureArgs.builder()
                            .id("string")
                            .build())
                        .onStarts(JobTaskForEachTaskTaskWebhookNotificationsOnStartArgs.builder()
                            .id("string")
                            .build())
                        .onSuccesses(JobTaskForEachTaskTaskWebhookNotificationsOnSuccessArgs.builder()
                            .id("string")
                            .build())
                        .build())
                    .build())
                .concurrency(0)
                .build())
            .health(JobTaskHealthArgs.builder()
                .rules(JobTaskHealthRuleArgs.builder()
                    .metric("string")
                    .op("string")
                    .value(0)
                    .build())
                .build())
            .jobClusterKey("string")
            .libraries(JobTaskLibraryArgs.builder()
                .cran(JobTaskLibraryCranArgs.builder()
                    .package_("string")
                    .repo("string")
                    .build())
                .egg("string")
                .jar("string")
                .maven(JobTaskLibraryMavenArgs.builder()
                    .coordinates("string")
                    .exclusions("string")
                    .repo("string")
                    .build())
                .pypi(JobTaskLibraryPypiArgs.builder()
                    .package_("string")
                    .repo("string")
                    .build())
                .whl("string")
                .build())
            .maxRetries(0)
            .minRetryIntervalMillis(0)
            .newCluster(JobTaskNewClusterArgs.builder()
                .sparkVersion("string")
                .enableElasticDisk(false)
                .clusterId("string")
                .enableLocalDiskEncryption(false)
                .azureAttributes(JobTaskNewClusterAzureAttributesArgs.builder()
                    .availability("string")
                    .firstOnDemand(0)
                    .spotBidMaxPrice(0)
                    .build())
                .gcpAttributes(JobTaskNewClusterGcpAttributesArgs.builder()
                    .availability("string")
                    .bootDiskSize(0)
                    .googleServiceAccount("string")
                    .localSsdCount(0)
                    .usePreemptibleExecutors(false)
                    .zoneId("string")
                    .build())
                .clusterLogConf(JobTaskNewClusterClusterLogConfArgs.builder()
                    .dbfs(JobTaskNewClusterClusterLogConfDbfsArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobTaskNewClusterClusterLogConfS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .build())
                .clusterMountInfos(JobTaskNewClusterClusterMountInfoArgs.builder()
                    .localMountDirPath("string")
                    .networkFilesystemInfo(JobTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs.builder()
                        .serverAddress("string")
                        .mountOptions("string")
                        .build())
                    .remoteMountDirPath("string")
                    .build())
                .clusterName("string")
                .customTags(Map.of("string", "any"))
                .dataSecurityMode("string")
                .dockerImage(JobTaskNewClusterDockerImageArgs.builder()
                    .url("string")
                    .basicAuth(JobTaskNewClusterDockerImageBasicAuthArgs.builder()
                        .password("string")
                        .username("string")
                        .build())
                    .build())
                .idempotencyToken("string")
                .driverNodeTypeId("string")
                .applyPolicyDefaultValues(false)
                .awsAttributes(JobTaskNewClusterAwsAttributesArgs.builder()
                    .availability("string")
                    .ebsVolumeCount(0)
                    .ebsVolumeSize(0)
                    .ebsVolumeType("string")
                    .firstOnDemand(0)
                    .instanceProfileArn("string")
                    .spotBidPricePercent(0)
                    .zoneId("string")
                    .build())
                .autoterminationMinutes(0)
                .driverInstancePoolId("string")
                .initScripts(JobTaskNewClusterInitScriptArgs.builder()
                    .abfss(JobTaskNewClusterInitScriptAbfssArgs.builder()
                        .destination("string")
                        .build())
                    .file(JobTaskNewClusterInitScriptFileArgs.builder()
                        .destination("string")
                        .build())
                    .gcs(JobTaskNewClusterInitScriptGcsArgs.builder()
                        .destination("string")
                        .build())
                    .s3(JobTaskNewClusterInitScriptS3Args.builder()
                        .destination("string")
                        .cannedAcl("string")
                        .enableEncryption(false)
                        .encryptionType("string")
                        .endpoint("string")
                        .kmsKey("string")
                        .region("string")
                        .build())
                    .volumes(JobTaskNewClusterInitScriptVolumesArgs.builder()
                        .destination("string")
                        .build())
                    .workspace(JobTaskNewClusterInitScriptWorkspaceArgs.builder()
                        .destination("string")
                        .build())
                    .build())
                .instancePoolId("string")
                .nodeTypeId("string")
                .numWorkers(0)
                .policyId("string")
                .runtimeEngine("string")
                .singleUserName("string")
                .sparkConf(Map.of("string", "any"))
                .sparkEnvVars(Map.of("string", "any"))
                .autoscale(JobTaskNewClusterAutoscaleArgs.builder()
                    .maxWorkers(0)
                    .minWorkers(0)
                    .build())
                .sshPublicKeys("string")
                .workloadType(JobTaskNewClusterWorkloadTypeArgs.builder()
                    .clients(JobTaskNewClusterWorkloadTypeClientsArgs.builder()
                        .jobs(false)
                        .notebooks(false)
                        .build())
                    .build())
                .build())
            .notebookTask(JobTaskNotebookTaskArgs.builder()
                .notebookPath("string")
                .baseParameters(Map.of("string", "any"))
                .source("string")
                .build())
            .notificationSettings(JobTaskNotificationSettingsArgs.builder()
                .alertOnLastAttempt(false)
                .noAlertForCanceledRuns(false)
                .noAlertForSkippedRuns(false)
                .build())
            .pipelineTask(JobTaskPipelineTaskArgs.builder()
                .pipelineId("string")
                .fullRefresh(false)
                .build())
            .pythonWheelTask(JobTaskPythonWheelTaskArgs.builder()
                .entryPoint("string")
                .namedParameters(Map.of("string", "any"))
                .packageName("string")
                .parameters("string")
                .build())
            .retryOnTimeout(false)
            .runIf("string")
            .runJobTask(JobTaskRunJobTaskArgs.builder()
                .jobId(0)
                .jobParameters(Map.of("string", "any"))
                .build())
            .sparkJarTask(JobTaskSparkJarTaskArgs.builder()
                .jarUri("string")
                .mainClassName("string")
                .parameters("string")
                .build())
            .sparkPythonTask(JobTaskSparkPythonTaskArgs.builder()
                .pythonFile("string")
                .parameters("string")
                .source("string")
                .build())
            .sparkSubmitTask(JobTaskSparkSubmitTaskArgs.builder()
                .parameters("string")
                .build())
            .sqlTask(JobTaskSqlTaskArgs.builder()
                .alert(JobTaskSqlTaskAlertArgs.builder()
                    .alertId("string")
                    .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                        .destinationId("string")
                        .userName("string")
                        .build())
                    .pauseSubscriptions(false)
                    .build())
                .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                    .dashboardId("string")
                    .customSubject("string")
                    .pauseSubscriptions(false)
                    .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                        .destinationId("string")
                        .userName("string")
                        .build())
                    .build())
                .file(JobTaskSqlTaskFileArgs.builder()
                    .path("string")
                    .source("string")
                    .build())
                .parameters(Map.of("string", "any"))
                .query(JobTaskSqlTaskQueryArgs.builder()
                    .queryId("string")
                    .build())
                .warehouseId("string")
                .build())
            .taskKey("string")
            .timeoutSeconds(0)
            .webhookNotifications(JobTaskWebhookNotificationsArgs.builder()
                .onDurationWarningThresholdExceededs(JobTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs.builder()
                    .id("string")
                    .build())
                .onFailures(JobTaskWebhookNotificationsOnFailureArgs.builder()
                    .id("string")
                    .build())
                .onStarts(JobTaskWebhookNotificationsOnStartArgs.builder()
                    .id("string")
                    .build())
                .onSuccesses(JobTaskWebhookNotificationsOnSuccessArgs.builder()
                    .id("string")
                    .build())
                .build())
            .build())
        .timeoutSeconds(0)
        .trigger(JobTriggerArgs.builder()
            .fileArrival(JobTriggerFileArrivalArgs.builder()
                .url("string")
                .minTimeBetweenTriggersSeconds(0)
                .waitAfterLastChangeSeconds(0)
                .build())
            .pauseStatus("string")
            .tableUpdate(JobTriggerTableUpdateArgs.builder()
                .tableNames("string")
                .condition("string")
                .minTimeBetweenTriggersSeconds(0)
                .waitAfterLastChangeSeconds(0)
                .build())
            .build())
        .webhookNotifications(JobWebhookNotificationsArgs.builder()
            .onDurationWarningThresholdExceededs(JobWebhookNotificationsOnDurationWarningThresholdExceededArgs.builder()
                .id("string")
                .build())
            .onFailures(JobWebhookNotificationsOnFailureArgs.builder()
                .id("string")
                .build())
            .onStarts(JobWebhookNotificationsOnStartArgs.builder()
                .id("string")
                .build())
            .onSuccesses(JobWebhookNotificationsOnSuccessArgs.builder()
                .id("string")
                .build())
            .build())
        .build());
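
    The exhaustive builder listing above enumerates every supported argument with placeholder values. For orientation only, here is a minimal hand-written sketch of a single-task notebook job in Java; the resource name, notebook path, and cluster id are hypothetical placeholders, not values taken from this page.

    import com.pulumi.Pulumi;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskNotebookTaskArgs;

    public class App {
        public static void main(String[] args) {
            Pulumi.run(ctx -> {
                // Minimal single-task job; identifiers below are illustrative placeholders.
                var nightly = new Job("nightlyEtl", JobArgs.builder()
                    .name("Nightly ETL")
                    .maxConcurrentRuns(1)
                    .tasks(JobTaskArgs.builder()
                        .taskKey("main")
                        .existingClusterId("1234-567890-abcde123")   // hypothetical cluster id
                        .notebookTask(JobTaskNotebookTaskArgs.builder()
                            .notebookPath("/Shared/etl/nightly")     // hypothetical notebook path
                            .build())
                        .build())
                    .build());
            });
        }
    }

    In practice you would supply only the arguments your job needs (one task definition, a schedule or trigger, notifications, and either an existing cluster id or a new_cluster/job_cluster block) rather than every field shown in the generated listing.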
    
    job_resource = databricks.Job("jobResource",
        computes=[databricks.JobComputeArgs(
            compute_key="string",
            spec=databricks.JobComputeSpecArgs(
                kind="string",
            ),
        )],
        continuous=databricks.JobContinuousArgs(
            pause_status="string",
        ),
        control_run_state=False,
        deployment=databricks.JobDeploymentArgs(
            kind="string",
            metadata_file_path="string",
        ),
        description="string",
        edit_mode="string",
        email_notifications=databricks.JobEmailNotificationsArgs(
            no_alert_for_skipped_runs=False,
            on_duration_warning_threshold_exceededs=["string"],
            on_failures=["string"],
            on_starts=["string"],
            on_successes=["string"],
        ),
        existing_cluster_id="string",
        format="string",
        git_source=databricks.JobGitSourceArgs(
            url="string",
            branch="string",
            commit="string",
            job_source=databricks.JobGitSourceJobSourceArgs(
                import_from_git_branch="string",
                job_config_path="string",
                dirty_state="string",
            ),
            provider="string",
            tag="string",
        ),
        health=databricks.JobHealthArgs(
            rules=[databricks.JobHealthRuleArgs(
                metric="string",
                op="string",
                value=0,
            )],
        ),
        job_clusters=[databricks.JobJobClusterArgs(
            job_cluster_key="string",
            new_cluster=databricks.JobJobClusterNewClusterArgs(
                spark_version="string",
                enable_elastic_disk=False,
                cluster_id="string",
                enable_local_disk_encryption=False,
                azure_attributes=databricks.JobJobClusterNewClusterAzureAttributesArgs(
                    availability="string",
                    first_on_demand=0,
                    spot_bid_max_price=0,
                ),
                gcp_attributes=databricks.JobJobClusterNewClusterGcpAttributesArgs(
                    availability="string",
                    boot_disk_size=0,
                    google_service_account="string",
                    local_ssd_count=0,
                    use_preemptible_executors=False,
                    zone_id="string",
                ),
                cluster_log_conf=databricks.JobJobClusterNewClusterClusterLogConfArgs(
                    dbfs=databricks.JobJobClusterNewClusterClusterLogConfDbfsArgs(
                        destination="string",
                    ),
                    s3=databricks.JobJobClusterNewClusterClusterLogConfS3Args(
                        destination="string",
                        canned_acl="string",
                        enable_encryption=False,
                        encryption_type="string",
                        endpoint="string",
                        kms_key="string",
                        region="string",
                    ),
                ),
                cluster_mount_infos=[databricks.JobJobClusterNewClusterClusterMountInfoArgs(
                    local_mount_dir_path="string",
                    network_filesystem_info=databricks.JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfoArgs(
                        server_address="string",
                        mount_options="string",
                    ),
                    remote_mount_dir_path="string",
                )],
                cluster_name="string",
                custom_tags={
                    "string": "any",
                },
                data_security_mode="string",
                docker_image=databricks.JobJobClusterNewClusterDockerImageArgs(
                    url="string",
                    basic_auth=databricks.JobJobClusterNewClusterDockerImageBasicAuthArgs(
                        password="string",
                        username="string",
                    ),
                ),
                idempotency_token="string",
                driver_node_type_id="string",
                apply_policy_default_values=False,
                aws_attributes=databricks.JobJobClusterNewClusterAwsAttributesArgs(
                    availability="string",
                    ebs_volume_count=0,
                    ebs_volume_size=0,
                    ebs_volume_type="string",
                    first_on_demand=0,
                    instance_profile_arn="string",
                    spot_bid_price_percent=0,
                    zone_id="string",
                ),
                autotermination_minutes=0,
                driver_instance_pool_id="string",
                init_scripts=[databricks.JobJobClusterNewClusterInitScriptArgs(
                    abfss=databricks.JobJobClusterNewClusterInitScriptAbfssArgs(
                        destination="string",
                    ),
                    file=databricks.JobJobClusterNewClusterInitScriptFileArgs(
                        destination="string",
                    ),
                    gcs=databricks.JobJobClusterNewClusterInitScriptGcsArgs(
                        destination="string",
                    ),
                    s3=databricks.JobJobClusterNewClusterInitScriptS3Args(
                        destination="string",
                        canned_acl="string",
                        enable_encryption=False,
                        encryption_type="string",
                        endpoint="string",
                        kms_key="string",
                        region="string",
                    ),
                    volumes=databricks.JobJobClusterNewClusterInitScriptVolumesArgs(
                        destination="string",
                    ),
                    workspace=databricks.JobJobClusterNewClusterInitScriptWorkspaceArgs(
                        destination="string",
                    ),
                )],
                instance_pool_id="string",
                node_type_id="string",
                num_workers=0,
                policy_id="string",
                runtime_engine="string",
                single_user_name="string",
                spark_conf={
                    "string": "any",
                },
                spark_env_vars={
                    "string": "any",
                },
                autoscale=databricks.JobJobClusterNewClusterAutoscaleArgs(
                    max_workers=0,
                    min_workers=0,
                ),
                ssh_public_keys=["string"],
                workload_type=databricks.JobJobClusterNewClusterWorkloadTypeArgs(
                    clients=databricks.JobJobClusterNewClusterWorkloadTypeClientsArgs(
                        jobs=False,
                        notebooks=False,
                    ),
                ),
            ),
        )],
        libraries=[databricks.JobLibraryArgs(
            cran=databricks.JobLibraryCranArgs(
                package="string",
                repo="string",
            ),
            egg="string",
            jar="string",
            maven=databricks.JobLibraryMavenArgs(
                coordinates="string",
                exclusions=["string"],
                repo="string",
            ),
            pypi=databricks.JobLibraryPypiArgs(
                package="string",
                repo="string",
            ),
            whl="string",
        )],
        max_concurrent_runs=0,
        name="string",
        new_cluster=databricks.JobNewClusterArgs(
            spark_version="string",
            enable_elastic_disk=False,
            cluster_id="string",
            enable_local_disk_encryption=False,
            azure_attributes=databricks.JobNewClusterAzureAttributesArgs(
                availability="string",
                first_on_demand=0,
                spot_bid_max_price=0,
            ),
            gcp_attributes=databricks.JobNewClusterGcpAttributesArgs(
                availability="string",
                boot_disk_size=0,
                google_service_account="string",
                local_ssd_count=0,
                use_preemptible_executors=False,
                zone_id="string",
            ),
            cluster_log_conf=databricks.JobNewClusterClusterLogConfArgs(
                dbfs=databricks.JobNewClusterClusterLogConfDbfsArgs(
                    destination="string",
                ),
                s3=databricks.JobNewClusterClusterLogConfS3Args(
                    destination="string",
                    canned_acl="string",
                    enable_encryption=False,
                    encryption_type="string",
                    endpoint="string",
                    kms_key="string",
                    region="string",
                ),
            ),
            cluster_mount_infos=[databricks.JobNewClusterClusterMountInfoArgs(
                local_mount_dir_path="string",
                network_filesystem_info=databricks.JobNewClusterClusterMountInfoNetworkFilesystemInfoArgs(
                    server_address="string",
                    mount_options="string",
                ),
                remote_mount_dir_path="string",
            )],
            cluster_name="string",
            custom_tags={
                "string": "any",
            },
            data_security_mode="string",
            docker_image=databricks.JobNewClusterDockerImageArgs(
                url="string",
                basic_auth=databricks.JobNewClusterDockerImageBasicAuthArgs(
                    password="string",
                    username="string",
                ),
            ),
            idempotency_token="string",
            driver_node_type_id="string",
            apply_policy_default_values=False,
            aws_attributes=databricks.JobNewClusterAwsAttributesArgs(
                availability="string",
                ebs_volume_count=0,
                ebs_volume_size=0,
                ebs_volume_type="string",
                first_on_demand=0,
                instance_profile_arn="string",
                spot_bid_price_percent=0,
                zone_id="string",
            ),
            autotermination_minutes=0,
            driver_instance_pool_id="string",
            init_scripts=[databricks.JobNewClusterInitScriptArgs(
                abfss=databricks.JobNewClusterInitScriptAbfssArgs(
                    destination="string",
                ),
                file=databricks.JobNewClusterInitScriptFileArgs(
                    destination="string",
                ),
                gcs=databricks.JobNewClusterInitScriptGcsArgs(
                    destination="string",
                ),
                s3=databricks.JobNewClusterInitScriptS3Args(
                    destination="string",
                    canned_acl="string",
                    enable_encryption=False,
                    encryption_type="string",
                    endpoint="string",
                    kms_key="string",
                    region="string",
                ),
                volumes=databricks.JobNewClusterInitScriptVolumesArgs(
                    destination="string",
                ),
                workspace=databricks.JobNewClusterInitScriptWorkspaceArgs(
                    destination="string",
                ),
            )],
            instance_pool_id="string",
            node_type_id="string",
            num_workers=0,
            policy_id="string",
            runtime_engine="string",
            single_user_name="string",
            spark_conf={
                "string": "any",
            },
            spark_env_vars={
                "string": "any",
            },
            autoscale=databricks.JobNewClusterAutoscaleArgs(
                max_workers=0,
                min_workers=0,
            ),
            ssh_public_keys=["string"],
            workload_type=databricks.JobNewClusterWorkloadTypeArgs(
                clients=databricks.JobNewClusterWorkloadTypeClientsArgs(
                    jobs=False,
                    notebooks=False,
                ),
            ),
        ),
        notification_settings=databricks.JobNotificationSettingsArgs(
            no_alert_for_canceled_runs=False,
            no_alert_for_skipped_runs=False,
        ),
        parameters=[databricks.JobParameterArgs(
            default="string",
            name="string",
        )],
        queue=databricks.JobQueueArgs(
            enabled=False,
        ),
        run_as=databricks.JobRunAsArgs(
            service_principal_name="string",
            user_name="string",
        ),
        schedule=databricks.JobScheduleArgs(
            quartz_cron_expression="string",
            timezone_id="string",
            pause_status="string",
        ),
        tags={
            "string": "any",
        },
        tasks=[databricks.JobTaskArgs(
            compute_key="string",
            condition_task=databricks.JobTaskConditionTaskArgs(
                left="string",
                op="string",
                right="string",
            ),
            dbt_task=databricks.JobTaskDbtTaskArgs(
                commands=["string"],
                catalog="string",
                profiles_directory="string",
                project_directory="string",
                schema="string",
                source="string",
                warehouse_id="string",
            ),
            depends_ons=[databricks.JobTaskDependsOnArgs(
                task_key="string",
                outcome="string",
            )],
            description="string",
            email_notifications=databricks.JobTaskEmailNotificationsArgs(
                no_alert_for_skipped_runs=False,
                on_duration_warning_threshold_exceededs=["string"],
                on_failures=["string"],
                on_starts=["string"],
                on_successes=["string"],
            ),
            existing_cluster_id="string",
            for_each_task=databricks.JobTaskForEachTaskArgs(
                inputs="string",
                task=databricks.JobTaskForEachTaskTaskArgs(
                    compute_key="string",
                    condition_task=databricks.JobTaskForEachTaskTaskConditionTaskArgs(
                        left="string",
                        op="string",
                        right="string",
                    ),
                    dbt_task=databricks.JobTaskForEachTaskTaskDbtTaskArgs(
                        commands=["string"],
                        catalog="string",
                        profiles_directory="string",
                        project_directory="string",
                        schema="string",
                        source="string",
                        warehouse_id="string",
                    ),
                    depends_ons=[databricks.JobTaskForEachTaskTaskDependsOnArgs(
                        task_key="string",
                        outcome="string",
                    )],
                    description="string",
                    email_notifications=databricks.JobTaskForEachTaskTaskEmailNotificationsArgs(
                        no_alert_for_skipped_runs=False,
                        on_duration_warning_threshold_exceededs=["string"],
                        on_failures=["string"],
                        on_starts=["string"],
                        on_successes=["string"],
                    ),
                    existing_cluster_id="string",
                    health=databricks.JobTaskForEachTaskTaskHealthArgs(
                        rules=[databricks.JobTaskForEachTaskTaskHealthRuleArgs(
                            metric="string",
                            op="string",
                            value=0,
                        )],
                    ),
                    job_cluster_key="string",
                    libraries=[databricks.JobTaskForEachTaskTaskLibraryArgs(
                        cran=databricks.JobTaskForEachTaskTaskLibraryCranArgs(
                            package="string",
                            repo="string",
                        ),
                        egg="string",
                        jar="string",
                        maven=databricks.JobTaskForEachTaskTaskLibraryMavenArgs(
                            coordinates="string",
                            exclusions=["string"],
                            repo="string",
                        ),
                        pypi=databricks.JobTaskForEachTaskTaskLibraryPypiArgs(
                            package="string",
                            repo="string",
                        ),
                        whl="string",
                    )],
                    max_retries=0,
                    min_retry_interval_millis=0,
                    new_cluster=databricks.JobTaskForEachTaskTaskNewClusterArgs(
                        num_workers=0,
                        spark_version="string",
                        cluster_mount_infos=[databricks.JobTaskForEachTaskTaskNewClusterClusterMountInfoArgs(
                            local_mount_dir_path="string",
                            network_filesystem_info=databricks.JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs(
                                server_address="string",
                                mount_options="string",
                            ),
                            remote_mount_dir_path="string",
                        )],
                        autoscale=databricks.JobTaskForEachTaskTaskNewClusterAutoscaleArgs(
                            max_workers=0,
                            min_workers=0,
                        ),
                        azure_attributes=databricks.JobTaskForEachTaskTaskNewClusterAzureAttributesArgs(
                            availability="string",
                            first_on_demand=0,
                            spot_bid_max_price=0,
                        ),
                        cluster_id="string",
                        cluster_log_conf=databricks.JobTaskForEachTaskTaskNewClusterClusterLogConfArgs(
                            dbfs=databricks.JobTaskForEachTaskTaskNewClusterClusterLogConfDbfsArgs(
                                destination="string",
                            ),
                            s3=databricks.JobTaskForEachTaskTaskNewClusterClusterLogConfS3Args(
                                destination="string",
                                canned_acl="string",
                                enable_encryption=False,
                                encryption_type="string",
                                endpoint="string",
                                kms_key="string",
                                region="string",
                            ),
                        ),
                        apply_policy_default_values=False,
                        cluster_name="string",
                        custom_tags={
                            "string": "any",
                        },
                        data_security_mode="string",
                        docker_image=databricks.JobTaskForEachTaskTaskNewClusterDockerImageArgs(
                            url="string",
                            basic_auth=databricks.JobTaskForEachTaskTaskNewClusterDockerImageBasicAuthArgs(
                                password="string",
                                username="string",
                            ),
                        ),
                        driver_instance_pool_id="string",
                        driver_node_type_id="string",
                        workload_type=databricks.JobTaskForEachTaskTaskNewClusterWorkloadTypeArgs(
                            clients=databricks.JobTaskForEachTaskTaskNewClusterWorkloadTypeClientsArgs(
                                jobs=False,
                                notebooks=False,
                            ),
                        ),
                        aws_attributes=databricks.JobTaskForEachTaskTaskNewClusterAwsAttributesArgs(
                            availability="string",
                            ebs_volume_count=0,
                            ebs_volume_size=0,
                            ebs_volume_type="string",
                            first_on_demand=0,
                            instance_profile_arn="string",
                            spot_bid_price_percent=0,
                            zone_id="string",
                        ),
                        node_type_id="string",
                        idempotency_token="string",
                        init_scripts=[databricks.JobTaskForEachTaskTaskNewClusterInitScriptArgs(
                            abfss=databricks.JobTaskForEachTaskTaskNewClusterInitScriptAbfssArgs(
                                destination="string",
                            ),
                            dbfs=databricks.JobTaskForEachTaskTaskNewClusterInitScriptDbfsArgs(
                                destination="string",
                            ),
                            file=databricks.JobTaskForEachTaskTaskNewClusterInitScriptFileArgs(
                                destination="string",
                            ),
                            gcs=databricks.JobTaskForEachTaskTaskNewClusterInitScriptGcsArgs(
                                destination="string",
                            ),
                            s3=databricks.JobTaskForEachTaskTaskNewClusterInitScriptS3Args(
                                destination="string",
                                canned_acl="string",
                                enable_encryption=False,
                                encryption_type="string",
                                endpoint="string",
                                kms_key="string",
                                region="string",
                            ),
                            volumes=databricks.JobTaskForEachTaskTaskNewClusterInitScriptVolumesArgs(
                                destination="string",
                            ),
                            workspace=databricks.JobTaskForEachTaskTaskNewClusterInitScriptWorkspaceArgs(
                                destination="string",
                            ),
                        )],
                        instance_pool_id="string",
                        gcp_attributes=databricks.JobTaskForEachTaskTaskNewClusterGcpAttributesArgs(
                            availability="string",
                            boot_disk_size=0,
                            google_service_account="string",
                            local_ssd_count=0,
                            use_preemptible_executors=False,
                            zone_id="string",
                        ),
                        autotermination_minutes=0,
                        policy_id="string",
                        runtime_engine="string",
                        single_user_name="string",
                        spark_conf={
                            "string": "any",
                        },
                        spark_env_vars={
                            "string": "any",
                        },
                        enable_local_disk_encryption=False,
                        ssh_public_keys=["string"],
                        enable_elastic_disk=False,
                    ),
                    notebook_task=databricks.JobTaskForEachTaskTaskNotebookTaskArgs(
                        notebook_path="string",
                        base_parameters={
                            "string": "any",
                        },
                        source="string",
                    ),
                    notification_settings=databricks.JobTaskForEachTaskTaskNotificationSettingsArgs(
                        alert_on_last_attempt=False,
                        no_alert_for_canceled_runs=False,
                        no_alert_for_skipped_runs=False,
                    ),
                    pipeline_task=databricks.JobTaskForEachTaskTaskPipelineTaskArgs(
                        pipeline_id="string",
                        full_refresh=False,
                    ),
                    python_wheel_task=databricks.JobTaskForEachTaskTaskPythonWheelTaskArgs(
                        entry_point="string",
                        named_parameters={
                            "string": "any",
                        },
                        package_name="string",
                        parameters=["string"],
                    ),
                    retry_on_timeout=False,
                    run_if="string",
                    run_job_task=databricks.JobTaskForEachTaskTaskRunJobTaskArgs(
                        job_id=0,
                        job_parameters={
                            "string": "any",
                        },
                    ),
                    spark_jar_task=databricks.JobTaskForEachTaskTaskSparkJarTaskArgs(
                        jar_uri="string",
                        main_class_name="string",
                        parameters=["string"],
                    ),
                    spark_python_task=databricks.JobTaskForEachTaskTaskSparkPythonTaskArgs(
                        python_file="string",
                        parameters=["string"],
                        source="string",
                    ),
                    spark_submit_task=databricks.JobTaskForEachTaskTaskSparkSubmitTaskArgs(
                        parameters=["string"],
                    ),
                    sql_task=databricks.JobTaskForEachTaskTaskSqlTaskArgs(
                        alert=databricks.JobTaskForEachTaskTaskSqlTaskAlertArgs(
                            alert_id="string",
                            subscriptions=[databricks.JobTaskForEachTaskTaskSqlTaskAlertSubscriptionArgs(
                                destination_id="string",
                                user_name="string",
                            )],
                            pause_subscriptions=False,
                        ),
                        dashboard=databricks.JobTaskForEachTaskTaskSqlTaskDashboardArgs(
                            dashboard_id="string",
                            custom_subject="string",
                            pause_subscriptions=False,
                            subscriptions=[databricks.JobTaskForEachTaskTaskSqlTaskDashboardSubscriptionArgs(
                                destination_id="string",
                                user_name="string",
                            )],
                        ),
                        file=databricks.JobTaskForEachTaskTaskSqlTaskFileArgs(
                            path="string",
                            source="string",
                        ),
                        parameters={
                            "string": "any",
                        },
                        query=databricks.JobTaskForEachTaskTaskSqlTaskQueryArgs(
                            query_id="string",
                        ),
                        warehouse_id="string",
                    ),
                    task_key="string",
                    timeout_seconds=0,
                    webhook_notifications=databricks.JobTaskForEachTaskTaskWebhookNotificationsArgs(
                        on_duration_warning_threshold_exceededs=[databricks.JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs(
                            id="string",
                        )],
                        on_failures=[databricks.JobTaskForEachTaskTaskWebhookNotificationsOnFailureArgs(
                            id="string",
                        )],
                        on_starts=[databricks.JobTaskForEachTaskTaskWebhookNotificationsOnStartArgs(
                            id="string",
                        )],
                        on_successes=[databricks.JobTaskForEachTaskTaskWebhookNotificationsOnSuccessArgs(
                            id="string",
                        )],
                    ),
                ),
                concurrency=0,
            ),
            health=databricks.JobTaskHealthArgs(
                rules=[databricks.JobTaskHealthRuleArgs(
                    metric="string",
                    op="string",
                    value=0,
                )],
            ),
            job_cluster_key="string",
            libraries=[databricks.JobTaskLibraryArgs(
                cran=databricks.JobTaskLibraryCranArgs(
                    package="string",
                    repo="string",
                ),
                egg="string",
                jar="string",
                maven=databricks.JobTaskLibraryMavenArgs(
                    coordinates="string",
                    exclusions=["string"],
                    repo="string",
                ),
                pypi=databricks.JobTaskLibraryPypiArgs(
                    package="string",
                    repo="string",
                ),
                whl="string",
            )],
            max_retries=0,
            min_retry_interval_millis=0,
            new_cluster=databricks.JobTaskNewClusterArgs(
                spark_version="string",
                enable_elastic_disk=False,
                cluster_id="string",
                enable_local_disk_encryption=False,
                azure_attributes=databricks.JobTaskNewClusterAzureAttributesArgs(
                    availability="string",
                    first_on_demand=0,
                    spot_bid_max_price=0,
                ),
                gcp_attributes=databricks.JobTaskNewClusterGcpAttributesArgs(
                    availability="string",
                    boot_disk_size=0,
                    google_service_account="string",
                    local_ssd_count=0,
                    use_preemptible_executors=False,
                    zone_id="string",
                ),
                cluster_log_conf=databricks.JobTaskNewClusterClusterLogConfArgs(
                    dbfs=databricks.JobTaskNewClusterClusterLogConfDbfsArgs(
                        destination="string",
                    ),
                    s3=databricks.JobTaskNewClusterClusterLogConfS3Args(
                        destination="string",
                        canned_acl="string",
                        enable_encryption=False,
                        encryption_type="string",
                        endpoint="string",
                        kms_key="string",
                        region="string",
                    ),
                ),
                cluster_mount_infos=[databricks.JobTaskNewClusterClusterMountInfoArgs(
                    local_mount_dir_path="string",
                    network_filesystem_info=databricks.JobTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs(
                        server_address="string",
                        mount_options="string",
                    ),
                    remote_mount_dir_path="string",
                )],
                cluster_name="string",
                custom_tags={
                    "string": "any",
                },
                data_security_mode="string",
                docker_image=databricks.JobTaskNewClusterDockerImageArgs(
                    url="string",
                    basic_auth=databricks.JobTaskNewClusterDockerImageBasicAuthArgs(
                        password="string",
                        username="string",
                    ),
                ),
                idempotency_token="string",
                driver_node_type_id="string",
                apply_policy_default_values=False,
                aws_attributes=databricks.JobTaskNewClusterAwsAttributesArgs(
                    availability="string",
                    ebs_volume_count=0,
                    ebs_volume_size=0,
                    ebs_volume_type="string",
                    first_on_demand=0,
                    instance_profile_arn="string",
                    spot_bid_price_percent=0,
                    zone_id="string",
                ),
                autotermination_minutes=0,
                driver_instance_pool_id="string",
                init_scripts=[databricks.JobTaskNewClusterInitScriptArgs(
                    abfss=databricks.JobTaskNewClusterInitScriptAbfssArgs(
                        destination="string",
                    ),
                    file=databricks.JobTaskNewClusterInitScriptFileArgs(
                        destination="string",
                    ),
                    gcs=databricks.JobTaskNewClusterInitScriptGcsArgs(
                        destination="string",
                    ),
                    s3=databricks.JobTaskNewClusterInitScriptS3Args(
                        destination="string",
                        canned_acl="string",
                        enable_encryption=False,
                        encryption_type="string",
                        endpoint="string",
                        kms_key="string",
                        region="string",
                    ),
                    volumes=databricks.JobTaskNewClusterInitScriptVolumesArgs(
                        destination="string",
                    ),
                    workspace=databricks.JobTaskNewClusterInitScriptWorkspaceArgs(
                        destination="string",
                    ),
                )],
                instance_pool_id="string",
                node_type_id="string",
                num_workers=0,
                policy_id="string",
                runtime_engine="string",
                single_user_name="string",
                spark_conf={
                    "string": "any",
                },
                spark_env_vars={
                    "string": "any",
                },
                autoscale=databricks.JobTaskNewClusterAutoscaleArgs(
                    max_workers=0,
                    min_workers=0,
                ),
                ssh_public_keys=["string"],
                workload_type=databricks.JobTaskNewClusterWorkloadTypeArgs(
                    clients=databricks.JobTaskNewClusterWorkloadTypeClientsArgs(
                        jobs=False,
                        notebooks=False,
                    ),
                ),
            ),
            notebook_task=databricks.JobTaskNotebookTaskArgs(
                notebook_path="string",
                base_parameters={
                    "string": "any",
                },
                source="string",
            ),
            notification_settings=databricks.JobTaskNotificationSettingsArgs(
                alert_on_last_attempt=False,
                no_alert_for_canceled_runs=False,
                no_alert_for_skipped_runs=False,
            ),
            pipeline_task=databricks.JobTaskPipelineTaskArgs(
                pipeline_id="string",
                full_refresh=False,
            ),
            python_wheel_task=databricks.JobTaskPythonWheelTaskArgs(
                entry_point="string",
                named_parameters={
                    "string": "any",
                },
                package_name="string",
                parameters=["string"],
            ),
            retry_on_timeout=False,
            run_if="string",
            run_job_task=databricks.JobTaskRunJobTaskArgs(
                job_id=0,
                job_parameters={
                    "string": "any",
                },
            ),
            spark_jar_task=databricks.JobTaskSparkJarTaskArgs(
                jar_uri="string",
                main_class_name="string",
                parameters=["string"],
            ),
            spark_python_task=databricks.JobTaskSparkPythonTaskArgs(
                python_file="string",
                parameters=["string"],
                source="string",
            ),
            spark_submit_task=databricks.JobTaskSparkSubmitTaskArgs(
                parameters=["string"],
            ),
            sql_task=databricks.JobTaskSqlTaskArgs(
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id="string",
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        destination_id="string",
                        user_name="string",
                    )],
                    pause_subscriptions=False,
                ),
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id="string",
                    custom_subject="string",
                    pause_subscriptions=False,
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        destination_id="string",
                        user_name="string",
                    )],
                ),
                file=databricks.JobTaskSqlTaskFileArgs(
                    path="string",
                    source="string",
                ),
                parameters={
                    "string": "any",
                },
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id="string",
                ),
                warehouse_id="string",
            ),
            task_key="string",
            timeout_seconds=0,
            webhook_notifications=databricks.JobTaskWebhookNotificationsArgs(
                on_duration_warning_threshold_exceededs=[databricks.JobTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs(
                    id="string",
                )],
                on_failures=[databricks.JobTaskWebhookNotificationsOnFailureArgs(
                    id="string",
                )],
                on_starts=[databricks.JobTaskWebhookNotificationsOnStartArgs(
                    id="string",
                )],
                on_successes=[databricks.JobTaskWebhookNotificationsOnSuccessArgs(
                    id="string",
                )],
            ),
        )],
        timeout_seconds=0,
        trigger=databricks.JobTriggerArgs(
            file_arrival=databricks.JobTriggerFileArrivalArgs(
                url="string",
                min_time_between_triggers_seconds=0,
                wait_after_last_change_seconds=0,
            ),
            pause_status="string",
            table_update=databricks.JobTriggerTableUpdateArgs(
                table_names=["string"],
                condition="string",
                min_time_between_triggers_seconds=0,
                wait_after_last_change_seconds=0,
            ),
        ),
        webhook_notifications=databricks.JobWebhookNotificationsArgs(
            on_duration_warning_threshold_exceededs=[databricks.JobWebhookNotificationsOnDurationWarningThresholdExceededArgs(
                id="string",
            )],
            on_failures=[databricks.JobWebhookNotificationsOnFailureArgs(
                id="string",
            )],
            on_starts=[databricks.JobWebhookNotificationsOnStartArgs(
                id="string",
            )],
            on_successes=[databricks.JobWebhookNotificationsOnSuccessArgs(
                id="string",
            )],
        ))
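
    The skeleton above enumerates every input the constructor accepts, but most jobs only need a handful of them. The following is a minimal, illustrative sketch of a single-task notebook job in Python; the notebook path, Spark runtime version, node type, schedule, and e-mail address are placeholder assumptions rather than values taken from this page.

    import pulumi
    import pulumi_databricks as databricks

    # A single-task job that runs a workspace notebook on a small ephemeral cluster.
    nightly_etl = databricks.Job(
        "nightly-etl",
        name="Nightly ETL",
        tasks=[databricks.JobTaskArgs(
            task_key="ingest",
            notebook_task=databricks.JobTaskNotebookTaskArgs(
                notebook_path="/Shared/etl/ingest",   # assumed example path
            ),
            new_cluster=databricks.JobTaskNewClusterArgs(
                spark_version="13.3.x-scala2.12",     # assumed example runtime
                node_type_id="i3.xlarge",             # assumed example (AWS) node type
                num_workers=2,
            ),
        )],
        schedule=databricks.JobScheduleArgs(
            quartz_cron_expression="0 0 2 * * ?",     # run daily at 02:00
            timezone_id="UTC",
        ),
        email_notifications=databricks.JobEmailNotificationsArgs(
            on_failures=["ops@example.com"],          # placeholder address
        ),
    )

    pulumi.export("job_id", nightly_etl.id)

    Any input not set in a sketch like this is simply omitted from the job definition and left to the service's defaults; the full skeletons on this page show every property that can be supplied when more control is needed.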
    
    const jobResource = new databricks.Job("jobResource", {
        computes: [{
            computeKey: "string",
            spec: {
                kind: "string",
            },
        }],
        continuous: {
            pauseStatus: "string",
        },
        controlRunState: false,
        deployment: {
            kind: "string",
            metadataFilePath: "string",
        },
        description: "string",
        editMode: "string",
        emailNotifications: {
            noAlertForSkippedRuns: false,
            onDurationWarningThresholdExceededs: ["string"],
            onFailures: ["string"],
            onStarts: ["string"],
            onSuccesses: ["string"],
        },
        existingClusterId: "string",
        format: "string",
        gitSource: {
            url: "string",
            branch: "string",
            commit: "string",
            jobSource: {
                importFromGitBranch: "string",
                jobConfigPath: "string",
                dirtyState: "string",
            },
            provider: "string",
            tag: "string",
        },
        health: {
            rules: [{
                metric: "string",
                op: "string",
                value: 0,
            }],
        },
        jobClusters: [{
            jobClusterKey: "string",
            newCluster: {
                sparkVersion: "string",
                enableElasticDisk: false,
                clusterId: "string",
                enableLocalDiskEncryption: false,
                azureAttributes: {
                    availability: "string",
                    firstOnDemand: 0,
                    spotBidMaxPrice: 0,
                },
                gcpAttributes: {
                    availability: "string",
                    bootDiskSize: 0,
                    googleServiceAccount: "string",
                    localSsdCount: 0,
                    usePreemptibleExecutors: false,
                    zoneId: "string",
                },
                clusterLogConf: {
                    dbfs: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                },
                clusterMountInfos: [{
                    localMountDirPath: "string",
                    networkFilesystemInfo: {
                        serverAddress: "string",
                        mountOptions: "string",
                    },
                    remoteMountDirPath: "string",
                }],
                clusterName: "string",
                customTags: {
                    string: "any",
                },
                dataSecurityMode: "string",
                dockerImage: {
                    url: "string",
                    basicAuth: {
                        password: "string",
                        username: "string",
                    },
                },
                idempotencyToken: "string",
                driverNodeTypeId: "string",
                applyPolicyDefaultValues: false,
                awsAttributes: {
                    availability: "string",
                    ebsVolumeCount: 0,
                    ebsVolumeSize: 0,
                    ebsVolumeType: "string",
                    firstOnDemand: 0,
                    instanceProfileArn: "string",
                    spotBidPricePercent: 0,
                    zoneId: "string",
                },
                autoterminationMinutes: 0,
                driverInstancePoolId: "string",
                initScripts: [{
                    abfss: {
                        destination: "string",
                    },
                    file: {
                        destination: "string",
                    },
                    gcs: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                    volumes: {
                        destination: "string",
                    },
                    workspace: {
                        destination: "string",
                    },
                }],
                instancePoolId: "string",
                nodeTypeId: "string",
                numWorkers: 0,
                policyId: "string",
                runtimeEngine: "string",
                singleUserName: "string",
                sparkConf: {
                    string: "any",
                },
                sparkEnvVars: {
                    string: "any",
                },
                autoscale: {
                    maxWorkers: 0,
                    minWorkers: 0,
                },
                sshPublicKeys: ["string"],
                workloadType: {
                    clients: {
                        jobs: false,
                        notebooks: false,
                    },
                },
            },
        }],
        libraries: [{
            cran: {
                "package": "string",
                repo: "string",
            },
            egg: "string",
            jar: "string",
            maven: {
                coordinates: "string",
                exclusions: ["string"],
                repo: "string",
            },
            pypi: {
                "package": "string",
                repo: "string",
            },
            whl: "string",
        }],
        maxConcurrentRuns: 0,
        name: "string",
        newCluster: {
            sparkVersion: "string",
            enableElasticDisk: false,
            clusterId: "string",
            enableLocalDiskEncryption: false,
            azureAttributes: {
                availability: "string",
                firstOnDemand: 0,
                spotBidMaxPrice: 0,
            },
            gcpAttributes: {
                availability: "string",
                bootDiskSize: 0,
                googleServiceAccount: "string",
                localSsdCount: 0,
                usePreemptibleExecutors: false,
                zoneId: "string",
            },
            clusterLogConf: {
                dbfs: {
                    destination: "string",
                },
                s3: {
                    destination: "string",
                    cannedAcl: "string",
                    enableEncryption: false,
                    encryptionType: "string",
                    endpoint: "string",
                    kmsKey: "string",
                    region: "string",
                },
            },
            clusterMountInfos: [{
                localMountDirPath: "string",
                networkFilesystemInfo: {
                    serverAddress: "string",
                    mountOptions: "string",
                },
                remoteMountDirPath: "string",
            }],
            clusterName: "string",
            customTags: {
                string: "any",
            },
            dataSecurityMode: "string",
            dockerImage: {
                url: "string",
                basicAuth: {
                    password: "string",
                    username: "string",
                },
            },
            idempotencyToken: "string",
            driverNodeTypeId: "string",
            applyPolicyDefaultValues: false,
            awsAttributes: {
                availability: "string",
                ebsVolumeCount: 0,
                ebsVolumeSize: 0,
                ebsVolumeType: "string",
                firstOnDemand: 0,
                instanceProfileArn: "string",
                spotBidPricePercent: 0,
                zoneId: "string",
            },
            autoterminationMinutes: 0,
            driverInstancePoolId: "string",
            initScripts: [{
                abfss: {
                    destination: "string",
                },
                file: {
                    destination: "string",
                },
                gcs: {
                    destination: "string",
                },
                s3: {
                    destination: "string",
                    cannedAcl: "string",
                    enableEncryption: false,
                    encryptionType: "string",
                    endpoint: "string",
                    kmsKey: "string",
                    region: "string",
                },
                volumes: {
                    destination: "string",
                },
                workspace: {
                    destination: "string",
                },
            }],
            instancePoolId: "string",
            nodeTypeId: "string",
            numWorkers: 0,
            policyId: "string",
            runtimeEngine: "string",
            singleUserName: "string",
            sparkConf: {
                string: "any",
            },
            sparkEnvVars: {
                string: "any",
            },
            autoscale: {
                maxWorkers: 0,
                minWorkers: 0,
            },
            sshPublicKeys: ["string"],
            workloadType: {
                clients: {
                    jobs: false,
                    notebooks: false,
                },
            },
        },
        notificationSettings: {
            noAlertForCanceledRuns: false,
            noAlertForSkippedRuns: false,
        },
        parameters: [{
            "default": "string",
            name: "string",
        }],
        queue: {
            enabled: false,
        },
        runAs: {
            servicePrincipalName: "string",
            userName: "string",
        },
        schedule: {
            quartzCronExpression: "string",
            timezoneId: "string",
            pauseStatus: "string",
        },
        tags: {
            string: "any",
        },
        tasks: [{
            computeKey: "string",
            conditionTask: {
                left: "string",
                op: "string",
                right: "string",
            },
            dbtTask: {
                commands: ["string"],
                catalog: "string",
                profilesDirectory: "string",
                projectDirectory: "string",
                schema: "string",
                source: "string",
                warehouseId: "string",
            },
            dependsOns: [{
                taskKey: "string",
                outcome: "string",
            }],
            description: "string",
            emailNotifications: {
                noAlertForSkippedRuns: false,
                onDurationWarningThresholdExceededs: ["string"],
                onFailures: ["string"],
                onStarts: ["string"],
                onSuccesses: ["string"],
            },
            existingClusterId: "string",
            forEachTask: {
                inputs: "string",
                task: {
                    computeKey: "string",
                    conditionTask: {
                        left: "string",
                        op: "string",
                        right: "string",
                    },
                    dbtTask: {
                        commands: ["string"],
                        catalog: "string",
                        profilesDirectory: "string",
                        projectDirectory: "string",
                        schema: "string",
                        source: "string",
                        warehouseId: "string",
                    },
                    dependsOns: [{
                        taskKey: "string",
                        outcome: "string",
                    }],
                    description: "string",
                    emailNotifications: {
                        noAlertForSkippedRuns: false,
                        onDurationWarningThresholdExceededs: ["string"],
                        onFailures: ["string"],
                        onStarts: ["string"],
                        onSuccesses: ["string"],
                    },
                    existingClusterId: "string",
                    health: {
                        rules: [{
                            metric: "string",
                            op: "string",
                            value: 0,
                        }],
                    },
                    jobClusterKey: "string",
                    libraries: [{
                        cran: {
                            "package": "string",
                            repo: "string",
                        },
                        egg: "string",
                        jar: "string",
                        maven: {
                            coordinates: "string",
                            exclusions: ["string"],
                            repo: "string",
                        },
                        pypi: {
                            "package": "string",
                            repo: "string",
                        },
                        whl: "string",
                    }],
                    maxRetries: 0,
                    minRetryIntervalMillis: 0,
                    newCluster: {
                        numWorkers: 0,
                        sparkVersion: "string",
                        clusterMountInfos: [{
                            localMountDirPath: "string",
                            networkFilesystemInfo: {
                                serverAddress: "string",
                                mountOptions: "string",
                            },
                            remoteMountDirPath: "string",
                        }],
                        autoscale: {
                            maxWorkers: 0,
                            minWorkers: 0,
                        },
                        azureAttributes: {
                            availability: "string",
                            firstOnDemand: 0,
                            spotBidMaxPrice: 0,
                        },
                        clusterId: "string",
                        clusterLogConf: {
                            dbfs: {
                                destination: "string",
                            },
                            s3: {
                                destination: "string",
                                cannedAcl: "string",
                                enableEncryption: false,
                                encryptionType: "string",
                                endpoint: "string",
                                kmsKey: "string",
                                region: "string",
                            },
                        },
                        applyPolicyDefaultValues: false,
                        clusterName: "string",
                        customTags: {
                            string: "any",
                        },
                        dataSecurityMode: "string",
                        dockerImage: {
                            url: "string",
                            basicAuth: {
                                password: "string",
                                username: "string",
                            },
                        },
                        driverInstancePoolId: "string",
                        driverNodeTypeId: "string",
                        workloadType: {
                            clients: {
                                jobs: false,
                                notebooks: false,
                            },
                        },
                        awsAttributes: {
                            availability: "string",
                            ebsVolumeCount: 0,
                            ebsVolumeSize: 0,
                            ebsVolumeType: "string",
                            firstOnDemand: 0,
                            instanceProfileArn: "string",
                            spotBidPricePercent: 0,
                            zoneId: "string",
                        },
                        nodeTypeId: "string",
                        idempotencyToken: "string",
                        initScripts: [{
                            abfss: {
                                destination: "string",
                            },
                            dbfs: {
                                destination: "string",
                            },
                            file: {
                                destination: "string",
                            },
                            gcs: {
                                destination: "string",
                            },
                            s3: {
                                destination: "string",
                                cannedAcl: "string",
                                enableEncryption: false,
                                encryptionType: "string",
                                endpoint: "string",
                                kmsKey: "string",
                                region: "string",
                            },
                            volumes: {
                                destination: "string",
                            },
                            workspace: {
                                destination: "string",
                            },
                        }],
                        instancePoolId: "string",
                        gcpAttributes: {
                            availability: "string",
                            bootDiskSize: 0,
                            googleServiceAccount: "string",
                            localSsdCount: 0,
                            usePreemptibleExecutors: false,
                            zoneId: "string",
                        },
                        autoterminationMinutes: 0,
                        policyId: "string",
                        runtimeEngine: "string",
                        singleUserName: "string",
                        sparkConf: {
                            string: "any",
                        },
                        sparkEnvVars: {
                            string: "any",
                        },
                        enableLocalDiskEncryption: false,
                        sshPublicKeys: ["string"],
                        enableElasticDisk: false,
                    },
                    notebookTask: {
                        notebookPath: "string",
                        baseParameters: {
                            string: "any",
                        },
                        source: "string",
                    },
                    notificationSettings: {
                        alertOnLastAttempt: false,
                        noAlertForCanceledRuns: false,
                        noAlertForSkippedRuns: false,
                    },
                    pipelineTask: {
                        pipelineId: "string",
                        fullRefresh: false,
                    },
                    pythonWheelTask: {
                        entryPoint: "string",
                        namedParameters: {
                            string: "any",
                        },
                        packageName: "string",
                        parameters: ["string"],
                    },
                    retryOnTimeout: false,
                    runIf: "string",
                    runJobTask: {
                        jobId: 0,
                        jobParameters: {
                            string: "any",
                        },
                    },
                    sparkJarTask: {
                        jarUri: "string",
                        mainClassName: "string",
                        parameters: ["string"],
                    },
                    sparkPythonTask: {
                        pythonFile: "string",
                        parameters: ["string"],
                        source: "string",
                    },
                    sparkSubmitTask: {
                        parameters: ["string"],
                    },
                    sqlTask: {
                        alert: {
                            alertId: "string",
                            subscriptions: [{
                                destinationId: "string",
                                userName: "string",
                            }],
                            pauseSubscriptions: false,
                        },
                        dashboard: {
                            dashboardId: "string",
                            customSubject: "string",
                            pauseSubscriptions: false,
                            subscriptions: [{
                                destinationId: "string",
                                userName: "string",
                            }],
                        },
                        file: {
                            path: "string",
                            source: "string",
                        },
                        parameters: {
                            string: "any",
                        },
                        query: {
                            queryId: "string",
                        },
                        warehouseId: "string",
                    },
                    taskKey: "string",
                    timeoutSeconds: 0,
                    webhookNotifications: {
                        onDurationWarningThresholdExceededs: [{
                            id: "string",
                        }],
                        onFailures: [{
                            id: "string",
                        }],
                        onStarts: [{
                            id: "string",
                        }],
                        onSuccesses: [{
                            id: "string",
                        }],
                    },
                },
                concurrency: 0,
            },
            health: {
                rules: [{
                    metric: "string",
                    op: "string",
                    value: 0,
                }],
            },
            jobClusterKey: "string",
            libraries: [{
                cran: {
                    "package": "string",
                    repo: "string",
                },
                egg: "string",
                jar: "string",
                maven: {
                    coordinates: "string",
                    exclusions: ["string"],
                    repo: "string",
                },
                pypi: {
                    "package": "string",
                    repo: "string",
                },
                whl: "string",
            }],
            maxRetries: 0,
            minRetryIntervalMillis: 0,
            newCluster: {
                sparkVersion: "string",
                enableElasticDisk: false,
                clusterId: "string",
                enableLocalDiskEncryption: false,
                azureAttributes: {
                    availability: "string",
                    firstOnDemand: 0,
                    spotBidMaxPrice: 0,
                },
                gcpAttributes: {
                    availability: "string",
                    bootDiskSize: 0,
                    googleServiceAccount: "string",
                    localSsdCount: 0,
                    usePreemptibleExecutors: false,
                    zoneId: "string",
                },
                clusterLogConf: {
                    dbfs: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                },
                clusterMountInfos: [{
                    localMountDirPath: "string",
                    networkFilesystemInfo: {
                        serverAddress: "string",
                        mountOptions: "string",
                    },
                    remoteMountDirPath: "string",
                }],
                clusterName: "string",
                customTags: {
                    string: "any",
                },
                dataSecurityMode: "string",
                dockerImage: {
                    url: "string",
                    basicAuth: {
                        password: "string",
                        username: "string",
                    },
                },
                idempotencyToken: "string",
                driverNodeTypeId: "string",
                applyPolicyDefaultValues: false,
                awsAttributes: {
                    availability: "string",
                    ebsVolumeCount: 0,
                    ebsVolumeSize: 0,
                    ebsVolumeType: "string",
                    firstOnDemand: 0,
                    instanceProfileArn: "string",
                    spotBidPricePercent: 0,
                    zoneId: "string",
                },
                autoterminationMinutes: 0,
                driverInstancePoolId: "string",
                initScripts: [{
                    abfss: {
                        destination: "string",
                    },
                    file: {
                        destination: "string",
                    },
                    gcs: {
                        destination: "string",
                    },
                    s3: {
                        destination: "string",
                        cannedAcl: "string",
                        enableEncryption: false,
                        encryptionType: "string",
                        endpoint: "string",
                        kmsKey: "string",
                        region: "string",
                    },
                    volumes: {
                        destination: "string",
                    },
                    workspace: {
                        destination: "string",
                    },
                }],
                instancePoolId: "string",
                nodeTypeId: "string",
                numWorkers: 0,
                policyId: "string",
                runtimeEngine: "string",
                singleUserName: "string",
                sparkConf: {
                    string: "any",
                },
                sparkEnvVars: {
                    string: "any",
                },
                autoscale: {
                    maxWorkers: 0,
                    minWorkers: 0,
                },
                sshPublicKeys: ["string"],
                workloadType: {
                    clients: {
                        jobs: false,
                        notebooks: false,
                    },
                },
            },
            notebookTask: {
                notebookPath: "string",
                baseParameters: {
                    string: "any",
                },
                source: "string",
            },
            notificationSettings: {
                alertOnLastAttempt: false,
                noAlertForCanceledRuns: false,
                noAlertForSkippedRuns: false,
            },
            pipelineTask: {
                pipelineId: "string",
                fullRefresh: false,
            },
            pythonWheelTask: {
                entryPoint: "string",
                namedParameters: {
                    string: "any",
                },
                packageName: "string",
                parameters: ["string"],
            },
            retryOnTimeout: false,
            runIf: "string",
            runJobTask: {
                jobId: 0,
                jobParameters: {
                    string: "any",
                },
            },
            sparkJarTask: {
                jarUri: "string",
                mainClassName: "string",
                parameters: ["string"],
            },
            sparkPythonTask: {
                pythonFile: "string",
                parameters: ["string"],
                source: "string",
            },
            sparkSubmitTask: {
                parameters: ["string"],
            },
            sqlTask: {
                alert: {
                    alertId: "string",
                    subscriptions: [{
                        destinationId: "string",
                        userName: "string",
                    }],
                    pauseSubscriptions: false,
                },
                dashboard: {
                    dashboardId: "string",
                    customSubject: "string",
                    pauseSubscriptions: false,
                    subscriptions: [{
                        destinationId: "string",
                        userName: "string",
                    }],
                },
                file: {
                    path: "string",
                    source: "string",
                },
                parameters: {
                    string: "any",
                },
                query: {
                    queryId: "string",
                },
                warehouseId: "string",
            },
            taskKey: "string",
            timeoutSeconds: 0,
            webhookNotifications: {
                onDurationWarningThresholdExceededs: [{
                    id: "string",
                }],
                onFailures: [{
                    id: "string",
                }],
                onStarts: [{
                    id: "string",
                }],
                onSuccesses: [{
                    id: "string",
                }],
            },
        }],
        timeoutSeconds: 0,
        trigger: {
            fileArrival: {
                url: "string",
                minTimeBetweenTriggersSeconds: 0,
                waitAfterLastChangeSeconds: 0,
            },
            pauseStatus: "string",
            tableUpdate: {
                tableNames: ["string"],
                condition: "string",
                minTimeBetweenTriggersSeconds: 0,
                waitAfterLastChangeSeconds: 0,
            },
        },
        webhookNotifications: {
            onDurationWarningThresholdExceededs: [{
                id: "string",
            }],
            onFailures: [{
                id: "string",
            }],
            onStarts: [{
                id: "string",
            }],
            onSuccesses: [{
                id: "string",
            }],
        },
    });
    
    type: databricks:Job
    properties:
        computes:
            - computeKey: string
              spec:
                kind: string
        continuous:
            pauseStatus: string
        controlRunState: false
        deployment:
            kind: string
            metadataFilePath: string
        description: string
        editMode: string
        emailNotifications:
            noAlertForSkippedRuns: false
            onDurationWarningThresholdExceededs:
                - string
            onFailures:
                - string
            onStarts:
                - string
            onSuccesses:
                - string
        existingClusterId: string
        format: string
        gitSource:
            branch: string
            commit: string
            jobSource:
                dirtyState: string
                importFromGitBranch: string
                jobConfigPath: string
            provider: string
            tag: string
            url: string
        health:
            rules:
                - metric: string
                  op: string
                  value: 0
        jobClusters:
            - jobClusterKey: string
              newCluster:
                applyPolicyDefaultValues: false
                autoscale:
                    maxWorkers: 0
                    minWorkers: 0
                autoterminationMinutes: 0
                awsAttributes:
                    availability: string
                    ebsVolumeCount: 0
                    ebsVolumeSize: 0
                    ebsVolumeType: string
                    firstOnDemand: 0
                    instanceProfileArn: string
                    spotBidPricePercent: 0
                    zoneId: string
                azureAttributes:
                    availability: string
                    firstOnDemand: 0
                    spotBidMaxPrice: 0
                clusterId: string
                clusterLogConf:
                    dbfs:
                        destination: string
                    s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                clusterMountInfos:
                    - localMountDirPath: string
                      networkFilesystemInfo:
                        mountOptions: string
                        serverAddress: string
                      remoteMountDirPath: string
                clusterName: string
                customTags:
                    string: any
                dataSecurityMode: string
                dockerImage:
                    basicAuth:
                        password: string
                        username: string
                    url: string
                driverInstancePoolId: string
                driverNodeTypeId: string
                enableElasticDisk: false
                enableLocalDiskEncryption: false
                gcpAttributes:
                    availability: string
                    bootDiskSize: 0
                    googleServiceAccount: string
                    localSsdCount: 0
                    usePreemptibleExecutors: false
                    zoneId: string
                idempotencyToken: string
                initScripts:
                    - abfss:
                        destination: string
                      file:
                        destination: string
                      gcs:
                        destination: string
                      s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                      volumes:
                        destination: string
                      workspace:
                        destination: string
                instancePoolId: string
                nodeTypeId: string
                numWorkers: 0
                policyId: string
                runtimeEngine: string
                singleUserName: string
                sparkConf:
                    string: any
                sparkEnvVars:
                    string: any
                sparkVersion: string
                sshPublicKeys:
                    - string
                workloadType:
                    clients:
                        jobs: false
                        notebooks: false
        libraries:
            - cran:
                package: string
                repo: string
              egg: string
              jar: string
              maven:
                coordinates: string
                exclusions:
                    - string
                repo: string
              pypi:
                package: string
                repo: string
              whl: string
        maxConcurrentRuns: 0
        name: string
        newCluster:
            applyPolicyDefaultValues: false
            autoscale:
                maxWorkers: 0
                minWorkers: 0
            autoterminationMinutes: 0
            awsAttributes:
                availability: string
                ebsVolumeCount: 0
                ebsVolumeSize: 0
                ebsVolumeType: string
                firstOnDemand: 0
                instanceProfileArn: string
                spotBidPricePercent: 0
                zoneId: string
            azureAttributes:
                availability: string
                firstOnDemand: 0
                spotBidMaxPrice: 0
            clusterId: string
            clusterLogConf:
                dbfs:
                    destination: string
                s3:
                    cannedAcl: string
                    destination: string
                    enableEncryption: false
                    encryptionType: string
                    endpoint: string
                    kmsKey: string
                    region: string
            clusterMountInfos:
                - localMountDirPath: string
                  networkFilesystemInfo:
                    mountOptions: string
                    serverAddress: string
                  remoteMountDirPath: string
            clusterName: string
            customTags:
                string: any
            dataSecurityMode: string
            dockerImage:
                basicAuth:
                    password: string
                    username: string
                url: string
            driverInstancePoolId: string
            driverNodeTypeId: string
            enableElasticDisk: false
            enableLocalDiskEncryption: false
            gcpAttributes:
                availability: string
                bootDiskSize: 0
                googleServiceAccount: string
                localSsdCount: 0
                usePreemptibleExecutors: false
                zoneId: string
            idempotencyToken: string
            initScripts:
                - abfss:
                    destination: string
                  file:
                    destination: string
                  gcs:
                    destination: string
                  s3:
                    cannedAcl: string
                    destination: string
                    enableEncryption: false
                    encryptionType: string
                    endpoint: string
                    kmsKey: string
                    region: string
                  volumes:
                    destination: string
                  workspace:
                    destination: string
            instancePoolId: string
            nodeTypeId: string
            numWorkers: 0
            policyId: string
            runtimeEngine: string
            singleUserName: string
            sparkConf:
                string: any
            sparkEnvVars:
                string: any
            sparkVersion: string
            sshPublicKeys:
                - string
            workloadType:
                clients:
                    jobs: false
                    notebooks: false
        notificationSettings:
            noAlertForCanceledRuns: false
            noAlertForSkippedRuns: false
        parameters:
            - default: string
              name: string
        queue:
            enabled: false
        runAs:
            servicePrincipalName: string
            userName: string
        schedule:
            pauseStatus: string
            quartzCronExpression: string
            timezoneId: string
        tags:
            string: any
        tasks:
            - computeKey: string
              conditionTask:
                left: string
                op: string
                right: string
              dbtTask:
                catalog: string
                commands:
                    - string
                profilesDirectory: string
                projectDirectory: string
                schema: string
                source: string
                warehouseId: string
              dependsOns:
                - outcome: string
                  taskKey: string
              description: string
              emailNotifications:
                noAlertForSkippedRuns: false
                onDurationWarningThresholdExceededs:
                    - string
                onFailures:
                    - string
                onStarts:
                    - string
                onSuccesses:
                    - string
              existingClusterId: string
              forEachTask:
                concurrency: 0
                inputs: string
                task:
                    computeKey: string
                    conditionTask:
                        left: string
                        op: string
                        right: string
                    dbtTask:
                        catalog: string
                        commands:
                            - string
                        profilesDirectory: string
                        projectDirectory: string
                        schema: string
                        source: string
                        warehouseId: string
                    dependsOns:
                        - outcome: string
                          taskKey: string
                    description: string
                    emailNotifications:
                        noAlertForSkippedRuns: false
                        onDurationWarningThresholdExceededs:
                            - string
                        onFailures:
                            - string
                        onStarts:
                            - string
                        onSuccesses:
                            - string
                    existingClusterId: string
                    health:
                        rules:
                            - metric: string
                              op: string
                              value: 0
                    jobClusterKey: string
                    libraries:
                        - cran:
                            package: string
                            repo: string
                          egg: string
                          jar: string
                          maven:
                            coordinates: string
                            exclusions:
                                - string
                            repo: string
                          pypi:
                            package: string
                            repo: string
                          whl: string
                    maxRetries: 0
                    minRetryIntervalMillis: 0
                    newCluster:
                        applyPolicyDefaultValues: false
                        autoscale:
                            maxWorkers: 0
                            minWorkers: 0
                        autoterminationMinutes: 0
                        awsAttributes:
                            availability: string
                            ebsVolumeCount: 0
                            ebsVolumeSize: 0
                            ebsVolumeType: string
                            firstOnDemand: 0
                            instanceProfileArn: string
                            spotBidPricePercent: 0
                            zoneId: string
                        azureAttributes:
                            availability: string
                            firstOnDemand: 0
                            spotBidMaxPrice: 0
                        clusterId: string
                        clusterLogConf:
                            dbfs:
                                destination: string
                            s3:
                                cannedAcl: string
                                destination: string
                                enableEncryption: false
                                encryptionType: string
                                endpoint: string
                                kmsKey: string
                                region: string
                        clusterMountInfos:
                            - localMountDirPath: string
                              networkFilesystemInfo:
                                mountOptions: string
                                serverAddress: string
                              remoteMountDirPath: string
                        clusterName: string
                        customTags:
                            string: any
                        dataSecurityMode: string
                        dockerImage:
                            basicAuth:
                                password: string
                                username: string
                            url: string
                        driverInstancePoolId: string
                        driverNodeTypeId: string
                        enableElasticDisk: false
                        enableLocalDiskEncryption: false
                        gcpAttributes:
                            availability: string
                            bootDiskSize: 0
                            googleServiceAccount: string
                            localSsdCount: 0
                            usePreemptibleExecutors: false
                            zoneId: string
                        idempotencyToken: string
                        initScripts:
                            - abfss:
                                destination: string
                              dbfs:
                                destination: string
                              file:
                                destination: string
                              gcs:
                                destination: string
                              s3:
                                cannedAcl: string
                                destination: string
                                enableEncryption: false
                                encryptionType: string
                                endpoint: string
                                kmsKey: string
                                region: string
                              volumes:
                                destination: string
                              workspace:
                                destination: string
                        instancePoolId: string
                        nodeTypeId: string
                        numWorkers: 0
                        policyId: string
                        runtimeEngine: string
                        singleUserName: string
                        sparkConf:
                            string: any
                        sparkEnvVars:
                            string: any
                        sparkVersion: string
                        sshPublicKeys:
                            - string
                        workloadType:
                            clients:
                                jobs: false
                                notebooks: false
                    notebookTask:
                        baseParameters:
                            string: any
                        notebookPath: string
                        source: string
                    notificationSettings:
                        alertOnLastAttempt: false
                        noAlertForCanceledRuns: false
                        noAlertForSkippedRuns: false
                    pipelineTask:
                        fullRefresh: false
                        pipelineId: string
                    pythonWheelTask:
                        entryPoint: string
                        namedParameters:
                            string: any
                        packageName: string
                        parameters:
                            - string
                    retryOnTimeout: false
                    runIf: string
                    runJobTask:
                        jobId: 0
                        jobParameters:
                            string: any
                    sparkJarTask:
                        jarUri: string
                        mainClassName: string
                        parameters:
                            - string
                    sparkPythonTask:
                        parameters:
                            - string
                        pythonFile: string
                        source: string
                    sparkSubmitTask:
                        parameters:
                            - string
                    sqlTask:
                        alert:
                            alertId: string
                            pauseSubscriptions: false
                            subscriptions:
                                - destinationId: string
                                  userName: string
                        dashboard:
                            customSubject: string
                            dashboardId: string
                            pauseSubscriptions: false
                            subscriptions:
                                - destinationId: string
                                  userName: string
                        file:
                            path: string
                            source: string
                        parameters:
                            string: any
                        query:
                            queryId: string
                        warehouseId: string
                    taskKey: string
                    timeoutSeconds: 0
                    webhookNotifications:
                        onDurationWarningThresholdExceededs:
                            - id: string
                        onFailures:
                            - id: string
                        onStarts:
                            - id: string
                        onSuccesses:
                            - id: string
              health:
                rules:
                    - metric: string
                      op: string
                      value: 0
              jobClusterKey: string
              libraries:
                - cran:
                    package: string
                    repo: string
                  egg: string
                  jar: string
                  maven:
                    coordinates: string
                    exclusions:
                        - string
                    repo: string
                  pypi:
                    package: string
                    repo: string
                  whl: string
              maxRetries: 0
              minRetryIntervalMillis: 0
              newCluster:
                applyPolicyDefaultValues: false
                autoscale:
                    maxWorkers: 0
                    minWorkers: 0
                autoterminationMinutes: 0
                awsAttributes:
                    availability: string
                    ebsVolumeCount: 0
                    ebsVolumeSize: 0
                    ebsVolumeType: string
                    firstOnDemand: 0
                    instanceProfileArn: string
                    spotBidPricePercent: 0
                    zoneId: string
                azureAttributes:
                    availability: string
                    firstOnDemand: 0
                    spotBidMaxPrice: 0
                clusterId: string
                clusterLogConf:
                    dbfs:
                        destination: string
                    s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                clusterMountInfos:
                    - localMountDirPath: string
                      networkFilesystemInfo:
                        mountOptions: string
                        serverAddress: string
                      remoteMountDirPath: string
                clusterName: string
                customTags:
                    string: any
                dataSecurityMode: string
                dockerImage:
                    basicAuth:
                        password: string
                        username: string
                    url: string
                driverInstancePoolId: string
                driverNodeTypeId: string
                enableElasticDisk: false
                enableLocalDiskEncryption: false
                gcpAttributes:
                    availability: string
                    bootDiskSize: 0
                    googleServiceAccount: string
                    localSsdCount: 0
                    usePreemptibleExecutors: false
                    zoneId: string
                idempotencyToken: string
                initScripts:
                    - abfss:
                        destination: string
                      file:
                        destination: string
                      gcs:
                        destination: string
                      s3:
                        cannedAcl: string
                        destination: string
                        enableEncryption: false
                        encryptionType: string
                        endpoint: string
                        kmsKey: string
                        region: string
                      volumes:
                        destination: string
                      workspace:
                        destination: string
                instancePoolId: string
                nodeTypeId: string
                numWorkers: 0
                policyId: string
                runtimeEngine: string
                singleUserName: string
                sparkConf:
                    string: any
                sparkEnvVars:
                    string: any
                sparkVersion: string
                sshPublicKeys:
                    - string
                workloadType:
                    clients:
                        jobs: false
                        notebooks: false
              notebookTask:
                baseParameters:
                    string: any
                notebookPath: string
                source: string
              notificationSettings:
                alertOnLastAttempt: false
                noAlertForCanceledRuns: false
                noAlertForSkippedRuns: false
              pipelineTask:
                fullRefresh: false
                pipelineId: string
              pythonWheelTask:
                entryPoint: string
                namedParameters:
                    string: any
                packageName: string
                parameters:
                    - string
              retryOnTimeout: false
              runIf: string
              runJobTask:
                jobId: 0
                jobParameters:
                    string: any
              sparkJarTask:
                jarUri: string
                mainClassName: string
                parameters:
                    - string
              sparkPythonTask:
                parameters:
                    - string
                pythonFile: string
                source: string
              sparkSubmitTask:
                parameters:
                    - string
              sqlTask:
                alert:
                    alertId: string
                    pauseSubscriptions: false
                    subscriptions:
                        - destinationId: string
                          userName: string
                dashboard:
                    customSubject: string
                    dashboardId: string
                    pauseSubscriptions: false
                    subscriptions:
                        - destinationId: string
                          userName: string
                file:
                    path: string
                    source: string
                parameters:
                    string: any
                query:
                    queryId: string
                warehouseId: string
              taskKey: string
              timeoutSeconds: 0
              webhookNotifications:
                onDurationWarningThresholdExceededs:
                    - id: string
                onFailures:
                    - id: string
                onStarts:
                    - id: string
                onSuccesses:
                    - id: string
        timeoutSeconds: 0
        trigger:
            fileArrival:
                minTimeBetweenTriggersSeconds: 0
                url: string
                waitAfterLastChangeSeconds: 0
            pauseStatus: string
            tableUpdate:
                condition: string
                minTimeBetweenTriggersSeconds: 0
                tableNames:
                    - string
                waitAfterLastChangeSeconds: 0
        webhookNotifications:
            onDurationWarningThresholdExceededs:
                - id: string
            onFailures:
                - id: string
            onStarts:
                - id: string
            onSuccesses:
                - id: string
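
    As a concrete starting point, the following is a minimal, hypothetical TypeScript sketch of a single-task job that runs a notebook on a short-lived cluster. The notebook path, Spark version, node type and cron expression are placeholders, not values taken from this reference.

    import * as databricks from "@pulumi/databricks";

    // Hypothetical single-task job: one notebook task on a small new cluster,
    // triggered nightly by a cron schedule.
    const nightlyJob = new databricks.Job("nightly-job", {
        name: "Nightly ETL",
        tasks: [{
            taskKey: "run-notebook",
            notebookTask: {
                notebookPath: "/Shared/etl/nightly", // placeholder workspace path
            },
            newCluster: {
                numWorkers: 1,
                sparkVersion: "13.3.x-scala2.12",    // placeholder runtime version
                nodeTypeId: "i3.xlarge",             // placeholder node type
            },
        }],
        schedule: {
            quartzCronExpression: "0 0 2 * * ?",     // 02:00 every day
            timezoneId: "UTC",
        },
    });

    export const nightlyJobUrl = nightlyJob.url;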
    

    Job Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The Job resource accepts the following input properties:

    AlwaysRunning bool
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    Computes List<JobCompute>
    Continuous JobContinuous
    ControlRunState bool

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as follows:

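    For example, a minimal TypeScript sketch (the job name, task configuration and the UNPAUSED pause status are illustrative assumptions, not required values):

    import * as databricks from "@pulumi/databricks";

    // Hypothetical continuous job migrated from always_running:
    // control_run_state lets the provider restart the active run whenever the
    // deployed configuration changes, while the continuous block keeps exactly
    // one run active.
    const streamingJob = new databricks.Job("streaming-job", {
        controlRunState: true,
        continuous: {
            pauseStatus: "UNPAUSED",
        },
        // ...tasks, clusters and other settings omitted for brevity
    });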
    
    DbtTask JobDbtTask

    Deprecated: should be used inside a task block and not inside a job block

    Deployment JobDeployment
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EditMode string
    EmailNotifications JobEmailNotifications
    (List) An optional set of email addresses to notify when runs of this job begin, complete or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    Format string
    GitSource JobGitSource
    Health JobHealth
    An optional block that specifies the health conditions for the job (described below).
    JobClusters List<JobJobCluster>
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. See the multi-task syntax documentation.
    Libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.

    Deprecated: should be used inside a task block and not inside a job block

    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTask

    Deprecated: should be used inside a task block and not inside a job block

    NotificationSettings JobNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    Parameters List<JobParameter>
    PipelineTask JobPipelineTask

    Deprecated: should be used inside a task block and not inside a job block

    PythonWheelTask JobPythonWheelTask

    Deprecated: should be used inside a task block and not inside a job block

    Queue JobQueue
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    RunAs JobRunAs
    RunJobTask JobRunJobTask

    Deprecated: should be used inside a task block and not inside a job block

    Schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTask

    Deprecated: should be used inside a task block and not inside a job block

    SparkPythonTask JobSparkPythonTask

    Deprecated: should be used inside a task block and not inside a job block

    SparkSubmitTask JobSparkSubmitTask

    Deprecated: should be used inside a task block and not inside a job block

    Tags Dictionary<string, object>
    Tasks List<JobTask>
    Task to run against the inputs list.
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    Trigger JobTrigger
    WebhookNotifications JobWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    AlwaysRunning bool
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    Computes []JobComputeArgs
    Continuous JobContinuousArgs
    ControlRunState bool

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.
    
    DbtTask JobDbtTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Deployment JobDeploymentArgs
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EditMode string
    EmailNotifications JobEmailNotificationsArgs
    (List) An optional set of email addresses to notify when runs of this job begin, complete or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    Format string
    GitSource JobGitSourceArgs
    Health JobHealthArgs
    An optional block that specifies the health conditions for the job (described below).
    JobClusters []JobJobClusterArgs
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. See the multi-task syntax documentation.
    Libraries []JobLibraryArgs
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.

    Deprecated: should be used inside a task block and not inside a job block

    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    NotificationSettings JobNotificationSettingsArgs
    An optional block controlling the notification settings on the job level (described below).
    Parameters []JobParameterArgs
    PipelineTask JobPipelineTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    PythonWheelTask JobPythonWheelTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Queue JobQueueArgs
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    RunAs JobRunAsArgs
    RunJobTask JobRunJobTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    SparkPythonTask JobSparkPythonTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    SparkSubmitTask JobSparkSubmitTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Tags map[string]interface{}
    Tasks []JobTaskArgs
    Task to run against the inputs list.
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    Trigger JobTriggerArgs
    WebhookNotifications JobWebhookNotificationsArgs
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    alwaysRunning Boolean
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes List<JobCompute>
    continuous JobContinuous
    controlRunState Boolean

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.
    
    dbtTask JobDbtTask

    Deprecated: should be used inside a task block and not inside a job block

    deployment JobDeployment
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    editMode String
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses to notify when runs of this job begin, complete or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    format String
    gitSource JobGitSource
    health JobHealth
    An optional block that specifies the health conditions for the job (described below).
    jobClusters List<JobJobCluster>
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. See the multi-task syntax documentation.
    libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    maxConcurrentRuns Integer
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    minRetryIntervalMillis Integer
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.

    Deprecated: should be used inside a task block and not inside a job block

    name String
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask

    Deprecated: should be used inside a task block and not inside a job block

    notificationSettings JobNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    parameters List<JobParameter>
    pipelineTask JobPipelineTask

    Deprecated: should be used inside a task block and not inside a job block

    pythonWheelTask JobPythonWheelTask

    Deprecated: should be used inside a task block and not inside a job block

    queue JobQueue
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    runAs JobRunAs
    runJobTask JobRunJobTask

    Deprecated: should be used inside a task block and not inside a job block

    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkPythonTask JobSparkPythonTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkSubmitTask JobSparkSubmitTask

    Deprecated: should be used inside a task block and not inside a job block

    tags Map<String,Object>
    tasks List<JobTask>
    Task to run against the inputs list.
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger JobTrigger
    webhookNotifications JobWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    alwaysRunning boolean
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes JobCompute[]
    continuous JobContinuous
    controlRunState boolean

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.
    
    dbtTask JobDbtTask

    Deprecated: should be used inside a task block and not inside a job block

    deployment JobDeployment
    description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    editMode string
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses to notify when runs of this job begin, complete or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    format string
    gitSource JobGitSource
    health JobHealth
    An optional block that specifies the health conditions for the job (described below).
    jobClusters JobJobCluster[]
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. See the multi-task syntax documentation.
    libraries JobLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    maxConcurrentRuns number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    minRetryIntervalMillis number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.

    Deprecated: should be used inside a task block and not inside a job block

    name string
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask

    Deprecated: should be used inside a task block and not inside a job block

    notificationSettings JobNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    parameters JobParameter[]
    pipelineTask JobPipelineTask

    Deprecated: should be used inside a task block and not inside a job block

    pythonWheelTask JobPythonWheelTask

    Deprecated: should be used inside a task block and not inside a job block

    queue JobQueue
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    runAs JobRunAs
    runJobTask JobRunJobTask

    Deprecated: should be used inside a task block and not inside a job block

    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkPythonTask JobSparkPythonTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkSubmitTask JobSparkSubmitTask

    Deprecated: should be used inside a task block and not inside a job block

    tags {[key: string]: any}
    tasks JobTask[]
    Task to run against the inputs list.
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger JobTrigger
    webhookNotifications JobWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    always_running bool
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes Sequence[JobComputeArgs]
    continuous JobContinuousArgs
    control_run_state bool

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.
    
    dbt_task JobDbtTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    deployment JobDeploymentArgs
    description str
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    edit_mode str
    email_notifications JobEmailNotificationsArgs
    (List) An optional set of email addresses to notify when runs of this job begin, complete or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    format str
    git_source JobGitSourceArgs
    health JobHealthArgs
    An optional block that specifies the health conditions for the job (described below).
    job_clusters Sequence[JobJobClusterArgs]
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. See the multi-task syntax documentation.
    libraries Sequence[JobLibraryArgs]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    max_concurrent_runs int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    min_retry_interval_millis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.

    Deprecated: should be used inside a task block and not inside a job block

    name str
    An optional name for the job. The default value is Untitled.
    new_cluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobNotebookTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    notification_settings JobNotificationSettingsArgs
    An optional block controlling the notification settings on the job level (described below).
    parameters Sequence[JobParameterArgs]
    pipeline_task JobPipelineTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    python_wheel_task JobPythonWheelTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    queue JobQueueArgs
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    run_as JobRunAsArgs
    run_job_task JobRunJobTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    spark_jar_task JobSparkJarTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    spark_python_task JobSparkPythonTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    spark_submit_task JobSparkSubmitTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    tags Mapping[str, Any]
    tasks Sequence[JobTaskArgs]
    Task to run against the inputs list.
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger JobTriggerArgs
    webhook_notifications JobWebhookNotificationsArgs
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    alwaysRunning Boolean
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes List<Property Map>
    continuous Property Map
    controlRunState Boolean

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.
    
    dbtTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    deployment Property Map
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    editMode String
    emailNotifications Property Map
    (List) An optional set of email addresses to notify when runs of this job begin, complete or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    format String
    gitSource Property Map
    health Property Map
    An optional block that specifies the health conditions for the job (described below).
    jobClusters List<Property Map>
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. See the multi-task syntax documentation.
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    maxConcurrentRuns Number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    minRetryIntervalMillis Number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.

    Deprecated: should be used inside a task block and not inside a job block

    name String
    An optional name for the job. The default value is Untitled.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    notificationSettings Property Map
    An optional block controlling the notification settings on the job level (described below).
    parameters List<Property Map>
    pipelineTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    pythonWheelTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    queue Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    runAs Property Map
    runJobTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    schedule Property Map
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    sparkPythonTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    sparkSubmitTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    tags Map<Any>
    tasks List<Property Map>
    Task to run against the inputs list.
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger Property Map
    webhookNotifications Property Map
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:

    Id string
    The provider-assigned unique ID for this managed resource.
    Url string
    URL of the job in the Databricks workspace.
    Id string
    The provider-assigned unique ID for this managed resource.
    Url string
    URL of the job in the Databricks workspace.
    id String
    The provider-assigned unique ID for this managed resource.
    url String
    URL of the job in the Databricks workspace.
    id string
    The provider-assigned unique ID for this managed resource.
    url string
    URL of the job in the Databricks workspace.
    id str
    The provider-assigned unique ID for this managed resource.
    url str
    URL of the job in the Databricks workspace.
    id String
    The provider-assigned unique ID for this managed resource.
    url String
    URL of the job in the Databricks workspace.
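
    For instance, in a TypeScript program these outputs can be exported like any other resource property (the example-job resource below is hypothetical):

    import * as databricks from "@pulumi/databricks";

    // Any databricks.Job resource exposes these outputs once created.
    const job = new databricks.Job("example-job", {});

    // The provider-assigned ID and the url output become stack outputs.
    export const jobId = job.id;
    export const jobUrl = job.url;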

    Look up Existing Job Resource

    Get an existing Job resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

    public static get(name: string, id: Input<ID>, state?: JobState, opts?: CustomResourceOptions): Job
    @staticmethod
    def get(resource_name: str,
            id: str,
            opts: Optional[ResourceOptions] = None,
            always_running: Optional[bool] = None,
            computes: Optional[Sequence[JobComputeArgs]] = None,
            continuous: Optional[JobContinuousArgs] = None,
            control_run_state: Optional[bool] = None,
            dbt_task: Optional[JobDbtTaskArgs] = None,
            deployment: Optional[JobDeploymentArgs] = None,
            description: Optional[str] = None,
            edit_mode: Optional[str] = None,
            email_notifications: Optional[JobEmailNotificationsArgs] = None,
            existing_cluster_id: Optional[str] = None,
            format: Optional[str] = None,
            git_source: Optional[JobGitSourceArgs] = None,
            health: Optional[JobHealthArgs] = None,
            job_clusters: Optional[Sequence[JobJobClusterArgs]] = None,
            libraries: Optional[Sequence[JobLibraryArgs]] = None,
            max_concurrent_runs: Optional[int] = None,
            max_retries: Optional[int] = None,
            min_retry_interval_millis: Optional[int] = None,
            name: Optional[str] = None,
            new_cluster: Optional[JobNewClusterArgs] = None,
            notebook_task: Optional[JobNotebookTaskArgs] = None,
            notification_settings: Optional[JobNotificationSettingsArgs] = None,
            parameters: Optional[Sequence[JobParameterArgs]] = None,
            pipeline_task: Optional[JobPipelineTaskArgs] = None,
            python_wheel_task: Optional[JobPythonWheelTaskArgs] = None,
            queue: Optional[JobQueueArgs] = None,
            retry_on_timeout: Optional[bool] = None,
            run_as: Optional[JobRunAsArgs] = None,
            run_job_task: Optional[JobRunJobTaskArgs] = None,
            schedule: Optional[JobScheduleArgs] = None,
            spark_jar_task: Optional[JobSparkJarTaskArgs] = None,
            spark_python_task: Optional[JobSparkPythonTaskArgs] = None,
            spark_submit_task: Optional[JobSparkSubmitTaskArgs] = None,
            tags: Optional[Mapping[str, Any]] = None,
            tasks: Optional[Sequence[JobTaskArgs]] = None,
            timeout_seconds: Optional[int] = None,
            trigger: Optional[JobTriggerArgs] = None,
            url: Optional[str] = None,
            webhook_notifications: Optional[JobWebhookNotificationsArgs] = None) -> Job
    func GetJob(ctx *Context, name string, id IDInput, state *JobState, opts ...ResourceOption) (*Job, error)
    public static Job Get(string name, Input<string> id, JobState? state, CustomResourceOptions? opts = null)
    public static Job get(String name, Output<String> id, JobState state, CustomResourceOptions options)
    Resource lookup is not supported in YAML
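
    For instance, a brief TypeScript sketch of a lookup (the logical name and the job ID 123 are placeholders):

    import * as databricks from "@pulumi/databricks";

    // Look up a job that already exists in the workspace by its job ID.
    const existing = databricks.Job.get("existing-job", "123");

    // The looked-up resource exposes the same outputs as a managed one.
    export const existingJobUrl = existing.url;
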
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    resource_name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    The following state arguments are supported:
    AlwaysRunning bool
    (Bool) Whether this job should always be running, such as a Spark Streaming application. When set, every update restarts the current active run, or starts a new run if the job is not running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    Computes List<JobCompute>
    Continuous JobContinuous
    ControlRunState bool

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.
    
    DbtTask JobDbtTask

    Deprecated: should be used inside a task block and not inside a job block

    Deployment JobDeployment
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EditMode string
    EmailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    Format string
    GitSource JobGitSource
    Health JobHealth
    An optional block that specifies the health conditions for the job (described below).
    JobClusters List<JobJobCluster>
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings (multi-task syntax).
    Libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    MinRetryIntervalMillis int
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.

    Deprecated: should be used inside a task block and not inside a job block

    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTask

    Deprecated: should be used inside a task block and not inside a job block

    NotificationSettings JobNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    Parameters List<JobParameter>
    PipelineTask JobPipelineTask

    Deprecated: should be used inside a task block and not inside a job block

    PythonWheelTask JobPythonWheelTask

    Deprecated: should be used inside a task block and not inside a job block

    Queue JobQueue
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    RunAs JobRunAs
    RunJobTask JobRunJobTask

    Deprecated: should be used inside a task block and not inside a job block

    Schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTask

    Deprecated: should be used inside a task block and not inside a job block

    SparkPythonTask JobSparkPythonTask

    Deprecated: should be used inside a task block and not inside a job block

    SparkSubmitTask JobSparkSubmitTask

    Deprecated: should be used inside a task block and not inside a job block

    Tags Dictionary<string, object>
    Tasks List<JobTask>
    Task to run against the inputs list.
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    Trigger JobTrigger
    Url string
    URL of the Git repository to use.
    WebhookNotifications JobWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    AlwaysRunning bool
    (Bool) Whether this job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new run if none is active. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    Computes []JobComputeArgs
    Continuous JobContinuousArgs
    ControlRunState bool

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.

    DbtTask JobDbtTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Deployment JobDeploymentArgs
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EditMode string
    EmailNotifications JobEmailNotificationsArgs
    (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    Format string
    GitSource JobGitSourceArgs
    Health JobHealthArgs
    An optional block that specifies the health conditions for the job (described below).
    JobClusters []JobJobClusterArgs
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings (multi-task syntax).
    Libraries []JobLibraryArgs
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    MaxConcurrentRuns int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    MinRetryIntervalMillis int
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.

    Deprecated: should be used inside a task block and not inside a job block

    Name string
    An optional name for the job. The default value is Untitled.
    NewCluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobNotebookTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    NotificationSettings JobNotificationSettingsArgs
    An optional block controlling the notification settings on the job level (described below).
    Parameters []JobParameterArgs
    PipelineTask JobPipelineTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    PythonWheelTask JobPythonWheelTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Queue JobQueueArgs
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    RunAs JobRunAsArgs
    RunJobTask JobRunJobTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    SparkJarTask JobSparkJarTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    SparkPythonTask JobSparkPythonTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    SparkSubmitTask JobSparkSubmitTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    Tags map[string]interface{}
    Tasks []JobTaskArgs
    Task to run against the inputs list.
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    Trigger JobTriggerArgs
    Url string
    URL of the Git repository to use.
    WebhookNotifications JobWebhookNotificationsArgs
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    alwaysRunning Boolean
    (Bool) Whether this job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new run if none is active. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes List<JobCompute>
    continuous JobContinuous
    controlRunState Boolean

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.

    dbtTask JobDbtTask

    Deprecated: should be used inside a task block and not inside a job block

    deployment JobDeployment
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    editMode String
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    format String
    gitSource JobGitSource
    health JobHealth
    An optional block that specifies the health conditions for the job (described below).
    jobClusters List<JobJobCluster>
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings (multi-task syntax).
    libraries List<JobLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    maxConcurrentRuns Integer
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    minRetryIntervalMillis Integer
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.

    Deprecated: should be used inside a task block and not inside a job block

    name String
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask

    Deprecated: should be used inside a task block and not inside a job block

    notificationSettings JobNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    parameters List<JobParameter>
    pipelineTask JobPipelineTask

    Deprecated: should be used inside a task block and not inside a job block

    pythonWheelTask JobPythonWheelTask

    Deprecated: should be used inside a task block and not inside a job block

    queue JobQueue
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    runAs JobRunAs
    runJobTask JobRunJobTask

    Deprecated: should be used inside a task block and not inside a job block

    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkPythonTask JobSparkPythonTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkSubmitTask JobSparkSubmitTask

    Deprecated: should be used inside a task block and not inside a job block

    tags Map<String,Object>
    tasks List<JobTask>
    Task to run against the inputs list.
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger JobTrigger
    url String
    URL of the Git repository to use.
    webhookNotifications JobWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    alwaysRunning boolean
    (Bool) Whether this job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new run if none is active. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes JobCompute[]
    continuous JobContinuous
    controlRunState boolean

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.

    dbtTask JobDbtTask

    Deprecated: should be used inside a task block and not inside a job block

    deployment JobDeployment
    description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    editMode string
    emailNotifications JobEmailNotifications
    (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    format string
    gitSource JobGitSource
    health JobHealth
    An optional block that specifies the health conditions for the job (described below).
    jobClusters JobJobCluster[]
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings (multi-task syntax).
    libraries JobLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    maxConcurrentRuns number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    minRetryIntervalMillis number
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.

    Deprecated: should be used inside a task block and not inside a job block

    name string
    An optional name for the job. The default value is Untitled.
    newCluster JobNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobNotebookTask

    Deprecated: should be used inside a task block and not inside a job block

    notificationSettings JobNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    parameters JobParameter[]
    pipelineTask JobPipelineTask

    Deprecated: should be used inside a task block and not inside a job block

    pythonWheelTask JobPythonWheelTask

    Deprecated: should be used inside a task block and not inside a job block

    queue JobQueue
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    runAs JobRunAs
    runJobTask JobRunJobTask

    Deprecated: should be used inside a task block and not inside a job block

    schedule JobSchedule
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask JobSparkJarTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkPythonTask JobSparkPythonTask

    Deprecated: should be used inside a task block and not inside a job block

    sparkSubmitTask JobSparkSubmitTask

    Deprecated: should be used inside a task block and not inside a job block

    tags {[key: string]: any}
    tasks JobTask[]
    Task to run against the inputs list.
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger JobTrigger
    url string
    URL of the Git repository to use.
    webhookNotifications JobWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    always_running bool
    (Bool) Whether this job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new run if none is active. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes Sequence[JobComputeArgs]
    continuous JobContinuousArgs
    control_run_state bool

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.

    dbt_task JobDbtTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    deployment JobDeploymentArgs
    description str
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    edit_mode str
    email_notifications JobEmailNotificationsArgs
    (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    format str
    git_source JobGitSourceArgs
    health JobHealthArgs
    An optional block that specifies the health conditions for the job (described below).
    job_clusters Sequence[JobJobClusterArgs]
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings (multi-task syntax).
    libraries Sequence[JobLibraryArgs]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    max_concurrent_runs int
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    min_retry_interval_millis int
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.

    Deprecated: should be used inside a task block and not inside a job block

    name str
    An optional name for the job. The default value is Untitled.
    new_cluster JobNewClusterArgs
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobNotebookTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    notification_settings JobNotificationSettingsArgs
    An optional block controlling the notification settings on the job level (described below).
    parameters Sequence[JobParameterArgs]
    pipeline_task JobPipelineTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    python_wheel_task JobPythonWheelTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    queue JobQueueArgs
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    run_as JobRunAsArgs
    run_job_task JobRunJobTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    schedule JobScheduleArgs
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    spark_jar_task JobSparkJarTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    spark_python_task JobSparkPythonTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    spark_submit_task JobSparkSubmitTaskArgs

    Deprecated: should be used inside a task block and not inside a job block

    tags Mapping[str, Any]
    tasks Sequence[JobTaskArgs]
    Task to run against the inputs list.
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger JobTriggerArgs
    url str
    URL of the Git repository to use.
    webhook_notifications JobWebhookNotificationsArgs
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    alwaysRunning Boolean
    (Bool) Whether this job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new run if none is active. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task blocks.

    Deprecated: always_running will be replaced by control_run_state in the next major release.

    computes List<Property Map>
    continuous Property Map
    controlRunState Boolean

    (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the pause_status by stopping the current active run. This flag cannot be set for non-continuous jobs.

    When migrating from always_running to control_run_state, set continuous as shown in the example above.

    dbtTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    deployment Property Map
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    editMode String
    emailNotifications Property Map
    (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    format String
    gitSource Property Map
    health Property Map
    An optional block that specifies the health conditions for the job (described below).
    jobClusters List<Property Map>
    A list of job databricks.Cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings (multi-task syntax).
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult libraries section of the databricks.Cluster resource for more information.
    maxConcurrentRuns Number
    (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.

    Deprecated: should be used inside a task block and not inside a job block

    minRetryIntervalMillis Number
    (Integer) An optional minimum interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are retried immediately.

    Deprecated: should be used inside a task block and not inside a job block

    name String
    An optional name for the job. The default value is Untitled.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    notificationSettings Property Map
    An optional block controlling the notification settings on the job level (described below).
    parameters List<Property Map>
    pipelineTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    pythonWheelTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    queue Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.

    Deprecated: should be used inside a task block and not inside a job block

    runAs Property Map
    runJobTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    schedule Property Map
    (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. This field is a block and is documented below.
    sparkJarTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    sparkPythonTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    sparkSubmitTask Property Map

    Deprecated: should be used inside a task block and not inside a job block

    tags Map<Any>
    tasks List<Property Map>
    Task to run against the inputs list.
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    trigger Property Map
    url String
    URL of the Git repository to use.
    webhookNotifications Property Map
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.

    Supporting Types

    JobCompute, JobComputeArgs

    JobComputeSpec, JobComputeSpecArgs

    Kind string
    Kind string
    kind String
    kind string
    kind str
    kind String

    JobContinuous, JobContinuousArgs

    PauseStatus string
    Indicates whether this continuous job is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    PauseStatus string
    Indicates whether this continuous job is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    pauseStatus String
    Indicates whether this continuous job is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    pauseStatus string
    Indicates whether this continuous job is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    pause_status str
    Indicates whether this continuous job is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    pauseStatus String
    Indicates whether this continuous job is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.

    JobDbtTask, JobDbtTaskArgs

    Commands List<string>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    Catalog string
    The name of the catalog to use inside Unity Catalog.
    ProfilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    ProjectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    Schema string
    The name of the schema dbt should run in. Defaults to default.
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    WarehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    Commands []string
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    Catalog string
    The name of the catalog to use inside Unity Catalog.
    ProfilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    ProjectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    Schema string
    The name of the schema dbt should run in. Defaults to default.
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    WarehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands List<String>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog String
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory String
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory String
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema String
    The name of the schema dbt should run in. Defaults to default.
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId String

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands string[]
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog string
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema string
    The name of the schema dbt should run in. Defaults to default.
    source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands Sequence[str]
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog str
    The name of the catalog to use inside Unity Catalog.
    profiles_directory str
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    project_directory str
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema str
    The name of the schema dbt should run in. Defaults to default.
    source str
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouse_id str

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands List<String>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog String
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory String
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory String
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema String
    The name of the schema dbt should run in. Defaults to default.
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId String

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.
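    As a rough, hypothetical TypeScript sketch (repository URL, warehouse ID, and schema are placeholders; compute settings for the task are omitted):

    import * as databricks from "@pulumi/databricks";

    // Run a dbt project hosted in Git against a SQL warehouse.
    const dbtJob = new databricks.Job("dbt-job", {
        gitSource: {
            url: "https://github.com/example/dbt-project",
            provider: "gitHub",
            branch: "main",
        },
        tasks: [{
            taskKey: "dbt",
            dbtTask: {
                commands: ["dbt deps", "dbt build"], // every command must start with "dbt"
                warehouseId: "0123456789abcdef",     // placeholder SQL warehouse ID
                schema: "analytics",
            },
            // cluster/compute settings for the task are omitted in this sketch
        }],
    });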

    JobDeployment, JobDeploymentArgs

    Kind string
    MetadataFilePath string
    Kind string
    MetadataFilePath string
    kind String
    metadataFilePath String
    kind string
    metadataFilePath string
    kind String
    metadataFilePath String

    JobEmailNotifications, JobEmailNotificationsArgs

    NoAlertForSkippedRuns bool
    (Bool) don't send alerts for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block.)
    OnDurationWarningThresholdExceededs List<string>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    OnFailures List<string>
    (List) list of emails to notify when the run fails.
    OnStarts List<string>
    (List) list of emails to notify when the run starts.
    OnSuccesses List<string>
    (List) list of emails to notify when the run completes successfully.
    NoAlertForSkippedRuns bool
    (Bool) don't send alerts for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block.)
    OnDurationWarningThresholdExceededs []string
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    OnFailures []string
    (List) list of emails to notify when the run fails.
    OnStarts []string
    (List) list of emails to notify when the run starts.
    OnSuccesses []string
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alerts for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block.)
    onDurationWarningThresholdExceededs List<String>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures List<String>
    (List) list of emails to notify when the run fails.
    onStarts List<String>
    (List) list of emails to notify when the run starts.
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns boolean
    (Bool) don't send alerts for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block.)
    onDurationWarningThresholdExceededs string[]
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures string[]
    (List) list of emails to notify when the run fails.
    onStarts string[]
    (List) list of emails to notify when the run starts.
    onSuccesses string[]
    (List) list of emails to notify when the run completes successfully.
    no_alert_for_skipped_runs bool
    (Bool) don't send alerts for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block.)
    on_duration_warning_threshold_exceededs Sequence[str]
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    on_failures Sequence[str]
    (List) list of emails to notify when the run fails.
    on_starts Sequence[str]
    (List) list of emails to notify when the run starts.
    on_successes Sequence[str]
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alerts for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block.)
    onDurationWarningThresholdExceededs List<String>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures List<String>
    (List) list of emails to notify when the run fails.
    onStarts List<String>
    (List) list of emails to notify when the run starts.
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully.
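    A short TypeScript sketch of a job-level email notification block (the addresses are placeholders):

    import * as databricks from "@pulumi/databricks";

    // Email the data team on failures and successes, but stay quiet for skipped runs.
    const notifiedJob = new databricks.Job("notified-job", {
        emailNotifications: {
            onFailures: ["data-team@example.com"],
            onSuccesses: ["data-team@example.com"],
            noAlertForSkippedRuns: true,
        },
    });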

    JobGitSource, JobGitSourceArgs

    Url string
    URL of the Git repository to use.
    Branch string
    name of the Git branch to use. Conflicts with tag and commit.
    Commit string
    hash of Git commit to use. Conflicts with branch and tag.
    JobSource JobGitSourceJobSource
    Provider string
    case insensitive name of the Git provider. The following values are currently supported (this list could be subject to change; consult the Repos API documentation): gitHub, gitHubEnterprise, bitbucketCloud, bitbucketServer, azureDevOpsServices, gitLab, gitLabEnterpriseEdition.
    Tag string
    name of the Git tag to use. Conflicts with branch and commit.
    Url string
    URL of the Git repository to use.
    Branch string
    name of the Git branch to use. Conflicts with tag and commit.
    Commit string
    hash of Git commit to use. Conflicts with branch and tag.
    JobSource JobGitSourceJobSource
    Provider string
    case insensitive name of the Git provider. The following values are currently supported (this list could be subject to change; consult the Repos API documentation): gitHub, gitHubEnterprise, bitbucketCloud, bitbucketServer, azureDevOpsServices, gitLab, gitLabEnterpriseEdition.
    Tag string
    name of the Git tag to use. Conflicts with branch and commit.
    url String
    URL of the Git repository to use.
    branch String
    name of the Git branch to use. Conflicts with tag and commit.
    commit String
    hash of Git commit to use. Conflicts with branch and tag.
    jobSource JobGitSourceJobSource
    provider String
    case insensitive name of the Git provider. The following values are currently supported (this list could be subject to change; consult the Repos API documentation): gitHub, gitHubEnterprise, bitbucketCloud, bitbucketServer, azureDevOpsServices, gitLab, gitLabEnterpriseEdition.
    tag String
    name of the Git tag to use. Conflicts with branch and commit.
    url string
    URL of the Git repository to use.
    branch string
    name of the Git branch to use. Conflicts with tag and commit.
    commit string
    hash of Git commit to use. Conflicts with branch and tag.
    jobSource JobGitSourceJobSource
    provider string
    case insensitive name of the Git provider. The following values are currently supported (this list could be subject to change; consult the Repos API documentation): gitHub, gitHubEnterprise, bitbucketCloud, bitbucketServer, azureDevOpsServices, gitLab, gitLabEnterpriseEdition.
    tag string
    name of the Git tag to use. Conflicts with branch and commit.
    url str
    URL of the Git repository to use.
    branch str
    name of the Git branch to use. Conflicts with tag and commit.
    commit str
    hash of Git commit to use. Conflicts with branch and tag.
    job_source JobGitSourceJobSource
    provider str
    case insensitive name of the Git provider. The following values are currently supported (this list could be subject to change; consult the Repos API documentation): gitHub, gitHubEnterprise, bitbucketCloud, bitbucketServer, azureDevOpsServices, gitLab, gitLabEnterpriseEdition.
    tag str
    name of the Git tag to use. Conflicts with branch and commit.
    url String
    URL of the Git repository to use.
    branch String
    name of the Git branch to use. Conflicts with tag and commit.
    commit String
    hash of Git commit to use. Conflicts with branch and tag.
    jobSource Property Map
    provider String
    case insensitive name of the Git provider. The following values are currently supported (this list could be subject to change; consult the Repos API documentation): gitHub, gitHubEnterprise, bitbucketCloud, bitbucketServer, azureDevOpsServices, gitLab, gitLabEnterpriseEdition.
    tag String
    name of the Git tag to use. Conflicts with branch and commit.
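    For illustration, a hypothetical TypeScript sketch of a git_source block (URL and branch are placeholders):

    import * as databricks from "@pulumi/databricks";

    // Source the job's code from a Git repository; set branch, tag, or commit (they conflict).
    const gitBackedJob = new databricks.Job("git-backed-job", {
        gitSource: {
            url: "https://github.com/example/jobs-repo",
            provider: "gitHub",
            branch: "main",
        },
        // tasks that reference notebooks or files relative to the repository root
        // would be declared here; omitted in this sketch.
    });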

    JobGitSourceJobSource, JobGitSourceJobSourceArgs

    JobHealth, JobHealthArgs

    Rules List<JobHealthRule>
    list of rules that are represented as objects with the following attributes:
    Rules []JobHealthRule
    list of rules that are represented as objects with the following attributes:
    rules List<JobHealthRule>
    list of rules that are represented as objects with the following attributes:
    rules JobHealthRule[]
    list of rules that are represented as objects with the following attributes:
    rules Sequence[JobHealthRule]
    list of rules that are represented as objects with the following attributes:
    rules List<Property Map>
    list of rules that are represented as objects with the following attributes:

    JobHealthRule, JobHealthRuleArgs

    Metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information.)

    This task does not require a cluster to execute and does not support retries or notifications.

    Value int
    integer value used to compare to the given metric.
    Metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information.)

    This task does not require a cluster to execute and does not support retries or notifications.

    Value int
    integer value used to compare to the given metric.
    metric String
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information.)

    This task does not require a cluster to execute and does not support retries or notifications.

    value Integer
    integer value used to compare to the given metric.
    metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value number
    integer value used to compare to the given metric.
    metric str
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op str

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value int
    integer value used to compare to the given metric.
    metric String
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value Number
    integer value used to compare to the given metric.
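
    As a minimal sketch (the one-hour threshold is an arbitrary example value, not a documented default), a health block with a single rule could look like this in TypeScript:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const monitoredJob = new databricks.Job("monitored-job", {
        health: {
            rules: [{
                metric: "RUN_DURATION_SECONDS",
                op: "GREATER_THAN",
                value: 3600, // notify when a run takes longer than one hour
            }],
        },
    });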

    JobJobCluster, JobJobClusterArgs

    JobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    NewCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    JobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    NewCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    jobClusterKey String
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    newCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    jobClusterKey string
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    newCluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    job_cluster_key str
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    new_cluster JobJobClusterNewCluster
    Same set of parameters as for databricks.Cluster resource.
    jobClusterKey String
    Identifier that can be referenced in task block, so that cluster is shared between tasks
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
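
    A minimal TypeScript sketch of a shared job cluster referenced from a task via its jobClusterKey; the Spark version, node type, and notebook path are placeholder values:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const sharedClusterJob = new databricks.Job("shared-cluster-job", {
        jobClusters: [{
            jobClusterKey: "shared",
            newCluster: {
                sparkVersion: "14.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
                autoscale: {
                    minWorkers: 1,
                    maxWorkers: 4,
                },
            },
        }],
        tasks: [{
            taskKey: "etl",
            jobClusterKey: "shared", // reuse the cluster declared above
            notebookTask: {
                notebookPath: "/Shared/etl", // placeholder workspace path
            },
        }],
    });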

    JobJobClusterNewCluster, JobJobClusterNewClusterArgs

    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobJobClusterNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobJobClusterNewClusterAwsAttributes
    AzureAttributes JobJobClusterNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobJobClusterNewClusterClusterLogConf
    ClusterMountInfos List<JobJobClusterNewClusterClusterMountInfo>
    ClusterName string
    CustomTags Dictionary<string, object>
    DataSecurityMode string
    DockerImage JobJobClusterNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobJobClusterNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts List<JobJobClusterNewClusterInitScript>
    InstancePoolId string
    NodeTypeId string
    NumWorkers int
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf Dictionary<string, object>
    SparkEnvVars Dictionary<string, object>
    SshPublicKeys List<string>
    WorkloadType JobJobClusterNewClusterWorkloadType
    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobJobClusterNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobJobClusterNewClusterAwsAttributes
    AzureAttributes JobJobClusterNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobJobClusterNewClusterClusterLogConf
    ClusterMountInfos []JobJobClusterNewClusterClusterMountInfo
    ClusterName string
    CustomTags map[string]interface{}
    DataSecurityMode string
    DockerImage JobJobClusterNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobJobClusterNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts []JobJobClusterNewClusterInitScript
    InstancePoolId string
    NodeTypeId string
    NumWorkers int
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf map[string]interface{}
    SparkEnvVars map[string]interface{}
    SshPublicKeys []string
    WorkloadType JobJobClusterNewClusterWorkloadType
    sparkVersion String
    applyPolicyDefaultValues Boolean
    autoscale JobJobClusterNewClusterAutoscale
    autoterminationMinutes Integer
    awsAttributes JobJobClusterNewClusterAwsAttributes
    azureAttributes JobJobClusterNewClusterAzureAttributes
    clusterId String
    clusterLogConf JobJobClusterNewClusterClusterLogConf
    clusterMountInfos List<JobJobClusterNewClusterClusterMountInfo>
    clusterName String
    customTags Map<String,Object>
    dataSecurityMode String
    dockerImage JobJobClusterNewClusterDockerImage
    driverInstancePoolId String
    driverNodeTypeId String
    enableElasticDisk Boolean
    enableLocalDiskEncryption Boolean
    gcpAttributes JobJobClusterNewClusterGcpAttributes
    idempotencyToken String
    initScripts List<JobJobClusterNewClusterInitScript>
    instancePoolId String
    nodeTypeId String
    numWorkers Integer
    policyId String
    runtimeEngine String
    singleUserName String
    sparkConf Map<String,Object>
    sparkEnvVars Map<String,Object>
    sshPublicKeys List<String>
    workloadType JobJobClusterNewClusterWorkloadType
    sparkVersion string
    applyPolicyDefaultValues boolean
    autoscale JobJobClusterNewClusterAutoscale
    autoterminationMinutes number
    awsAttributes JobJobClusterNewClusterAwsAttributes
    azureAttributes JobJobClusterNewClusterAzureAttributes
    clusterId string
    clusterLogConf JobJobClusterNewClusterClusterLogConf
    clusterMountInfos JobJobClusterNewClusterClusterMountInfo[]
    clusterName string
    customTags {[key: string]: any}
    dataSecurityMode string
    dockerImage JobJobClusterNewClusterDockerImage
    driverInstancePoolId string
    driverNodeTypeId string
    enableElasticDisk boolean
    enableLocalDiskEncryption boolean
    gcpAttributes JobJobClusterNewClusterGcpAttributes
    idempotencyToken string
    initScripts JobJobClusterNewClusterInitScript[]
    instancePoolId string
    nodeTypeId string
    numWorkers number
    policyId string
    runtimeEngine string
    singleUserName string
    sparkConf {[key: string]: any}
    sparkEnvVars {[key: string]: any}
    sshPublicKeys string[]
    workloadType JobJobClusterNewClusterWorkloadType
    spark_version str
    apply_policy_default_values bool
    autoscale JobJobClusterNewClusterAutoscale
    autotermination_minutes int
    aws_attributes JobJobClusterNewClusterAwsAttributes
    azure_attributes JobJobClusterNewClusterAzureAttributes
    cluster_id str
    cluster_log_conf JobJobClusterNewClusterClusterLogConf
    cluster_mount_infos Sequence[JobJobClusterNewClusterClusterMountInfo]
    cluster_name str
    custom_tags Mapping[str, Any]
    data_security_mode str
    docker_image JobJobClusterNewClusterDockerImage
    driver_instance_pool_id str
    driver_node_type_id str
    enable_elastic_disk bool
    enable_local_disk_encryption bool
    gcp_attributes JobJobClusterNewClusterGcpAttributes
    idempotency_token str
    init_scripts Sequence[JobJobClusterNewClusterInitScript]
    instance_pool_id str
    node_type_id str
    num_workers int
    policy_id str
    runtime_engine str
    single_user_name str
    spark_conf Mapping[str, Any]
    spark_env_vars Mapping[str, Any]
    ssh_public_keys Sequence[str]
    workload_type JobJobClusterNewClusterWorkloadType

    JobJobClusterNewClusterAutoscale, JobJobClusterNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number

    JobJobClusterNewClusterAwsAttributes, JobJobClusterNewClusterAwsAttributesArgs

    JobJobClusterNewClusterAzureAttributes, JobJobClusterNewClusterAzureAttributesArgs

    JobJobClusterNewClusterClusterLogConf, JobJobClusterNewClusterClusterLogConfArgs

    JobJobClusterNewClusterClusterLogConfDbfs, JobJobClusterNewClusterClusterLogConfDbfsArgs

    JobJobClusterNewClusterClusterLogConfS3, JobJobClusterNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
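
    As an illustration only (the bucket name, region, and cluster sizing are placeholders), cluster logs can be shipped to S3 through the clusterLogConf block of the shared cluster definition:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const loggedJob = new databricks.Job("logged-job", {
        jobClusters: [{
            jobClusterKey: "logged",
            newCluster: {
                sparkVersion: "14.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
                numWorkers: 2,
                clusterLogConf: {
                    s3: {
                        destination: "s3://example-bucket/cluster-logs", // placeholder bucket
                        region: "us-east-1",
                        enableEncryption: true,
                        cannedAcl: "bucket-owner-full-control",
                    },
                },
            },
        }],
    });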

    JobJobClusterNewClusterClusterMountInfo, JobJobClusterNewClusterClusterMountInfoArgs

    JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo, JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfoArgs

    JobJobClusterNewClusterDockerImage, JobJobClusterNewClusterDockerImageArgs

    Url string
    URL of the Docker image to use.
    BasicAuth JobJobClusterNewClusterDockerImageBasicAuth
    Url string
    URL of the Docker image to use.
    BasicAuth JobJobClusterNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image to use.
    basicAuth JobJobClusterNewClusterDockerImageBasicAuth
    url string
    URL of the Docker image to use.
    basicAuth JobJobClusterNewClusterDockerImageBasicAuth
    url str
    URL of the Docker image to use.
    basic_auth JobJobClusterNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image to use.
    basicAuth Property Map

    JobJobClusterNewClusterDockerImageBasicAuth, JobJobClusterNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String
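
    A hedged sketch of running the job cluster from a custom Docker image; the registry URL is hypothetical, and the credentials are read from Pulumi config rather than hard-coded:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const config = new pulumi.Config();

    const containerJob = new databricks.Job("container-job", {
        jobClusters: [{
            jobClusterKey: "container",
            newCluster: {
                sparkVersion: "14.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
                numWorkers: 2,
                dockerImage: {
                    url: "example.azurecr.io/jobs/runtime:latest", // placeholder image URL
                    basicAuth: {
                        username: config.require("registryUser"),
                        password: config.requireSecret("registryPassword"),
                    },
                },
            },
        }],
    });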

    JobJobClusterNewClusterGcpAttributes, JobJobClusterNewClusterGcpAttributesArgs

    JobJobClusterNewClusterInitScript, JobJobClusterNewClusterInitScriptArgs

    abfss Property Map
    dbfs Property Map

    Deprecated: For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'.

    file Property Map
    block consisting of single string fields:
    gcs Property Map
    s3 Property Map
    volumes Property Map
    workspace Property Map
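
    A minimal sketch of attaching init scripts from a workspace file and a Unity Catalog volume (per the deprecation note above, dbfs destinations should be avoided); both destinations are placeholder paths:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const initScriptJob = new databricks.Job("init-script-job", {
        jobClusters: [{
            jobClusterKey: "bootstrap",
            newCluster: {
                sparkVersion: "14.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
                numWorkers: 1,
                initScripts: [
                    { workspace: { destination: "/Shared/init/install-deps.sh" } },             // placeholder workspace path
                    { volumes: { destination: "/Volumes/main/default/scripts/bootstrap.sh" } }, // placeholder volume path
                ],
            },
        }],
    });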

    JobJobClusterNewClusterInitScriptAbfss, JobJobClusterNewClusterInitScriptAbfssArgs

    JobJobClusterNewClusterInitScriptDbfs, JobJobClusterNewClusterInitScriptDbfsArgs

    JobJobClusterNewClusterInitScriptFile, JobJobClusterNewClusterInitScriptFileArgs

    JobJobClusterNewClusterInitScriptGcs, JobJobClusterNewClusterInitScriptGcsArgs

    JobJobClusterNewClusterInitScriptS3, JobJobClusterNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobJobClusterNewClusterInitScriptVolumes, JobJobClusterNewClusterInitScriptVolumesArgs

    JobJobClusterNewClusterInitScriptWorkspace, JobJobClusterNewClusterInitScriptWorkspaceArgs

    JobJobClusterNewClusterWorkloadType, JobJobClusterNewClusterWorkloadTypeArgs

    JobJobClusterNewClusterWorkloadTypeClients, JobJobClusterNewClusterWorkloadTypeClientsArgs

    Jobs bool
    Notebooks bool
    Jobs bool
    Notebooks bool
    jobs Boolean
    notebooks Boolean
    jobs boolean
    notebooks boolean
    jobs bool
    notebooks bool
    jobs Boolean
    notebooks Boolean

    JobLibrary, JobLibraryArgs

    JobLibraryCran, JobLibraryCranArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String

    JobLibraryMaven, JobLibraryMavenArgs

    Coordinates string
    Exclusions List<string>
    Repo string
    Coordinates string
    Exclusions []string
    Repo string
    coordinates String
    exclusions List<String>
    repo String
    coordinates string
    exclusions string[]
    repo string
    coordinates str
    exclusions Sequence[str]
    repo str
    coordinates String
    exclusions List<String>
    repo String

    JobLibraryPypi, JobLibraryPypiArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String
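
    A sketch showing PyPI, Maven, and CRAN libraries attached at the job level; the package names and coordinates are arbitrary examples:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const libraryJob = new databricks.Job("library-job", {
        libraries: [
            { pypi: { package: "requests==2.31.0" } },                   // placeholder PyPI package
            { maven: { coordinates: "com.example:example-lib:1.0.0" } }, // placeholder Maven coordinates
            { cran: { package: "data.table" } },                         // placeholder CRAN package
        ],
    });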

    JobNewCluster, JobNewClusterArgs

    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobNewClusterAwsAttributes
    AzureAttributes JobNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobNewClusterClusterLogConf
    ClusterMountInfos List<JobNewClusterClusterMountInfo>
    ClusterName string
    CustomTags Dictionary<string, object>
    DataSecurityMode string
    DockerImage JobNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts List<JobNewClusterInitScript>
    InstancePoolId string
    NodeTypeId string
    NumWorkers int
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf Dictionary<string, object>
    SparkEnvVars Dictionary<string, object>
    SshPublicKeys List<string>
    WorkloadType JobNewClusterWorkloadType
    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobNewClusterAwsAttributes
    AzureAttributes JobNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobNewClusterClusterLogConf
    ClusterMountInfos []JobNewClusterClusterMountInfo
    ClusterName string
    CustomTags map[string]interface{}
    DataSecurityMode string
    DockerImage JobNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts []JobNewClusterInitScript
    InstancePoolId string
    NodeTypeId string
    NumWorkers int
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf map[string]interface{}
    SparkEnvVars map[string]interface{}
    SshPublicKeys []string
    WorkloadType JobNewClusterWorkloadType
    sparkVersion String
    applyPolicyDefaultValues Boolean
    autoscale JobNewClusterAutoscale
    autoterminationMinutes Integer
    awsAttributes JobNewClusterAwsAttributes
    azureAttributes JobNewClusterAzureAttributes
    clusterId String
    clusterLogConf JobNewClusterClusterLogConf
    clusterMountInfos List<JobNewClusterClusterMountInfo>
    clusterName String
    customTags Map<String,Object>
    dataSecurityMode String
    dockerImage JobNewClusterDockerImage
    driverInstancePoolId String
    driverNodeTypeId String
    enableElasticDisk Boolean
    enableLocalDiskEncryption Boolean
    gcpAttributes JobNewClusterGcpAttributes
    idempotencyToken String
    initScripts List<JobNewClusterInitScript>
    instancePoolId String
    nodeTypeId String
    numWorkers Integer
    policyId String
    runtimeEngine String
    singleUserName String
    sparkConf Map<String,Object>
    sparkEnvVars Map<String,Object>
    sshPublicKeys List<String>
    workloadType JobNewClusterWorkloadType
    sparkVersion string
    applyPolicyDefaultValues boolean
    autoscale JobNewClusterAutoscale
    autoterminationMinutes number
    awsAttributes JobNewClusterAwsAttributes
    azureAttributes JobNewClusterAzureAttributes
    clusterId string
    clusterLogConf JobNewClusterClusterLogConf
    clusterMountInfos JobNewClusterClusterMountInfo[]
    clusterName string
    customTags {[key: string]: any}
    dataSecurityMode string
    dockerImage JobNewClusterDockerImage
    driverInstancePoolId string
    driverNodeTypeId string
    enableElasticDisk boolean
    enableLocalDiskEncryption boolean
    gcpAttributes JobNewClusterGcpAttributes
    idempotencyToken string
    initScripts JobNewClusterInitScript[]
    instancePoolId string
    nodeTypeId string
    numWorkers number
    policyId string
    runtimeEngine string
    singleUserName string
    sparkConf {[key: string]: any}
    sparkEnvVars {[key: string]: any}
    sshPublicKeys string[]
    workloadType JobNewClusterWorkloadType
    spark_version str
    apply_policy_default_values bool
    autoscale JobNewClusterAutoscale
    autotermination_minutes int
    aws_attributes JobNewClusterAwsAttributes
    azure_attributes JobNewClusterAzureAttributes
    cluster_id str
    cluster_log_conf JobNewClusterClusterLogConf
    cluster_mount_infos Sequence[JobNewClusterClusterMountInfo]
    cluster_name str
    custom_tags Mapping[str, Any]
    data_security_mode str
    docker_image JobNewClusterDockerImage
    driver_instance_pool_id str
    driver_node_type_id str
    enable_elastic_disk bool
    enable_local_disk_encryption bool
    gcp_attributes JobNewClusterGcpAttributes
    idempotency_token str
    init_scripts Sequence[JobNewClusterInitScript]
    instance_pool_id str
    node_type_id str
    num_workers int
    policy_id str
    runtime_engine str
    single_user_name str
    spark_conf Mapping[str, Any]
    spark_env_vars Mapping[str, Any]
    ssh_public_keys Sequence[str]
    workload_type JobNewClusterWorkloadType

    JobNewClusterAutoscale, JobNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number

    JobNewClusterAwsAttributes, JobNewClusterAwsAttributesArgs

    JobNewClusterAzureAttributes, JobNewClusterAzureAttributesArgs

    JobNewClusterClusterLogConf, JobNewClusterClusterLogConfArgs

    JobNewClusterClusterLogConfDbfs, JobNewClusterClusterLogConfDbfsArgs

    JobNewClusterClusterLogConfS3, JobNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobNewClusterClusterMountInfo, JobNewClusterClusterMountInfoArgs

    JobNewClusterClusterMountInfoNetworkFilesystemInfo, JobNewClusterClusterMountInfoNetworkFilesystemInfoArgs

    JobNewClusterDockerImage, JobNewClusterDockerImageArgs

    Url string
    URL of the Docker image to use.
    BasicAuth JobNewClusterDockerImageBasicAuth
    Url string
    URL of the Docker image to use.
    BasicAuth JobNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image to use.
    basicAuth JobNewClusterDockerImageBasicAuth
    url string
    URL of the Docker image to use.
    basicAuth JobNewClusterDockerImageBasicAuth
    url str
    URL of the Docker image to use.
    basic_auth JobNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image to use.
    basicAuth Property Map

    JobNewClusterDockerImageBasicAuth, JobNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String

    JobNewClusterGcpAttributes, JobNewClusterGcpAttributesArgs

    JobNewClusterInitScript, JobNewClusterInitScriptArgs

    abfss Property Map
    dbfs Property Map

    Deprecated: For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'.

    file Property Map
    block consisting of single string fields:
    gcs Property Map
    s3 Property Map
    volumes Property Map
    workspace Property Map

    JobNewClusterInitScriptAbfss, JobNewClusterInitScriptAbfssArgs

    JobNewClusterInitScriptDbfs, JobNewClusterInitScriptDbfsArgs

    JobNewClusterInitScriptFile, JobNewClusterInitScriptFileArgs

    JobNewClusterInitScriptGcs, JobNewClusterInitScriptGcsArgs

    JobNewClusterInitScriptS3, JobNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobNewClusterInitScriptVolumes, JobNewClusterInitScriptVolumesArgs

    JobNewClusterInitScriptWorkspace, JobNewClusterInitScriptWorkspaceArgs

    JobNewClusterWorkloadType, JobNewClusterWorkloadTypeArgs

    JobNewClusterWorkloadTypeClients, JobNewClusterWorkloadTypeClientsArgs

    Jobs bool
    Notebooks bool
    Jobs bool
    Notebooks bool
    jobs Boolean
    notebooks Boolean
    jobs boolean
    notebooks boolean
    jobs bool
    notebooks bool
    jobs Boolean
    notebooks Boolean

    JobNotebookTask, JobNotebookTaskArgs

    NotebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    BaseParameters Dictionary<string, object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    Source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    NotebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    BaseParameters map[string]interface{}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    Source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath String
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters Map<String,Object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source String
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters {[key: string]: any}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebook_path str
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    base_parameters Mapping[str, Any]
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source str
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath String
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters Map<Any>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source String
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
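
    A minimal sketch of a workspace-notebook task with base parameters; the notebook path, cluster ID, and parameter values are placeholders:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const notebookJob = new databricks.Job("notebook-job", {
        tasks: [{
            taskKey: "ingest",
            existingClusterId: "1234-567890-abcde123", // placeholder cluster ID
            notebookTask: {
                notebookPath: "/Shared/ingest", // absolute workspace path
                source: "WORKSPACE",
                baseParameters: {
                    environment: "dev",
                    batch_size: "100",
                },
            },
        }],
    });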

    JobNotificationSettings, JobNotificationSettingsArgs

    NoAlertForCanceledRuns bool
    (Bool) don't send alert for cancelled runs.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs.
    NoAlertForCanceledRuns bool
    (Bool) don't send alert for cancelled runs.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs.
    noAlertForCanceledRuns Boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs.
    noAlertForCanceledRuns boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns boolean
    (Bool) don't send alert for skipped runs.
    no_alert_for_canceled_runs bool
    (Bool) don't send alert for cancelled runs.
    no_alert_for_skipped_runs bool
    (Bool) don't send alert for skipped runs.
    noAlertForCanceledRuns Boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs.

    JobParameter, JobParameterArgs

    Default string
    Default value of the parameter.
    Name string
    The name of the defined parameter. May only contain alphanumeric characters, _, -, and ..
    Default string
    Default value of the parameter.
    Name string
    The name of the defined parameter. May only contain alphanumeric characters, _, -, and ..
    default_ String
    Default value of the parameter.
    name String
    The name of the defined parameter. May only contain alphanumeric characters, _, -, and ..
    default string
    Default value of the parameter.
    name string
    The name of the defined parameter. May only contain alphanumeric characters, _, -, and ..
    default str
    Default value of the parameter.
    name str
    The name of the defined parameter. May only contain alphanumeric characters, _, -, and ..
    default String
    Default value of the parameter.
    name String
    The name of the defined parameter. May only contain alphanumeric characters, _, -, and ..
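
    For illustration, job-level parameters are declared as a list of name/default pairs; the parameter names shown here are arbitrary:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const parameterizedJob = new databricks.Job("parameterized-job", {
        parameters: [
            { name: "environment", default: "dev" },
            { name: "run_date", default: "2024-01-01" },
        ],
    });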

    JobPipelineTask, JobPipelineTaskArgs

    PipelineId string
    The pipeline's unique ID.
    FullRefresh bool

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note: The following configuration blocks are only supported inside a task block

    PipelineId string
    The pipeline's unique ID.
    FullRefresh bool

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note: The following configuration blocks are only supported inside a task block

    pipelineId String
    The pipeline's unique ID.
    fullRefresh Boolean

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note: The following configuration blocks are only supported inside a task block

    pipelineId string
    The pipeline's unique ID.
    fullRefresh boolean

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note: The following configuration blocks are only supported inside a task block

    pipeline_id str
    The pipeline's unique ID.
    full_refresh bool

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note: The following configuration blocks are only supported inside a task block

    pipelineId String
    The pipeline's unique ID.
    fullRefresh Boolean

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note: The following configuration blocks are only supported inside a task block
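
    A hedged sketch of triggering a Delta Live Tables pipeline from a task; the pipeline ID is a placeholder, and in practice it would normally come from a databricks.Pipeline resource:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const pipelineJob = new databricks.Job("pipeline-job", {
        tasks: [{
            taskKey: "refresh",
            pipelineTask: {
                pipelineId: "abcd1234-5678-90ab-cdef-1234567890ab", // placeholder pipeline ID
                fullRefresh: false, // set to true to recompute all tables in the pipeline
            },
        }],
    });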

    JobPythonWheelTask, JobPythonWheelTaskArgs

    EntryPoint string
    Python function as entry point for the task
    NamedParameters Dictionary<string, object>
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters List<string>
    Parameters for the task
    EntryPoint string
    Python function as entry point for the task
    NamedParameters map[string]interface{}
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters []string
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<String,Object>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
    entryPoint string
    Python function as entry point for the task
    namedParameters {[key: string]: any}
    Named parameters for the task
    packageName string
    Name of Python package
    parameters string[]
    Parameters for the task
    entry_point str
    Python function as entry point for the task
    named_parameters Mapping[str, Any]
    Named parameters for the task
    package_name str
    Name of Python package
    parameters Sequence[str]
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<Any>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
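
    A minimal sketch of a Python wheel task; the wheel location, package name, and entry point are hypothetical and assume the wheel has already been uploaded:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const wheelJob = new databricks.Job("wheel-job", {
        tasks: [{
            taskKey: "train",
            existingClusterId: "1234-567890-abcde123", // placeholder cluster ID
            libraries: [{ whl: "dbfs:/FileStore/wheels/example_pkg-0.1.0-py3-none-any.whl" }], // placeholder wheel path
            pythonWheelTask: {
                packageName: "example_pkg",
                entryPoint: "main",
                namedParameters: {
                    "--env": "dev",
                },
            },
        }],
    });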

    JobQueue, JobQueueArgs

    Enabled bool
    If true, enable queueing for the job.
    Enabled bool
    If true, enable queueing for the job.
    enabled Boolean
    If true, enable queueing for the job.
    enabled boolean
    If true, enable queueing for the job.
    enabled bool
    If true, enable queueing for the job.
    enabled Boolean
    If true, enable queueing for the job.

    JobRunAs, JobRunAsArgs

    ServicePrincipalName string

    The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.

    Example:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const _this = new databricks.Job("this", {runAs: { servicePrincipalName: "8d23ae77-912e-4a19-81e4-b9c3f5cc9349", }});

    import pulumi
    import pulumi_databricks as databricks
    
    this = databricks.Job("this", run_as=databricks.JobRunAsArgs(
        service_principal_name="8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
    ))
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var @this = new Databricks.Job("this", new()
        {
            RunAs = new Databricks.Inputs.JobRunAsArgs
            {
                ServicePrincipalName = "8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "this", &databricks.JobArgs{
    			RunAs: &databricks.JobRunAsArgs{
    				ServicePrincipalName: pulumi.String("8d23ae77-912e-4a19-81e4-b9c3f5cc9349"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobRunAsArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var this_ = new Job("this", JobArgs.builder()        
                .runAs(JobRunAsArgs.builder()
                    .servicePrincipalName("8d23ae77-912e-4a19-81e4-b9c3f5cc9349")
                    .build())
                .build());
    
        }
    }
    
    resources:
      this:
        type: databricks:Job
        properties:
          # ...
          runAs:
            servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349
    
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    ServicePrincipalName string

    The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.

    Example:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const _this = new databricks.Job("this", {runAs: { servicePrincipalName: "8d23ae77-912e-4a19-81e4-b9c3f5cc9349", }});

    import pulumi
    import pulumi_databricks as databricks
    
    this = databricks.Job("this", run_as=databricks.JobRunAsArgs(
        service_principal_name="8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
    ))
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var @this = new Databricks.Job("this", new()
        {
            RunAs = new Databricks.Inputs.JobRunAsArgs
            {
                ServicePrincipalName = "8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "this", &databricks.JobArgs{
    			RunAs: &databricks.JobRunAsArgs{
    				ServicePrincipalName: pulumi.String("8d23ae77-912e-4a19-81e4-b9c3f5cc9349"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobRunAsArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var this_ = new Job("this", JobArgs.builder()        
                .runAs(JobRunAsArgs.builder()
                    .servicePrincipalName("8d23ae77-912e-4a19-81e4-b9c3f5cc9349")
                    .build())
                .build());
    
        }
    }
    
    resources:
      this:
        type: databricks:Job
        properties:
          # ...
          runAs:
            servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349
    
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    servicePrincipalName String

    The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.

    Example:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const _this = new databricks.Job("this", {runAs: { servicePrincipalName: "8d23ae77-912e-4a19-81e4-b9c3f5cc9349", }});

    import pulumi
    import pulumi_databricks as databricks
    
    this = databricks.Job("this", run_as=databricks.JobRunAsArgs(
        service_principal_name="8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
    ))
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var @this = new Databricks.Job("this", new()
        {
            RunAs = new Databricks.Inputs.JobRunAsArgs
            {
                ServicePrincipalName = "8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "this", &databricks.JobArgs{
    			RunAs: &databricks.JobRunAsArgs{
    				ServicePrincipalName: pulumi.String("8d23ae77-912e-4a19-81e4-b9c3f5cc9349"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobRunAsArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var this_ = new Job("this", JobArgs.builder()        
                .runAs(JobRunAsArgs.builder()
                    .servicePrincipalName("8d23ae77-912e-4a19-81e4-b9c3f5cc9349")
                    .build())
                .build());
    
        }
    }
    
    resources:
      this:
        type: databricks:Job
        properties:
          # ...
          runAs:
            servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349
    
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    servicePrincipalName string

    The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.

    Example:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const _this = new databricks.Job("this", {runAs: { servicePrincipalName: "8d23ae77-912e-4a19-81e4-b9c3f5cc9349", }});

    import pulumi
    import pulumi_databricks as databricks
    
    this = databricks.Job("this", run_as=databricks.JobRunAsArgs(
        service_principal_name="8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
    ))
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var @this = new Databricks.Job("this", new()
        {
            RunAs = new Databricks.Inputs.JobRunAsArgs
            {
                ServicePrincipalName = "8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "this", &databricks.JobArgs{
    			RunAs: &databricks.JobRunAsArgs{
    				ServicePrincipalName: pulumi.String("8d23ae77-912e-4a19-81e4-b9c3f5cc9349"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobRunAsArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var this_ = new Job("this", JobArgs.builder()        
                .runAs(JobRunAsArgs.builder()
                    .servicePrincipalName("8d23ae77-912e-4a19-81e4-b9c3f5cc9349")
                    .build())
                .build());
    
        }
    }
    
    resources:
      this:
        type: databricks:Job
        properties:
          # ...
          runAs:
            servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349
    
    userName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    service_principal_name str

    The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.

    Example:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const _this = new databricks.Job("this", {runAs: { servicePrincipalName: "8d23ae77-912e-4a19-81e4-b9c3f5cc9349", }});

    import pulumi
    import pulumi_databricks as databricks
    
    this = databricks.Job("this", run_as=databricks.JobRunAsArgs(
        service_principal_name="8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
    ))
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var @this = new Databricks.Job("this", new()
        {
            RunAs = new Databricks.Inputs.JobRunAsArgs
            {
                ServicePrincipalName = "8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "this", &databricks.JobArgs{
    			RunAs: &databricks.JobRunAsArgs{
    				ServicePrincipalName: pulumi.String("8d23ae77-912e-4a19-81e4-b9c3f5cc9349"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobRunAsArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var this_ = new Job("this", JobArgs.builder()        
                .runAs(JobRunAsArgs.builder()
                    .servicePrincipalName("8d23ae77-912e-4a19-81e4-b9c3f5cc9349")
                    .build())
                .build());
    
        }
    }
    
    resources:
      this:
        type: databricks:Job
        properties:
          # ...
          runAs:
            servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349
    
    user_name str
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    servicePrincipalName String

    The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.

    Example:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const _this = new databricks.Job("this", {runAs: { servicePrincipalName: "8d23ae77-912e-4a19-81e4-b9c3f5cc9349", }});

    import pulumi
    import pulumi_databricks as databricks
    
    this = databricks.Job("this", run_as=databricks.JobRunAsArgs(
        service_principal_name="8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
    ))
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var @this = new Databricks.Job("this", new()
        {
            RunAs = new Databricks.Inputs.JobRunAsArgs
            {
                ServicePrincipalName = "8d23ae77-912e-4a19-81e4-b9c3f5cc9349",
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "this", &databricks.JobArgs{
    			RunAs: &databricks.JobRunAsArgs{
    				ServicePrincipalName: pulumi.String("8d23ae77-912e-4a19-81e4-b9c3f5cc9349"),
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobRunAsArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var this_ = new Job("this", JobArgs.builder()        
                .runAs(JobRunAsArgs.builder()
                    .servicePrincipalName("8d23ae77-912e-4a19-81e4-b9c3f5cc9349")
                    .build())
                .build());
    
        }
    }
    
    resources:
      this:
        type: databricks:Job
        properties:
          # ...
          runAs:
            servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349
    
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.

    JobRunJobTask, JobRunJobTaskArgs

    JobId int
    (Integer) ID of the job
    JobParameters Dictionary<string, object>
    (Map) Job parameters for the task
    JobId int
    (Integer) ID of the job
    JobParameters map[string]interface{}
    (Map) Job parameters for the task
    jobId Integer
    (Integer) ID of the job
    jobParameters Map<String,Object>
    (Map) Job parameters for the task
    jobId number
    (Integer) ID of the job
    jobParameters {[key: string]: any}
    (Map) Job parameters for the task
    job_id int
    (Integer) ID of the job
    job_parameters Mapping[str, Any]
    (Map) Job parameters for the task
    jobId Number
    (Integer) ID of the job
    jobParameters Map<Any>
    (Map) Job parameters for the task
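
    A sketch of chaining jobs with a run job task; the downstream job here is a trivial placeholder, and the resource's string ID is converted to the integer that jobId expects:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Placeholder downstream job; in practice this would be an existing, fully configured job.
    const childJob = new databricks.Job("child-job", {});

    const parentJob = new databricks.Job("parent-job", {
        tasks: [{
            taskKey: "trigger-child",
            runJobTask: {
                jobId: childJob.id.apply(id => parseInt(id, 10)),
                jobParameters: {
                    environment: "dev",
                },
            },
        }],
    });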

    JobSchedule, JobScheduleArgs

    QuartzCronExpression string
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    TimezoneId string
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    PauseStatus string
    Indicate whether this schedule is paused or not. Either PAUSED or UNPAUSED. When the pause_status field is omitted and a schedule is provided, the server will default to using UNPAUSED as a value for pause_status.
    QuartzCronExpression string
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    TimezoneId string
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    PauseStatus string
    Indicate whether this schedule is paused or not. Either PAUSED or UNPAUSED. When the pause_status field is omitted and a schedule is provided, the server will default to using UNPAUSED as a value for pause_status.
    quartzCronExpression String
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezoneId String
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pauseStatus String
    Indicate whether this schedule is paused or not. Either PAUSED or UNPAUSED. When the pause_status field is omitted and a schedule is provided, the server will default to using UNPAUSED as a value for pause_status.
    quartzCronExpression string
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezoneId string
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pauseStatus string
    Indicate whether this schedule is paused or not. Either PAUSED or UNPAUSED. When the pause_status field is omitted and a schedule is provided, the server will default to using UNPAUSED as a value for pause_status.
    quartz_cron_expression str
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezone_id str
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pause_status str
    Indicate whether this schedule is paused or not. Either PAUSED or UNPAUSED. When the pause_status field is omitted and a schedule is provided, the server will default to using UNPAUSED as a value for pause_status.
    quartzCronExpression String
    A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
    timezoneId String
    A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
    pauseStatus String
    Indicates whether this schedule is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted and a schedule is provided, the server defaults to UNPAUSED.
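
    As an illustration, a minimal TypeScript sketch of a scheduled job; the cron expression, cluster ID, and script path are placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const nightly = new databricks.Job("nightly", {
        name: "nightly-refresh",
        schedule: {
            quartzCronExpression: "0 0 2 * * ?",    // every day at 02:00 (Quartz syntax)
            timezoneId: "UTC",
            pauseStatus: "UNPAUSED",
        },
        tasks: [{
            taskKey: "refresh",
            existingClusterId: "1234-567890-abcde123",   // placeholder cluster ID
            sparkPythonTask: { pythonFile: "dbfs:/scripts/refresh.py" },
        }],
    });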

    JobSparkJarTask, JobSparkJarTaskArgs

    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters List<string>
    (List) Parameters passed to the main method.
    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters []string
    (List) Parameters passed to the main method.
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    (List) Parameters passed to the main method.
    jarUri string
    mainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters string[]
    (List) Parameters passed to the main method.
    jar_uri str
    main_class_name str
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters Sequence[str]
    (List) Parameters passed to the main method.
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    (List) Parameters passed to the main method.
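
    As an illustration, a hedged TypeScript sketch of a JAR task; the cluster ID, JAR path, and class name are placeholders, and the JAR is attached through the task's libraries block.

    typescript

    import * as databricks from "@pulumi/databricks";

    const jarJob = new databricks.Job("jar-job", {
        name: "spark-jar-example",
        tasks: [{
            taskKey: "main",
            existingClusterId: "1234-567890-abcde123",              // placeholder cluster ID
            libraries: [{ jar: "dbfs:/FileStore/jars/app.jar" }],   // JAR containing the main class
            sparkJarTask: {
                mainClassName: "com.example.Main",                  // should call SparkContext.getOrCreate
                parameters: ["--date", "2024-03-29"],
            },
        }],
    });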

    JobSparkPythonTask, JobSparkPythonTaskArgs

    PythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3://, abfss://, gs://), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    Parameters List<string>
    (List) Command line parameters passed to the Python file.
    Source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    PythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3://, abfss://, gs://), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    Parameters []string
    (List) Command line parameters passed to the Python file.
    Source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile String
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3://, abfss://, gs://), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters List<String>
    (List) Command line parameters passed to the Python file.
    source String
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3://, abfss://, gs://), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters string[]
    (List) Command line parameters passed to the Python file.
    source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    python_file str
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3://, abfss://, gs://), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters Sequence[str]
    (List) Command line parameters passed to the Python file.
    source str
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile String
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3://, abfss://, gs://), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters List<String>
    (List) Command line parameters passed to the Python file.
    source String
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
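
    As an illustration, a TypeScript sketch of a Python file retrieved from a Git repository (source set to GIT); the repository URL, branch, cluster ID, and relative file path are placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const pyJob = new databricks.Job("py-job", {
        name: "spark-python-example",
        gitSource: {
            url: "https://github.com/example/etl-repo",   // placeholder repository
            provider: "gitHub",
            branch: "main",
        },
        tasks: [{
            taskKey: "etl",
            existingClusterId: "1234-567890-abcde123",    // placeholder cluster ID
            sparkPythonTask: {
                pythonFile: "jobs/etl.py",                // relative path because source is GIT
                source: "GIT",
                parameters: ["--env", "dev"],
            },
        }],
    });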

    JobSparkSubmitTask, JobSparkSubmitTaskArgs

    Parameters List<string>
    (List) Command-line parameters passed to spark submit.
    Parameters []string
    (List) Command-line parameters passed to spark submit.
    parameters List<String>
    (List) Command-line parameters passed to spark submit.
    parameters string[]
    (List) Command-line parameters passed to spark submit.
    parameters Sequence[str]
    (List) Command-line parameters passed to spark submit.
    parameters List<String>
    (List) Command-line parameters passed to spark submit.
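
    As an illustration, a TypeScript sketch of a spark-submit task; spark-submit tasks typically run on a new (ephemeral) cluster, and the runtime version, node type, JAR path, and class name below are placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const submitJob = new databricks.Job("submit-job", {
        name: "spark-submit-example",
        tasks: [{
            taskKey: "submit",
            newCluster: {
                sparkVersion: "14.3.x-scala2.12",   // placeholder runtime version
                nodeTypeId: "i3.xlarge",            // placeholder node type
                numWorkers: 2,
            },
            sparkSubmitTask: {
                parameters: [
                    "--class", "com.example.Main",
                    "dbfs:/FileStore/jars/app.jar",
                    "--date", "2024-03-29",
                ],
            },
        }],
    });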

    JobTask, JobTaskArgs

    ComputeKey string
    ConditionTask JobTaskConditionTask
    DbtTask JobTaskDbtTask
    DependsOns List<JobTaskDependsOn>
    Block specifying one or more dependencies for a given task.
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EmailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    ForEachTask JobTaskForEachTask
    Health JobTaskHealth
    block described below that specifies health conditions for a given task.
    JobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    Libraries List<JobTaskLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    NewCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobTaskNotebookTask
    NotificationSettings JobTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    PipelineTask JobTaskPipelineTask
    PythonWheelTask JobTaskPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    RunIf string
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    RunJobTask JobTaskRunJobTask
    SparkJarTask JobTaskSparkJarTask
    SparkPythonTask JobTaskSparkPythonTask
    SparkSubmitTask JobTaskSparkSubmitTask
    SqlTask JobTaskSqlTask
    TaskKey string
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    WebhookNotifications JobTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    ComputeKey string
    ConditionTask JobTaskConditionTask
    DbtTask JobTaskDbtTask
    DependsOns []JobTaskDependsOn
    Block specifying one or more dependencies for a given task.
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EmailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    ForEachTask JobTaskForEachTask
    Health JobTaskHealth
    block described below that specifies health conditions for a given task.
    JobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    Libraries []JobTaskLibrary
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    NewCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobTaskNotebookTask
    NotificationSettings JobTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    PipelineTask JobTaskPipelineTask
    PythonWheelTask JobTaskPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    RunIf string
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    RunJobTask JobTaskRunJobTask
    SparkJarTask JobTaskSparkJarTask
    SparkPythonTask JobTaskSparkPythonTask
    SparkSubmitTask JobTaskSparkSubmitTask
    SqlTask JobTaskSqlTask
    TaskKey string
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    WebhookNotifications JobTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    computeKey String
    conditionTask JobTaskConditionTask
    dbtTask JobTaskDbtTask
    dependsOns List<JobTaskDependsOn>
    Block specifying one or more dependencies for a given task.
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    emailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    forEachTask JobTaskForEachTask
    health JobTaskHealth
    block described below that specifies health conditions for a given task.
    jobClusterKey String
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries List<JobTaskLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    minRetryIntervalMillis Integer
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobTaskNotebookTask
    notificationSettings JobTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    pipelineTask JobTaskPipelineTask
    pythonWheelTask JobTaskPythonWheelTask
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    runIf String
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    runJobTask JobTaskRunJobTask
    sparkJarTask JobTaskSparkJarTask
    sparkPythonTask JobTaskSparkPythonTask
    sparkSubmitTask JobTaskSparkSubmitTask
    sqlTask JobTaskSqlTask
    taskKey String
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhookNotifications JobTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    computeKey string
    conditionTask JobTaskConditionTask
    dbtTask JobTaskDbtTask
    dependsOns JobTaskDependsOn[]
    Block specifying one or more dependencies for a given task.
    description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    emailNotifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    forEachTask JobTaskForEachTask
    health JobTaskHealth
    block described below that specifies health conditions for a given task.
    jobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries JobTaskLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    minRetryIntervalMillis number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobTaskNotebookTask
    notificationSettings JobTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    pipelineTask JobTaskPipelineTask
    pythonWheelTask JobTaskPythonWheelTask
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    runIf string
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    runJobTask JobTaskRunJobTask
    sparkJarTask JobTaskSparkJarTask
    sparkPythonTask JobTaskSparkPythonTask
    sparkSubmitTask JobTaskSparkSubmitTask
    sqlTask JobTaskSqlTask
    taskKey string
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhookNotifications JobTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    compute_key str
    condition_task JobTaskConditionTask
    dbt_task JobTaskDbtTask
    depends_ons Sequence[JobTaskDependsOn]
    Block specifying one or more dependencies for a given task.
    description str
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    email_notifications JobTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    for_each_task JobTaskForEachTask
    health JobTaskHealth
    block described below that specifies health conditions for a given task.
    job_cluster_key str
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries Sequence[JobTaskLibrary]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    min_retry_interval_millis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    new_cluster JobTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobTaskNotebookTask
    notification_settings JobTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    pipeline_task JobTaskPipelineTask
    python_wheel_task JobTaskPythonWheelTask
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    run_if str
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    run_job_task JobTaskRunJobTask
    spark_jar_task JobTaskSparkJarTask
    spark_python_task JobTaskSparkPythonTask
    spark_submit_task JobTaskSparkSubmitTask
    sql_task JobTaskSqlTask
    task_key str
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhook_notifications JobTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    computeKey String
    conditionTask Property Map
    dbtTask Property Map
    dependsOns List<Property Map>
    Block specifying one or more dependencies for a given task.
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    emailNotifications Property Map
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    forEachTask Property Map
    health Property Map
    block described below that specifies health conditions for a given task.
    jobClusterKey String
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    minRetryIntervalMillis Number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map
    notificationSettings Property Map
    An optional block controlling the notification settings on the job level (described below).
    pipelineTask Property Map
    pythonWheelTask Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    runIf String
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    runJobTask Property Map
    sparkJarTask Property Map
    sparkPythonTask Property Map
    sparkSubmitTask Property Map
    sqlTask Property Map
    taskKey String
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhookNotifications Property Map
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
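
    As an illustration, a TypeScript sketch of a two-task job wired together with task_key, depends_ons, and a shared job cluster; the runtime version, node type, and script paths are placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const pipeline = new databricks.Job("pipeline", {
        name: "multi-task-pipeline",
        jobClusters: [{
            jobClusterKey: "shared",                // referenced by both tasks below
            newCluster: {
                sparkVersion: "14.3.x-scala2.12",   // placeholder runtime version
                nodeTypeId: "i3.xlarge",            // placeholder node type
                numWorkers: 2,
            },
        }],
        tasks: [
            {
                taskKey: "extract",
                jobClusterKey: "shared",
                sparkPythonTask: { pythonFile: "dbfs:/scripts/extract.py" },
                maxRetries: 2,
                minRetryIntervalMillis: 60000,
            },
            {
                taskKey: "load",
                jobClusterKey: "shared",
                dependsOns: [{ taskKey: "extract" }],   // runs only after "extract"
                runIf: "ALL_SUCCESS",
                sparkPythonTask: { pythonFile: "dbfs:/scripts/load.py" },
            },
        ],
    });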

    JobTaskConditionTask, JobTaskConditionTaskArgs

    Left string
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Right string
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    Left string
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Right string
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left String
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right String
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left string
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right string
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left str
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op str

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right str
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left String
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right String
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
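
    As an illustration, a TypeScript sketch of a condition task gating a downstream task; the job parameter name, the {{job.parameters.run_mode}} reference syntax, and the cluster ID are assumptions and placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const conditional = new databricks.Job("conditional", {
        name: "condition-task-example",
        parameters: [{ name: "run_mode", default: "full" }],
        tasks: [
            {
                taskKey: "check_mode",                     // no cluster needed for a condition task
                conditionTask: {
                    left: "{{job.parameters.run_mode}}",   // assumed dynamic value reference
                    op: "EQUAL_TO",
                    right: "full",
                },
            },
            {
                taskKey: "full_refresh",
                dependsOns: [{ taskKey: "check_mode", outcome: "true" }],
                existingClusterId: "1234-567890-abcde123", // placeholder cluster ID
                sparkPythonTask: { pythonFile: "dbfs:/scripts/full_refresh.py" },
            },
        ],
    });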

    JobTaskDbtTask, JobTaskDbtTaskArgs

    Commands List<string>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    Catalog string
    The name of the catalog to use inside Unity Catalog.
    ProfilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    ProjectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    Schema string
    The name of the schema dbt should run in. Defaults to default.
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    WarehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    Commands []string
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    Catalog string
    The name of the catalog to use inside Unity Catalog.
    ProfilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    ProjectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    Schema string
    The name of the schema dbt should run in. Defaults to default.
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    WarehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands List<String>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog String
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory String
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory String
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema String
    The name of the schema dbt should run in. Defaults to default.
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId String

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands string[]
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog string
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema string
    The name of the schema dbt should run in. Defaults to default.
    source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands Sequence[str]
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog str
    The name of the catalog to use inside Unity Catalog.
    profiles_directory str
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    project_directory str
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema str
    The name of the schema dbt should run in. Defaults to default.
    source str
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouse_id str

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands List<String>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog String
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory String
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory String
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema String
    The name of the schema dbt should run in. Defaults to default.
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId String

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.
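
    As an illustration, a TypeScript sketch of a dbt task that pulls the project from git_source and executes against a SQL warehouse; the repository URL, cluster ID, warehouse ID, and the dbt-databricks PyPI pin are placeholders and assumptions.

    typescript

    import * as databricks from "@pulumi/databricks";

    const dbtJob = new databricks.Job("dbt-job", {
        name: "dbt-example",
        gitSource: {
            url: "https://github.com/example/dbt-project",   // placeholder repository
            provider: "gitHub",
            branch: "main",
        },
        tasks: [{
            taskKey: "dbt",
            existingClusterId: "1234-567890-abcde123",       // placeholder cluster running the dbt CLI
            libraries: [{ pypi: { package: "dbt-databricks>=1.7.0" } }],   // assumed way to install dbt
            dbtTask: {
                commands: ["dbt deps", "dbt seed", "dbt run"],
                warehouseId: "a1b2c3d4e5f6g7h8",             // placeholder SQL warehouse ID
                schema: "analytics",
            },
        }],
    });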

    JobTaskDependsOn, JobTaskDependsOnArgs

    TaskKey string
    The name of the task this task depends on.
    Outcome string
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    TaskKey string
    The name of the task this task depends on.
    Outcome string
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    taskKey String
    The name of the task this task depends on.
    outcome String
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    taskKey string
    The name of the task this task depends on.
    outcome string
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    task_key str
    The name of the task this task depends on.
    outcome str
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    taskKey String
    The name of the task this task depends on.
    outcome String
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
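
    As an illustration, a TypeScript sketch of a fan-in dependency where one task waits on two upstream tasks; the cluster ID and script paths are placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const fanIn = new databricks.Job("fan-in", {
        name: "depends-on-example",
        tasks: [
            {
                taskKey: "build_a",
                existingClusterId: "1234-567890-abcde123",   // placeholder cluster ID
                sparkPythonTask: { pythonFile: "dbfs:/scripts/build_a.py" },
            },
            {
                taskKey: "build_b",
                existingClusterId: "1234-567890-abcde123",
                sparkPythonTask: { pythonFile: "dbfs:/scripts/build_b.py" },
            },
            {
                taskKey: "publish",
                existingClusterId: "1234-567890-abcde123",
                dependsOns: [{ taskKey: "build_a" }, { taskKey: "build_b" }],   // fan-in
                sparkPythonTask: { pythonFile: "dbfs:/scripts/publish.py" },
            },
        ],
    });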

    JobTaskEmailNotifications, JobTaskEmailNotificationsArgs

    NoAlertForSkippedRuns bool
    (Bool) Don't send an alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    OnDurationWarningThresholdExceededs List<string>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    OnFailures List<string>
    (List) list of emails to notify when the run fails.
    OnStarts List<string>
    (List) list of emails to notify when the run starts.
    OnSuccesses List<string>
    (List) list of emails to notify when the run completes successfully.
    NoAlertForSkippedRuns bool
    (Bool) Don't send an alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    OnDurationWarningThresholdExceededs []string
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    OnFailures []string
    (List) list of emails to notify when the run fails.
    OnStarts []string
    (List) list of emails to notify when the run starts.
    OnSuccesses []string
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns Boolean
    (Bool) Don't send an alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    onDurationWarningThresholdExceededs List<String>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures List<String>
    (List) list of emails to notify when the run fails.
    onStarts List<String>
    (List) list of emails to notify when the run starts.
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns boolean
    (Bool) Don't send an alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    onDurationWarningThresholdExceededs string[]
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures string[]
    (List) list of emails to notify when the run fails.
    onStarts string[]
    (List) list of emails to notify when the run starts.
    onSuccesses string[]
    (List) list of emails to notify when the run completes successfully.
    no_alert_for_skipped_runs bool
    (Bool) Don't send an alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    on_duration_warning_threshold_exceededs Sequence[str]
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    on_failures Sequence[str]
    (List) list of emails to notify when the run fails.
    on_starts Sequence[str]
    (List) list of emails to notify when the run starts.
    on_successes Sequence[str]
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns Boolean
    (Bool) Don't send an alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    onDurationWarningThresholdExceededs List<String>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures List<String>
    (List) list of emails to notify when the run fails.
    onStarts List<String>
    (List) list of emails to notify when the run starts.
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully.
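
    As an illustration, a TypeScript sketch of task-level email notifications combined with a health rule that feeds on_duration_warning_threshold_exceededs; the email address, threshold, cluster ID, and the health rules shape are placeholders and assumptions.

    typescript

    import * as databricks from "@pulumi/databricks";

    const notified = new databricks.Job("notified", {
        name: "email-notifications-example",
        tasks: [{
            taskKey: "main",
            existingClusterId: "1234-567890-abcde123",   // placeholder cluster ID
            sparkPythonTask: { pythonFile: "dbfs:/scripts/main.py" },
            health: {
                // assumed health-rule shape: warn when the run exceeds one hour
                rules: [{ metric: "RUN_DURATION_SECONDS", op: "GREATER_THAN", value: 3600 }],
            },
            emailNotifications: {
                onFailures: ["oncall@example.com"],
                onDurationWarningThresholdExceededs: ["oncall@example.com"],
                noAlertForSkippedRuns: true,
            },
        }],
    });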

    JobTaskForEachTask, JobTaskForEachTaskArgs

    Inputs string
    (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter.
    Task JobTaskForEachTaskTask
    Task to run against the inputs list.
    Concurrency int
    Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.
    Inputs string
    (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter.
    Task JobTaskForEachTaskTask
    Task to run against the inputs list.
    Concurrency int
    Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.
    inputs String
    (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter.
    task JobTaskForEachTaskTask
    Task to run against the inputs list.
    concurrency Integer
    Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.
    inputs string
    (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter.
    task JobTaskForEachTaskTask
    Task to run against the inputs list.
    concurrency number
    Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.
    inputs str
    (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter.
    task JobTaskForEachTaskTask
    Task to run against the inputs list.
    concurrency int
    Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.
    inputs String
    (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter.
    task Property Map
    Task to run against the inputs list.
    concurrency Number
    Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.
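
    As an illustration, a TypeScript sketch of a for-each task that fans out over a JSON list of inputs; the {{input}} reference, cluster ID, and script path are assumptions and placeholders.

    typescript

    import * as databricks from "@pulumi/databricks";

    const fanOut = new databricks.Job("fan-out", {
        name: "for-each-example",
        tasks: [{
            taskKey: "process_regions",
            forEachTask: {
                inputs: JSON.stringify(["us", "eu", "apac"]),   // JSON array iterated by the nested task
                concurrency: 2,                                 // at most two iterations run at once
                task: {
                    taskKey: "process_one",
                    existingClusterId: "1234-567890-abcde123",  // placeholder cluster ID
                    sparkPythonTask: {
                        pythonFile: "dbfs:/scripts/process.py",
                        parameters: ["{{input}}"],              // assumed reference to the current input
                    },
                },
            },
        }],
    });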

    JobTaskForEachTaskTask, JobTaskForEachTaskTaskArgs

    ComputeKey string
    ConditionTask JobTaskForEachTaskTaskConditionTask
    DbtTask JobTaskForEachTaskTaskDbtTask
    DependsOns List<JobTaskForEachTaskTaskDependsOn>
    Block specifying one or more dependencies for a given task.
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EmailNotifications JobTaskForEachTaskTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    Health JobTaskForEachTaskTaskHealth
    block described below that specifies health conditions for a given task.
    JobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    Libraries List<JobTaskForEachTaskTaskLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    NewCluster JobTaskForEachTaskTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobTaskForEachTaskTaskNotebookTask
    NotificationSettings JobTaskForEachTaskTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    PipelineTask JobTaskForEachTaskTaskPipelineTask
    PythonWheelTask JobTaskForEachTaskTaskPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    RunIf string
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    RunJobTask JobTaskForEachTaskTaskRunJobTask
    SparkJarTask JobTaskForEachTaskTaskSparkJarTask
    SparkPythonTask JobTaskForEachTaskTaskSparkPythonTask
    SparkSubmitTask JobTaskForEachTaskTaskSparkSubmitTask
    SqlTask JobTaskForEachTaskTaskSqlTask
    TaskKey string
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    WebhookNotifications JobTaskForEachTaskTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    ComputeKey string
    ConditionTask JobTaskForEachTaskTaskConditionTask
    DbtTask JobTaskForEachTaskTaskDbtTask
    DependsOns []JobTaskForEachTaskTaskDependsOn
    Block specifying one or more dependencies for a given task.
    Description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    EmailNotifications JobTaskForEachTaskTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    ExistingClusterId string
    Health JobTaskForEachTaskTaskHealth
    block described below that specifies health conditions for a given task.
    JobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    Libraries []JobTaskForEachTaskTaskLibrary
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    MaxRetries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    MinRetryIntervalMillis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    NewCluster JobTaskForEachTaskTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    NotebookTask JobTaskForEachTaskTaskNotebookTask
    NotificationSettings JobTaskForEachTaskTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    PipelineTask JobTaskForEachTaskTaskPipelineTask
    PythonWheelTask JobTaskForEachTaskTaskPythonWheelTask
    RetryOnTimeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    RunIf string
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    RunJobTask JobTaskForEachTaskTaskRunJobTask
    SparkJarTask JobTaskForEachTaskTaskSparkJarTask
    SparkPythonTask JobTaskForEachTaskTaskSparkPythonTask
    SparkSubmitTask JobTaskForEachTaskTaskSparkSubmitTask
    SqlTask JobTaskForEachTaskTaskSqlTask
    TaskKey string
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    TimeoutSeconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    WebhookNotifications JobTaskForEachTaskTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    computeKey String
    conditionTask JobTaskForEachTaskTaskConditionTask
    dbtTask JobTaskForEachTaskTaskDbtTask
    dependsOns List<JobTaskForEachTaskTaskDependsOn>
    Block specifying one or more dependencies for a given task.
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    emailNotifications JobTaskForEachTaskTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    health JobTaskForEachTaskTaskHealth
    block described below that specifies health conditions for a given task.
    jobClusterKey String
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries List<JobTaskForEachTaskTaskLibrary>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    maxRetries Integer
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    minRetryIntervalMillis Integer
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster JobTaskForEachTaskTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobTaskForEachTaskTaskNotebookTask
    notificationSettings JobTaskForEachTaskTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    pipelineTask JobTaskForEachTaskTaskPipelineTask
    pythonWheelTask JobTaskForEachTaskTaskPythonWheelTask
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    runIf String
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    runJobTask JobTaskForEachTaskTaskRunJobTask
    sparkJarTask JobTaskForEachTaskTaskSparkJarTask
    sparkPythonTask JobTaskForEachTaskTaskSparkPythonTask
    sparkSubmitTask JobTaskForEachTaskTaskSparkSubmitTask
    sqlTask JobTaskForEachTaskTaskSqlTask
    taskKey String
    String specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeoutSeconds Integer
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhookNotifications JobTaskForEachTaskTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    computeKey string
    conditionTask JobTaskForEachTaskTaskConditionTask
    dbtTask JobTaskForEachTaskTaskDbtTask
    dependsOns JobTaskForEachTaskTaskDependsOn[]
    Block specifying one or more dependencies for a given task.
    description string
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    emailNotifications JobTaskForEachTaskTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId string
    health JobTaskForEachTaskTaskHealth
    block described below that specifies health conditions for a given task.
    jobClusterKey string
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
    libraries JobTaskForEachTaskTaskLibrary[]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    maxRetries number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED or INTERNAL_ERROR.
    minRetryIntervalMillis number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster JobTaskForEachTaskTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebookTask JobTaskForEachTaskTaskNotebookTask
    notificationSettings JobTaskForEachTaskTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    pipelineTask JobTaskForEachTaskTaskPipelineTask
    pythonWheelTask JobTaskForEachTaskTaskPythonWheelTask
    retryOnTimeout boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    runIf string
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    runJobTask JobTaskForEachTaskTaskRunJobTask
    sparkJarTask JobTaskForEachTaskTaskSparkJarTask
    sparkPythonTask JobTaskForEachTaskTaskSparkPythonTask
    sparkSubmitTask JobTaskForEachTaskTaskSparkSubmitTask
    sqlTask JobTaskForEachTaskTaskSqlTask
    taskKey string
    string specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeoutSeconds number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhookNotifications JobTaskForEachTaskTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    compute_key str
    condition_task JobTaskForEachTaskTaskConditionTask
    dbt_task JobTaskForEachTaskTaskDbtTask
    depends_ons Sequence[JobTaskForEachTaskTaskDependsOn]
    block specifying one or more dependencies for a given task.
    description str
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    email_notifications JobTaskForEachTaskTaskEmailNotifications
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existing_cluster_id str
    health JobTaskForEachTaskTaskHealth
    block described below that specifies health conditions for a given task.
    job_cluster_key str
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks
    libraries Sequence[JobTaskForEachTaskTaskLibrary]
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    max_retries int
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.
    min_retry_interval_millis int
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    new_cluster JobTaskForEachTaskTaskNewCluster
    Same set of parameters as for databricks.Cluster resource.
    notebook_task JobTaskForEachTaskTaskNotebookTask
    notification_settings JobTaskForEachTaskTaskNotificationSettings
    An optional block controlling the notification settings on the job level (described below).
    pipeline_task JobTaskForEachTaskTaskPipelineTask
    python_wheel_task JobTaskForEachTaskTaskPythonWheelTask
    retry_on_timeout bool
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    run_if str
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    run_job_task JobTaskForEachTaskTaskRunJobTask
    spark_jar_task JobTaskForEachTaskTaskSparkJarTask
    spark_python_task JobTaskForEachTaskTaskSparkPythonTask
    spark_submit_task JobTaskForEachTaskTaskSparkSubmitTask
    sql_task JobTaskForEachTaskTaskSqlTask
    task_key str
    string specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeout_seconds int
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhook_notifications JobTaskForEachTaskTaskWebhookNotifications
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
    computeKey String
    conditionTask Property Map
    dbtTask Property Map
    dependsOns List<Property Map>
    block specifying one or more dependencies for a given task.
    description String
    An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
    emailNotifications Property Map
    (List) An optional set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.
    existingClusterId String
    health Property Map
    block described below that specifies health conditions for a given task.
    jobClusterKey String
    Identifier that can be referenced in a task block, so that the cluster is shared between tasks
    libraries List<Property Map>
    (Set) An optional list of libraries to be installed on the cluster that will execute the job.
    maxRetries Number
    (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED or INTERNAL_ERROR lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have one of the following lifecycle states: PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, or INTERNAL_ERROR.
    minRetryIntervalMillis Number
    (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
    newCluster Property Map
    Same set of parameters as for databricks.Cluster resource.
    notebookTask Property Map
    notificationSettings Property Map
    An optional block controlling the notification settings on the job level (described below).
    pipelineTask Property Map
    pythonWheelTask Property Map
    retryOnTimeout Boolean
    (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
    runIf String
    An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to ALL_SUCCESS.
    runJobTask Property Map
    sparkJarTask Property Map
    sparkPythonTask Property Map
    sparkSubmitTask Property Map
    sqlTask Property Map
    taskKey String
    string specifying a unique key for a given task.

    • *_task - (Required) one of the specific task blocks described below:
    timeoutSeconds Number
    (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
    webhookNotifications Property Map
    (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
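
    The task block nested under for_each_task mirrors a regular task block: it describes the single task that the wrapper repeats once per input value. A minimal TypeScript sketch, assuming the wrapper exposes inputs, concurrency, and task as in the Jobs REST API; the cluster ID, notebook path, input values, and iteration reference are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    // Hypothetical job: run one notebook once per input value.
    const loopJob = new databricks.Job("loop-job", {
        name: "for-each-demo",
        tasks: [{
            taskKey: "fan_out",
            forEachTask: {
                inputs: JSON.stringify(["bronze", "silver", "gold"]), // serialized list of inputs (assumed field)
                concurrency: 2,                                       // iterations allowed to run in parallel (assumed field)
                task: {
                    taskKey: "process_one",
                    existingClusterId: "0123-456789-abcdefgh",        // placeholder cluster ID
                    notebookTask: {
                        notebookPath: "/Shared/process",              // placeholder notebook
                        baseParameters: { layer: "{{input}}" },       // current iteration value (illustrative reference)
                    },
                },
            },
        }],
    });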

    JobTaskForEachTaskTaskConditionTask, JobTaskForEachTaskTaskConditionTaskArgs

    Left string
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Right string
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    Left string
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Right string
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left String
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right String
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left string
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right string
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left str
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op str

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right str
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
    left String
    The left operand of the condition task. It could be a string value, job state, or a parameter reference.
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    right String
    The right operand of the condition task. It could be a string value, job state, or parameter reference.
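
    A minimal TypeScript sketch of how left, op, and right combine: the condition task compares a job parameter against a literal, and a downstream task depends on its "true" outcome. The parameter reference, cluster ID, and notebook paths are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const conditionalJob = new databricks.Job("conditional-job", {
        tasks: [
            {
                taskKey: "check_mode",
                conditionTask: {
                    left: "{{job.parameters.mode}}", // left operand, here a parameter reference
                    op: "EQUAL_TO",                  // one of the documented operators
                    right: "full",                   // right operand, a literal string
                },
            },
            {
                taskKey: "full_load",
                dependsOns: [{ taskKey: "check_mode", outcome: "true" }], // run only when the condition is true
                existingClusterId: "0123-456789-abcdefgh",                // placeholder cluster ID
                notebookTask: { notebookPath: "/Shared/full_load" },      // placeholder notebook
            },
        ],
    });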

    JobTaskForEachTaskTaskDbtTask, JobTaskForEachTaskTaskDbtTaskArgs

    Commands List<string>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    Catalog string
    The name of the catalog to use inside Unity Catalog.
    ProfilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    ProjectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    Schema string
    The name of the schema dbt should run in. Defaults to default.
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    WarehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    Commands []string
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    Catalog string
    The name of the catalog to use inside Unity Catalog.
    ProfilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    ProjectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    Schema string
    The name of the schema dbt should run in. Defaults to default.
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    WarehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands List<String>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog String
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory String
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory String
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema String
    The name of the schema dbt should run in. Defaults to default.
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId String

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands string[]
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog string
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory string
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory string
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema string
    The name of the schema dbt should run in. Defaults to default.
    source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId string

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands Sequence[str]
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog str
    The name of the catalog to use inside Unity Catalog.
    profiles_directory str
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    project_directory str
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema str
    The name of the schema dbt should run in. Defaults to default.
    source str
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouse_id str

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.

    commands List<String>
    (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt".
    catalog String
    The name of the catalog to use inside Unity Catalog.
    profilesDirectory String
    The relative path to the directory in the repository specified by git_source where dbt should look for the profiles.yml file. If not specified, defaults to the repository's root directory. Equivalent to passing --profiles-dir to a dbt command.
    projectDirectory String
    The path where dbt should look for dbt_project.yml. Equivalent to passing --project-dir to the dbt CLI.

    • If source is GIT: Relative path to the directory in the repository specified in the git_source block. Defaults to the repository's root directory when not specified.
    • If source is WORKSPACE: Absolute path to the folder in the workspace.
    schema String
    The name of the schema dbt should run in. Defaults to default.
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    warehouseId String

    The ID of the SQL warehouse that dbt should execute against.

    You also need to include a git_source block to configure the repository that contains the dbt project.
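
    A TypeScript sketch of a dbt task sourced from Git; the repository URL, SQL warehouse ID, schema name, and cluster settings are placeholders, and the git_source block is included because the project lives in a remote repository:

    typescript

    import * as databricks from "@pulumi/databricks";

    const dbtJob = new databricks.Job("dbt-job", {
        gitSource: {
            url: "https://github.com/example/dbt-project", // placeholder repository
            provider: "gitHub",
            branch: "main",
        },
        tasks: [{
            taskKey: "dbt_run",
            newCluster: {
                numWorkers: 1,
                sparkVersion: "13.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
            },
            libraries: [{ pypi: { package: "dbt-databricks>=1.0.0,<2.0.0" } }], // dbt itself runs on the cluster
            dbtTask: {
                commands: ["dbt deps", "dbt run"],  // every command must start with "dbt"
                schema: "analytics",                // placeholder schema
                warehouseId: "1234567890abcdef",    // placeholder SQL warehouse ID
            },
        }],
    });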

    JobTaskForEachTaskTaskDependsOn, JobTaskForEachTaskTaskDependsOnArgs

    TaskKey string
    The name of the task this task depends on.
    Outcome string
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    TaskKey string
    The name of the task this task depends on.
    Outcome string
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    taskKey String
    The name of the task this task depends on.
    outcome String
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    taskKey string
    The name of the task this task depends on.
    outcome string
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    task_key str
    The name of the task this task depends on.
    outcome str
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
    taskKey String
    The name of the task this task depends on.
    outcome String
    Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are "true" or "false".
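
    A short TypeScript sketch of depends_on wiring: the "load" task runs only after "extract" completes successfully, and outcome is only set when the upstream task is a condition task. Task keys, cluster ID, and notebook paths are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const chainedJob = new databricks.Job("chained-job", {
        tasks: [
            {
                taskKey: "extract",
                existingClusterId: "0123-456789-abcdefgh",         // placeholder cluster ID
                notebookTask: { notebookPath: "/Shared/extract" }, // placeholder notebook
            },
            {
                taskKey: "load",
                dependsOns: [{ taskKey: "extract" }],              // waits for "extract"
                existingClusterId: "0123-456789-abcdefgh",
                notebookTask: { notebookPath: "/Shared/load" },    // placeholder notebook
            },
        ],
    });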

    JobTaskForEachTaskTaskEmailNotifications, JobTaskForEachTaskTaskEmailNotificationsArgs

    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    OnDurationWarningThresholdExceededs List<string>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    OnFailures List<string>
    (List) list of emails to notify when the run fails.
    OnStarts List<string>
    (List) list of emails to notify when the run starts.
    OnSuccesses List<string>
    (List) list of emails to notify when the run completes successfully.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    OnDurationWarningThresholdExceededs []string
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    OnFailures []string
    (List) list of emails to notify when the run fails.
    OnStarts []string
    (List) list of emails to notify when the run starts.
    OnSuccesses []string
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    onDurationWarningThresholdExceededs List<String>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures List<String>
    (List) list of emails to notify when the run fails.
    onStarts List<String>
    (List) list of emails to notify when the run starts.
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns boolean
    (Bool) don't send alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    onDurationWarningThresholdExceededs string[]
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures string[]
    (List) list of emails to notify when the run fails.
    onStarts string[]
    (List) list of emails to notify when the run starts.
    onSuccesses string[]
    (List) list of emails to notify when the run completes successfully.
    no_alert_for_skipped_runs bool
    (Bool) don't send alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    on_duration_warning_threshold_exceededs Sequence[str]
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    on_failures Sequence[str]
    (List) list of emails to notify when the run fails.
    on_starts Sequence[str]
    (List) list of emails to notify when the run starts.
    on_successes Sequence[str]
    (List) list of emails to notify when the run completes successfully.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs. (It's recommended to use the corresponding setting in the notification_settings configuration block).
    onDurationWarningThresholdExceededs List<String>
    (List) list of emails to notify when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.
    onFailures List<String>
    (List) list of emails to notify when the run fails.
    onStarts List<String>
    (List) list of emails to notify when the run starts.
    onSuccesses List<String>
    (List) list of emails to notify when the run completes successfully.
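
    A TypeScript sketch of task-level email notifications; the addresses, cluster ID, and notebook path are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const notifiedJob = new databricks.Job("notified-job", {
        tasks: [{
            taskKey: "main",
            existingClusterId: "0123-456789-abcdefgh",      // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/main" }, // placeholder notebook
            emailNotifications: {
                onFailures: ["oncall@example.com"],         // placeholder addresses
                onSuccesses: ["team@example.com"],
                noAlertForSkippedRuns: true,
            },
        }],
    });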

    JobTaskForEachTaskTaskHealth, JobTaskForEachTaskTaskHealthArgs

    Rules List<JobTaskForEachTaskTaskHealthRule>
    list of rules that are represented as objects with the following attributes:
    Rules []JobTaskForEachTaskTaskHealthRule
    list of rules that are represented as objects with the following attributes:
    rules List<JobTaskForEachTaskTaskHealthRule>
    list of rules that are represented as objects with the following attributes:
    rules JobTaskForEachTaskTaskHealthRule[]
    list of rules that are represented as objects with the following attributes:
    rules Sequence[JobTaskForEachTaskTaskHealthRule]
    list of rules that are represented as objects with the following attributes:
    rules List<Property Map>
    list of rules that are represented as objects with the following attributes:

    JobTaskForEachTaskTaskHealthRule, JobTaskForEachTaskTaskHealthRuleArgs

    Metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Value int
    integer value used to compare to the given metric.
    Metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Value int
    integer value used to compare to the given metric.
    metric String
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value Integer
    integer value used to compare to the given metric.
    metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value number
    integer value used to compare to the given metric.
    metric str
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op str

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value int
    integer value used to compare to the given metric.
    metric String
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value Number
    integer value used to compare to the given metric.
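
    A TypeScript sketch tying a health rule to the duration-warning email list: the rule flags runs longer than one hour, and the addresses under on_duration_warning_threshold_exceededs receive the warning. The threshold, address, cluster ID, and file path are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const monitoredJob = new databricks.Job("monitored-job", {
        tasks: [{
            taskKey: "long_running",
            existingClusterId: "0123-456789-abcdefgh",                     // placeholder cluster ID
            sparkPythonTask: { pythonFile: "/Repos/example/etl/main.py" }, // placeholder file
            health: {
                rules: [{
                    metric: "RUN_DURATION_SECONDS", // the only supported metric today
                    op: "GREATER_THAN",
                    value: 3600,                    // warn when a run exceeds one hour
                }],
            },
            emailNotifications: {
                onDurationWarningThresholdExceededs: ["oncall@example.com"], // placeholder address
            },
        }],
    });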

    JobTaskForEachTaskTaskLibrary, JobTaskForEachTaskTaskLibraryArgs

    JobTaskForEachTaskTaskLibraryCran, JobTaskForEachTaskTaskLibraryCranArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String

    JobTaskForEachTaskTaskLibraryMaven, JobTaskForEachTaskTaskLibraryMavenArgs

    Coordinates string
    Exclusions List<string>
    Repo string
    Coordinates string
    Exclusions []string
    Repo string
    coordinates String
    exclusions List<String>
    repo String
    coordinates string
    exclusions string[]
    repo string
    coordinates str
    exclusions Sequence[str]
    repo str
    coordinates String
    exclusions List<String>
    repo String

    JobTaskForEachTaskTaskLibraryPypi, JobTaskForEachTaskTaskLibraryPypiArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String
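
    The library block takes one sub-block per package source. A TypeScript sketch installing a PyPI, a Maven, and a CRAN package on the task's cluster; the package names, coordinates, notebook path, and cluster settings are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const libraryJob = new databricks.Job("library-job", {
        tasks: [{
            taskKey: "with_libs",
            newCluster: {
                numWorkers: 1,
                sparkVersion: "13.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
            },
            libraries: [
                { pypi: { package: "requests==2.31.0" } },
                { maven: { coordinates: "com.example:connector_2.12:1.0.0" } }, // placeholder coordinates
                { cran: { package: "forecast" } },
            ],
            notebookTask: { notebookPath: "/Shared/with-libs" }, // placeholder notebook
        }],
    });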

    JobTaskForEachTaskTaskNewCluster, JobTaskForEachTaskTaskNewClusterArgs

    NumWorkers int
    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobTaskForEachTaskTaskNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobTaskForEachTaskTaskNewClusterAwsAttributes
    AzureAttributes JobTaskForEachTaskTaskNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobTaskForEachTaskTaskNewClusterClusterLogConf
    ClusterMountInfos List<JobTaskForEachTaskTaskNewClusterClusterMountInfo>
    ClusterName string
    CustomTags Dictionary<string, object>
    DataSecurityMode string
    DockerImage JobTaskForEachTaskTaskNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobTaskForEachTaskTaskNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts List<JobTaskForEachTaskTaskNewClusterInitScript>
    InstancePoolId string
    NodeTypeId string
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf Dictionary<string, object>
    SparkEnvVars Dictionary<string, object>
    SshPublicKeys List<string>
    WorkloadType JobTaskForEachTaskTaskNewClusterWorkloadType
    NumWorkers int
    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobTaskForEachTaskTaskNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobTaskForEachTaskTaskNewClusterAwsAttributes
    AzureAttributes JobTaskForEachTaskTaskNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobTaskForEachTaskTaskNewClusterClusterLogConf
    ClusterMountInfos []JobTaskForEachTaskTaskNewClusterClusterMountInfo
    ClusterName string
    CustomTags map[string]interface{}
    DataSecurityMode string
    DockerImage JobTaskForEachTaskTaskNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobTaskForEachTaskTaskNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts []JobTaskForEachTaskTaskNewClusterInitScript
    InstancePoolId string
    NodeTypeId string
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf map[string]interface{}
    SparkEnvVars map[string]interface{}
    SshPublicKeys []string
    WorkloadType JobTaskForEachTaskTaskNewClusterWorkloadType
    numWorkers Integer
    sparkVersion String
    applyPolicyDefaultValues Boolean
    autoscale JobTaskForEachTaskTaskNewClusterAutoscale
    autoterminationMinutes Integer
    awsAttributes JobTaskForEachTaskTaskNewClusterAwsAttributes
    azureAttributes JobTaskForEachTaskTaskNewClusterAzureAttributes
    clusterId String
    clusterLogConf JobTaskForEachTaskTaskNewClusterClusterLogConf
    clusterMountInfos List<JobTaskForEachTaskTaskNewClusterClusterMountInfo>
    clusterName String
    customTags Map<String,Object>
    dataSecurityMode String
    dockerImage JobTaskForEachTaskTaskNewClusterDockerImage
    driverInstancePoolId String
    driverNodeTypeId String
    enableElasticDisk Boolean
    enableLocalDiskEncryption Boolean
    gcpAttributes JobTaskForEachTaskTaskNewClusterGcpAttributes
    idempotencyToken String
    initScripts List<JobTaskForEachTaskTaskNewClusterInitScript>
    instancePoolId String
    nodeTypeId String
    policyId String
    runtimeEngine String
    singleUserName String
    sparkConf Map<String,Object>
    sparkEnvVars Map<String,Object>
    sshPublicKeys List<String>
    workloadType JobTaskForEachTaskTaskNewClusterWorkloadType
    numWorkers number
    sparkVersion string
    applyPolicyDefaultValues boolean
    autoscale JobTaskForEachTaskTaskNewClusterAutoscale
    autoterminationMinutes number
    awsAttributes JobTaskForEachTaskTaskNewClusterAwsAttributes
    azureAttributes JobTaskForEachTaskTaskNewClusterAzureAttributes
    clusterId string
    clusterLogConf JobTaskForEachTaskTaskNewClusterClusterLogConf
    clusterMountInfos JobTaskForEachTaskTaskNewClusterClusterMountInfo[]
    clusterName string
    customTags {[key: string]: any}
    dataSecurityMode string
    dockerImage JobTaskForEachTaskTaskNewClusterDockerImage
    driverInstancePoolId string
    driverNodeTypeId string
    enableElasticDisk boolean
    enableLocalDiskEncryption boolean
    gcpAttributes JobTaskForEachTaskTaskNewClusterGcpAttributes
    idempotencyToken string
    initScripts JobTaskForEachTaskTaskNewClusterInitScript[]
    instancePoolId string
    nodeTypeId string
    policyId string
    runtimeEngine string
    singleUserName string
    sparkConf {[key: string]: any}
    sparkEnvVars {[key: string]: any}
    sshPublicKeys string[]
    workloadType JobTaskForEachTaskTaskNewClusterWorkloadType
    num_workers int
    spark_version str
    apply_policy_default_values bool
    autoscale JobTaskForEachTaskTaskNewClusterAutoscale
    autotermination_minutes int
    aws_attributes JobTaskForEachTaskTaskNewClusterAwsAttributes
    azure_attributes JobTaskForEachTaskTaskNewClusterAzureAttributes
    cluster_id str
    cluster_log_conf JobTaskForEachTaskTaskNewClusterClusterLogConf
    cluster_mount_infos Sequence[JobTaskForEachTaskTaskNewClusterClusterMountInfo]
    cluster_name str
    custom_tags Mapping[str, Any]
    data_security_mode str
    docker_image JobTaskForEachTaskTaskNewClusterDockerImage
    driver_instance_pool_id str
    driver_node_type_id str
    enable_elastic_disk bool
    enable_local_disk_encryption bool
    gcp_attributes JobTaskForEachTaskTaskNewClusterGcpAttributes
    idempotency_token str
    init_scripts Sequence[JobTaskForEachTaskTaskNewClusterInitScript]
    instance_pool_id str
    node_type_id str
    policy_id str
    runtime_engine str
    single_user_name str
    spark_conf Mapping[str, Any]
    spark_env_vars Mapping[str, Any]
    ssh_public_keys Sequence[str]
    workload_type JobTaskForEachTaskTaskNewClusterWorkloadType
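
    A TypeScript sketch of a per-task new_cluster with autoscaling; the runtime version, node type, Spark configuration, tags, and file path are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const clusterJob = new databricks.Job("cluster-job", {
        tasks: [{
            taskKey: "heavy_lifting",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12",            // placeholder runtime version
                nodeTypeId: "i3.xlarge",                     // placeholder node type
                autoscale: { minWorkers: 1, maxWorkers: 8 }, // scale between 1 and 8 workers
                sparkConf: { "spark.sql.shuffle.partitions": "64" },
                customTags: { team: "data-eng" },            // placeholder tag
            },
            sparkPythonTask: { pythonFile: "/Repos/example/etl/main.py" }, // placeholder file
        }],
    });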

    JobTaskForEachTaskTaskNewClusterAutoscale, JobTaskForEachTaskTaskNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number

    JobTaskForEachTaskTaskNewClusterAwsAttributes, JobTaskForEachTaskTaskNewClusterAwsAttributesArgs

    JobTaskForEachTaskTaskNewClusterAzureAttributes, JobTaskForEachTaskTaskNewClusterAzureAttributesArgs

    JobTaskForEachTaskTaskNewClusterClusterLogConf, JobTaskForEachTaskTaskNewClusterClusterLogConfArgs

    JobTaskForEachTaskTaskNewClusterClusterLogConfDbfs, JobTaskForEachTaskTaskNewClusterClusterLogConfDbfsArgs

    JobTaskForEachTaskTaskNewClusterClusterLogConfS3, JobTaskForEachTaskTaskNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobTaskForEachTaskTaskNewClusterClusterMountInfo, JobTaskForEachTaskTaskNewClusterClusterMountInfoArgs

    JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo, JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs

    JobTaskForEachTaskTaskNewClusterDockerImage, JobTaskForEachTaskTaskNewClusterDockerImageArgs

    url String
    URL of the Docker image.
    basicAuth Property Map

    JobTaskForEachTaskTaskNewClusterDockerImageBasicAuth, JobTaskForEachTaskTaskNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String

    JobTaskForEachTaskTaskNewClusterGcpAttributes, JobTaskForEachTaskTaskNewClusterGcpAttributesArgs

    JobTaskForEachTaskTaskNewClusterInitScript, JobTaskForEachTaskTaskNewClusterInitScriptArgs

    JobTaskForEachTaskTaskNewClusterInitScriptAbfss, JobTaskForEachTaskTaskNewClusterInitScriptAbfssArgs

    JobTaskForEachTaskTaskNewClusterInitScriptDbfs, JobTaskForEachTaskTaskNewClusterInitScriptDbfsArgs

    JobTaskForEachTaskTaskNewClusterInitScriptFile, JobTaskForEachTaskTaskNewClusterInitScriptFileArgs

    JobTaskForEachTaskTaskNewClusterInitScriptGcs, JobTaskForEachTaskTaskNewClusterInitScriptGcsArgs

    JobTaskForEachTaskTaskNewClusterInitScriptS3, JobTaskForEachTaskTaskNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobTaskForEachTaskTaskNewClusterInitScriptVolumes, JobTaskForEachTaskTaskNewClusterInitScriptVolumesArgs

    JobTaskForEachTaskTaskNewClusterInitScriptWorkspace, JobTaskForEachTaskTaskNewClusterInitScriptWorkspaceArgs

    JobTaskForEachTaskTaskNewClusterWorkloadType, JobTaskForEachTaskTaskNewClusterWorkloadTypeArgs

    JobTaskForEachTaskTaskNewClusterWorkloadTypeClients, JobTaskForEachTaskTaskNewClusterWorkloadTypeClientsArgs

    Jobs bool
    Notebooks bool
    Jobs bool
    Notebooks bool
    jobs Boolean
    notebooks Boolean
    jobs boolean
    notebooks boolean
    jobs bool
    notebooks bool
    jobs Boolean
    notebooks Boolean
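
    The cluster sub-blocks above pass through to the cluster API. As a TypeScript sketch, cluster logs can be shipped to S3 and a workspace init script attached; the bucket, region, script path, notebook path, and cluster settings are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const loggedJob = new databricks.Job("logged-job", {
        tasks: [{
            taskKey: "logged",
            newCluster: {
                numWorkers: 2,
                sparkVersion: "13.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
                clusterLogConf: {
                    s3: {
                        destination: "s3://example-bucket/cluster-logs", // placeholder bucket
                        region: "us-east-1",
                        enableEncryption: true,
                    },
                },
                initScripts: [{
                    workspace: { destination: "/Shared/init/install-deps.sh" }, // placeholder script
                }],
            },
            notebookTask: { notebookPath: "/Shared/logged" }, // placeholder notebook
        }],
    });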

    JobTaskForEachTaskTaskNotebookTask, JobTaskForEachTaskTaskNotebookTaskArgs

    NotebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    BaseParameters Dictionary<string, object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    Source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    NotebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    BaseParameters map[string]interface{}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    Source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath String
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters Map<String,Object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source String
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters {[key: string]: any}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebook_path str
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    base_parameters Mapping[str, Any]
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source str
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath String
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters Map<Any>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source String
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
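
    A TypeScript sketch of a notebook task sourced from Git; the repository URL, relative notebook path, cluster ID, and parameter are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const notebookJob = new databricks.Job("notebook-job", {
        gitSource: {                                     // needed because source is GIT below
            url: "https://github.com/example/notebooks", // placeholder repository
            provider: "gitHub",
            branch: "main",
        },
        tasks: [{
            taskKey: "nightly",
            existingClusterId: "0123-456789-abcdefgh",   // placeholder cluster ID
            notebookTask: {
                notebookPath: "jobs/nightly",            // relative path inside the repository
                source: "GIT",
                baseParameters: { run_mode: "nightly" }, // read in the notebook via dbutils.widgets.get
            },
        }],
    });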

    JobTaskForEachTaskTaskNotificationSettings, JobTaskForEachTaskTaskNotificationSettingsArgs

    AlertOnLastAttempt bool
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    NoAlertForCanceledRuns bool
    (Bool) don't send alert for cancelled runs.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs.
    AlertOnLastAttempt bool
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    NoAlertForCanceledRuns bool
    (Bool) don't send alert for cancelled runs.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs.
    alertOnLastAttempt Boolean
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    noAlertForCanceledRuns Boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs.
    alertOnLastAttempt boolean
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    noAlertForCanceledRuns boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns boolean
    (Bool) don't send alert for skipped runs.
    alert_on_last_attempt bool
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    no_alert_for_canceled_runs bool
    (Bool) don't send alert for cancelled runs.
    no_alert_for_skipped_runs bool
    (Bool) don't send alert for skipped runs.
    alertOnLastAttempt Boolean
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    noAlertForCanceledRuns Boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs.
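
    A TypeScript sketch combining task-level notification settings with email notifications, so that cancelled and skipped runs stay quiet and retried runs only alert on the final attempt; the address, cluster ID, and notebook path are placeholders:

    typescript

    import * as databricks from "@pulumi/databricks";

    const quietJob = new databricks.Job("quiet-job", {
        tasks: [{
            taskKey: "main",
            existingClusterId: "0123-456789-abcdefgh",      // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/main" }, // placeholder notebook
            maxRetries: 2,
            emailNotifications: { onFailures: ["oncall@example.com"] }, // placeholder address
            notificationSettings: {
                noAlertForCanceledRuns: true,
                noAlertForSkippedRuns: true,
                alertOnLastAttempt: true, // alert only once retries are exhausted
            },
        }],
    });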

    JobTaskForEachTaskTaskPipelineTask, JobTaskForEachTaskTaskPipelineTaskArgs

    PipelineId string
    The pipeline's unique ID.
    FullRefresh bool

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    PipelineId string
    The pipeline's unique ID.
    FullRefresh bool

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipelineId String
    The pipeline's unique ID.
    fullRefresh Boolean

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipelineId string
    The pipeline's unique ID.
    fullRefresh boolean

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipeline_id str
    The pipeline's unique ID.
    full_refresh bool

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipelineId String
    The pipeline's unique ID.
    fullRefresh Boolean

    (Bool) Specifies if there should be a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block
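
    A TypeScript sketch of a task that triggers a Delta Live Tables pipeline by ID; the pipeline ID here is a placeholder (in practice it would usually come from a databricks.Pipeline resource's id output):

    typescript

    import * as databricks from "@pulumi/databricks";

    const pipelineJob = new databricks.Job("pipeline-job", {
        tasks: [{
            taskKey: "refresh_dlt",
            pipelineTask: {
                pipelineId: "0123456789abcdef0123456789abcdef", // placeholder pipeline ID
                fullRefresh: false,                             // incremental update; set true for a full refresh
            },
        }],
    });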

    JobTaskForEachTaskTaskPythonWheelTask, JobTaskForEachTaskTaskPythonWheelTaskArgs

    EntryPoint string
    Python function as entry point for the task
    NamedParameters Dictionary<string, object>
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters List<string>
    Parameters for the task
    EntryPoint string
    Python function as entry point for the task
    NamedParameters map[string]interface{}
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters []string
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<String,Object>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
    entryPoint string
    Python function as entry point for the task
    namedParameters {[key: string]: any}
    Named parameters for the task
    packageName string
    Name of Python package
    parameters string[]
    Parameters for the task
    entry_point str
    Python function as entry point for the task
    named_parameters Mapping[str, Any]
    Named parameters for the task
    package_name str
    Name of Python package
    parameters Sequence[str]
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<Any>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
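
    A TypeScript sketch of a Python wheel task; the wheel path, package name, entry point, and cluster settings are placeholders, and the wheel is attached to the cluster through a library block:

    typescript

    import * as databricks from "@pulumi/databricks";

    const wheelJob = new databricks.Job("wheel-job", {
        tasks: [{
            taskKey: "run_wheel",
            newCluster: {
                numWorkers: 1,
                sparkVersion: "13.3.x-scala2.12", // placeholder runtime version
                nodeTypeId: "i3.xlarge",          // placeholder node type
            },
            libraries: [{ whl: "dbfs:/FileStore/wheels/my_pkg-0.1.0-py3-none-any.whl" }], // placeholder wheel
            pythonWheelTask: {
                packageName: "my_pkg",            // placeholder package
                entryPoint: "main",               // placeholder entry point function
                namedParameters: { env: "prod" },
            },
        }],
    });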

    JobTaskForEachTaskTaskRunJobTask, JobTaskForEachTaskTaskRunJobTaskArgs

    JobId int
    (Integer) ID of the job
    JobParameters Dictionary<string, object>
    (Map) Job parameters for the task
    JobId int
    (Integer) ID of the job
    JobParameters map[string]interface{}
    (Map) Job parameters for the task
    jobId Integer
    (Integer) ID of the job
    jobParameters Map<String,Object>
    (Map) Job parameters for the task
    jobId number
    (Integer) ID of the job
    jobParameters {[key: string]: any}
    (Map) Job parameters for the task
    job_id int
    (Integer) ID of the job
    job_parameters Mapping[str, Any]
    (Map) Job parameters for the task
    jobId Number
    (Integer) ID of the job
    jobParameters Map<Any>
    (Map) Job parameters for the task
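
    A TypeScript sketch of a task that triggers another job; the numeric job ID and parameter are placeholders (in practice the ID would usually come from another databricks.Job resource):

    typescript

    import * as databricks from "@pulumi/databricks";

    const orchestratorJob = new databricks.Job("orchestrator-job", {
        tasks: [{
            taskKey: "trigger_downstream",
            runJobTask: {
                jobId: 123456789,                          // placeholder numeric job ID
                jobParameters: { run_date: "2024-01-01" }, // placeholder parameter
            },
        }],
    });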

    JobTaskForEachTaskTaskSparkJarTask, JobTaskForEachTaskTaskSparkJarTaskArgs

    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters List<string>
    (List) Parameters passed to the main method.
    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters []string
    (List) Parameters passed to the main method.
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    (List) Parameters passed to the main method.
    jarUri string
    mainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters string[]
    (List) Parameters passed to the main method.
    jar_uri str
    main_class_name str
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters Sequence[str]
    (List) Parameters passed to the main method.
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    (List) Parameters passed to the main method.
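
    As a non-authoritative sketch, the TypeScript snippet below attaches these fields to a job task; the JAR location, main class and cluster ID are placeholders, and the main class is assumed to call SparkContext.getOrCreate as noted above.

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Placeholders: cluster ID, JAR path and main class name.
    const jarJob = new databricks.Job("jarJob", {
        tasks: [{
            taskKey: "run_jar",
            existingClusterId: "1234-567890-abcde123",
            libraries: [{
                jar: "dbfs:/FileStore/jars/etl-assembly-0.1.0.jar",
            }],
            sparkJarTask: {
                mainClassName: "com.example.etl.Main",
                parameters: ["--date", "2024-03-29"],
            },
        }],
    });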

    JobTaskForEachTaskTaskSparkPythonTask, JobTaskForEachTaskTaskSparkPythonTaskArgs

    PythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    Parameters List<string>
    (List) Command line parameters passed to the Python file.
    Source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    PythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    Parameters []string
    (List) Command line parameters passed to the Python file.
    Source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile String
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters List<String>
    (List) Command line parameters passed to the Python file.
    source String
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters string[]
    (List) Command line parameters passed to the Python file.
    source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    python_file str
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters Sequence[str]
    (List) Command line parameters passed to the Python file.
    source str
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile String
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters List<String>
    (List) Command line parameters passed to the Python file.
    source String
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
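
    The hedged TypeScript sketch below illustrates the GIT source variant described above: the repository URL, branch, relative file path and cluster ID are placeholders, and git_source must be configured on the job for source: "GIT" to resolve.

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Placeholders: repository URL, branch, file path and cluster ID.
    const pySparkJob = new databricks.Job("pySparkJob", {
        gitSource: {
            url: "https://github.com/example/jobs-repo",
            provider: "gitHub",
            branch: "main",
        },
        tasks: [{
            taskKey: "run_script",
            existingClusterId: "1234-567890-abcde123",
            sparkPythonTask: {
                pythonFile: "jobs/aggregate.py",
                source: "GIT",
                parameters: ["--output", "dbfs:/tmp/agg"],
            },
        }],
    });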

    JobTaskForEachTaskTaskSparkSubmitTask, JobTaskForEachTaskTaskSparkSubmitTaskArgs

    Parameters List<string>
    (List) Command-line parameters passed to spark submit.
    Parameters []string
    (List) Command-line parameters passed to spark submit.
    parameters List<String>
    (List) Command-line parameters passed to spark submit.
    parameters string[]
    (List) Command-line parameters passed to spark submit.
    parameters Sequence[str]
    (List) Command-line parameters passed to spark submit.
    parameters List<String>
    (List) Command-line parameters passed to spark submit.
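
    For illustration, a minimal TypeScript sketch is shown below; the Spark version, node type and application JAR are placeholders, and a new cluster is used because Databricks documents that spark-submit tasks run only on new clusters.

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Placeholders: Spark version, node type and application JAR.
    const submitJob = new databricks.Job("submitJob", {
        tasks: [{
            taskKey: "spark_submit",
            newCluster: {
                numWorkers: 2,
                sparkVersion: "14.3.x-scala2.12",
                nodeTypeId: "i3.xlarge",
            },
            sparkSubmitTask: {
                parameters: [
                    "--class",
                    "com.example.etl.Main",
                    "dbfs:/FileStore/jars/etl-assembly-0.1.0.jar",
                ],
            },
        }],
    });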

    JobTaskForEachTaskTaskSqlTask, JobTaskForEachTaskTaskSqlTaskArgs

    Alert JobTaskForEachTaskTaskSqlTaskAlert
    block consisting of the following fields, documented below.
    Dashboard JobTaskForEachTaskTaskSqlTaskDashboard
    block consisting of the following fields, documented below.
    File JobTaskForEachTaskTaskSqlTaskFile
    block consisting of single string fields, documented below.
    Parameters Dictionary<string, object>
    (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    Query JobTaskForEachTaskTaskSqlTaskQuery
    block consisting of a single string field: query_id - the identifier of the Databricks SQL Query (databricks_sql_query).
    WarehouseId string
    ID of the SQL warehouse (the databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are supported right now.
    Alert JobTaskForEachTaskTaskSqlTaskAlert
    block consisting of the following fields, documented below.
    Dashboard JobTaskForEachTaskTaskSqlTaskDashboard
    block consisting of the following fields, documented below.
    File JobTaskForEachTaskTaskSqlTaskFile
    block consisting of single string fields, documented below.
    Parameters map[string]interface{}
    (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    Query JobTaskForEachTaskTaskSqlTaskQuery
    block consisting of a single string field: query_id - the identifier of the Databricks SQL Query (databricks_sql_query).
    WarehouseId string
    ID of the SQL warehouse (the databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are supported right now.
    alert JobTaskForEachTaskTaskSqlTaskAlert
    block consisting of the following fields, documented below.
    dashboard JobTaskForEachTaskTaskSqlTaskDashboard
    block consisting of the following fields, documented below.
    file JobTaskForEachTaskTaskSqlTaskFile
    block consisting of single string fields, documented below.
    parameters Map<String,Object>
    (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query JobTaskForEachTaskTaskSqlTaskQuery
    block consisting of a single string field: query_id - the identifier of the Databricks SQL Query (databricks_sql_query).
    warehouseId String
    ID of the SQL warehouse (the databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are supported right now.
    alert JobTaskForEachTaskTaskSqlTaskAlert
    block consisting of the following fields, documented below.
    dashboard JobTaskForEachTaskTaskSqlTaskDashboard
    block consisting of the following fields, documented below.
    file JobTaskForEachTaskTaskSqlTaskFile
    block consisting of single string fields, documented below.
    parameters {[key: string]: any}
    (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query JobTaskForEachTaskTaskSqlTaskQuery
    block consisting of a single string field: query_id - the identifier of the Databricks SQL Query (databricks_sql_query).
    warehouseId string
    ID of the SQL warehouse (the databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are supported right now.
    alert JobTaskForEachTaskTaskSqlTaskAlert
    block consisting of the following fields, documented below.
    dashboard JobTaskForEachTaskTaskSqlTaskDashboard
    block consisting of the following fields, documented below.
    file JobTaskForEachTaskTaskSqlTaskFile
    block consisting of single string fields, documented below.
    parameters Mapping[str, Any]
    (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query JobTaskForEachTaskTaskSqlTaskQuery
    block consisting of a single string field: query_id - the identifier of the Databricks SQL Query (databricks_sql_query).
    warehouse_id str
    ID of the SQL warehouse (the databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are supported right now.
    alert Property Map
    block consisting of the following fields, documented below.
    dashboard Property Map
    block consisting of the following fields, documented below.
    file Property Map
    block consisting of single string fields, documented below.
    parameters Map<Any>
    (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query Property Map
    block consisting of a single string field: query_id - the identifier of the Databricks SQL Query (databricks_sql_query).
    warehouseId String
    ID of the SQL warehouse (the databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are supported right now.

    JobTaskForEachTaskTaskSqlTaskAlert, JobTaskForEachTaskTaskSqlTaskAlertArgs

    AlertId string
    (String) identifier of the Databricks SQL Alert.
    Subscriptions List<JobTaskForEachTaskTaskSqlTaskAlertSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    AlertId string
    (String) identifier of the Databricks SQL Alert.
    Subscriptions []JobTaskForEachTaskTaskSqlTaskAlertSubscription
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    alertId String
    (String) identifier of the Databricks SQL Alert.
    subscriptions List<JobTaskForEachTaskTaskSqlTaskAlertSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.
    alertId string
    (String) identifier of the Databricks SQL Alert.
    subscriptions JobTaskForEachTaskTaskSqlTaskAlertSubscription[]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    pauseSubscriptions boolean
    flag that specifies whether subscriptions are paused.
    alert_id str
    (String) identifier of the Databricks SQL Alert.
    subscriptions Sequence[JobTaskForEachTaskTaskSqlTaskAlertSubscription]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    pause_subscriptions bool
    flag that specifies whether subscriptions are paused.
    alertId String
    (String) identifier of the Databricks SQL Alert.
    subscriptions List<Property Map>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.

    JobTaskForEachTaskTaskSqlTaskAlertSubscription, JobTaskForEachTaskTaskSqlTaskAlertSubscriptionArgs

    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId string
    userName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destination_id str
    user_name str
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.

    JobTaskForEachTaskTaskSqlTaskDashboard, JobTaskForEachTaskTaskSqlTaskDashboardArgs

    DashboardId string
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    CustomSubject string
    string specifying a custom subject for the email that is sent.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    Subscriptions List<JobTaskForEachTaskTaskSqlTaskDashboardSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    DashboardId string
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    CustomSubject string
    string specifying a custom subject for the email that is sent.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    Subscriptions []JobTaskForEachTaskTaskSqlTaskDashboardSubscription
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    dashboardId String
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    customSubject String
    string specifying a custom subject for the email that is sent.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.
    subscriptions List<JobTaskForEachTaskTaskSqlTaskDashboardSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    dashboardId string
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    customSubject string
    string specifying a custom subject for the email that is sent.
    pauseSubscriptions boolean
    flag that specifies whether subscriptions are paused.
    subscriptions JobTaskForEachTaskTaskSqlTaskDashboardSubscription[]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    dashboard_id str
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    custom_subject str
    string specifying a custom subject for the email that is sent.
    pause_subscriptions bool
    flag that specifies whether subscriptions are paused.
    subscriptions Sequence[JobTaskForEachTaskTaskSqlTaskDashboardSubscription]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.
    dashboardId String
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    customSubject String
    string specifying a custom subject for the email that is sent.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.
    subscriptions List<Property Map>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user email or destination_id for an alert destination's identifier.

    JobTaskForEachTaskTaskSqlTaskDashboardSubscription, JobTaskForEachTaskTaskSqlTaskDashboardSubscriptionArgs

    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId string
    userName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destination_id str
    user_name str
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.

    JobTaskForEachTaskTaskSqlTaskFile, JobTaskForEachTaskTaskSqlTaskFileArgs

    Path string

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardSubscriptionArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertSubscriptionArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
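
    The example above covers query, dashboard and alert tasks; as a hedged sketch, the file variant described by path and source might be wired up as follows in TypeScript (the warehouse ID and workspace path are placeholders):

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Placeholders: SQL warehouse ID and absolute workspace path to the SQL file.
    const sqlFileJob = new databricks.Job("sqlFileJob", {
        tasks: [{
            taskKey: "run_sql_file",
            sqlTask: {
                warehouseId: "1234567890abcdef",
                file: {
                    path: "/Shared/queries/daily_aggregation.sql",
                    source: "WORKSPACE",
                },
            },
        }],
    });
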
    Path string

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardSubscriptionArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertSubscriptionArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path String

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardSubscriptionArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertSubscriptionArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path string

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path str

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    source str
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path String

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
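
    Example

    The three variants above cover query, dashboard, and alert tasks. As a sketch of the file-based variant described by path and source (the workspace path is illustrative; the warehouse reference follows the same pattern as the examples above):

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Hypothetical sketch: run a .sql file stored in the workspace on an existing
    // SQL warehouse. Replace the path and warehouse reference with real resources.
    const sqlFileJob = new databricks.Job("sqlFileJob", {
        tasks: [{
            taskKey: "run_sql_file",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                file: {
                    source: "WORKSPACE",
                    path: "/Shared/queries/agg_query.sql",
                },
            },
        }],
    });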

    JobTaskForEachTaskTaskSqlTaskQuery, JobTaskForEachTaskTaskSqlTaskQueryArgs

    QueryId string
    QueryId string
    queryId String
    queryId string
    queryId String

    JobTaskForEachTaskTaskWebhookNotifications, JobTaskForEachTaskTaskWebhookNotificationsArgs

    OnDurationWarningThresholdExceededs List<JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    OnFailures List<JobTaskForEachTaskTaskWebhookNotificationsOnFailure>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    OnStarts List<JobTaskForEachTaskTaskWebhookNotificationsOnStart>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    OnSuccesses List<JobTaskForEachTaskTaskWebhookNotificationsOnSuccess>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    OnDurationWarningThresholdExceededs []JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    OnFailures []JobTaskForEachTaskTaskWebhookNotificationsOnFailure
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    OnStarts []JobTaskForEachTaskTaskWebhookNotificationsOnStart
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    OnSuccesses []JobTaskForEachTaskTaskWebhookNotificationsOnSuccess
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs List<JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    onFailures List<JobTaskForEachTaskTaskWebhookNotificationsOnFailure>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts List<JobTaskForEachTaskTaskWebhookNotificationsOnStart>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses List<JobTaskForEachTaskTaskWebhookNotificationsOnSuccess>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded[]

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    onFailures JobTaskForEachTaskTaskWebhookNotificationsOnFailure[]
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts JobTaskForEachTaskTaskWebhookNotificationsOnStart[]
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses JobTaskForEachTaskTaskWebhookNotificationsOnSuccess[]
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    on_duration_warning_threshold_exceededs Sequence[JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded]

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    on_failures Sequence[JobTaskForEachTaskTaskWebhookNotificationsOnFailure]
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    on_starts Sequence[JobTaskForEachTaskTaskWebhookNotificationsOnStart]
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    on_successes Sequence[JobTaskForEachTaskTaskWebhookNotificationsOnSuccess]
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs List<Property Map>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>. A short sketch of this wiring is shown after this list of attributes.

    onFailures List<Property Map>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts List<Property Map>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses List<Property Map>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
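
    The same webhook shape applies at both the job and task level. A minimal job-level sketch tying onDurationWarningThresholdExceededs to a RUN_DURATION_SECONDS health rule (the destination ID is a placeholder for an existing notification destination):

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Hypothetical sketch: call a webhook destination when a run exceeds one hour
    // or fails. The id values must reference notification destinations that
    // already exist in the workspace.
    const monitoredJob = new databricks.Job("monitoredJob", {
        health: {
            rules: [{
                metric: "RUN_DURATION_SECONDS",
                op: "GREATER_THAN",
                value: 3600,
            }],
        },
        webhookNotifications: {
            onDurationWarningThresholdExceededs: [{
                id: "<notification-destination-id>",
            }],
            onFailures: [{
                id: "<notification-destination-id>",
            }],
        },
    });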

    JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded, JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskForEachTaskTaskWebhookNotificationsOnFailure, JobTaskForEachTaskTaskWebhookNotificationsOnFailureArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskForEachTaskTaskWebhookNotificationsOnStart, JobTaskForEachTaskTaskWebhookNotificationsOnStartArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskForEachTaskTaskWebhookNotificationsOnSuccess, JobTaskForEachTaskTaskWebhookNotificationsOnSuccessArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskHealth, JobTaskHealthArgs

    Rules List<JobTaskHealthRule>
    list of rules that are represented as objects with the following attributes:
    Rules []JobTaskHealthRule
    list of rules that are represented as objects with the following attributes:
    rules List<JobTaskHealthRule>
    list of rules that are represented as objects with the following attributes:
    rules JobTaskHealthRule[]
    list of rules that are represented as objects with the following attributes:
    rules Sequence[JobTaskHealthRule]
    list of rules that are represented as objects with the following attributes:
    rules List<Property Map>
    list of rules that are represented as objects with the following attributes:

    JobTaskHealthRule, JobTaskHealthRuleArgs

    Metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Value int
    integer value used to compare to the given metric.
    Metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    Op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    Value int
    integer value used to compare to the given metric.
    metric String
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value Integer
    integer value used to compare to the given metric.
    metric string
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op string

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value number
    integer value used to compare to the given metric.
    metric str
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op str

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information).

    This task does not require a cluster to execute and does not support retries or notifications.

    value int
    integer value used to compare to the given metric.
    metric String
    string specifying the metric to check. The only supported metric is RUN_DURATION_SECONDS (check Jobs REST API documentation for the latest information).
    op String

    The string specifying the operation used to compare operands. Currently, the following operators are supported: EQUAL_TO, GREATER_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, NOT_EQUAL. (Check the API docs for the latest information). A short sketch using these attributes follows after this block.

    This task does not require a cluster to execute and does not support retries or notifications.

    value Number
    integer value used to compare to the given metric.
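
    As a sketch, a task-level health block built from the attributes above (the notebook path is illustrative and the cluster configuration is omitted for brevity):

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Hypothetical sketch: flag the task if it runs for more than 30 minutes.
    const etlJob = new databricks.Job("etlJob", {
        tasks: [{
            taskKey: "nightly_etl",
            notebookTask: {
                notebookPath: "/Shared/etl/nightly", // illustrative path
            },
            health: {
                rules: [{
                    metric: "RUN_DURATION_SECONDS",
                    op: "GREATER_THAN",
                    value: 1800,
                }],
            },
        }],
    });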

    JobTaskLibrary, JobTaskLibraryArgs

    JobTaskLibraryCran, JobTaskLibraryCranArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String

    JobTaskLibraryMaven, JobTaskLibraryMavenArgs

    Coordinates string
    Exclusions List<string>
    Repo string
    Coordinates string
    Exclusions []string
    Repo string
    coordinates String
    exclusions List<String>
    repo String
    coordinates string
    exclusions string[]
    repo string
    coordinates str
    exclusions Sequence[str]
    repo str
    coordinates String
    exclusions List<String>
    repo String

    JobTaskLibraryPypi, JobTaskLibraryPypiArgs

    Package string
    Repo string
    Package string
    Repo string
    package_ String
    repo String
    package string
    repo string
    package str
    repo str
    package String
    repo String
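
    A sketch of attaching libraries to a task using the pypi and maven blocks above (the package name and coordinates are illustrative; the task body is omitted for brevity):

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Hypothetical sketch: install a PyPI package and a Maven artifact on the
    // cluster that runs this task.
    const libraryJob = new databricks.Job("libraryJob", {
        tasks: [{
            taskKey: "with_libraries",
            libraries: [
                { pypi: { package: "simplejson" } },                   // illustrative
                { maven: { coordinates: "org.jsoup:jsoup:1.17.2" } },  // illustrative
            ],
        }],
    });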

    JobTaskNewCluster, JobTaskNewClusterArgs

    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobTaskNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobTaskNewClusterAwsAttributes
    AzureAttributes JobTaskNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobTaskNewClusterClusterLogConf
    ClusterMountInfos List<JobTaskNewClusterClusterMountInfo>
    ClusterName string
    CustomTags Dictionary<string, object>
    DataSecurityMode string
    DockerImage JobTaskNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobTaskNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts List<JobTaskNewClusterInitScript>
    InstancePoolId string
    NodeTypeId string
    NumWorkers int
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf Dictionary<string, object>
    SparkEnvVars Dictionary<string, object>
    SshPublicKeys List<string>
    WorkloadType JobTaskNewClusterWorkloadType
    SparkVersion string
    ApplyPolicyDefaultValues bool
    Autoscale JobTaskNewClusterAutoscale
    AutoterminationMinutes int
    AwsAttributes JobTaskNewClusterAwsAttributes
    AzureAttributes JobTaskNewClusterAzureAttributes
    ClusterId string
    ClusterLogConf JobTaskNewClusterClusterLogConf
    ClusterMountInfos []JobTaskNewClusterClusterMountInfo
    ClusterName string
    CustomTags map[string]interface{}
    DataSecurityMode string
    DockerImage JobTaskNewClusterDockerImage
    DriverInstancePoolId string
    DriverNodeTypeId string
    EnableElasticDisk bool
    EnableLocalDiskEncryption bool
    GcpAttributes JobTaskNewClusterGcpAttributes
    IdempotencyToken string
    InitScripts []JobTaskNewClusterInitScript
    InstancePoolId string
    NodeTypeId string
    NumWorkers int
    PolicyId string
    RuntimeEngine string
    SingleUserName string
    SparkConf map[string]interface{}
    SparkEnvVars map[string]interface{}
    SshPublicKeys []string
    WorkloadType JobTaskNewClusterWorkloadType
    sparkVersion String
    applyPolicyDefaultValues Boolean
    autoscale JobTaskNewClusterAutoscale
    autoterminationMinutes Integer
    awsAttributes JobTaskNewClusterAwsAttributes
    azureAttributes JobTaskNewClusterAzureAttributes
    clusterId String
    clusterLogConf JobTaskNewClusterClusterLogConf
    clusterMountInfos List<JobTaskNewClusterClusterMountInfo>
    clusterName String
    customTags Map<String,Object>
    dataSecurityMode String
    dockerImage JobTaskNewClusterDockerImage
    driverInstancePoolId String
    driverNodeTypeId String
    enableElasticDisk Boolean
    enableLocalDiskEncryption Boolean
    gcpAttributes JobTaskNewClusterGcpAttributes
    idempotencyToken String
    initScripts List<JobTaskNewClusterInitScript>
    instancePoolId String
    nodeTypeId String
    numWorkers Integer
    policyId String
    runtimeEngine String
    singleUserName String
    sparkConf Map<String,Object>
    sparkEnvVars Map<String,Object>
    sshPublicKeys List<String>
    workloadType JobTaskNewClusterWorkloadType
    sparkVersion string
    applyPolicyDefaultValues boolean
    autoscale JobTaskNewClusterAutoscale
    autoterminationMinutes number
    awsAttributes JobTaskNewClusterAwsAttributes
    azureAttributes JobTaskNewClusterAzureAttributes
    clusterId string
    clusterLogConf JobTaskNewClusterClusterLogConf
    clusterMountInfos JobTaskNewClusterClusterMountInfo[]
    clusterName string
    customTags {[key: string]: any}
    dataSecurityMode string
    dockerImage JobTaskNewClusterDockerImage
    driverInstancePoolId string
    driverNodeTypeId string
    enableElasticDisk boolean
    enableLocalDiskEncryption boolean
    gcpAttributes JobTaskNewClusterGcpAttributes
    idempotencyToken string
    initScripts JobTaskNewClusterInitScript[]
    instancePoolId string
    nodeTypeId string
    numWorkers number
    policyId string
    runtimeEngine string
    singleUserName string
    sparkConf {[key: string]: any}
    sparkEnvVars {[key: string]: any}
    sshPublicKeys string[]
    workloadType JobTaskNewClusterWorkloadType
    spark_version str
    apply_policy_default_values bool
    autoscale JobTaskNewClusterAutoscale
    autotermination_minutes int
    aws_attributes JobTaskNewClusterAwsAttributes
    azure_attributes JobTaskNewClusterAzureAttributes
    cluster_id str
    cluster_log_conf JobTaskNewClusterClusterLogConf
    cluster_mount_infos Sequence[JobTaskNewClusterClusterMountInfo]
    cluster_name str
    custom_tags Mapping[str, Any]
    data_security_mode str
    docker_image JobTaskNewClusterDockerImage
    driver_instance_pool_id str
    driver_node_type_id str
    enable_elastic_disk bool
    enable_local_disk_encryption bool
    gcp_attributes JobTaskNewClusterGcpAttributes
    idempotency_token str
    init_scripts Sequence[JobTaskNewClusterInitScript]
    instance_pool_id str
    node_type_id str
    num_workers int
    policy_id str
    runtime_engine str
    single_user_name str
    spark_conf Mapping[str, Any]
    spark_env_vars Mapping[str, Any]
    ssh_public_keys Sequence[str]
    workload_type JobTaskNewClusterWorkloadType
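
    A sketch of an ephemeral cluster for a single task using a subset of the fields above. The Spark version and node type are illustrative; in real code they can be looked up with the databricks.getSparkVersion and databricks.getNodeType data sources:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Hypothetical sketch: run a Python file on a small autoscaling job cluster.
    // The Spark version, node type, and script path are placeholders.
    const clusterJob = new databricks.Job("clusterJob", {
        tasks: [{
            taskKey: "train_model",
            newCluster: {
                sparkVersion: "14.3.x-scala2.12",
                nodeTypeId: "i3.xlarge",
                autoscale: {
                    minWorkers: 1,
                    maxWorkers: 4,
                },
                sparkConf: {
                    "spark.speculation": "true",
                },
            },
            sparkPythonTask: {
                pythonFile: "dbfs:/scripts/train.py",
            },
        }],
    });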

    JobTaskNewClusterAutoscale, JobTaskNewClusterAutoscaleArgs

    maxWorkers Integer
    minWorkers Integer
    maxWorkers number
    minWorkers number
    maxWorkers Number
    minWorkers Number

    JobTaskNewClusterAwsAttributes, JobTaskNewClusterAwsAttributesArgs

    JobTaskNewClusterAzureAttributes, JobTaskNewClusterAzureAttributesArgs

    JobTaskNewClusterClusterLogConf, JobTaskNewClusterClusterLogConfArgs

    JobTaskNewClusterClusterLogConfDbfs, JobTaskNewClusterClusterLogConfDbfsArgs

    JobTaskNewClusterClusterLogConfS3, JobTaskNewClusterClusterLogConfS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobTaskNewClusterClusterMountInfo, JobTaskNewClusterClusterMountInfoArgs

    JobTaskNewClusterClusterMountInfoNetworkFilesystemInfo, JobTaskNewClusterClusterMountInfoNetworkFilesystemInfoArgs

    JobTaskNewClusterDockerImage, JobTaskNewClusterDockerImageArgs

    Url string
    URL of the Docker image to use.
    BasicAuth JobTaskNewClusterDockerImageBasicAuth
    Url string
    URL of the Docker image to use.
    BasicAuth JobTaskNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image to use.
    basicAuth JobTaskNewClusterDockerImageBasicAuth
    url string
    URL of the Docker image to use.
    basicAuth JobTaskNewClusterDockerImageBasicAuth
    url str
    URL of the Docker image to use.
    basic_auth JobTaskNewClusterDockerImageBasicAuth
    url String
    URL of the Docker image to use.
    basicAuth Property Map

    JobTaskNewClusterDockerImageBasicAuth, JobTaskNewClusterDockerImageBasicAuthArgs

    Password string
    Username string
    Password string
    Username string
    password String
    username String
    password string
    username string
    password String
    username String
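
    A minimal sketch of a task cluster that pulls a custom container image with basic registry authentication; the image URL, node type, and the registryUser/registryPassword config keys are illustrative assumptions:

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    const cfg = new pulumi.Config();

    const containerJob = new databricks.Job("containerJob", {
        tasks: [{
            taskKey: "containerized",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12", // illustrative runtime version
                nodeTypeId: "i3.xlarge",          // illustrative node type
                numWorkers: 1,
                dockerImage: {
                    url: "myregistry.example.com/spark-jobs:latest", // placeholder Docker image URL
                    basicAuth: {
                        username: cfg.require("registryUser"),
                        password: cfg.requireSecret("registryPassword"),
                    },
                },
            },
            notebookTask: { notebookPath: "/Shared/containerized_etl" }, // placeholder notebook
        }],
    });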

    JobTaskNewClusterGcpAttributes, JobTaskNewClusterGcpAttributesArgs

    JobTaskNewClusterInitScript, JobTaskNewClusterInitScriptArgs

    abfss Property Map
    dbfs Property Map

    Deprecated: For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'.

    file Property Map
    block consisting of a single string field: destination.
    gcs Property Map
    s3 Property Map
    volumes Property Map
    workspace Property Map
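
    Following the deprecation note above, a hedged sketch that attaches init scripts from a workspace file and a Unity Catalog volume to a task's new cluster (all paths and cluster sizing are placeholders):

    import * as databricks from "@pulumi/databricks";

    const initScriptJob = new databricks.Job("initScriptJob", {
        tasks: [{
            taskKey: "with_init_scripts",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12", // illustrative runtime version
                nodeTypeId: "i3.xlarge",          // illustrative node type
                numWorkers: 1,
                initScripts: [
                    { workspace: { destination: "/Shared/init/install-deps.sh" } },         // workspace file
                    { volumes: { destination: "/Volumes/main/default/scripts/setup.sh" } }, // Unity Catalog volume
                ],
            },
            notebookTask: { notebookPath: "/Shared/etl" }, // placeholder notebook
        }],
    });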

    JobTaskNewClusterInitScriptAbfss, JobTaskNewClusterInitScriptAbfssArgs

    JobTaskNewClusterInitScriptDbfs, JobTaskNewClusterInitScriptDbfsArgs

    JobTaskNewClusterInitScriptFile, JobTaskNewClusterInitScriptFileArgs

    JobTaskNewClusterInitScriptGcs, JobTaskNewClusterInitScriptGcsArgs

    JobTaskNewClusterInitScriptS3, JobTaskNewClusterInitScriptS3Args

    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    Destination string
    CannedAcl string
    EnableEncryption bool
    EncryptionType string
    Endpoint string
    KmsKey string
    Region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String
    destination string
    cannedAcl string
    enableEncryption boolean
    encryptionType string
    endpoint string
    kmsKey string
    region string
    destination String
    cannedAcl String
    enableEncryption Boolean
    encryptionType String
    endpoint String
    kmsKey String
    region String

    JobTaskNewClusterInitScriptVolumes, JobTaskNewClusterInitScriptVolumesArgs

    JobTaskNewClusterInitScriptWorkspace, JobTaskNewClusterInitScriptWorkspaceArgs

    JobTaskNewClusterWorkloadType, JobTaskNewClusterWorkloadTypeArgs

    JobTaskNewClusterWorkloadTypeClients, JobTaskNewClusterWorkloadTypeClientsArgs

    Jobs bool
    Notebooks bool
    Jobs bool
    Notebooks bool
    jobs Boolean
    notebooks Boolean
    jobs boolean
    notebooks boolean
    jobs bool
    notebooks bool
    jobs Boolean
    notebooks Boolean

    JobTaskNotebookTask, JobTaskNotebookTaskArgs

    NotebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    BaseParameters Dictionary<string, object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    Source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    NotebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    BaseParameters map[string]interface{}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    Source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath String
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters Map<String,Object>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source String
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath string
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters {[key: string]: any}
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source string
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebook_path str
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    base_parameters Mapping[str, Any]
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source str
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
    notebookPath String
    The path of the databricks.Notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.
    baseParameters Map<Any>
    (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job’s base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
    source String
    Location type of the notebook, can only be WORKSPACE or GIT. When set to WORKSPACE, the notebook will be retrieved from the local Databricks workspace. When set to GIT, the notebook will be retrieved from a Git repository defined in git_source. If the value is empty, the task will use GIT if git_source is defined and WORKSPACE otherwise.
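
    As a rough illustration, a single-task job that runs a workspace notebook and passes base parameters could look like the sketch below; the cluster ID, notebook path, and parameter values are placeholders:

    import * as databricks from "@pulumi/databricks";

    const notebookJob = new databricks.Job("notebookJob", {
        tasks: [{
            taskKey: "run_notebook",
            existingClusterId: "0123-456789-abcdefgh", // placeholder cluster ID
            notebookTask: {
                notebookPath: "/Shared/daily_report",  // absolute path for a workspace notebook
                baseParameters: {
                    // read inside the notebook with dbutils.widgets.get("run_date");
                    // a run-now call with the same key would override this value
                    run_date: "2024-03-29",
                },
            },
        }],
    });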

    JobTaskNotificationSettings, JobTaskNotificationSettingsArgs

    AlertOnLastAttempt bool
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    NoAlertForCanceledRuns bool
    (Bool) don't send alert for cancelled runs.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs.
    AlertOnLastAttempt bool
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    NoAlertForCanceledRuns bool
    (Bool) don't send alert for cancelled runs.
    NoAlertForSkippedRuns bool
    (Bool) don't send alert for skipped runs.
    alertOnLastAttempt Boolean
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    noAlertForCanceledRuns Boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs.
    alertOnLastAttempt boolean
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    noAlertForCanceledRuns boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns boolean
    (Bool) don't send alert for skipped runs.
    alert_on_last_attempt bool
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    no_alert_for_canceled_runs bool
    (Bool) don't send alert for cancelled runs.
    no_alert_for_skipped_runs bool
    (Bool) don't send alert for skipped runs.
    alertOnLastAttempt Boolean
    (Bool) do not send notifications to recipients specified in on_start for the retried runs and do not send notifications to recipients specified in on_failure until the last retry of the run.
    noAlertForCanceledRuns Boolean
    (Bool) don't send alert for cancelled runs.
    noAlertForSkippedRuns Boolean
    (Bool) don't send alert for skipped runs.
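
    For illustration, the sketch below suppresses per-retry and cancellation alerts for a task; it only changes when the recipients configured in the job's email or webhook notifications are alerted (cluster ID and notebook path are placeholders):

    import * as databricks from "@pulumi/databricks";

    const quietJob = new databricks.Job("quietJob", {
        tasks: [{
            taskKey: "main",
            existingClusterId: "0123-456789-abcdefgh", // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/main" },
            notificationSettings: {
                alertOnLastAttempt: true,     // notify on_failure recipients only on the final retry
                noAlertForCanceledRuns: true, // skip alerts for cancelled runs
            },
        }],
    });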

    JobTaskPipelineTask, JobTaskPipelineTaskArgs

    PipelineId string
    The pipeline's unique ID.
    FullRefresh bool

    (Bool) Specifies whether to perform a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    PipelineId string
    The pipeline's unique ID.
    FullRefresh bool

    (Bool) Specifies whether to perform a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipelineId String
    The pipeline's unique ID.
    fullRefresh Boolean

    (Bool) Specifies whether to perform a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipelineId string
    The pipeline's unique ID.
    fullRefresh boolean

    (Bool) Specifies whether to perform a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipeline_id str
    The pipeline's unique ID.
    full_refresh bool

    (Bool) Specifies whether to perform a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block

    pipelineId String
    The pipeline's unique ID.
    fullRefresh Boolean

    (Bool) Specifies whether to perform a full refresh of the pipeline.

    Note The following configuration blocks are only supported inside a task block
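
    A hedged sketch of a task that triggers a Delta Live Tables pipeline update; the pipeline ID shown here is a placeholder and would normally come from a databricks.Pipeline resource's id output:

    import * as databricks from "@pulumi/databricks";

    const dltRefreshJob = new databricks.Job("dltRefreshJob", {
        tasks: [{
            taskKey: "refresh_pipeline",
            pipelineTask: {
                pipelineId: "00000000-0000-0000-0000-000000000000", // placeholder pipeline ID
                fullRefresh: false,                                 // incremental update rather than full refresh
            },
        }],
    });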

    JobTaskPythonWheelTask, JobTaskPythonWheelTaskArgs

    EntryPoint string
    Python function as entry point for the task
    NamedParameters Dictionary<string, object>
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters List<string>
    Parameters for the task
    EntryPoint string
    Python function as entry point for the task
    NamedParameters map[string]interface{}
    Named parameters for the task
    PackageName string
    Name of Python package
    Parameters []string
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<String,Object>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
    entryPoint string
    Python function as entry point for the task
    namedParameters {[key: string]: any}
    Named parameters for the task
    packageName string
    Name of Python package
    parameters string[]
    Parameters for the task
    entry_point str
    Python function as entry point for the task
    named_parameters Mapping[str, Any]
    Named parameters for the task
    package_name str
    Name of Python package
    parameters Sequence[str]
    Parameters for the task
    entryPoint String
    Python function as entry point for the task
    namedParameters Map<Any>
    Named parameters for the task
    packageName String
    Name of Python package
    parameters List<String>
    Parameters for the task
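
    A minimal sketch of a wheel-based task; the wheel location, package name, entry point, and cluster sizing are assumptions:

    import * as databricks from "@pulumi/databricks";

    const wheelJob = new databricks.Job("wheelJob", {
        tasks: [{
            taskKey: "run_wheel",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12", // illustrative runtime version
                nodeTypeId: "i3.xlarge",          // illustrative node type
                numWorkers: 1,
            },
            libraries: [{ whl: "dbfs:/wheels/my_pkg-0.1.0-py3-none-any.whl" }], // wheel providing the entry point
            pythonWheelTask: {
                packageName: "my_pkg",
                entryPoint: "main",               // entry point exposed by the wheel
                namedParameters: { env: "prod" }, // passed to the entry point as named arguments
            },
        }],
    });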

    JobTaskRunJobTask, JobTaskRunJobTaskArgs

    JobId int
    (Integer) ID of the job
    JobParameters Dictionary<string, object>
    (Map) Job parameters for the task
    JobId int
    (Integer) ID of the job
    JobParameters map[string]interface{}
    (Map) Job parameters for the task
    jobId Integer
    (Integer) ID of the job
    jobParameters Map<String,Object>
    (Map) Job parameters for the task
    jobId number
    (Integer) ID of the job
    jobParameters {[key: string]: any}
    (Map) Job parameters for the task
    job_id int
    (Integer) ID of the job
    job_parameters Mapping[str, Any]
    (Map) Job parameters for the task
    jobId Number
    (Integer) ID of the job
    jobParameters Map<Any>
    (Map) Job parameters for the task
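
    The sketch below chains two jobs: the second job's task triggers the first via run_job_task. Both job definitions are illustrative; the numeric job ID is derived from the upstream resource's id output:

    import * as databricks from "@pulumi/databricks";

    const upstreamJob = new databricks.Job("upstreamJob", {
        tasks: [{
            taskKey: "etl",
            existingClusterId: "0123-456789-abcdefgh", // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/etl" },
        }],
    });

    const orchestratorJob = new databricks.Job("orchestratorJob", {
        tasks: [{
            taskKey: "trigger_etl",
            runJobTask: {
                jobId: upstreamJob.id.apply(id => parseInt(id, 10)), // the resource id is the numeric job ID
                jobParameters: { run_date: "2024-03-29" },
            },
        }],
    });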

    JobTaskSparkJarTask, JobTaskSparkJarTaskArgs

    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters List<string>
    (List) Parameters passed to the main method.
    JarUri string
    MainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    Parameters []string
    (List) Parameters passed to the main method.
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    (List) Parameters passed to the main method.
    jarUri string
    mainClassName string
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters string[]
    (List) Parameters passed to the main method.
    jar_uri str
    main_class_name str
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters Sequence[str]
    (List) Parameters passed to the main method.
    jarUri String
    mainClassName String
    The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
    parameters List<String>
    (List) Parameters passed to the main method.
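
    A hedged sketch of a JAR task: the JAR is attached as a task library and the main class is resolved from it (paths, class name, and cluster sizing are placeholders):

    import * as databricks from "@pulumi/databricks";

    const jarJob = new databricks.Job("jarJob", {
        tasks: [{
            taskKey: "run_jar",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12", // illustrative runtime version
                nodeTypeId: "i3.xlarge",          // illustrative node type
                numWorkers: 2,
            },
            libraries: [{ jar: "dbfs:/jars/etl-assembly-1.0.jar" }], // JAR providing the main class
            sparkJarTask: {
                mainClassName: "com.example.etl.Main", // should obtain its context via SparkContext.getOrCreate
                parameters: ["--date", "2024-03-29"],
            },
        }],
    });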

    JobTaskSparkPythonTask, JobTaskSparkPythonTaskArgs

    PythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    Parameters List<string>
    (List) Command line parameters passed to the Python file.
    Source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    PythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    Parameters []string
    (List) Command line parameters passed to the Python file.
    Source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile String
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters List<String>
    (List) Command line parameters passed to the Python file.
    source String
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile string
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters string[]
    (List) Command line parameters passed to the Python file.
    source string
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    python_file str
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters Sequence[str]
    (List) Command line parameters passed to the Python file.
    source str
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
    pythonFile String
    The URI of the Python file to be executed. databricks_dbfs_file, cloud file URIs (e.g. s3:/, abfss:/, gs:/), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with /Repos. For files stored in a remote repository, the path must be relative. This field is required.
    parameters List<String>
    (List) Command line parameters passed to the Python file.
    source String
    Location type of the Python file, can only be GIT. When set to GIT, the Python file will be retrieved from a Git repository defined in git_source.
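
    A minimal sketch of a task that runs a Python file from cloud storage; the S3 URI, parameters, and cluster sizing are assumptions:

    import * as databricks from "@pulumi/databricks";

    const pySparkJob = new databricks.Job("pySparkJob", {
        tasks: [{
            taskKey: "run_script",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12", // illustrative runtime version
                nodeTypeId: "i3.xlarge",          // illustrative node type
                numWorkers: 1,
            },
            sparkPythonTask: {
                pythonFile: "s3://my-bucket/jobs/transform.py", // cloud file URI; a workspace path would also work
                parameters: ["--output", "s3://my-bucket/out/"],
            },
        }],
    });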

    JobTaskSparkSubmitTask, JobTaskSparkSubmitTaskArgs

    Parameters List<string>
    (List) Command-line parameters passed to spark submit.
    Parameters []string
    (List) Command-line parameters passed to spark submit.
    parameters List<String>
    (List) Command-line parameters passed to spark submit.
    parameters string[]
    (List) Command-line parameters passed to spark submit.
    parameters Sequence[str]
    (List) Command-line parameters passed to spark submit.
    parameters List<String>
    (List) Command-line parameters passed to spark submit.
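
    For illustration, a spark-submit style task where everything that would normally follow the spark-submit command goes into parameters (JAR path, class name, and cluster sizing are placeholders):

    import * as databricks from "@pulumi/databricks";

    const submitJob = new databricks.Job("submitJob", {
        tasks: [{
            taskKey: "spark_submit",
            newCluster: {
                sparkVersion: "13.3.x-scala2.12", // illustrative runtime version
                nodeTypeId: "i3.xlarge",          // illustrative node type
                numWorkers: 2,
            },
            sparkSubmitTask: {
                parameters: [
                    "--class", "com.example.etl.Main",
                    "dbfs:/jars/etl-assembly-1.0.jar",
                    "--date", "2024-03-29",
                ],
            },
        }],
    });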

    JobTaskSqlTask, JobTaskSqlTaskArgs

    Alert JobTaskSqlTaskAlert
    block consisting of following fields:
    Dashboard JobTaskSqlTaskDashboard
    block consisting of following fields:
    File JobTaskSqlTaskFile
    block consisting of single string fields:
    Parameters Dictionary<string, object>
    (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    Query JobTaskSqlTaskQuery
    block consisting of single string field: query_id - identifier of the Databricks SQL Query (databricks_sql_query).
    WarehouseId string
    ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now.
    Alert JobTaskSqlTaskAlert
    block consisting of following fields:
    Dashboard JobTaskSqlTaskDashboard
    block consisting of following fields:
    File JobTaskSqlTaskFile
    block consisting of single string fields:
    Parameters map[string]interface{}
    (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    Query JobTaskSqlTaskQuery
    block consisting of single string field: query_id - identifier of the Databricks SQL Query (databricks_sql_query).
    WarehouseId string
    ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now.
    alert JobTaskSqlTaskAlert
    block consisting of following fields:
    dashboard JobTaskSqlTaskDashboard
    block consisting of following fields:
    file JobTaskSqlTaskFile
    block consisting of single string fields:
    parameters Map<String,Object>
    (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query JobTaskSqlTaskQuery
    block consisting of single string field: query_id - identifier of the Databricks SQL Query (databricks_sql_query).
    warehouseId String
    ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now.
    alert JobTaskSqlTaskAlert
    block consisting of following fields:
    dashboard JobTaskSqlTaskDashboard
    block consisting of following fields:
    file JobTaskSqlTaskFile
    block consisting of single string fields:
    parameters {[key: string]: any}
    (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query JobTaskSqlTaskQuery
    block consisting of single string field: query_id - identifier of the Databricks SQL Query (databricks_sql_query).
    warehouseId string
    ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now.
    alert JobTaskSqlTaskAlert
    block consisting of following fields:
    dashboard JobTaskSqlTaskDashboard
    block consisting of following fields:
    file JobTaskSqlTaskFile
    block consisting of single string fields:
    parameters Mapping[str, Any]
    (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query JobTaskSqlTaskQuery
    block consisting of single string field: query_id - identifier of the Databricks SQL Query (databricks_sql_query).
    warehouse_id str
    ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now.
    alert Property Map
    block consisting of following fields:
    dashboard Property Map
    block consisting of following fields:
    file Property Map
    block consisting of single string fields:
    parameters Map<Any>
    (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
    query Property Map
    block consisting of single string field: query_id - identifier of the Databricks SQL Query (databricks_sql_query).
    warehouseId String
    ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now.

    JobTaskSqlTaskAlert, JobTaskSqlTaskAlertArgs

    AlertId string
    (String) identifier of the Databricks SQL Alert.
    Subscriptions List<JobTaskSqlTaskAlertSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    AlertId string
    (String) identifier of the Databricks SQL Alert.
    Subscriptions []JobTaskSqlTaskAlertSubscription
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    alertId String
    (String) identifier of the Databricks SQL Alert.
    subscriptions List<JobTaskSqlTaskAlertSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.
    alertId string
    (String) identifier of the Databricks SQL Alert.
    subscriptions JobTaskSqlTaskAlertSubscription[]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    pauseSubscriptions boolean
    flag that specifies whether subscriptions are paused.
    alert_id str
    (String) identifier of the Databricks SQL Alert.
    subscriptions Sequence[JobTaskSqlTaskAlertSubscription]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    pause_subscriptions bool
    flag that specifies whether subscriptions are paused.
    alertId String
    (String) identifier of the Databricks SQL Alert.
    subscriptions List<Property Map>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.

    JobTaskSqlTaskAlertSubscription, JobTaskSqlTaskAlertSubscriptionArgs

    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId string
    userName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destination_id str
    user_name str
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.

    JobTaskSqlTaskDashboard, JobTaskSqlTaskDashboardArgs

    DashboardId string
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    CustomSubject string
    string specifying a custom subject for the email that is sent.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    Subscriptions List<JobTaskSqlTaskDashboardSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    DashboardId string
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    CustomSubject string
    string specifying a custom subject for the email that is sent.
    PauseSubscriptions bool
    flag that specifies whether subscriptions are paused.
    Subscriptions []JobTaskSqlTaskDashboardSubscription
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    dashboardId String
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    customSubject String
    string specifying a custom subject for the email that is sent.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.
    subscriptions List<JobTaskSqlTaskDashboardSubscription>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    dashboardId string
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    customSubject string
    string specifying a custom subject for the email that is sent.
    pauseSubscriptions boolean
    flag that specifies whether subscriptions are paused.
    subscriptions JobTaskSqlTaskDashboardSubscription[]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    dashboard_id str
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    custom_subject str
    string specifying a custom subject for the email that is sent.
    pause_subscriptions bool
    flag that specifies whether subscriptions are paused.
    subscriptions Sequence[JobTaskSqlTaskDashboardSubscription]
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.
    dashboardId String
    (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.
    customSubject String
    string specifying a custom subject for the email that is sent.
    pauseSubscriptions Boolean
    flag that specifies whether subscriptions are paused.
    subscriptions List<Property Map>
    a list of subscription blocks, each consisting of one of the required fields: user_name for a user's email or destination_id for the alert destination's identifier.

    JobTaskSqlTaskDashboardSubscription, JobTaskSqlTaskDashboardSubscriptionArgs

    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    DestinationId string
    UserName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId string
    userName string
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destination_id str
    user_name str
    The email of an active workspace user. Non-admin users can only set this field to their own email.
    destinationId String
    userName String
    The email of an active workspace user. Non-admin users can only set this field to their own email.

    JobTaskSqlTaskFile, JobTaskSqlTaskFileArgs

    Path string

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Example

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";
    

    const sqlAggregationJob = new databricks.Job("sqlAggregationJob", {tasks: [
        {
            taskKey: "run_agg_query",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                query: {
                    queryId: databricks_sql_query.agg_query.id,
                },
            },
        },
        {
            taskKey: "run_dashboard",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                dashboard: {
                    dashboardId: databricks_sql_dashboard.dash.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
        {
            taskKey: "run_alert",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id,
                alert: {
                    alertId: databricks_sql_alert.alert.id,
                    subscriptions: [{
                        userName: "user@domain.com",
                    }],
                },
            },
        },
    ]});

    import pulumi
    import pulumi_databricks as databricks
    
    sql_aggregation_job = databricks.Job("sqlAggregationJob", tasks=[
        databricks.JobTaskArgs(
            task_key="run_agg_query",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                query=databricks.JobTaskSqlTaskQueryArgs(
                    query_id=databricks_sql_query["agg_query"]["id"],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_dashboard",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                dashboard=databricks.JobTaskSqlTaskDashboardArgs(
                    dashboard_id=databricks_sql_dashboard["dash"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskDashboardSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
        databricks.JobTaskArgs(
            task_key="run_alert",
            sql_task=databricks.JobTaskSqlTaskArgs(
                warehouse_id=databricks_sql_endpoint["sql_job_warehouse"]["id"],
                alert=databricks.JobTaskSqlTaskAlertArgs(
                    alert_id=databricks_sql_alert["alert"]["id"],
                    subscriptions=[databricks.JobTaskSqlTaskAlertSubscriptionArgs(
                        user_name="user@domain.com",
                    )],
                ),
            ),
        ),
    ])
    
    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using Databricks = Pulumi.Databricks;
    
    return await Deployment.RunAsync(() => 
    {
        var sqlAggregationJob = new Databricks.Job("sqlAggregationJob", new()
        {
            Tasks = new[]
            {
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_agg_query",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs
                        {
                            QueryId = databricks_sql_query.Agg_query.Id,
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_dashboard",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs
                        {
                            DashboardId = databricks_sql_dashboard.Dash.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
                new Databricks.Inputs.JobTaskArgs
                {
                    TaskKey = "run_alert",
                    SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs
                    {
                        WarehouseId = databricks_sql_endpoint.Sql_job_warehouse.Id,
                        Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs
                        {
                            AlertId = databricks_sql_alert.Alert.Id,
                            Subscriptions = new[]
                            {
                                new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs
                                {
                                    UserName = "user@domain.com",
                                },
                            },
                        },
                    },
                },
            },
        });
    
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		_, err := databricks.NewJob(ctx, "sqlAggregationJob", &databricks.JobArgs{
    			Tasks: databricks.JobTaskArray{
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_agg_query"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Query: &databricks.JobTaskSqlTaskQueryArgs{
    							QueryId: pulumi.Any(databricks_sql_query.Agg_query.Id),
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_dashboard"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Dashboard: &databricks.JobTaskSqlTaskDashboardArgs{
    							DashboardId: pulumi.Any(databricks_sql_dashboard.Dash.Id),
    							Subscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{
    								&databricks.JobTaskSqlTaskDashboardSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    				&databricks.JobTaskArgs{
    					TaskKey: pulumi.String("run_alert"),
    					SqlTask: &databricks.JobTaskSqlTaskArgs{
    						WarehouseId: pulumi.Any(databricks_sql_endpoint.Sql_job_warehouse.Id),
    						Alert: &databricks.JobTaskSqlTaskAlertArgs{
    							AlertId: pulumi.Any(databricks_sql_alert.Alert.Id),
    							Subscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{
    								&databricks.JobTaskSqlTaskAlertSubscriptionArgs{
    									UserName: pulumi.String("user@domain.com"),
    								},
    							},
    						},
    					},
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		return nil
    	})
    }
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.databricks.Job;
    import com.pulumi.databricks.JobArgs;
    import com.pulumi.databricks.inputs.JobTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardSubscriptionArgs;
    import com.pulumi.databricks.inputs.JobTaskSqlTaskAlertSubscriptionArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var sqlAggregationJob = new Job("sqlAggregationJob", JobArgs.builder()        
                .tasks(            
                    JobTaskArgs.builder()
                        .taskKey("run_agg_query")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .query(JobTaskSqlTaskQueryArgs.builder()
                                .queryId(databricks_sql_query.agg_query().id())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_dashboard")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .dashboard(JobTaskSqlTaskDashboardArgs.builder()
                                .dashboardId(databricks_sql_dashboard.dash().id())
                                .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build(),
                    JobTaskArgs.builder()
                        .taskKey("run_alert")
                        .sqlTask(JobTaskSqlTaskArgs.builder()
                            .warehouseId(databricks_sql_endpoint.sql_job_warehouse().id())
                            .alert(JobTaskSqlTaskAlertArgs.builder()
                                .alertId(databricks_sql_alert.alert().id())
                                .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()
                                    .userName("user@domain.com")
                                    .build())
                                .build())
                            .build())
                        .build())
                .build());
    
        }
    }
    
    resources:
      sqlAggregationJob:
        type: databricks:Job
        properties:
          tasks:
            - taskKey: run_agg_query
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                query:
                  queryId: ${databricks_sql_query.agg_query.id}
            - taskKey: run_dashboard
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                dashboard:
                  dashboardId: ${databricks_sql_dashboard.dash.id}
                  subscriptions:
                    - userName: user@domain.com
            - taskKey: run_alert
              sqlTask:
                warehouseId: ${databricks_sql_endpoint.sql_job_warehouse.id}
                alert:
                  alertId: ${databricks_sql_alert.alert.id}
                  subscriptions:
                    - userName: user@domain.com
    
    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
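
    The example above covers query, dashboard, and alert SQL tasks; a file-based SQL task is a small variation. In the hedged sketch below the warehouse ID and workspace path are placeholders, and source is set to WORKSPACE so no git_source block is required:

    import * as databricks from "@pulumi/databricks";

    const sqlFileJob = new databricks.Job("sqlFileJob", {
        tasks: [{
            taskKey: "run_sql_file",
            sqlTask: {
                warehouseId: "abc123def456",              // placeholder SQL warehouse ID
                file: {
                    path: "/Shared/sql/daily_rollup.sql", // absolute workspace path because source is WORKSPACE
                    source: "WORKSPACE",
                },
            },
        }],
    });
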
    Path string

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    Source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path String

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path string

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    source string
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path str

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    source str
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
    path String

    If source is GIT: Relative path to the file in the repository specified in the git_source block with SQL commands to execute. If source is WORKSPACE: Absolute path to the file in the workspace with SQL commands to execute.

    source String
    The source of the project. Possible values are WORKSPACE and GIT. Defaults to GIT if a git_source block is present in the job definition.
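
    For illustration, here is a minimal TypeScript sketch (not taken from the provider examples above) of a SQL task that runs a file rather than a saved query, showing how path and source fit together. It assumes the path/source pair documented above belongs to the file sub-block of sql_task; the resource names, the warehouse reference, and the /Shared/queries/daily_agg.sql path are placeholders in the same style as the examples above.

    import * as pulumi from "@pulumi/pulumi";
    import * as databricks from "@pulumi/databricks";

    // Sketch only: run SQL from a file instead of a saved query.
    // With source "WORKSPACE" the path is an absolute workspace path;
    // with source "GIT" it would be relative to the repository in git_source.
    const sqlFileJob = new databricks.Job("sqlFileJob", {
        tasks: [{
            taskKey: "run_sql_file",
            sqlTask: {
                warehouseId: databricks_sql_endpoint.sql_job_warehouse.id, // placeholder warehouse reference
                file: {
                    path: "/Shared/queries/daily_agg.sql", // hypothetical workspace path
                    source: "WORKSPACE",
                },
            },
        }],
    });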

    JobTaskSqlTaskQuery, JobTaskSqlTaskQueryArgs

    QueryId string
    QueryId string
    queryId String
    queryId string
    query_id str
    queryId String

    JobTaskWebhookNotifications, JobTaskWebhookNotificationsArgs

    OnDurationWarningThresholdExceededs List<JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>.

    
    OnFailures List<JobTaskWebhookNotificationsOnFailure>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    OnStarts List<JobTaskWebhookNotificationsOnStart>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    OnSuccesses List<JobTaskWebhookNotificationsOnSuccess>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    OnDurationWarningThresholdExceededs []JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>.

    
    OnFailures []JobTaskWebhookNotificationsOnFailure
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    OnStarts []JobTaskWebhookNotificationsOnStart
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    OnSuccesses []JobTaskWebhookNotificationsOnSuccess
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs List<JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>.

    
    onFailures List<JobTaskWebhookNotificationsOnFailure>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts List<JobTaskWebhookNotificationsOnStart>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses List<JobTaskWebhookNotificationsOnSuccess>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded[]

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>.

    
    onFailures JobTaskWebhookNotificationsOnFailure[]
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts JobTaskWebhookNotificationsOnStart[]
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses JobTaskWebhookNotificationsOnSuccess[]
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    on_duration_warning_threshold_exceededs Sequence[JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded]

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or from the URL of the Databricks UI: https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>.

    
    on_failures Sequence[JobTaskWebhookNotificationsOnFailure]
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    on_starts Sequence[JobTaskWebhookNotificationsOnStart]
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    on_successes Sequence[JobTaskWebhookNotificationsOnSuccess]
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs List<Property Map>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    onFailures List<Property Map>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts List<Property Map>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses List<Property Map>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
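
    For reference, a minimal TypeScript sketch of a task-level webhookNotifications block paired with a RUN_DURATION_SECONDS health rule. The destination ID, cluster ID, notebook path, and threshold are placeholders, and the health rule shape is assumed from the descriptions above rather than taken from a tested program:

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("taskWebhookExample", {
        name: "Job with task-level webhook notifications",
        tasks: [{
            taskKey: "main",
            existingClusterId: "1234-567890-abcde123",        // placeholder cluster ID
            notebookTask: {
                notebookPath: "/Shared/example-notebook",     // placeholder notebook
            },
            // Threshold that feeds onDurationWarningThresholdExceededs below.
            health: {
                rules: [{
                    metric: "RUN_DURATION_SECONDS",
                    op: "GREATER_THAN",
                    value: 3600,                              // placeholder threshold, in seconds
                }],
            },
            webhookNotifications: {
                // IDs refer to notification destinations, not destination names.
                onDurationWarningThresholdExceededs: [{ id: "00000000-0000-0000-0000-000000000000" }],
                onFailures: [{ id: "00000000-0000-0000-0000-000000000000" }],
            },
        }],
    });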

    JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded, JobTaskWebhookNotificationsOnDurationWarningThresholdExceededArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskWebhookNotificationsOnFailure, JobTaskWebhookNotificationsOnFailureArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskWebhookNotificationsOnStart, JobTaskWebhookNotificationsOnStartArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTaskWebhookNotificationsOnSuccess, JobTaskWebhookNotificationsOnSuccessArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobTrigger, JobTriggerArgs

    FileArrival JobTriggerFileArrival
    configuration block to define a trigger for File Arrival events consisting of following attributes:
    PauseStatus string
    Indicates whether this trigger is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    TableUpdate JobTriggerTableUpdate
    configuration block to define a trigger for Table Update events consisting of following attributes:
    FileArrival JobTriggerFileArrival
    configuration block to define a trigger for File Arrival events consisting of following attributes:
    PauseStatus string
    Indicates whether this trigger is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    TableUpdate JobTriggerTableUpdate
    configuration block to define a trigger for Table Update events consisting of following attributes:
    fileArrival JobTriggerFileArrival
    configuration block to define a trigger for File Arrival events consisting of following attributes:
    pauseStatus String
    Indicates whether this trigger is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    tableUpdate JobTriggerTableUpdate
    configuration block to define a trigger for Table Update events consisting of following attributes:
    fileArrival JobTriggerFileArrival
    configuration block to define a trigger for File Arrival events consisting of following attributes:
    pauseStatus string
    Indicates whether this trigger is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    tableUpdate JobTriggerTableUpdate
    configuration block to define a trigger for Table Update events consisting of following attributes:
    file_arrival JobTriggerFileArrival
    configuration block to define a trigger for File Arrival events consisting of following attributes:
    pause_status str
    Indicates whether this trigger is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    table_update JobTriggerTableUpdate
    configuration block to define a trigger for Table Update events consisting of following attributes:
    fileArrival Property Map
    configuration block to define a trigger for File Arrival events consisting of following attributes:
    pauseStatus String
    Indicates whether this trigger is paused. Either PAUSED or UNPAUSED. When the pause_status field is omitted in the block, the server defaults to UNPAUSED.
    tableUpdate Property Map
    configuration block to define a trigger for Table Update events consisting of following attributes:
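
    For reference, a minimal TypeScript sketch of a file-arrival trigger using the attributes documented below; the storage URL, cluster ID, and notebook path are placeholders:

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("fileArrivalExample", {
        name: "Process newly arrived files",
        trigger: {
            pauseStatus: "UNPAUSED",
            fileArrival: {
                url: "s3://my-bucket/landing/",          // placeholder storage URL to monitor
                waitAfterLastChangeSeconds: 120,         // let a batch of files settle first
            },
        },
        tasks: [{
            taskKey: "ingest",
            existingClusterId: "1234-567890-abcde123",          // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/ingest" },   // placeholder notebook
        }],
    });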

    JobTriggerFileArrival, JobTriggerFileArrivalArgs

    Url string
    URL of the storage location to be monitored for new file arrivals (for example, a path under a Unity Catalog external location).
    MinTimeBetweenTriggersSeconds int
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    WaitAfterLastChangeSeconds int
    If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.
    Url string
    URL of the storage location to be monitored for new file arrivals (for example, a path under a Unity Catalog external location).
    MinTimeBetweenTriggersSeconds int
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    WaitAfterLastChangeSeconds int
    If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.
    url String
    URL of the storage location to be monitored for new file arrivals (for example, a path under a Unity Catalog external location).
    minTimeBetweenTriggersSeconds Integer
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    waitAfterLastChangeSeconds Integer
    If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.
    url string
    URL of the storage location to be monitored for new file arrivals (for example, a path under a Unity Catalog external location).
    minTimeBetweenTriggersSeconds number
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    waitAfterLastChangeSeconds number
    If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.
    url str
    URL of the storage location to be monitored for new file arrivals (for example, a path under a Unity Catalog external location).
    min_time_between_triggers_seconds int
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    wait_after_last_change_seconds int
    If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.
    url String
    URL of the storage location to be monitored for new file arrivals (for example, a path under a Unity Catalog external location).
    minTimeBetweenTriggersSeconds Number
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    waitAfterLastChangeSeconds Number
    If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.

    JobTriggerTableUpdate, JobTriggerTableUpdateArgs

    TableNames List<string>
    A list of Delta tables to monitor for changes. The table name must be in the format catalog_name.schema_name.table_name.
    Condition string
    The table(s) condition based on which to trigger a job run. Valid values are ANY_UPDATED or ALL_UPDATED.
    MinTimeBetweenTriggersSeconds int
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    WaitAfterLastChangeSeconds int
    If set, the trigger starts a run only after no table updates have occurred for the specified amount of time. This makes it possible to wait for a batch of table updates before triggering a run. The minimum allowed value is 60 seconds.
    TableNames []string
    A list of Delta tables to monitor for changes. The table name must be in the format catalog_name.schema_name.table_name.
    Condition string
    The table(s) condition based on which to trigger a job run. Valid values are ANY_UPDATED or ALL_UPDATED.
    MinTimeBetweenTriggersSeconds int
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    WaitAfterLastChangeSeconds int
    If set, the trigger starts a run only after no table updates have occurred for the specified amount of time. This makes it possible to wait for a batch of table updates before triggering a run. The minimum allowed value is 60 seconds.
    tableNames List<String>
    A list of Delta tables to monitor for changes. The table name must be in the format catalog_name.schema_name.table_name.
    condition String
    The table(s) condition based on which to trigger a job run. Valid values are ANY_UPDATED or ALL_UPDATED.
    minTimeBetweenTriggersSeconds Integer
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    waitAfterLastChangeSeconds Integer
    If set, the trigger starts a run only after no table updates have occurred for the specified amount of time. This makes it possible to wait for a batch of table updates before triggering a run. The minimum allowed value is 60 seconds.
    tableNames string[]
    A list of Delta tables to monitor for changes. The table name must be in the format catalog_name.schema_name.table_name.
    condition string
    The table(s) condition based on which to trigger a job run. Valid values are ANY_UPDATED or ALL_UPDATED.
    minTimeBetweenTriggersSeconds number
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    waitAfterLastChangeSeconds number
    If set, the trigger starts a run only after no table updates have occurred for the specified amount of time. This makes it possible to wait for a batch of table updates before triggering a run. The minimum allowed value is 60 seconds.
    table_names Sequence[str]
    A list of Delta tables to monitor for changes. The table name must be in the format catalog_name.schema_name.table_name.
    condition str
    The table(s) condition based on which to trigger a job run. Valid values are ANY_UPDATED or ALL_UPDATED.
    min_time_between_triggers_seconds int
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    wait_after_last_change_seconds int
    If set, the trigger starts a run only after no table updates have occurred for the specified amount of time. This makes it possible to wait for a batch of table updates before triggering a run. The minimum allowed value is 60 seconds.
    tableNames List<String>
    A list of Delta tables to monitor for changes. The table name must be in the format catalog_name.schema_name.table_name.
    condition String
    The table(s) condition based on which to trigger a job run. Valid values are ANY_UPDATED or ALL_UPDATED.
    minTimeBetweenTriggersSeconds Number
    If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
    waitAfterLastChangeSeconds Number
    If set, the trigger starts a run only after no table updates have occurred for the specified amount of time. This makes it possible to wait for a batch of table updates before triggering a run. The minimum allowed value is 60 seconds.
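
    For reference, a comparable TypeScript sketch of a table-update trigger; the table name, cluster ID, and notebook path are placeholders:

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("tableUpdateExample", {
        name: "React to table updates",
        trigger: {
            tableUpdate: {
                tableNames: ["main.default.events"],     // placeholder catalog.schema.table
                condition: "ALL_UPDATED",                // or ANY_UPDATED
                minTimeBetweenTriggersSeconds: 60,
            },
        },
        tasks: [{
            taskKey: "refresh",
            existingClusterId: "1234-567890-abcde123",          // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/refresh" },  // placeholder notebook
        }],
    });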

    JobWebhookNotifications, JobWebhookNotificationsArgs

    OnDurationWarningThresholdExceededs List<JobWebhookNotificationsOnDurationWarningThresholdExceeded>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    OnFailures List<JobWebhookNotificationsOnFailure>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    OnStarts List<JobWebhookNotificationsOnStart>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    OnSuccesses List<JobWebhookNotificationsOnSuccess>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    OnDurationWarningThresholdExceededs []JobWebhookNotificationsOnDurationWarningThresholdExceeded

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    OnFailures []JobWebhookNotificationsOnFailure
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    OnStarts []JobWebhookNotificationsOnStart
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    OnSuccesses []JobWebhookNotificationsOnSuccess
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs List<JobWebhookNotificationsOnDurationWarningThresholdExceeded>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    onFailures List<JobWebhookNotificationsOnFailure>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts List<JobWebhookNotificationsOnStart>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses List<JobWebhookNotificationsOnSuccess>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs JobWebhookNotificationsOnDurationWarningThresholdExceeded[]

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    onFailures JobWebhookNotificationsOnFailure[]
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts JobWebhookNotificationsOnStart[]
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses JobWebhookNotificationsOnSuccess[]
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    on_duration_warning_threshold_exceededs Sequence[JobWebhookNotificationsOnDurationWarningThresholdExceeded]

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    on_failures Sequence[JobWebhookNotificationsOnFailure]
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    on_starts Sequence[JobWebhookNotificationsOnStart]
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    on_successes Sequence[JobWebhookNotificationsOnSuccess]
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
    onDurationWarningThresholdExceededs List<Property Map>

    (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the RUN_DURATION_SECONDS metric in the health block.

    Note that the id is not to be confused with the name of the alert destination. The id can be retrieved through the API or the URL of Databricks UI https://<workspace host>/sql/destinations/<notification id>?o=<workspace id>

    onFailures List<Property Map>
    (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.
    onStarts List<Property Map>
    (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.
    onSuccesses List<Property Map>
    (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.
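
    For reference, a minimal TypeScript sketch of job-level webhook notifications combined with a RUN_DURATION_SECONDS health rule; every ID, path, and threshold below is a placeholder, and the health rule shape is assumed from the descriptions above:

    import * as databricks from "@pulumi/databricks";

    const job = new databricks.Job("jobWebhookExample", {
        name: "Job with job-level webhook notifications",
        health: {
            rules: [{
                metric: "RUN_DURATION_SECONDS",
                op: "GREATER_THAN",
                value: 7200,                             // placeholder threshold, in seconds
            }],
        },
        webhookNotifications: {
            // At most 3 destinations per list; IDs are destination IDs, not names.
            onStarts: [{ id: "00000000-0000-0000-0000-000000000000" }],
            onSuccesses: [{ id: "00000000-0000-0000-0000-000000000000" }],
            onDurationWarningThresholdExceededs: [{ id: "00000000-0000-0000-0000-000000000000" }],
        },
        tasks: [{
            taskKey: "main",
            existingClusterId: "1234-567890-abcde123",          // placeholder cluster ID
            notebookTask: { notebookPath: "/Shared/example" },  // placeholder notebook
        }],
    });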

    JobWebhookNotificationsOnDurationWarningThresholdExceeded, JobWebhookNotificationsOnDurationWarningThresholdExceededArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobWebhookNotificationsOnFailure, JobWebhookNotificationsOnFailureArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobWebhookNotificationsOnStart, JobWebhookNotificationsOnStartArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    JobWebhookNotificationsOnSuccess, JobWebhookNotificationsOnSuccessArgs

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id string

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id str

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    id String

    ID of the system notification that is notified when an event defined in webhook_notifications is triggered.

    Note The following configuration blocks can be standalone or nested inside a task block

    Package Details

    Repository
    databricks pulumi/pulumi-databricks
    License
    Apache-2.0
    Notes
    This Pulumi package is based on the databricks Terraform Provider.