published on Monday, Mar 9, 2026 by Pulumi
Import
The resource job can be imported using the ID of the job:
$ pulumi import databricks:index/job:Job this <job-id>
Create Job Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Job(name: string, args?: JobArgs, opts?: CustomResourceOptions);
@overload
def Job(resource_name: str,
args: Optional[JobArgs] = None,
opts: Optional[ResourceOptions] = None)
@overload
def Job(resource_name: str,
opts: Optional[ResourceOptions] = None,
always_running: Optional[bool] = None,
email_notifications: Optional[JobEmailNotificationsArgs] = None,
existing_cluster_id: Optional[str] = None,
format: Optional[str] = None,
git_source: Optional[JobGitSourceArgs] = None,
job_clusters: Optional[Sequence[JobJobClusterArgs]] = None,
libraries: Optional[Sequence[JobLibraryArgs]] = None,
max_concurrent_runs: Optional[int] = None,
max_retries: Optional[int] = None,
min_retry_interval_millis: Optional[int] = None,
name: Optional[str] = None,
new_cluster: Optional[JobNewClusterArgs] = None,
notebook_task: Optional[JobNotebookTaskArgs] = None,
pipeline_task: Optional[JobPipelineTaskArgs] = None,
python_wheel_task: Optional[JobPythonWheelTaskArgs] = None,
retry_on_timeout: Optional[bool] = None,
schedule: Optional[JobScheduleArgs] = None,
spark_jar_task: Optional[JobSparkJarTaskArgs] = None,
spark_python_task: Optional[JobSparkPythonTaskArgs] = None,
spark_submit_task: Optional[JobSparkSubmitTaskArgs] = None,
tasks: Optional[Sequence[JobTaskArgs]] = None,
timeout_seconds: Optional[int] = None)
func NewJob(ctx *Context, name string, args *JobArgs, opts ...ResourceOption) (*Job, error)
public Job(string name, JobArgs? args = null, CustomResourceOptions? opts = null)
type: databricks:Job
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var jobResource = new Databricks.Job("jobResource", new()
{
AlwaysRunning = false,
EmailNotifications = new Databricks.Inputs.JobEmailNotificationsArgs
{
NoAlertForSkippedRuns = false,
OnFailures = new[]
{
"string",
},
OnStarts = new[]
{
"string",
},
OnSuccesses = new[]
{
"string",
},
},
ExistingClusterId = "string",
Format = "string",
GitSource = new Databricks.Inputs.JobGitSourceArgs
{
Url = "string",
Branch = "string",
Commit = "string",
Provider = "string",
Tag = "string",
},
JobClusters = new[]
{
new Databricks.Inputs.JobJobClusterArgs
{
JobClusterKey = "string",
NewCluster = new Databricks.Inputs.JobJobClusterNewClusterArgs
{
NumWorkers = 0,
SparkVersion = "string",
EnableElasticDisk = false,
DataSecurityMode = "string",
ClusterId = "string",
ClusterLogConf = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfArgs
{
Dbfs = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfDbfsArgs
{
Destination = "string",
},
S3 = new Databricks.Inputs.JobJobClusterNewClusterClusterLogConfS3Args
{
Destination = "string",
CannedAcl = "string",
EnableEncryption = false,
EncryptionType = "string",
Endpoint = "string",
KmsKey = "string",
Region = "string",
},
},
ClusterName = "string",
CustomTags =
{
{ "string", "any" },
},
EnableLocalDiskEncryption = false,
DockerImage = new Databricks.Inputs.JobJobClusterNewClusterDockerImageArgs
{
Url = "string",
BasicAuth = new Databricks.Inputs.JobJobClusterNewClusterDockerImageBasicAuthArgs
{
Password = "string",
Username = "string",
},
},
DriverInstancePoolId = "string",
DriverNodeTypeId = "string",
AzureAttributes = new Databricks.Inputs.JobJobClusterNewClusterAzureAttributesArgs
{
Availability = "string",
FirstOnDemand = 0,
SpotBidMaxPrice = 0,
},
Autoscale = new Databricks.Inputs.JobJobClusterNewClusterAutoscaleArgs
{
MaxWorkers = 0,
MinWorkers = 0,
},
NodeTypeId = "string",
IdempotencyToken = "string",
InitScripts = new[]
{
new Databricks.Inputs.JobJobClusterNewClusterInitScriptArgs
{
Dbfs = new Databricks.Inputs.JobJobClusterNewClusterInitScriptDbfsArgs
{
Destination = "string",
},
File = new Databricks.Inputs.JobJobClusterNewClusterInitScriptFileArgs
{
Destination = "string",
},
S3 = new Databricks.Inputs.JobJobClusterNewClusterInitScriptS3Args
{
Destination = "string",
CannedAcl = "string",
EnableEncryption = false,
EncryptionType = "string",
Endpoint = "string",
KmsKey = "string",
Region = "string",
},
},
},
InstancePoolId = "string",
GcpAttributes = new Databricks.Inputs.JobJobClusterNewClusterGcpAttributesArgs
{
Availability = "string",
BootDiskSize = 0,
GoogleServiceAccount = "string",
UsePreemptibleExecutors = false,
ZoneId = "string",
},
AwsAttributes = new Databricks.Inputs.JobJobClusterNewClusterAwsAttributesArgs
{
Availability = "string",
EbsVolumeCount = 0,
EbsVolumeSize = 0,
EbsVolumeType = "string",
FirstOnDemand = 0,
InstanceProfileArn = "string",
SpotBidPricePercent = 0,
ZoneId = "string",
},
PolicyId = "string",
SingleUserName = "string",
SparkConf =
{
{ "string", "any" },
},
SparkEnvVars =
{
{ "string", "any" },
},
AutoterminationMinutes = 0,
SshPublicKeys = new[]
{
"string",
},
},
},
},
Libraries = new[]
{
new Databricks.Inputs.JobLibraryArgs
{
Cran = new Databricks.Inputs.JobLibraryCranArgs
{
Package = "string",
Repo = "string",
},
Egg = "string",
Jar = "string",
Maven = new Databricks.Inputs.JobLibraryMavenArgs
{
Coordinates = "string",
Exclusions = new[]
{
"string",
},
Repo = "string",
},
Pypi = new Databricks.Inputs.JobLibraryPypiArgs
{
Package = "string",
Repo = "string",
},
Whl = "string",
},
},
MaxConcurrentRuns = 0,
MaxRetries = 0,
MinRetryIntervalMillis = 0,
Name = "string",
NewCluster = new Databricks.Inputs.JobNewClusterArgs
{
SparkVersion = "string",
EnableElasticDisk = false,
SparkConf =
{
{ "string", "any" },
},
AzureAttributes = new Databricks.Inputs.JobNewClusterAzureAttributesArgs
{
Availability = "string",
FirstOnDemand = 0,
SpotBidMaxPrice = 0,
},
ClusterId = "string",
ClusterLogConf = new Databricks.Inputs.JobNewClusterClusterLogConfArgs
{
Dbfs = new Databricks.Inputs.JobNewClusterClusterLogConfDbfsArgs
{
Destination = "string",
},
S3 = new Databricks.Inputs.JobNewClusterClusterLogConfS3Args
{
Destination = "string",
CannedAcl = "string",
EnableEncryption = false,
EncryptionType = "string",
Endpoint = "string",
KmsKey = "string",
Region = "string",
},
},
ClusterName = "string",
CustomTags =
{
{ "string", "any" },
},
DataSecurityMode = "string",
DockerImage = new Databricks.Inputs.JobNewClusterDockerImageArgs
{
Url = "string",
BasicAuth = new Databricks.Inputs.JobNewClusterDockerImageBasicAuthArgs
{
Password = "string",
Username = "string",
},
},
DriverInstancePoolId = "string",
DriverNodeTypeId = "string",
Autoscale = new Databricks.Inputs.JobNewClusterAutoscaleArgs
{
MaxWorkers = 0,
MinWorkers = 0,
},
AwsAttributes = new Databricks.Inputs.JobNewClusterAwsAttributesArgs
{
Availability = "string",
EbsVolumeCount = 0,
EbsVolumeSize = 0,
EbsVolumeType = "string",
FirstOnDemand = 0,
InstanceProfileArn = "string",
SpotBidPricePercent = 0,
ZoneId = "string",
},
IdempotencyToken = "string",
EnableLocalDiskEncryption = false,
InitScripts = new[]
{
new Databricks.Inputs.JobNewClusterInitScriptArgs
{
Dbfs = new Databricks.Inputs.JobNewClusterInitScriptDbfsArgs
{
Destination = "string",
},
File = new Databricks.Inputs.JobNewClusterInitScriptFileArgs
{
Destination = "string",
},
S3 = new Databricks.Inputs.JobNewClusterInitScriptS3Args
{
Destination = "string",
CannedAcl = "string",
EnableEncryption = false,
EncryptionType = "string",
Endpoint = "string",
KmsKey = "string",
Region = "string",
},
},
},
InstancePoolId = "string",
NodeTypeId = "string",
NumWorkers = 0,
PolicyId = "string",
SingleUserName = "string",
GcpAttributes = new Databricks.Inputs.JobNewClusterGcpAttributesArgs
{
Availability = "string",
BootDiskSize = 0,
GoogleServiceAccount = "string",
UsePreemptibleExecutors = false,
ZoneId = "string",
},
SparkEnvVars =
{
{ "string", "any" },
},
AutoterminationMinutes = 0,
SshPublicKeys = new[]
{
"string",
},
},
NotebookTask = new Databricks.Inputs.JobNotebookTaskArgs
{
NotebookPath = "string",
BaseParameters =
{
{ "string", "any" },
},
},
PipelineTask = new Databricks.Inputs.JobPipelineTaskArgs
{
PipelineId = "string",
},
PythonWheelTask = new Databricks.Inputs.JobPythonWheelTaskArgs
{
EntryPoint = "string",
NamedParameters =
{
{ "string", "any" },
},
PackageName = "string",
Parameters = new[]
{
"string",
},
},
RetryOnTimeout = false,
Schedule = new Databricks.Inputs.JobScheduleArgs
{
QuartzCronExpression = "string",
TimezoneId = "string",
PauseStatus = "string",
},
SparkJarTask = new Databricks.Inputs.JobSparkJarTaskArgs
{
JarUri = "string",
MainClassName = "string",
Parameters = new[]
{
"string",
},
},
SparkPythonTask = new Databricks.Inputs.JobSparkPythonTaskArgs
{
PythonFile = "string",
Parameters = new[]
{
"string",
},
},
SparkSubmitTask = new Databricks.Inputs.JobSparkSubmitTaskArgs
{
Parameters = new[]
{
"string",
},
},
Tasks = new[]
{
new Databricks.Inputs.JobTaskArgs
{
DependsOns = new[]
{
new Databricks.Inputs.JobTaskDependsOnArgs
{
TaskKey = "string",
},
},
Description = "string",
EmailNotifications = new Databricks.Inputs.JobTaskEmailNotificationsArgs
{
NoAlertForSkippedRuns = false,
OnFailures = new[]
{
"string",
},
OnStarts = new[]
{
"string",
},
OnSuccesses = new[]
{
"string",
},
},
ExistingClusterId = "string",
JobClusterKey = "string",
Libraries = new[]
{
new Databricks.Inputs.JobTaskLibraryArgs
{
Cran = new Databricks.Inputs.JobTaskLibraryCranArgs
{
Package = "string",
Repo = "string",
},
Egg = "string",
Jar = "string",
Maven = new Databricks.Inputs.JobTaskLibraryMavenArgs
{
Coordinates = "string",
Exclusions = new[]
{
"string",
},
Repo = "string",
},
Pypi = new Databricks.Inputs.JobTaskLibraryPypiArgs
{
Package = "string",
Repo = "string",
},
Whl = "string",
},
},
MaxRetries = 0,
MinRetryIntervalMillis = 0,
NewCluster = new Databricks.Inputs.JobTaskNewClusterArgs
{
SparkVersion = "string",
EnableElasticDisk = false,
SparkConf =
{
{ "string", "any" },
},
AzureAttributes = new Databricks.Inputs.JobTaskNewClusterAzureAttributesArgs
{
Availability = "string",
FirstOnDemand = 0,
SpotBidMaxPrice = 0,
},
ClusterId = "string",
ClusterLogConf = new Databricks.Inputs.JobTaskNewClusterClusterLogConfArgs
{
Dbfs = new Databricks.Inputs.JobTaskNewClusterClusterLogConfDbfsArgs
{
Destination = "string",
},
S3 = new Databricks.Inputs.JobTaskNewClusterClusterLogConfS3Args
{
Destination = "string",
CannedAcl = "string",
EnableEncryption = false,
EncryptionType = "string",
Endpoint = "string",
KmsKey = "string",
Region = "string",
},
},
ClusterName = "string",
CustomTags =
{
{ "string", "any" },
},
DataSecurityMode = "string",
DockerImage = new Databricks.Inputs.JobTaskNewClusterDockerImageArgs
{
Url = "string",
BasicAuth = new Databricks.Inputs.JobTaskNewClusterDockerImageBasicAuthArgs
{
Password = "string",
Username = "string",
},
},
DriverInstancePoolId = "string",
DriverNodeTypeId = "string",
Autoscale = new Databricks.Inputs.JobTaskNewClusterAutoscaleArgs
{
MaxWorkers = 0,
MinWorkers = 0,
},
AwsAttributes = new Databricks.Inputs.JobTaskNewClusterAwsAttributesArgs
{
Availability = "string",
EbsVolumeCount = 0,
EbsVolumeSize = 0,
EbsVolumeType = "string",
FirstOnDemand = 0,
InstanceProfileArn = "string",
SpotBidPricePercent = 0,
ZoneId = "string",
},
IdempotencyToken = "string",
EnableLocalDiskEncryption = false,
InitScripts = new[]
{
new Databricks.Inputs.JobTaskNewClusterInitScriptArgs
{
Dbfs = new Databricks.Inputs.JobTaskNewClusterInitScriptDbfsArgs
{
Destination = "string",
},
File = new Databricks.Inputs.JobTaskNewClusterInitScriptFileArgs
{
Destination = "string",
},
S3 = new Databricks.Inputs.JobTaskNewClusterInitScriptS3Args
{
Destination = "string",
CannedAcl = "string",
EnableEncryption = false,
EncryptionType = "string",
Endpoint = "string",
KmsKey = "string",
Region = "string",
},
},
},
InstancePoolId = "string",
NodeTypeId = "string",
NumWorkers = 0,
PolicyId = "string",
SingleUserName = "string",
GcpAttributes = new Databricks.Inputs.JobTaskNewClusterGcpAttributesArgs
{
Availability = "string",
BootDiskSize = 0,
GoogleServiceAccount = "string",
UsePreemptibleExecutors = false,
ZoneId = "string",
},
SparkEnvVars =
{
{ "string", "any" },
},
AutoterminationMinutes = 0,
SshPublicKeys = new[]
{
"string",
},
},
NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs
{
NotebookPath = "string",
BaseParameters =
{
{ "string", "any" },
},
},
PipelineTask = new Databricks.Inputs.JobTaskPipelineTaskArgs
{
PipelineId = "string",
},
PythonWheelTask = new Databricks.Inputs.JobTaskPythonWheelTaskArgs
{
EntryPoint = "string",
NamedParameters =
{
{ "string", "any" },
},
PackageName = "string",
Parameters = new[]
{
"string",
},
},
RetryOnTimeout = false,
SparkJarTask = new Databricks.Inputs.JobTaskSparkJarTaskArgs
{
JarUri = "string",
MainClassName = "string",
Parameters = new[]
{
"string",
},
},
SparkPythonTask = new Databricks.Inputs.JobTaskSparkPythonTaskArgs
{
PythonFile = "string",
Parameters = new[]
{
"string",
},
},
SparkSubmitTask = new Databricks.Inputs.JobTaskSparkSubmitTaskArgs
{
Parameters = new[]
{
"string",
},
},
TaskKey = "string",
TimeoutSeconds = 0,
},
},
TimeoutSeconds = 0,
});
example, err := databricks.NewJob(ctx, "jobResource", &databricks.JobArgs{
AlwaysRunning: pulumi.Bool(false),
EmailNotifications: &databricks.JobEmailNotificationsArgs{
NoAlertForSkippedRuns: pulumi.Bool(false),
OnFailures: pulumi.StringArray{
pulumi.String("string"),
},
OnStarts: pulumi.StringArray{
pulumi.String("string"),
},
OnSuccesses: pulumi.StringArray{
pulumi.String("string"),
},
},
ExistingClusterId: pulumi.String("string"),
Format: pulumi.String("string"),
GitSource: &databricks.JobGitSourceArgs{
Url: pulumi.String("string"),
Branch: pulumi.String("string"),
Commit: pulumi.String("string"),
Provider: pulumi.String("string"),
Tag: pulumi.String("string"),
},
JobClusters: databricks.JobJobClusterArray{
&databricks.JobJobClusterArgs{
JobClusterKey: pulumi.String("string"),
NewCluster: &databricks.JobJobClusterNewClusterArgs{
NumWorkers: pulumi.Int(0),
SparkVersion: pulumi.String("string"),
EnableElasticDisk: pulumi.Bool(false),
DataSecurityMode: pulumi.String("string"),
ClusterId: pulumi.String("string"),
ClusterLogConf: &databricks.JobJobClusterNewClusterClusterLogConfArgs{
Dbfs: &databricks.JobJobClusterNewClusterClusterLogConfDbfsArgs{
Destination: pulumi.String("string"),
},
S3: &databricks.JobJobClusterNewClusterClusterLogConfS3Args{
Destination: pulumi.String("string"),
CannedAcl: pulumi.String("string"),
EnableEncryption: pulumi.Bool(false),
EncryptionType: pulumi.String("string"),
Endpoint: pulumi.String("string"),
KmsKey: pulumi.String("string"),
Region: pulumi.String("string"),
},
},
ClusterName: pulumi.String("string"),
CustomTags: pulumi.Map{
"string": pulumi.Any("any"),
},
EnableLocalDiskEncryption: pulumi.Bool(false),
DockerImage: &databricks.JobJobClusterNewClusterDockerImageArgs{
Url: pulumi.String("string"),
BasicAuth: &databricks.JobJobClusterNewClusterDockerImageBasicAuthArgs{
Password: pulumi.String("string"),
Username: pulumi.String("string"),
},
},
DriverInstancePoolId: pulumi.String("string"),
DriverNodeTypeId: pulumi.String("string"),
AzureAttributes: &databricks.JobJobClusterNewClusterAzureAttributesArgs{
Availability: pulumi.String("string"),
FirstOnDemand: pulumi.Int(0),
SpotBidMaxPrice: pulumi.Float64(0),
},
Autoscale: &databricks.JobJobClusterNewClusterAutoscaleArgs{
MaxWorkers: pulumi.Int(0),
MinWorkers: pulumi.Int(0),
},
NodeTypeId: pulumi.String("string"),
IdempotencyToken: pulumi.String("string"),
InitScripts: databricks.JobJobClusterNewClusterInitScriptArray{
&databricks.JobJobClusterNewClusterInitScriptArgs{
Dbfs: &databricks.JobJobClusterNewClusterInitScriptDbfsArgs{
Destination: pulumi.String("string"),
},
File: &databricks.JobJobClusterNewClusterInitScriptFileArgs{
Destination: pulumi.String("string"),
},
S3: &databricks.JobJobClusterNewClusterInitScriptS3Args{
Destination: pulumi.String("string"),
CannedAcl: pulumi.String("string"),
EnableEncryption: pulumi.Bool(false),
EncryptionType: pulumi.String("string"),
Endpoint: pulumi.String("string"),
KmsKey: pulumi.String("string"),
Region: pulumi.String("string"),
},
},
},
InstancePoolId: pulumi.String("string"),
GcpAttributes: &databricks.JobJobClusterNewClusterGcpAttributesArgs{
Availability: pulumi.String("string"),
BootDiskSize: pulumi.Int(0),
GoogleServiceAccount: pulumi.String("string"),
UsePreemptibleExecutors: pulumi.Bool(false),
ZoneId: pulumi.String("string"),
},
AwsAttributes: &databricks.JobJobClusterNewClusterAwsAttributesArgs{
Availability: pulumi.String("string"),
EbsVolumeCount: pulumi.Int(0),
EbsVolumeSize: pulumi.Int(0),
EbsVolumeType: pulumi.String("string"),
FirstOnDemand: pulumi.Int(0),
InstanceProfileArn: pulumi.String("string"),
SpotBidPricePercent: pulumi.Int(0),
ZoneId: pulumi.String("string"),
},
PolicyId: pulumi.String("string"),
SingleUserName: pulumi.String("string"),
SparkConf: pulumi.Map{
"string": pulumi.Any("any"),
},
SparkEnvVars: pulumi.Map{
"string": pulumi.Any("any"),
},
AutoterminationMinutes: pulumi.Int(0),
SshPublicKeys: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
Libraries: databricks.JobLibraryArray{
&databricks.JobLibraryArgs{
Cran: &databricks.JobLibraryCranArgs{
Package: pulumi.String("string"),
Repo: pulumi.String("string"),
},
Egg: pulumi.String("string"),
Jar: pulumi.String("string"),
Maven: &databricks.JobLibraryMavenArgs{
Coordinates: pulumi.String("string"),
Exclusions: pulumi.StringArray{
pulumi.String("string"),
},
Repo: pulumi.String("string"),
},
Pypi: &databricks.JobLibraryPypiArgs{
Package: pulumi.String("string"),
Repo: pulumi.String("string"),
},
Whl: pulumi.String("string"),
},
},
MaxConcurrentRuns: pulumi.Int(0),
MaxRetries: pulumi.Int(0),
MinRetryIntervalMillis: pulumi.Int(0),
Name: pulumi.String("string"),
NewCluster: &databricks.JobNewClusterArgs{
SparkVersion: pulumi.String("string"),
EnableElasticDisk: pulumi.Bool(false),
SparkConf: pulumi.Map{
"string": pulumi.Any("any"),
},
AzureAttributes: &databricks.JobNewClusterAzureAttributesArgs{
Availability: pulumi.String("string"),
FirstOnDemand: pulumi.Int(0),
SpotBidMaxPrice: pulumi.Float64(0),
},
ClusterId: pulumi.String("string"),
ClusterLogConf: &databricks.JobNewClusterClusterLogConfArgs{
Dbfs: &databricks.JobNewClusterClusterLogConfDbfsArgs{
Destination: pulumi.String("string"),
},
S3: &databricks.JobNewClusterClusterLogConfS3Args{
Destination: pulumi.String("string"),
CannedAcl: pulumi.String("string"),
EnableEncryption: pulumi.Bool(false),
EncryptionType: pulumi.String("string"),
Endpoint: pulumi.String("string"),
KmsKey: pulumi.String("string"),
Region: pulumi.String("string"),
},
},
ClusterName: pulumi.String("string"),
CustomTags: pulumi.Map{
"string": pulumi.Any("any"),
},
DataSecurityMode: pulumi.String("string"),
DockerImage: &databricks.JobNewClusterDockerImageArgs{
Url: pulumi.String("string"),
BasicAuth: &databricks.JobNewClusterDockerImageBasicAuthArgs{
Password: pulumi.String("string"),
Username: pulumi.String("string"),
},
},
DriverInstancePoolId: pulumi.String("string"),
DriverNodeTypeId: pulumi.String("string"),
Autoscale: &databricks.JobNewClusterAutoscaleArgs{
MaxWorkers: pulumi.Int(0),
MinWorkers: pulumi.Int(0),
},
AwsAttributes: &databricks.JobNewClusterAwsAttributesArgs{
Availability: pulumi.String("string"),
EbsVolumeCount: pulumi.Int(0),
EbsVolumeSize: pulumi.Int(0),
EbsVolumeType: pulumi.String("string"),
FirstOnDemand: pulumi.Int(0),
InstanceProfileArn: pulumi.String("string"),
SpotBidPricePercent: pulumi.Int(0),
ZoneId: pulumi.String("string"),
},
IdempotencyToken: pulumi.String("string"),
EnableLocalDiskEncryption: pulumi.Bool(false),
InitScripts: databricks.JobNewClusterInitScriptArray{
&databricks.JobNewClusterInitScriptArgs{
Dbfs: &databricks.JobNewClusterInitScriptDbfsArgs{
Destination: pulumi.String("string"),
},
File: &databricks.JobNewClusterInitScriptFileArgs{
Destination: pulumi.String("string"),
},
S3: &databricks.JobNewClusterInitScriptS3Args{
Destination: pulumi.String("string"),
CannedAcl: pulumi.String("string"),
EnableEncryption: pulumi.Bool(false),
EncryptionType: pulumi.String("string"),
Endpoint: pulumi.String("string"),
KmsKey: pulumi.String("string"),
Region: pulumi.String("string"),
},
},
},
InstancePoolId: pulumi.String("string"),
NodeTypeId: pulumi.String("string"),
NumWorkers: pulumi.Int(0),
PolicyId: pulumi.String("string"),
SingleUserName: pulumi.String("string"),
GcpAttributes: &databricks.JobNewClusterGcpAttributesArgs{
Availability: pulumi.String("string"),
BootDiskSize: pulumi.Int(0),
GoogleServiceAccount: pulumi.String("string"),
UsePreemptibleExecutors: pulumi.Bool(false),
ZoneId: pulumi.String("string"),
},
SparkEnvVars: pulumi.Map{
"string": pulumi.Any("any"),
},
AutoterminationMinutes: pulumi.Int(0),
SshPublicKeys: pulumi.StringArray{
pulumi.String("string"),
},
},
NotebookTask: &databricks.JobNotebookTaskArgs{
NotebookPath: pulumi.String("string"),
BaseParameters: pulumi.Map{
"string": pulumi.Any("any"),
},
},
PipelineTask: &databricks.JobPipelineTaskArgs{
PipelineId: pulumi.String("string"),
},
PythonWheelTask: &databricks.JobPythonWheelTaskArgs{
EntryPoint: pulumi.String("string"),
NamedParameters: pulumi.Map{
"string": pulumi.Any("any"),
},
PackageName: pulumi.String("string"),
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
RetryOnTimeout: pulumi.Bool(false),
Schedule: &databricks.JobScheduleArgs{
QuartzCronExpression: pulumi.String("string"),
TimezoneId: pulumi.String("string"),
PauseStatus: pulumi.String("string"),
},
SparkJarTask: &databricks.JobSparkJarTaskArgs{
JarUri: pulumi.String("string"),
MainClassName: pulumi.String("string"),
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
SparkPythonTask: &databricks.JobSparkPythonTaskArgs{
PythonFile: pulumi.String("string"),
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
SparkSubmitTask: &databricks.JobSparkSubmitTaskArgs{
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
Tasks: databricks.JobTaskArray{
&databricks.JobTaskArgs{
DependsOns: databricks.JobTaskDependsOnArray{
&databricks.JobTaskDependsOnArgs{
TaskKey: pulumi.String("string"),
},
},
Description: pulumi.String("string"),
EmailNotifications: &databricks.JobTaskEmailNotificationsArgs{
NoAlertForSkippedRuns: pulumi.Bool(false),
OnFailures: pulumi.StringArray{
pulumi.String("string"),
},
OnStarts: pulumi.StringArray{
pulumi.String("string"),
},
OnSuccesses: pulumi.StringArray{
pulumi.String("string"),
},
},
ExistingClusterId: pulumi.String("string"),
JobClusterKey: pulumi.String("string"),
Libraries: databricks.JobTaskLibraryArray{
&databricks.JobTaskLibraryArgs{
Cran: &databricks.JobTaskLibraryCranArgs{
Package: pulumi.String("string"),
Repo: pulumi.String("string"),
},
Egg: pulumi.String("string"),
Jar: pulumi.String("string"),
Maven: &databricks.JobTaskLibraryMavenArgs{
Coordinates: pulumi.String("string"),
Exclusions: pulumi.StringArray{
pulumi.String("string"),
},
Repo: pulumi.String("string"),
},
Pypi: &databricks.JobTaskLibraryPypiArgs{
Package: pulumi.String("string"),
Repo: pulumi.String("string"),
},
Whl: pulumi.String("string"),
},
},
MaxRetries: pulumi.Int(0),
MinRetryIntervalMillis: pulumi.Int(0),
NewCluster: &databricks.JobTaskNewClusterArgs{
SparkVersion: pulumi.String("string"),
EnableElasticDisk: pulumi.Bool(false),
SparkConf: pulumi.Map{
"string": pulumi.Any("any"),
},
AzureAttributes: &databricks.JobTaskNewClusterAzureAttributesArgs{
Availability: pulumi.String("string"),
FirstOnDemand: pulumi.Int(0),
SpotBidMaxPrice: pulumi.Float64(0),
},
ClusterId: pulumi.String("string"),
ClusterLogConf: &databricks.JobTaskNewClusterClusterLogConfArgs{
Dbfs: &databricks.JobTaskNewClusterClusterLogConfDbfsArgs{
Destination: pulumi.String("string"),
},
S3: &databricks.JobTaskNewClusterClusterLogConfS3Args{
Destination: pulumi.String("string"),
CannedAcl: pulumi.String("string"),
EnableEncryption: pulumi.Bool(false),
EncryptionType: pulumi.String("string"),
Endpoint: pulumi.String("string"),
KmsKey: pulumi.String("string"),
Region: pulumi.String("string"),
},
},
ClusterName: pulumi.String("string"),
CustomTags: pulumi.Map{
"string": pulumi.Any("any"),
},
DataSecurityMode: pulumi.String("string"),
DockerImage: &databricks.JobTaskNewClusterDockerImageArgs{
Url: pulumi.String("string"),
BasicAuth: &databricks.JobTaskNewClusterDockerImageBasicAuthArgs{
Password: pulumi.String("string"),
Username: pulumi.String("string"),
},
},
DriverInstancePoolId: pulumi.String("string"),
DriverNodeTypeId: pulumi.String("string"),
Autoscale: &databricks.JobTaskNewClusterAutoscaleArgs{
MaxWorkers: pulumi.Int(0),
MinWorkers: pulumi.Int(0),
},
AwsAttributes: &databricks.JobTaskNewClusterAwsAttributesArgs{
Availability: pulumi.String("string"),
EbsVolumeCount: pulumi.Int(0),
EbsVolumeSize: pulumi.Int(0),
EbsVolumeType: pulumi.String("string"),
FirstOnDemand: pulumi.Int(0),
InstanceProfileArn: pulumi.String("string"),
SpotBidPricePercent: pulumi.Int(0),
ZoneId: pulumi.String("string"),
},
IdempotencyToken: pulumi.String("string"),
EnableLocalDiskEncryption: pulumi.Bool(false),
InitScripts: databricks.JobTaskNewClusterInitScriptArray{
&databricks.JobTaskNewClusterInitScriptArgs{
Dbfs: &databricks.JobTaskNewClusterInitScriptDbfsArgs{
Destination: pulumi.String("string"),
},
File: &databricks.JobTaskNewClusterInitScriptFileArgs{
Destination: pulumi.String("string"),
},
S3: &databricks.JobTaskNewClusterInitScriptS3Args{
Destination: pulumi.String("string"),
CannedAcl: pulumi.String("string"),
EnableEncryption: pulumi.Bool(false),
EncryptionType: pulumi.String("string"),
Endpoint: pulumi.String("string"),
KmsKey: pulumi.String("string"),
Region: pulumi.String("string"),
},
},
},
InstancePoolId: pulumi.String("string"),
NodeTypeId: pulumi.String("string"),
NumWorkers: pulumi.Int(0),
PolicyId: pulumi.String("string"),
SingleUserName: pulumi.String("string"),
GcpAttributes: &databricks.JobTaskNewClusterGcpAttributesArgs{
Availability: pulumi.String("string"),
BootDiskSize: pulumi.Int(0),
GoogleServiceAccount: pulumi.String("string"),
UsePreemptibleExecutors: pulumi.Bool(false),
ZoneId: pulumi.String("string"),
},
SparkEnvVars: pulumi.Map{
"string": pulumi.Any("any"),
},
AutoterminationMinutes: pulumi.Int(0),
SshPublicKeys: pulumi.StringArray{
pulumi.String("string"),
},
},
NotebookTask: &databricks.JobTaskNotebookTaskArgs{
NotebookPath: pulumi.String("string"),
BaseParameters: pulumi.Map{
"string": pulumi.Any("any"),
},
},
PipelineTask: &databricks.JobTaskPipelineTaskArgs{
PipelineId: pulumi.String("string"),
},
PythonWheelTask: &databricks.JobTaskPythonWheelTaskArgs{
EntryPoint: pulumi.String("string"),
NamedParameters: pulumi.Map{
"string": pulumi.Any("any"),
},
PackageName: pulumi.String("string"),
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
RetryOnTimeout: pulumi.Bool(false),
SparkJarTask: &databricks.JobTaskSparkJarTaskArgs{
JarUri: pulumi.String("string"),
MainClassName: pulumi.String("string"),
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
SparkPythonTask: &databricks.JobTaskSparkPythonTaskArgs{
PythonFile: pulumi.String("string"),
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
SparkSubmitTask: &databricks.JobTaskSparkSubmitTaskArgs{
Parameters: pulumi.StringArray{
pulumi.String("string"),
},
},
TaskKey: pulumi.String("string"),
TimeoutSeconds: pulumi.Int(0),
},
},
TimeoutSeconds: pulumi.Int(0),
})
var jobResource = new Job("jobResource", JobArgs.builder()
.alwaysRunning(false)
.emailNotifications(JobEmailNotificationsArgs.builder()
.noAlertForSkippedRuns(false)
.onFailures("string")
.onStarts("string")
.onSuccesses("string")
.build())
.existingClusterId("string")
.format("string")
.gitSource(JobGitSourceArgs.builder()
.url("string")
.branch("string")
.commit("string")
.provider("string")
.tag("string")
.build())
.jobClusters(JobJobClusterArgs.builder()
.jobClusterKey("string")
.newCluster(JobJobClusterNewClusterArgs.builder()
.numWorkers(0)
.sparkVersion("string")
.enableElasticDisk(false)
.dataSecurityMode("string")
.clusterId("string")
.clusterLogConf(JobJobClusterNewClusterClusterLogConfArgs.builder()
.dbfs(JobJobClusterNewClusterClusterLogConfDbfsArgs.builder()
.destination("string")
.build())
.s3(JobJobClusterNewClusterClusterLogConfS3Args.builder()
.destination("string")
.cannedAcl("string")
.enableEncryption(false)
.encryptionType("string")
.endpoint("string")
.kmsKey("string")
.region("string")
.build())
.build())
.clusterName("string")
.customTags(Map.of("string", "any"))
.enableLocalDiskEncryption(false)
.dockerImage(JobJobClusterNewClusterDockerImageArgs.builder()
.url("string")
.basicAuth(JobJobClusterNewClusterDockerImageBasicAuthArgs.builder()
.password("string")
.username("string")
.build())
.build())
.driverInstancePoolId("string")
.driverNodeTypeId("string")
.azureAttributes(JobJobClusterNewClusterAzureAttributesArgs.builder()
.availability("string")
.firstOnDemand(0)
.spotBidMaxPrice(0.0)
.build())
.autoscale(JobJobClusterNewClusterAutoscaleArgs.builder()
.maxWorkers(0)
.minWorkers(0)
.build())
.nodeTypeId("string")
.idempotencyToken("string")
.initScripts(JobJobClusterNewClusterInitScriptArgs.builder()
.dbfs(JobJobClusterNewClusterInitScriptDbfsArgs.builder()
.destination("string")
.build())
.file(JobJobClusterNewClusterInitScriptFileArgs.builder()
.destination("string")
.build())
.s3(JobJobClusterNewClusterInitScriptS3Args.builder()
.destination("string")
.cannedAcl("string")
.enableEncryption(false)
.encryptionType("string")
.endpoint("string")
.kmsKey("string")
.region("string")
.build())
.build())
.instancePoolId("string")
.gcpAttributes(JobJobClusterNewClusterGcpAttributesArgs.builder()
.availability("string")
.bootDiskSize(0)
.googleServiceAccount("string")
.usePreemptibleExecutors(false)
.zoneId("string")
.build())
.awsAttributes(JobJobClusterNewClusterAwsAttributesArgs.builder()
.availability("string")
.ebsVolumeCount(0)
.ebsVolumeSize(0)
.ebsVolumeType("string")
.firstOnDemand(0)
.instanceProfileArn("string")
.spotBidPricePercent(0)
.zoneId("string")
.build())
.policyId("string")
.singleUserName("string")
.sparkConf(Map.of("string", "any"))
.sparkEnvVars(Map.of("string", "any"))
.autoterminationMinutes(0)
.sshPublicKeys("string")
.build())
.build())
.libraries(JobLibraryArgs.builder()
.cran(JobLibraryCranArgs.builder()
.package_("string")
.repo("string")
.build())
.egg("string")
.jar("string")
.maven(JobLibraryMavenArgs.builder()
.coordinates("string")
.exclusions("string")
.repo("string")
.build())
.pypi(JobLibraryPypiArgs.builder()
.package_("string")
.repo("string")
.build())
.whl("string")
.build())
.maxConcurrentRuns(0)
.maxRetries(0)
.minRetryIntervalMillis(0)
.name("string")
.newCluster(JobNewClusterArgs.builder()
.sparkVersion("string")
.enableElasticDisk(false)
.sparkConf(Map.of("string", "any"))
.azureAttributes(JobNewClusterAzureAttributesArgs.builder()
.availability("string")
.firstOnDemand(0)
.spotBidMaxPrice(0.0)
.build())
.clusterId("string")
.clusterLogConf(JobNewClusterClusterLogConfArgs.builder()
.dbfs(JobNewClusterClusterLogConfDbfsArgs.builder()
.destination("string")
.build())
.s3(JobNewClusterClusterLogConfS3Args.builder()
.destination("string")
.cannedAcl("string")
.enableEncryption(false)
.encryptionType("string")
.endpoint("string")
.kmsKey("string")
.region("string")
.build())
.build())
.clusterName("string")
.customTags(Map.of("string", "any"))
.dataSecurityMode("string")
.dockerImage(JobNewClusterDockerImageArgs.builder()
.url("string")
.basicAuth(JobNewClusterDockerImageBasicAuthArgs.builder()
.password("string")
.username("string")
.build())
.build())
.driverInstancePoolId("string")
.driverNodeTypeId("string")
.autoscale(JobNewClusterAutoscaleArgs.builder()
.maxWorkers(0)
.minWorkers(0)
.build())
.awsAttributes(JobNewClusterAwsAttributesArgs.builder()
.availability("string")
.ebsVolumeCount(0)
.ebsVolumeSize(0)
.ebsVolumeType("string")
.firstOnDemand(0)
.instanceProfileArn("string")
.spotBidPricePercent(0)
.zoneId("string")
.build())
.idempotencyToken("string")
.enableLocalDiskEncryption(false)
.initScripts(JobNewClusterInitScriptArgs.builder()
.dbfs(JobNewClusterInitScriptDbfsArgs.builder()
.destination("string")
.build())
.file(JobNewClusterInitScriptFileArgs.builder()
.destination("string")
.build())
.s3(JobNewClusterInitScriptS3Args.builder()
.destination("string")
.cannedAcl("string")
.enableEncryption(false)
.encryptionType("string")
.endpoint("string")
.kmsKey("string")
.region("string")
.build())
.build())
.instancePoolId("string")
.nodeTypeId("string")
.numWorkers(0)
.policyId("string")
.singleUserName("string")
.gcpAttributes(JobNewClusterGcpAttributesArgs.builder()
.availability("string")
.bootDiskSize(0)
.googleServiceAccount("string")
.usePreemptibleExecutors(false)
.zoneId("string")
.build())
.sparkEnvVars(Map.of("string", "any"))
.autoterminationMinutes(0)
.sshPublicKeys("string")
.build())
.notebookTask(JobNotebookTaskArgs.builder()
.notebookPath("string")
.baseParameters(Map.of("string", "any"))
.build())
.pipelineTask(JobPipelineTaskArgs.builder()
.pipelineId("string")
.build())
.pythonWheelTask(JobPythonWheelTaskArgs.builder()
.entryPoint("string")
.namedParameters(Map.of("string", "any"))
.packageName("string")
.parameters("string")
.build())
.retryOnTimeout(false)
.schedule(JobScheduleArgs.builder()
.quartzCronExpression("string")
.timezoneId("string")
.pauseStatus("string")
.build())
.sparkJarTask(JobSparkJarTaskArgs.builder()
.jarUri("string")
.mainClassName("string")
.parameters("string")
.build())
.sparkPythonTask(JobSparkPythonTaskArgs.builder()
.pythonFile("string")
.parameters("string")
.build())
.sparkSubmitTask(JobSparkSubmitTaskArgs.builder()
.parameters("string")
.build())
.tasks(JobTaskArgs.builder()
.dependsOns(JobTaskDependsOnArgs.builder()
.taskKey("string")
.build())
.description("string")
.emailNotifications(JobTaskEmailNotificationsArgs.builder()
.noAlertForSkippedRuns(false)
.onFailures("string")
.onStarts("string")
.onSuccesses("string")
.build())
.existingClusterId("string")
.jobClusterKey("string")
.libraries(JobTaskLibraryArgs.builder()
.cran(JobTaskLibraryCranArgs.builder()
.package_("string")
.repo("string")
.build())
.egg("string")
.jar("string")
.maven(JobTaskLibraryMavenArgs.builder()
.coordinates("string")
.exclusions("string")
.repo("string")
.build())
.pypi(JobTaskLibraryPypiArgs.builder()
.package_("string")
.repo("string")
.build())
.whl("string")
.build())
.maxRetries(0)
.minRetryIntervalMillis(0)
.newCluster(JobTaskNewClusterArgs.builder()
.sparkVersion("string")
.enableElasticDisk(false)
.sparkConf(Map.of("string", "any"))
.azureAttributes(JobTaskNewClusterAzureAttributesArgs.builder()
.availability("string")
.firstOnDemand(0)
.spotBidMaxPrice(0.0)
.build())
.clusterId("string")
.clusterLogConf(JobTaskNewClusterClusterLogConfArgs.builder()
.dbfs(JobTaskNewClusterClusterLogConfDbfsArgs.builder()
.destination("string")
.build())
.s3(JobTaskNewClusterClusterLogConfS3Args.builder()
.destination("string")
.cannedAcl("string")
.enableEncryption(false)
.encryptionType("string")
.endpoint("string")
.kmsKey("string")
.region("string")
.build())
.build())
.clusterName("string")
.customTags(Map.of("string", "any"))
.dataSecurityMode("string")
.dockerImage(JobTaskNewClusterDockerImageArgs.builder()
.url("string")
.basicAuth(JobTaskNewClusterDockerImageBasicAuthArgs.builder()
.password("string")
.username("string")
.build())
.build())
.driverInstancePoolId("string")
.driverNodeTypeId("string")
.autoscale(JobTaskNewClusterAutoscaleArgs.builder()
.maxWorkers(0)
.minWorkers(0)
.build())
.awsAttributes(JobTaskNewClusterAwsAttributesArgs.builder()
.availability("string")
.ebsVolumeCount(0)
.ebsVolumeSize(0)
.ebsVolumeType("string")
.firstOnDemand(0)
.instanceProfileArn("string")
.spotBidPricePercent(0)
.zoneId("string")
.build())
.idempotencyToken("string")
.enableLocalDiskEncryption(false)
.initScripts(JobTaskNewClusterInitScriptArgs.builder()
.dbfs(JobTaskNewClusterInitScriptDbfsArgs.builder()
.destination("string")
.build())
.file(JobTaskNewClusterInitScriptFileArgs.builder()
.destination("string")
.build())
.s3(JobTaskNewClusterInitScriptS3Args.builder()
.destination("string")
.cannedAcl("string")
.enableEncryption(false)
.encryptionType("string")
.endpoint("string")
.kmsKey("string")
.region("string")
.build())
.build())
.instancePoolId("string")
.nodeTypeId("string")
.numWorkers(0)
.policyId("string")
.singleUserName("string")
.gcpAttributes(JobTaskNewClusterGcpAttributesArgs.builder()
.availability("string")
.bootDiskSize(0)
.googleServiceAccount("string")
.usePreemptibleExecutors(false)
.zoneId("string")
.build())
.sparkEnvVars(Map.of("string", "any"))
.autoterminationMinutes(0)
.sshPublicKeys("string")
.build())
.notebookTask(JobTaskNotebookTaskArgs.builder()
.notebookPath("string")
.baseParameters(Map.of("string", "any"))
.build())
.pipelineTask(JobTaskPipelineTaskArgs.builder()
.pipelineId("string")
.build())
.pythonWheelTask(JobTaskPythonWheelTaskArgs.builder()
.entryPoint("string")
.namedParameters(Map.of("string", "any"))
.packageName("string")
.parameters("string")
.build())
.retryOnTimeout(false)
.sparkJarTask(JobTaskSparkJarTaskArgs.builder()
.jarUri("string")
.mainClassName("string")
.parameters("string")
.build())
.sparkPythonTask(JobTaskSparkPythonTaskArgs.builder()
.pythonFile("string")
.parameters("string")
.build())
.sparkSubmitTask(JobTaskSparkSubmitTaskArgs.builder()
.parameters("string")
.build())
.taskKey("string")
.timeoutSeconds(0)
.build())
.timeoutSeconds(0)
.build());
job_resource = databricks.Job("jobResource",
always_running=False,
email_notifications={
"no_alert_for_skipped_runs": False,
"on_failures": ["string"],
"on_starts": ["string"],
"on_successes": ["string"],
},
existing_cluster_id="string",
format="string",
git_source={
"url": "string",
"branch": "string",
"commit": "string",
"provider": "string",
"tag": "string",
},
job_clusters=[{
"job_cluster_key": "string",
"new_cluster": {
"num_workers": 0,
"spark_version": "string",
"enable_elastic_disk": False,
"data_security_mode": "string",
"cluster_id": "string",
"cluster_log_conf": {
"dbfs": {
"destination": "string",
},
"s3": {
"destination": "string",
"canned_acl": "string",
"enable_encryption": False,
"encryption_type": "string",
"endpoint": "string",
"kms_key": "string",
"region": "string",
},
},
"cluster_name": "string",
"custom_tags": {
"string": "any",
},
"enable_local_disk_encryption": False,
"docker_image": {
"url": "string",
"basic_auth": {
"password": "string",
"username": "string",
},
},
"driver_instance_pool_id": "string",
"driver_node_type_id": "string",
"azure_attributes": {
"availability": "string",
"first_on_demand": 0,
"spot_bid_max_price": 0,
},
"autoscale": {
"max_workers": 0,
"min_workers": 0,
},
"node_type_id": "string",
"idempotency_token": "string",
"init_scripts": [{
"dbfs": {
"destination": "string",
},
"file": {
"destination": "string",
},
"s3": {
"destination": "string",
"canned_acl": "string",
"enable_encryption": False,
"encryption_type": "string",
"endpoint": "string",
"kms_key": "string",
"region": "string",
},
}],
"instance_pool_id": "string",
"gcp_attributes": {
"availability": "string",
"boot_disk_size": 0,
"google_service_account": "string",
"use_preemptible_executors": False,
"zone_id": "string",
},
"aws_attributes": {
"availability": "string",
"ebs_volume_count": 0,
"ebs_volume_size": 0,
"ebs_volume_type": "string",
"first_on_demand": 0,
"instance_profile_arn": "string",
"spot_bid_price_percent": 0,
"zone_id": "string",
},
"policy_id": "string",
"single_user_name": "string",
"spark_conf": {
"string": "any",
},
"spark_env_vars": {
"string": "any",
},
"autotermination_minutes": 0,
"ssh_public_keys": ["string"],
},
}],
libraries=[{
"cran": {
"package": "string",
"repo": "string",
},
"egg": "string",
"jar": "string",
"maven": {
"coordinates": "string",
"exclusions": ["string"],
"repo": "string",
},
"pypi": {
"package": "string",
"repo": "string",
},
"whl": "string",
}],
max_concurrent_runs=0,
max_retries=0,
min_retry_interval_millis=0,
name="string",
new_cluster={
"spark_version": "string",
"enable_elastic_disk": False,
"spark_conf": {
"string": "any",
},
"azure_attributes": {
"availability": "string",
"first_on_demand": 0,
"spot_bid_max_price": 0,
},
"cluster_id": "string",
"cluster_log_conf": {
"dbfs": {
"destination": "string",
},
"s3": {
"destination": "string",
"canned_acl": "string",
"enable_encryption": False,
"encryption_type": "string",
"endpoint": "string",
"kms_key": "string",
"region": "string",
},
},
"cluster_name": "string",
"custom_tags": {
"string": "any",
},
"data_security_mode": "string",
"docker_image": {
"url": "string",
"basic_auth": {
"password": "string",
"username": "string",
},
},
"driver_instance_pool_id": "string",
"driver_node_type_id": "string",
"autoscale": {
"max_workers": 0,
"min_workers": 0,
},
"aws_attributes": {
"availability": "string",
"ebs_volume_count": 0,
"ebs_volume_size": 0,
"ebs_volume_type": "string",
"first_on_demand": 0,
"instance_profile_arn": "string",
"spot_bid_price_percent": 0,
"zone_id": "string",
},
"idempotency_token": "string",
"enable_local_disk_encryption": False,
"init_scripts": [{
"dbfs": {
"destination": "string",
},
"file": {
"destination": "string",
},
"s3": {
"destination": "string",
"canned_acl": "string",
"enable_encryption": False,
"encryption_type": "string",
"endpoint": "string",
"kms_key": "string",
"region": "string",
},
}],
"instance_pool_id": "string",
"node_type_id": "string",
"num_workers": 0,
"policy_id": "string",
"single_user_name": "string",
"gcp_attributes": {
"availability": "string",
"boot_disk_size": 0,
"google_service_account": "string",
"use_preemptible_executors": False,
"zone_id": "string",
},
"spark_env_vars": {
"string": "any",
},
"autotermination_minutes": 0,
"ssh_public_keys": ["string"],
},
notebook_task={
"notebook_path": "string",
"base_parameters": {
"string": "any",
},
},
pipeline_task={
"pipeline_id": "string",
},
python_wheel_task={
"entry_point": "string",
"named_parameters": {
"string": "any",
},
"package_name": "string",
"parameters": ["string"],
},
retry_on_timeout=False,
schedule={
"quartz_cron_expression": "string",
"timezone_id": "string",
"pause_status": "string",
},
spark_jar_task={
"jar_uri": "string",
"main_class_name": "string",
"parameters": ["string"],
},
spark_python_task={
"python_file": "string",
"parameters": ["string"],
},
spark_submit_task={
"parameters": ["string"],
},
tasks=[{
"depends_ons": [{
"task_key": "string",
}],
"description": "string",
"email_notifications": {
"no_alert_for_skipped_runs": False,
"on_failures": ["string"],
"on_starts": ["string"],
"on_successes": ["string"],
},
"existing_cluster_id": "string",
"job_cluster_key": "string",
"libraries": [{
"cran": {
"package": "string",
"repo": "string",
},
"egg": "string",
"jar": "string",
"maven": {
"coordinates": "string",
"exclusions": ["string"],
"repo": "string",
},
"pypi": {
"package": "string",
"repo": "string",
},
"whl": "string",
}],
"max_retries": 0,
"min_retry_interval_millis": 0,
"new_cluster": {
"spark_version": "string",
"enable_elastic_disk": False,
"spark_conf": {
"string": "any",
},
"azure_attributes": {
"availability": "string",
"first_on_demand": 0,
"spot_bid_max_price": 0,
},
"cluster_id": "string",
"cluster_log_conf": {
"dbfs": {
"destination": "string",
},
"s3": {
"destination": "string",
"canned_acl": "string",
"enable_encryption": False,
"encryption_type": "string",
"endpoint": "string",
"kms_key": "string",
"region": "string",
},
},
"cluster_name": "string",
"custom_tags": {
"string": "any",
},
"data_security_mode": "string",
"docker_image": {
"url": "string",
"basic_auth": {
"password": "string",
"username": "string",
},
},
"driver_instance_pool_id": "string",
"driver_node_type_id": "string",
"autoscale": {
"max_workers": 0,
"min_workers": 0,
},
"aws_attributes": {
"availability": "string",
"ebs_volume_count": 0,
"ebs_volume_size": 0,
"ebs_volume_type": "string",
"first_on_demand": 0,
"instance_profile_arn": "string",
"spot_bid_price_percent": 0,
"zone_id": "string",
},
"idempotency_token": "string",
"enable_local_disk_encryption": False,
"init_scripts": [{
"dbfs": {
"destination": "string",
},
"file": {
"destination": "string",
},
"s3": {
"destination": "string",
"canned_acl": "string",
"enable_encryption": False,
"encryption_type": "string",
"endpoint": "string",
"kms_key": "string",
"region": "string",
},
}],
"instance_pool_id": "string",
"node_type_id": "string",
"num_workers": 0,
"policy_id": "string",
"single_user_name": "string",
"gcp_attributes": {
"availability": "string",
"boot_disk_size": 0,
"google_service_account": "string",
"use_preemptible_executors": False,
"zone_id": "string",
},
"spark_env_vars": {
"string": "any",
},
"autotermination_minutes": 0,
"ssh_public_keys": ["string"],
},
"notebook_task": {
"notebook_path": "string",
"base_parameters": {
"string": "any",
},
},
"pipeline_task": {
"pipeline_id": "string",
},
"python_wheel_task": {
"entry_point": "string",
"named_parameters": {
"string": "any",
},
"package_name": "string",
"parameters": ["string"],
},
"retry_on_timeout": False,
"spark_jar_task": {
"jar_uri": "string",
"main_class_name": "string",
"parameters": ["string"],
},
"spark_python_task": {
"python_file": "string",
"parameters": ["string"],
},
"spark_submit_task": {
"parameters": ["string"],
},
"task_key": "string",
"timeout_seconds": 0,
}],
timeout_seconds=0)
const jobResource = new databricks.Job("jobResource", {
alwaysRunning: false,
emailNotifications: {
noAlertForSkippedRuns: false,
onFailures: ["string"],
onStarts: ["string"],
onSuccesses: ["string"],
},
existingClusterId: "string",
format: "string",
gitSource: {
url: "string",
branch: "string",
commit: "string",
provider: "string",
tag: "string",
},
jobClusters: [{
jobClusterKey: "string",
newCluster: {
numWorkers: 0,
sparkVersion: "string",
enableElasticDisk: false,
dataSecurityMode: "string",
clusterId: "string",
clusterLogConf: {
dbfs: {
destination: "string",
},
s3: {
destination: "string",
cannedAcl: "string",
enableEncryption: false,
encryptionType: "string",
endpoint: "string",
kmsKey: "string",
region: "string",
},
},
clusterName: "string",
customTags: {
string: "any",
},
enableLocalDiskEncryption: false,
dockerImage: {
url: "string",
basicAuth: {
password: "string",
username: "string",
},
},
driverInstancePoolId: "string",
driverNodeTypeId: "string",
azureAttributes: {
availability: "string",
firstOnDemand: 0,
spotBidMaxPrice: 0,
},
autoscale: {
maxWorkers: 0,
minWorkers: 0,
},
nodeTypeId: "string",
idempotencyToken: "string",
initScripts: [{
dbfs: {
destination: "string",
},
file: {
destination: "string",
},
s3: {
destination: "string",
cannedAcl: "string",
enableEncryption: false,
encryptionType: "string",
endpoint: "string",
kmsKey: "string",
region: "string",
},
}],
instancePoolId: "string",
gcpAttributes: {
availability: "string",
bootDiskSize: 0,
googleServiceAccount: "string",
usePreemptibleExecutors: false,
zoneId: "string",
},
awsAttributes: {
availability: "string",
ebsVolumeCount: 0,
ebsVolumeSize: 0,
ebsVolumeType: "string",
firstOnDemand: 0,
instanceProfileArn: "string",
spotBidPricePercent: 0,
zoneId: "string",
},
policyId: "string",
singleUserName: "string",
sparkConf: {
string: "any",
},
sparkEnvVars: {
string: "any",
},
autoterminationMinutes: 0,
sshPublicKeys: ["string"],
},
}],
libraries: [{
cran: {
"package": "string",
repo: "string",
},
egg: "string",
jar: "string",
maven: {
coordinates: "string",
exclusions: ["string"],
repo: "string",
},
pypi: {
"package": "string",
repo: "string",
},
whl: "string",
}],
maxConcurrentRuns: 0,
maxRetries: 0,
minRetryIntervalMillis: 0,
name: "string",
newCluster: {
sparkVersion: "string",
enableElasticDisk: false,
sparkConf: {
string: "any",
},
azureAttributes: {
availability: "string",
firstOnDemand: 0,
spotBidMaxPrice: 0,
},
clusterId: "string",
clusterLogConf: {
dbfs: {
destination: "string",
},
s3: {
destination: "string",
cannedAcl: "string",
enableEncryption: false,
encryptionType: "string",
endpoint: "string",
kmsKey: "string",
region: "string",
},
},
clusterName: "string",
customTags: {
string: "any",
},
dataSecurityMode: "string",
dockerImage: {
url: "string",
basicAuth: {
password: "string",
username: "string",
},
},
driverInstancePoolId: "string",
driverNodeTypeId: "string",
autoscale: {
maxWorkers: 0,
minWorkers: 0,
},
awsAttributes: {
availability: "string",
ebsVolumeCount: 0,
ebsVolumeSize: 0,
ebsVolumeType: "string",
firstOnDemand: 0,
instanceProfileArn: "string",
spotBidPricePercent: 0,
zoneId: "string",
},
idempotencyToken: "string",
enableLocalDiskEncryption: false,
initScripts: [{
dbfs: {
destination: "string",
},
file: {
destination: "string",
},
s3: {
destination: "string",
cannedAcl: "string",
enableEncryption: false,
encryptionType: "string",
endpoint: "string",
kmsKey: "string",
region: "string",
},
}],
instancePoolId: "string",
nodeTypeId: "string",
numWorkers: 0,
policyId: "string",
singleUserName: "string",
gcpAttributes: {
availability: "string",
bootDiskSize: 0,
googleServiceAccount: "string",
usePreemptibleExecutors: false,
zoneId: "string",
},
sparkEnvVars: {
string: "any",
},
autoterminationMinutes: 0,
sshPublicKeys: ["string"],
},
notebookTask: {
notebookPath: "string",
baseParameters: {
string: "any",
},
},
pipelineTask: {
pipelineId: "string",
},
pythonWheelTask: {
entryPoint: "string",
namedParameters: {
string: "any",
},
packageName: "string",
parameters: ["string"],
},
retryOnTimeout: false,
schedule: {
quartzCronExpression: "string",
timezoneId: "string",
pauseStatus: "string",
},
sparkJarTask: {
jarUri: "string",
mainClassName: "string",
parameters: ["string"],
},
sparkPythonTask: {
pythonFile: "string",
parameters: ["string"],
},
sparkSubmitTask: {
parameters: ["string"],
},
tasks: [{
dependsOns: [{
taskKey: "string",
}],
description: "string",
emailNotifications: {
noAlertForSkippedRuns: false,
onFailures: ["string"],
onStarts: ["string"],
onSuccesses: ["string"],
},
existingClusterId: "string",
jobClusterKey: "string",
libraries: [{
cran: {
"package": "string",
repo: "string",
},
egg: "string",
jar: "string",
maven: {
coordinates: "string",
exclusions: ["string"],
repo: "string",
},
pypi: {
"package": "string",
repo: "string",
},
whl: "string",
}],
maxRetries: 0,
minRetryIntervalMillis: 0,
newCluster: {
sparkVersion: "string",
enableElasticDisk: false,
sparkConf: {
string: "any",
},
azureAttributes: {
availability: "string",
firstOnDemand: 0,
spotBidMaxPrice: 0,
},
clusterId: "string",
clusterLogConf: {
dbfs: {
destination: "string",
},
s3: {
destination: "string",
cannedAcl: "string",
enableEncryption: false,
encryptionType: "string",
endpoint: "string",
kmsKey: "string",
region: "string",
},
},
clusterName: "string",
customTags: {
string: "any",
},
dataSecurityMode: "string",
dockerImage: {
url: "string",
basicAuth: {
password: "string",
username: "string",
},
},
driverInstancePoolId: "string",
driverNodeTypeId: "string",
autoscale: {
maxWorkers: 0,
minWorkers: 0,
},
awsAttributes: {
availability: "string",
ebsVolumeCount: 0,
ebsVolumeSize: 0,
ebsVolumeType: "string",
firstOnDemand: 0,
instanceProfileArn: "string",
spotBidPricePercent: 0,
zoneId: "string",
},
idempotencyToken: "string",
enableLocalDiskEncryption: false,
initScripts: [{
dbfs: {
destination: "string",
},
file: {
destination: "string",
},
s3: {
destination: "string",
cannedAcl: "string",
enableEncryption: false,
encryptionType: "string",
endpoint: "string",
kmsKey: "string",
region: "string",
},
}],
instancePoolId: "string",
nodeTypeId: "string",
numWorkers: 0,
policyId: "string",
singleUserName: "string",
gcpAttributes: {
availability: "string",
bootDiskSize: 0,
googleServiceAccount: "string",
usePreemptibleExecutors: false,
zoneId: "string",
},
sparkEnvVars: {
string: "any",
},
autoterminationMinutes: 0,
sshPublicKeys: ["string"],
},
notebookTask: {
notebookPath: "string",
baseParameters: {
string: "any",
},
},
pipelineTask: {
pipelineId: "string",
},
pythonWheelTask: {
entryPoint: "string",
namedParameters: {
string: "any",
},
packageName: "string",
parameters: ["string"],
},
retryOnTimeout: false,
sparkJarTask: {
jarUri: "string",
mainClassName: "string",
parameters: ["string"],
},
sparkPythonTask: {
pythonFile: "string",
parameters: ["string"],
},
sparkSubmitTask: {
parameters: ["string"],
},
taskKey: "string",
timeoutSeconds: 0,
}],
timeoutSeconds: 0,
});
type: databricks:Job
properties:
alwaysRunning: false
emailNotifications:
noAlertForSkippedRuns: false
onFailures:
- string
onStarts:
- string
onSuccesses:
- string
existingClusterId: string
format: string
gitSource:
branch: string
commit: string
provider: string
tag: string
url: string
jobClusters:
- jobClusterKey: string
newCluster:
autoscale:
maxWorkers: 0
minWorkers: 0
autoterminationMinutes: 0
awsAttributes:
availability: string
ebsVolumeCount: 0
ebsVolumeSize: 0
ebsVolumeType: string
firstOnDemand: 0
instanceProfileArn: string
spotBidPricePercent: 0
zoneId: string
azureAttributes:
availability: string
firstOnDemand: 0
spotBidMaxPrice: 0
clusterId: string
clusterLogConf:
dbfs:
destination: string
s3:
cannedAcl: string
destination: string
enableEncryption: false
encryptionType: string
endpoint: string
kmsKey: string
region: string
clusterName: string
customTags:
string: any
dataSecurityMode: string
dockerImage:
basicAuth:
password: string
username: string
url: string
driverInstancePoolId: string
driverNodeTypeId: string
enableElasticDisk: false
enableLocalDiskEncryption: false
gcpAttributes:
availability: string
bootDiskSize: 0
googleServiceAccount: string
usePreemptibleExecutors: false
zoneId: string
idempotencyToken: string
initScripts:
- dbfs:
destination: string
file:
destination: string
s3:
cannedAcl: string
destination: string
enableEncryption: false
encryptionType: string
endpoint: string
kmsKey: string
region: string
instancePoolId: string
nodeTypeId: string
numWorkers: 0
policyId: string
singleUserName: string
sparkConf:
string: any
sparkEnvVars:
string: any
sparkVersion: string
sshPublicKeys:
- string
libraries:
- cran:
package: string
repo: string
egg: string
jar: string
maven:
coordinates: string
exclusions:
- string
repo: string
pypi:
package: string
repo: string
whl: string
maxConcurrentRuns: 0
maxRetries: 0
minRetryIntervalMillis: 0
name: string
newCluster:
autoscale:
maxWorkers: 0
minWorkers: 0
autoterminationMinutes: 0
awsAttributes:
availability: string
ebsVolumeCount: 0
ebsVolumeSize: 0
ebsVolumeType: string
firstOnDemand: 0
instanceProfileArn: string
spotBidPricePercent: 0
zoneId: string
azureAttributes:
availability: string
firstOnDemand: 0
spotBidMaxPrice: 0
clusterId: string
clusterLogConf:
dbfs:
destination: string
s3:
cannedAcl: string
destination: string
enableEncryption: false
encryptionType: string
endpoint: string
kmsKey: string
region: string
clusterName: string
customTags:
string: any
dataSecurityMode: string
dockerImage:
basicAuth:
password: string
username: string
url: string
driverInstancePoolId: string
driverNodeTypeId: string
enableElasticDisk: false
enableLocalDiskEncryption: false
gcpAttributes:
availability: string
bootDiskSize: 0
googleServiceAccount: string
usePreemptibleExecutors: false
zoneId: string
idempotencyToken: string
initScripts:
- dbfs:
destination: string
file:
destination: string
s3:
cannedAcl: string
destination: string
enableEncryption: false
encryptionType: string
endpoint: string
kmsKey: string
region: string
instancePoolId: string
nodeTypeId: string
numWorkers: 0
policyId: string
singleUserName: string
sparkConf:
string: any
sparkEnvVars:
string: any
sparkVersion: string
sshPublicKeys:
- string
notebookTask:
baseParameters:
string: any
notebookPath: string
pipelineTask:
pipelineId: string
pythonWheelTask:
entryPoint: string
namedParameters:
string: any
packageName: string
parameters:
- string
retryOnTimeout: false
schedule:
pauseStatus: string
quartzCronExpression: string
timezoneId: string
sparkJarTask:
jarUri: string
mainClassName: string
parameters:
- string
sparkPythonTask:
parameters:
- string
pythonFile: string
sparkSubmitTask:
parameters:
- string
tasks:
- dependsOns:
- taskKey: string
description: string
emailNotifications:
noAlertForSkippedRuns: false
onFailures:
- string
onStarts:
- string
onSuccesses:
- string
existingClusterId: string
jobClusterKey: string
libraries:
- cran:
package: string
repo: string
egg: string
jar: string
maven:
coordinates: string
exclusions:
- string
repo: string
pypi:
package: string
repo: string
whl: string
maxRetries: 0
minRetryIntervalMillis: 0
newCluster:
autoscale:
maxWorkers: 0
minWorkers: 0
autoterminationMinutes: 0
awsAttributes:
availability: string
ebsVolumeCount: 0
ebsVolumeSize: 0
ebsVolumeType: string
firstOnDemand: 0
instanceProfileArn: string
spotBidPricePercent: 0
zoneId: string
azureAttributes:
availability: string
firstOnDemand: 0
spotBidMaxPrice: 0
clusterId: string
clusterLogConf:
dbfs:
destination: string
s3:
cannedAcl: string
destination: string
enableEncryption: false
encryptionType: string
endpoint: string
kmsKey: string
region: string
clusterName: string
customTags:
string: any
dataSecurityMode: string
dockerImage:
basicAuth:
password: string
username: string
url: string
driverInstancePoolId: string
driverNodeTypeId: string
enableElasticDisk: false
enableLocalDiskEncryption: false
gcpAttributes:
availability: string
bootDiskSize: 0
googleServiceAccount: string
usePreemptibleExecutors: false
zoneId: string
idempotencyToken: string
initScripts:
- dbfs:
destination: string
file:
destination: string
s3:
cannedAcl: string
destination: string
enableEncryption: false
encryptionType: string
endpoint: string
kmsKey: string
region: string
instancePoolId: string
nodeTypeId: string
numWorkers: 0
policyId: string
singleUserName: string
sparkConf:
string: any
sparkEnvVars:
string: any
sparkVersion: string
sshPublicKeys:
- string
notebookTask:
baseParameters:
string: any
notebookPath: string
pipelineTask:
pipelineId: string
pythonWheelTask:
entryPoint: string
namedParameters:
string: any
packageName: string
parameters:
- string
retryOnTimeout: false
sparkJarTask:
jarUri: string
mainClassName: string
parameters:
- string
sparkPythonTask:
parameters:
- string
pythonFile: string
sparkSubmitTask:
parameters:
- string
taskKey: string
timeoutSeconds: 0
timeoutSeconds: 0
Job Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
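For instance, assuming the pulumi_databricks Python SDK is imported as databricks, the two sketches below pass the same email_notifications input first as an argument class and then as a dictionary literal; the address is a placeholder.

import pulumi_databricks as databricks

# Object input passed as an argument class ...
job_with_arg_classes = databricks.Job(
    "job-with-arg-classes",
    name="Example job",
    email_notifications=databricks.JobEmailNotificationsArgs(
        on_failures=["ops@example.com"],
    ),
)

# ... or the same object input passed as a dictionary literal.
job_with_dict_literal = databricks.Job(
    "job-with-dict-literal",
    name="Example job",
    email_notifications={
        "on_failures": ["ops@example.com"],
    },
)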
The Job resource accepts the following input properties:
Names below are shown in camelCase, matching the TypeScript and YAML examples above; the other SDKs expose the same properties with their own casing conventions (for example, always_running in Python and AlwaysRunning in C# and Go), and object-typed inputs take the corresponding Args types in each language.
- alwaysRunning bool
- (Bool) Whether this job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new run if nothing is currently running. False by default. Any job runs are started with the parameters specified in the spark_jar_task, spark_submit_task, spark_python_task, or notebook_task blocks.
- emailNotifications JobEmailNotifications
- (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
- existingClusterId string
- The ID of an existing cluster that will be used for all runs of this job. When running jobs on an existing cluster, you may need to restart the cluster manually if it stops responding. We strongly suggest using new_cluster for greater reliability.
- format string
- gitSource JobGitSource
- jobClusters JobJobCluster[]
- libraries JobLibrary[]
- (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource.
- maxConcurrentRuns int
- (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
- maxRetries int
- (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
- minRetryIntervalMillis int
- (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
- name string
- An optional name for the job. The default value is Untitled.
- newCluster JobNewCluster
- Same set of parameters as for the databricks.Cluster resource.
- notebookTask JobNotebookTask
- pipelineTask JobPipelineTask
- pythonWheelTask JobPythonWheelTask
- retryOnTimeout bool
- (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
- schedule JobSchedule
- (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or by sending an API request to runNow. This field is a block and is documented below.
- sparkJarTask JobSparkJarTask
- sparkPythonTask JobSparkPythonTask
- sparkSubmitTask JobSparkSubmitTask
- tasks JobTask[]
- timeoutSeconds int
- (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
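As a minimal sketch tying several of these inputs together, the following Python program declares a single-notebook job on a dedicated cluster with a nightly schedule; the Spark version, node type, notebook path, and cron expression are placeholder values.

import pulumi_databricks as databricks

notebook_job = databricks.Job(
    "notebook-job",
    name="Nightly notebook job",
    max_concurrent_runs=1,
    timeout_seconds=3600,
    retry_on_timeout=False,
    # A dedicated cluster created for each run of the job.
    new_cluster=databricks.JobNewClusterArgs(
        spark_version="placeholder-spark-version",
        node_type_id="placeholder-node-type",
        num_workers=2,
    ),
    # The notebook to run, with parameters passed to each run.
    notebook_task=databricks.JobNotebookTaskArgs(
        notebook_path="/Workspace/jobs/nightly",
        base_parameters={"env": "prod"},
    ),
    # Run every night at 02:00 UTC (Quartz cron syntax).
    schedule=databricks.JobScheduleArgs(
        quartz_cron_expression="0 0 2 * * ?",
        timezone_id="UTC",
    ),
)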
Outputs
All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:
- id string
- The provider-assigned unique ID for this managed resource.
- url string
- URL of the job on the given workspace
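For example, the computed url of the job declared in the earlier sketch can be exported from the stack:

import pulumi

# Export the workspace URL of the job declared above.
pulumi.export("job_url", notebook_job.url)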
Look up Existing Job Resource
Get an existing Job resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: JobState, opts?: CustomResourceOptions): Job
@staticmethod
def get(resource_name: str,
id: str,
opts: Optional[ResourceOptions] = None,
always_running: Optional[bool] = None,
email_notifications: Optional[JobEmailNotificationsArgs] = None,
existing_cluster_id: Optional[str] = None,
format: Optional[str] = None,
git_source: Optional[JobGitSourceArgs] = None,
job_clusters: Optional[Sequence[JobJobClusterArgs]] = None,
libraries: Optional[Sequence[JobLibraryArgs]] = None,
max_concurrent_runs: Optional[int] = None,
max_retries: Optional[int] = None,
min_retry_interval_millis: Optional[int] = None,
name: Optional[str] = None,
new_cluster: Optional[JobNewClusterArgs] = None,
notebook_task: Optional[JobNotebookTaskArgs] = None,
pipeline_task: Optional[JobPipelineTaskArgs] = None,
python_wheel_task: Optional[JobPythonWheelTaskArgs] = None,
retry_on_timeout: Optional[bool] = None,
schedule: Optional[JobScheduleArgs] = None,
spark_jar_task: Optional[JobSparkJarTaskArgs] = None,
spark_python_task: Optional[JobSparkPythonTaskArgs] = None,
spark_submit_task: Optional[JobSparkSubmitTaskArgs] = None,
tasks: Optional[Sequence[JobTaskArgs]] = None,
timeout_seconds: Optional[int] = None,
url: Optional[str] = None) -> Job
func GetJob(ctx *Context, name string, id IDInput, state *JobState, opts ...ResourceOption) (*Job, error)
public static Job Get(string name, Input<string> id, JobState? state, CustomResourceOptions? opts = null)
public static Job get(String name, Output<String> id, JobState state, CustomResourceOptions options)
resources:
  _:
    type: databricks:Job
    get:
      id: ${id}
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
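A minimal Python sketch of looking up an existing job; the job ID shown is a placeholder.

import pulumi
import pulumi_databricks as databricks

# Look up an existing job by its job ID; Pulumi reads its state
# but does not manage the resource.
existing_job = databricks.Job.get("existing-job", "123456")

pulumi.export("existing_job_name", existing_job.name)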
All of the input properties listed above may be supplied as state, along with the following additional property:
- url string
- URL of the job on the given workspace
Supporting Types
JobEmailNotifications, JobEmailNotificationsArgs
- noAlertForSkippedRuns bool
- (Bool) Don't send an alert for skipped runs.
- onFailures string[]
- (List) List of email addresses to notify when a run fails.
- onStarts string[]
- (List) List of email addresses to notify when a run starts.
- onSuccesses string[]
- (List) List of email addresses to notify when a run completes successfully.
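A sketch of a fully specified notification block on a job that runs a notebook on an existing cluster; the addresses, cluster ID, and notebook path are placeholders.

import pulumi_databricks as databricks

notified_job = databricks.Job(
    "notified-job",
    name="Job with email notifications",
    existing_cluster_id="placeholder-cluster-id",
    notebook_task={"notebook_path": "/Workspace/jobs/notify-me"},
    email_notifications=databricks.JobEmailNotificationsArgs(
        no_alert_for_skipped_runs=True,
        on_starts=["team@example.com"],
        on_successes=["team@example.com"],
        on_failures=["oncall@example.com"],
    ),
)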
JobGitSource, JobGitSourceArgs
JobJobCluster, JobJobClusterArgs
- jobClusterKey string
- Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
- newCluster JobJobClusterNewCluster
- Same set of parameters as for the databricks.Cluster resource.
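A sketch of two notebook tasks sharing one reusable job cluster through job_cluster_key; nested task inputs are written as dictionary literals, and the Spark version, node type, and notebook paths are placeholders.

import pulumi_databricks as databricks

multi_task_job = databricks.Job(
    "multi-task-job",
    name="Multi-task job with a shared cluster",
    # One reusable cluster definition, shared by every task that references its key.
    job_clusters=[databricks.JobJobClusterArgs(
        job_cluster_key="shared",
        new_cluster={
            "spark_version": "placeholder-spark-version",
            "node_type_id": "placeholder-node-type",
            "autoscale": {"min_workers": 1, "max_workers": 4},
        },
    )],
    tasks=[
        databricks.JobTaskArgs(
            task_key="ingest",
            job_cluster_key="shared",
            notebook_task={"notebook_path": "/Workspace/jobs/ingest"},
        ),
        databricks.JobTaskArgs(
            task_key="transform",
            job_cluster_key="shared",
            # Run only after the ingest task has completed.
            depends_ons=[{"task_key": "ingest"}],
            notebook_task={"notebook_path": "/Workspace/jobs/transform"},
        ),
    ],
)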
JobJobClusterNewCluster, JobJobClusterNewClusterArgs
- numWorkers int
- sparkVersion string
- autoscale JobJobClusterNewClusterAutoscale
- autoterminationMinutes int
- awsAttributes JobJobClusterNewClusterAwsAttributes
- azureAttributes JobJobClusterNewClusterAzureAttributes
- clusterId string
- clusterLogConf JobJobClusterNewClusterClusterLogConf
- clusterName string
- customTags Map<string, any>
- dataSecurityMode string
- dockerImage JobJobClusterNewClusterDockerImage
- driverInstancePoolId string
- driverNodeTypeId string
- enableElasticDisk bool
- enableLocalDiskEncryption bool
- gcpAttributes JobJobClusterNewClusterGcpAttributes
- idempotencyToken string
- initScripts JobJobClusterNewClusterInitScript[]
- instancePoolId string
- nodeTypeId string
- policyId string
- singleUserName string
- sparkConf Map<string, any>
- sparkEnvVars Map<string, any>
- sshPublicKeys string[]
JobJobClusterNewClusterAutoscale, JobJobClusterNewClusterAutoscaleArgs
- maxWorkers int
- minWorkers int
JobJobClusterNewClusterAwsAttributes, JobJobClusterNewClusterAwsAttributesArgs
- availability string
- ebsVolumeCount int
- ebsVolumeSize int
- ebsVolumeType string
- firstOnDemand int
- instanceProfileArn string
- spotBidPricePercent int
- zoneId string
JobJobClusterNewClusterAzureAttributes, JobJobClusterNewClusterAzureAttributesArgs
- availability string
- firstOnDemand int
- spotBidMaxPrice double
JobJobClusterNewClusterClusterLogConf, JobJobClusterNewClusterClusterLogConfArgs
JobJobClusterNewClusterClusterLogConfDbfs, JobJobClusterNewClusterClusterLogConfDbfsArgs
- destination string
JobJobClusterNewClusterClusterLogConfS3, JobJobClusterNewClusterClusterLogConfS3Args
- destination string
- cannedAcl string
- enableEncryption bool
- encryptionType string
- endpoint string
- kmsKey string
- region string
JobJobClusterNewClusterDockerImage, JobJobClusterNewClusterDockerImageArgs
- url string
- URL of the Docker image.
- basicAuth JobJobClusterNewClusterDockerImageBasicAuth
JobJobClusterNewClusterDockerImageBasicAuth, JobJobClusterNewClusterDockerImageBasicAuthArgs
JobJobClusterNewClusterGcpAttributes, JobJobClusterNewClusterGcpAttributesArgs
- availability string
- bootDiskSize int
- googleServiceAccount string
- usePreemptibleExecutors bool
- zoneId string
JobJobClusterNewClusterInitScript, JobJobClusterNewClusterInitScriptArgs
JobJobClusterNewClusterInitScriptDbfs, JobJobClusterNewClusterInitScriptDbfsArgs
- destination string
JobJobClusterNewClusterInitScriptFile, JobJobClusterNewClusterInitScriptFileArgs
- destination string
JobJobClusterNewClusterInitScriptS3, JobJobClusterNewClusterInitScriptS3Args
- destination string
- cannedAcl string
- enableEncryption bool
- encryptionType string
- endpoint string
- kmsKey string
- region string
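As a sketch, a job cluster can ship its logs to DBFS and run an init script stored on DBFS by combining these blocks inside new_cluster; all paths and cluster settings are placeholders.

import pulumi_databricks as databricks

logged_job = databricks.Job(
    "logged-job",
    name="Job with cluster logs and an init script",
    new_cluster={
        "spark_version": "placeholder-spark-version",
        "node_type_id": "placeholder-node-type",
        "num_workers": 1,
        # Ship driver and executor logs to a DBFS location.
        "cluster_log_conf": {
            "dbfs": {"destination": "dbfs:/cluster-logs"},
        },
        # Run an init script stored on DBFS when the cluster starts.
        "init_scripts": [
            {"dbfs": {"destination": "dbfs:/init-scripts/setup.sh"}},
        ],
    },
    notebook_task={"notebook_path": "/Workspace/jobs/with-logs"},
)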
JobLibrary, JobLibraryArgs
- cran JobLibraryCran
- egg string
- jar string
- maven JobLibraryMaven
- pypi JobLibraryPypi
- whl string
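A sketch attaching a PyPI package and a Maven artifact to the cluster that executes the job; the package names, coordinates, and file path are placeholders.

import pulumi_databricks as databricks

library_job = databricks.Job(
    "library-job",
    name="Job with libraries",
    new_cluster={
        "spark_version": "placeholder-spark-version",
        "node_type_id": "placeholder-node-type",
        "num_workers": 1,
    },
    # Libraries are installed on the cluster that executes the job.
    libraries=[
        databricks.JobLibraryArgs(pypi={"package": "some-package==1.0.0"}),
        databricks.JobLibraryArgs(maven={"coordinates": "org.example:some-artifact:1.0.0"}),
    ],
    spark_python_task={
        "python_file": "dbfs:/scripts/main.py",
        "parameters": ["--env", "prod"],
    },
)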
JobLibraryCran, JobLibraryCranArgs
JobLibraryMaven, JobLibraryMavenArgs
- coordinates string
- exclusions string[]
- repo string
JobLibraryPypi, JobLibraryPypiArgs
JobNewCluster, JobNewClusterArgs
- Spark Version string
- Autoscale JobNewClusterAutoscale
- Autotermination Minutes int
- Aws Attributes JobNewClusterAwsAttributes
- Azure Attributes JobNewClusterAzureAttributes
- Cluster Id string
- Cluster Log Conf JobNewClusterClusterLogConf
- Cluster Name string
- Custom Tags Dictionary<string, object>
- Data Security Mode string
- Docker Image JobNewClusterDockerImage
- Driver Instance Pool Id string
- Driver Node Type Id string
- Enable Elastic Disk bool
- Enable Local Disk Encryption bool
- Gcp Attributes JobNewClusterGcpAttributes
- Idempotency Token string
- Init Scripts List<JobNewClusterInitScript>
- Instance Pool Id string
- Node Type Id string
- Num Workers int
- Policy Id string
- Single User Name string
- Spark Conf Dictionary<string, object>
- Spark Env Vars Dictionary<string, object>
- Ssh Public Keys List<string>
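The new_cluster block mirrors the arguments of the databricks.Cluster resource. A minimal sketch in Pulumi Python, assuming placeholder Spark version, node type, and notebook path (substitute values available in your workspace):

import pulumi_databricks as databricks

# Sketch only: the runtime string, node type, and notebook path are placeholders.
job = databricks.Job("nightly-job",
    new_cluster=databricks.JobNewClusterArgs(
        spark_version="13.3.x-scala2.12",
        node_type_id="i3.xlarge",
        autoscale=databricks.JobNewClusterAutoscaleArgs(
            min_workers=1,
            max_workers=4,
        ),
        spark_conf={"spark.speculation": "true"},
    ),
    notebook_task=databricks.JobNotebookTaskArgs(
        notebook_path="/Shared/etl/nightly",
    ))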
JobNewClusterAutoscale, JobNewClusterAutoscaleArgs
- Max Workers int
- Min Workers int
JobNewClusterAwsAttributes, JobNewClusterAwsAttributesArgs
- Availability string
- Ebs Volume Count int
- Ebs Volume Size int
- Ebs Volume Type string
- First On Demand int
- Instance Profile Arn string
- Spot Bid Price Percent int
- Zone Id string
JobNewClusterAzureAttributes, JobNewClusterAzureAttributesArgs
- Availability string
- First On Demand int
- Spot Bid Max Price double
JobNewClusterClusterLogConf, JobNewClusterClusterLogConfArgs
JobNewClusterClusterLogConfDbfs, JobNewClusterClusterLogConfDbfsArgs
- Destination string
JobNewClusterClusterLogConfS3, JobNewClusterClusterLogConfS3Args
- Destination string
- Canned Acl string
- Enable Encryption bool
- Encryption Type string
- Endpoint string
- Kms Key string
- Region string
JobNewClusterDockerImage, JobNewClusterDockerImageArgs
- Url string - URL of the Docker image.
- Basic Auth JobNewClusterDockerImageBasicAuth
JobNewClusterDockerImageBasicAuth, JobNewClusterDockerImageBasicAuthArgs
JobNewClusterGcpAttributes, JobNewClusterGcpAttributesArgs
- Availability string
- Boot Disk Size int
- Google Service Account string
- Use Preemptible Executors bool
- Zone Id string
JobNewClusterInitScript, JobNewClusterInitScriptArgs
JobNewClusterInitScriptDbfs, JobNewClusterInitScriptDbfsArgs
- Destination string
JobNewClusterInitScriptFile, JobNewClusterInitScriptFileArgs
- Destination string
JobNewClusterInitScriptS3, JobNewClusterInitScriptS3Args
- Destination string
- Canned Acl string
- Enable Encryption bool
- Encryption Type string
- Endpoint string
- Kms Key string
- Region string
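Init scripts attach to a new_cluster through its init_scripts list; each entry picks one location type (dbfs, file, or s3). A hedged sketch assuming a placeholder S3 bucket and cluster settings:

import pulumi_databricks as databricks

# Sketch only: the bucket, region, runtime, and node type are placeholders.
new_cluster = databricks.JobNewClusterArgs(
    spark_version="13.3.x-scala2.12",
    node_type_id="i3.xlarge",
    num_workers=2,
    init_scripts=[databricks.JobNewClusterInitScriptArgs(
        s3=databricks.JobNewClusterInitScriptS3Args(
            destination="s3://my-init-bucket/scripts/setup.sh",
            region="us-east-1",
        ),
    )])

The resulting value would be passed as the new_cluster argument of databricks.Job.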
JobNotebookTask, JobNotebookTaskArgs
- Notebook Path string - The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
- Base Parameters Dictionary<string, object> - (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
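Base parameters reach the notebook as widget values. A minimal sketch, assuming a placeholder existing cluster ID and notebook path; inside the notebook the value would be read with dbutils.widgets.get("env"):

import pulumi_databricks as databricks

# Sketch only: existing_cluster_id and the notebook path are placeholders.
job = databricks.Job("notebook-job",
    existing_cluster_id="1234-567890-abcde123",
    notebook_task=databricks.JobNotebookTaskArgs(
        notebook_path="/Shared/reports/daily",
        base_parameters={"env": "dev"},  # read in the notebook via dbutils.widgets.get("env")
    ))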
JobPipelineTask, JobPipelineTaskArgs
- Pipeline Id string - The pipeline's unique ID.
JobPythonWheelTask, JobPythonWheelTaskArgs
- Entry Point string - Python function as entry point for the task
- Named Parameters Dictionary<string, object> - Named parameters for the task
- Package Name string - Name of Python package
- Parameters List<string> - Parameters for the task
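A python_wheel_task runs a named entry point from a wheel installed through the libraries block. A sketch with placeholder package, wheel, and cluster values:

import pulumi_databricks as databricks

# Sketch only: the wheel path, package name, entry point, and cluster spec are placeholders.
job = databricks.Job("wheel-job",
    new_cluster=databricks.JobNewClusterArgs(
        spark_version="13.3.x-scala2.12",
        node_type_id="i3.xlarge",
        num_workers=1,
    ),
    libraries=[databricks.JobLibraryArgs(
        whl="dbfs:/FileStore/wheels/my_pkg-0.1.0-py3-none-any.whl")],
    python_wheel_task=databricks.JobPythonWheelTaskArgs(
        package_name="my_pkg",
        entry_point="main",
        named_parameters={"env": "dev"},
    ))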
JobSchedule, JobScheduleArgs
- Quartz Cron Expression string - A Cron expression using Quartz syntax that describes the schedule for a job. This field is required.
- Timezone Id string - A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.
- Pause Status string - Indicate whether this schedule is paused or not. Either "PAUSED" or "UNPAUSED". When the pause_status field is omitted and a schedule is provided, the server will default to using "UNPAUSED" as a value for pause_status.
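The schedule block combines a Quartz cron expression with a Java timezone ID; pause_status toggles the trigger without deleting it. A sketch that would run a job every day at 06:00 UTC:

import pulumi_databricks as databricks

# Sketch only: Quartz syntax is seconds, minutes, hours, day-of-month, month, day-of-week.
schedule = databricks.JobScheduleArgs(
    quartz_cron_expression="0 0 6 * * ?",  # every day at 06:00
    timezone_id="UTC",
    pause_status="UNPAUSED",
)

The resulting value would be passed as the schedule argument of databricks.Job.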
JobSparkJarTask, JobSparkJarTaskArgs
- Jar Uri string
- Main Class Name string - The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
- Parameters List<string> - Parameters for the task
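The JAR itself is supplied through the libraries block, while main_class_name names the class whose main method is executed (and that class should obtain its context via SparkContext.getOrCreate). A sketch with placeholder artifact path, class name, and cluster ID:

import pulumi_databricks as databricks

# Sketch only: the DBFS jar path, class name, and cluster ID are placeholders.
job = databricks.Job("jar-job",
    existing_cluster_id="1234-567890-abcde123",
    libraries=[databricks.JobLibraryArgs(jar="dbfs:/FileStore/jars/etl-assembly.jar")],
    spark_jar_task=databricks.JobSparkJarTaskArgs(
        main_class_name="com.example.etl.Main",
        parameters=["--date", "2026-03-09"],
    ))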
JobSparkPythonTask, JobSparkPythonTaskArgs
- Python File string - The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
- Parameters List<string> - Parameters for the task
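spark_python_task points at a Python file on DBFS or S3 and passes plain string parameters. A sketch with placeholder URIs and cluster settings:

import pulumi_databricks as databricks

# Sketch only: the script URI, source bucket, runtime, and node type are placeholders.
job = databricks.Job("pyspark-job",
    new_cluster=databricks.JobNewClusterArgs(
        spark_version="13.3.x-scala2.12",
        node_type_id="i3.xlarge",
        num_workers=2,
    ),
    spark_python_task=databricks.JobSparkPythonTaskArgs(
        python_file="dbfs:/scripts/ingest.py",
        parameters=["--source", "s3://my-bucket/raw"],
    ))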
JobSparkSubmitTask, JobSparkSubmitTaskArgs
- Parameters List<string> - Parameters for the task
JobTask, JobTaskArgs
- Depends Ons List<JobTaskDependsOn>
- Description string
- Email Notifications JobTaskEmailNotifications - (List) An optional set of email addresses notified when runs of this job begin and complete and when this job is deleted. The default behavior is to not send any emails. This field is a block and is documented below.
- Existing Cluster Id string - If set, the ID of an existing cluster that will be used for all runs of this task. When running tasks on an existing cluster, you may need to manually restart the cluster if it stops responding. We strongly suggest using new_cluster for greater reliability.
- Job Cluster Key string - Identifier that can be referenced in a task block, so that the cluster is shared between tasks.
- Libraries List<JobTaskLibrary> - (Set) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the libraries section of the databricks.Cluster resource.
- Max Retries int - (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED result_state or INTERNAL_ERROR life_cycle_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry.
- Min Retry Interval Millis int - (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
- New Cluster JobTaskNewCluster - Same set of parameters as for the databricks.Cluster resource.
- Notebook Task JobTaskNotebookTask
- Pipeline Task JobTaskPipelineTask
- Python Wheel Task JobTaskPythonWheelTask
- Retry On Timeout bool - (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.
- Spark Jar Task JobTaskSparkJarTask
- Spark Python Task JobTaskSparkPythonTask
- Spark Submit Task JobTaskSparkSubmitTask
- Task Key string
- Timeout Seconds int - (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
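Multi-task jobs wire tasks together with task_key and depends_ons, and can share one cluster across tasks through job_cluster_key. A sketch with two sequential notebook tasks on a shared job cluster; all names, paths, and cluster values are placeholders:

import pulumi_databricks as databricks

# Sketch only: the job cluster spec, notebook paths, and task keys are placeholders.
job = databricks.Job("pipeline-job",
    job_clusters=[databricks.JobJobClusterArgs(
        job_cluster_key="shared",
        new_cluster=databricks.JobJobClusterNewClusterArgs(
            spark_version="13.3.x-scala2.12",
            node_type_id="i3.xlarge",
            num_workers=2,
        ),
    )],
    tasks=[
        databricks.JobTaskArgs(
            task_key="ingest",
            job_cluster_key="shared",
            notebook_task=databricks.JobTaskNotebookTaskArgs(notebook_path="/Shared/etl/ingest"),
        ),
        databricks.JobTaskArgs(
            task_key="transform",
            job_cluster_key="shared",
            depends_ons=[databricks.JobTaskDependsOnArgs(task_key="ingest")],
            notebook_task=databricks.JobTaskNotebookTaskArgs(notebook_path="/Shared/etl/transform"),
        ),
    ])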
JobTaskDependsOn, JobTaskDependsOnArgs
- Task Key string
JobTaskEmailNotifications, JobTaskEmailNotificationsArgs
- No Alert For Skipped Runs bool - (Bool) don't send alert for skipped runs
- On Failures List<string> - (List) list of emails to notify on failure
- On Starts List<string> - (List) list of emails to notify when a run starts
- On Successes List<string> - (List) list of emails to notify when a run completes successfully
JobTaskLibrary, JobTaskLibraryArgs
- Cran JobTaskLibraryCran
- Egg string
- Jar string
- Maven JobTaskLibraryMaven
- Pypi JobTaskLibraryPypi
- Whl string
JobTaskLibraryCran, JobTaskLibraryCranArgs
JobTaskLibraryMaven, JobTaskLibraryMavenArgs
- Coordinates string
- Exclusions List<string>
- Repo string
JobTaskLibraryPypi, JobTaskLibraryPypiArgs
JobTaskNewCluster, JobTaskNewClusterArgs
- Spark Version string
- Autoscale JobTaskNewClusterAutoscale
- Autotermination Minutes int
- Aws Attributes JobTaskNewClusterAwsAttributes
- Azure Attributes JobTaskNewClusterAzureAttributes
- Cluster Id string
- Cluster Log Conf JobTaskNewClusterClusterLogConf
- Cluster Name string
- Custom Tags Dictionary<string, object>
- Data Security Mode string
- Docker Image JobTaskNewClusterDockerImage
- Driver Instance Pool Id string
- Driver Node Type Id string
- Enable Elastic Disk bool
- Enable Local Disk Encryption bool
- Gcp Attributes JobTaskNewClusterGcpAttributes
- Idempotency Token string
- Init Scripts List<JobTaskNewClusterInitScript>
- Instance Pool Id string
- Node Type Id string
- Num Workers int
- Policy Id string
- Single User Name string
- Spark Conf Dictionary<string, object>
- Spark Env Vars Dictionary<string, object>
- Ssh Public Keys List<string>
JobTaskNewClusterAutoscale, JobTaskNewClusterAutoscaleArgs
- Max Workers int
- Min Workers int
JobTaskNewClusterAwsAttributes, JobTaskNewClusterAwsAttributesArgs
- Availability string
- Ebs Volume Count int
- Ebs Volume Size int
- Ebs Volume Type string
- First On Demand int
- Instance Profile Arn string
- Spot Bid Price Percent int
- Zone Id string
JobTaskNewClusterAzureAttributes, JobTaskNewClusterAzureAttributesArgs
- Availability string
- First On Demand int
- Spot Bid Max Price double
JobTaskNewClusterClusterLogConf, JobTaskNewClusterClusterLogConfArgs
JobTaskNewClusterClusterLogConfDbfs, JobTaskNewClusterClusterLogConfDbfsArgs
- Destination string
JobTaskNewClusterClusterLogConfS3, JobTaskNewClusterClusterLogConfS3Args
- Destination string
- Canned Acl string
- Enable Encryption bool
- Encryption Type string
- Endpoint string
- Kms Key string
- Region string
JobTaskNewClusterDockerImage, JobTaskNewClusterDockerImageArgs
- Url string - URL of the Docker image.
- Basic Auth JobTaskNewClusterDockerImageBasicAuth
JobTaskNewClusterDockerImageBasicAuth, JobTaskNewClusterDockerImageBasicAuthArgs
JobTaskNewClusterGcpAttributes, JobTaskNewClusterGcpAttributesArgs
- Availability string
- Boot Disk Size int
- Google Service Account string
- Use Preemptible Executors bool
- Zone Id string
JobTaskNewClusterInitScript, JobTaskNewClusterInitScriptArgs
JobTaskNewClusterInitScriptDbfs, JobTaskNewClusterInitScriptDbfsArgs
- Destination string
JobTaskNewClusterInitScriptFile, JobTaskNewClusterInitScriptFileArgs
- Destination string
JobTaskNewClusterInitScriptS3, JobTaskNewClusterInitScriptS3Args
- Destination string
- Canned Acl string
- Enable Encryption bool
- Encryption Type string
- Endpoint string
- Kms Key string
- Region string
JobTaskNotebookTask, JobTaskNotebookTaskArgs
- Notebook Path string - The absolute path of the databricks.Notebook to be run in the Databricks workspace. This path must begin with a slash. This field is required.
- Base Parameters Dictionary<string, object> - (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using dbutils.widgets.get.
JobTaskPipelineTask, JobTaskPipelineTaskArgs
- Pipeline Id string - The pipeline's unique ID.
JobTaskPythonWheelTask, JobTaskPythonWheelTaskArgs
- Entry Point string - Python function as entry point for the task
- Named Parameters Dictionary<string, object> - Named parameters for the task
- Package Name string - Name of Python package
- Parameters List<string> - Parameters for the task
JobTaskSparkJarTask, JobTaskSparkJarTaskArgs
- Jar Uri string
- Main Class Name string - The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use SparkContext.getOrCreate to obtain a Spark context; otherwise, runs of the job will fail.
- Parameters List<string> - Parameters for the task
JobTaskSparkPythonTask, JobTaskSparkPythonTaskArgs
- Python File string - The URI of the Python file to be executed. databricks.DbfsFile and S3 paths are supported. This field is required.
- Parameters List<string> - Parameters for the task
JobTaskSparkSubmitTask, JobTaskSparkSubmitTaskArgs
- Parameters List<string> - Parameters for the task
Package Details
- Repository
- databricks pulumi/pulumi-databricks
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the databricks Terraform Provider.
