Google Native

Pulumi Official
Package maintained by Pulumi
v0.18.2 published on Monday, May 2, 2022 by Pulumi

getSchedule

Gets details of a schedule.

Using getSchedule

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getSchedule(args: GetScheduleArgs, opts?: InvokeOptions): Promise<GetScheduleResult>
function getScheduleOutput(args: GetScheduleOutputArgs, opts?: InvokeOptions): Output<GetScheduleResult>
def get_schedule(location: Optional[str] = None,
                 project: Optional[str] = None,
                 schedule_id: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetScheduleResult
def get_schedule_output(location: Optional[pulumi.Input[str]] = None,
                 project: Optional[pulumi.Input[str]] = None,
                 schedule_id: Optional[pulumi.Input[str]] = None,
                 opts: Optional[InvokeOptions] = None) -> Output[GetScheduleResult]
func LookupSchedule(ctx *Context, args *LookupScheduleArgs, opts ...InvokeOption) (*LookupScheduleResult, error)
func LookupScheduleOutput(ctx *Context, args *LookupScheduleOutputArgs, opts ...InvokeOption) LookupScheduleResultOutput

> Note: This function is named LookupSchedule in the Go SDK.

public static class GetSchedule 
{
    public static Task<GetScheduleResult> InvokeAsync(GetScheduleArgs args, InvokeOptions? opts = null)
    public static Output<GetScheduleResult> Invoke(GetScheduleInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
Fn::Invoke:
  Function: google-native:notebooks/v1:getSchedule
  Arguments:
    # Arguments dictionary
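For instance, a Pulumi YAML program might use the invoke like this (the project, location, and schedule ID values below are placeholders, not defaults):

```yaml
variables:
  mySchedule:
    Fn::Invoke:
      Function: google-native:notebooks/v1:getSchedule
      Arguments:
        project: my-project        # placeholder
        location: us-central1     # placeholder
        scheduleId: my-schedule   # placeholder
outputs:
  cronSchedule: ${mySchedule.cronSchedule}
```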

The following arguments are supported:

Location string
ScheduleId string
Project string
Location string
ScheduleId string
Project string
location String
scheduleId String
project String
location string
scheduleId string
project string
location String
scheduleId String
project String

getSchedule Result

The following output properties are available:

CreateTime string

Time the schedule was created.

CronSchedule string

Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html

Description string

A brief description of this schedule.

DisplayName string

Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.

ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionTemplateResponse

Notebook Execution Template corresponding to this schedule.

Name string

The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse>

The most recent execution names triggered from this schedule and their corresponding states.

State string
TimeZone string

Time zone in which the cron_schedule will be interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen time zone. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

UpdateTime string

Time the schedule was last updated.

CreateTime string

Time the schedule was created.

CronSchedule string

Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html

Description string

A brief description of this schedule.

DisplayName string

Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.

ExecutionTemplate ExecutionTemplateResponse

Notebook Execution Template corresponding to this schedule.

Name string

The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

RecentExecutions []ExecutionResponse

The most recent execution names triggered from this schedule and their corresponding states.

State string
TimeZone string

Time zone in which the cron_schedule will be interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen time zone. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

UpdateTime string

Time the schedule was last updated.

createTime String

Time the schedule was created.

cronSchedule String

Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html

description String

A brief description of this schedule.

displayName String

Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.

executionTemplate ExecutionTemplateResponse

Notebook Execution Template corresponding to this schedule.

name String

The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

recentExecutions List&lt;ExecutionResponse&gt;

The most recent execution names triggered from this schedule and their corresponding states.

state String
timeZone String

Time zone in which the cron_schedule will be interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen time zone. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

updateTime String

Time the schedule was last updated.

createTime string

Time the schedule was created.

cronSchedule string

Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html

description string

A brief description of this schedule.

displayName string

Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.

executionTemplate ExecutionTemplateResponse

Notebook Execution Template corresponding to this schedule.

name string

The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

recentExecutions ExecutionResponse[]

The most recent execution names triggered from this schedule and their corresponding states.

state string
timeZone string

Time zone in which the cron_schedule will be interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen time zone. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

updateTime string

Time the schedule was last updated.

create_time str

Time the schedule was created.

cron_schedule str

Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html

description str

A brief description of this schedule.

display_name str

Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.

execution_template ExecutionTemplateResponse

Notebook Execution Template corresponding to this schedule.

name str

The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

recent_executions Sequence[ExecutionResponse]

The most recent execution names triggered from this schedule and their corresponding states.

state str
time_zone str

Time zone in which the cron_schedule will be interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen time zone. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

update_time str

Time the schedule was last updated.

createTime String

Time the schedule was created.

cronSchedule String

Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html

description String

A brief description of this schedule.

displayName String

Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.

executionTemplate Property Map

Notebook Execution Template corresponding to this schedule.

name String

The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

recentExecutions List&lt;Property Map&gt;

The most recent execution names triggered from this schedule and their corresponding states.

state String
timeZone String

Time zone in which the cron_schedule will be interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen time zone. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

updateTime String

Time the schedule was last updated.
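The interplay of cron_schedule and time_zone above can be sketched with the Python standard library alone. The helper below is an illustration, not part of any SDK: it handles only the simple `minute hour * * DOW` shape used in the cron example (e.g. `0 0 * * WED`), evaluated in the schedule's time zone.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Map cron day-of-week names to Python's weekday() numbering (Mon=0 ... Sun=6).
DOW = {"MON": 0, "TUE": 1, "WED": 2, "THU": 3, "FRI": 4, "SAT": 5, "SUN": 6}

def next_run(cron: str, after: datetime) -> datetime:
    """Next fire time for a 'minute hour * * DOW' schedule, in `after`'s zone."""
    minute, hour, _, _, dow = cron.split()
    target = DOW[dow]
    candidate = after.replace(hour=int(hour), minute=int(minute),
                              second=0, microsecond=0)
    # Walk forward day by day until we land on the right weekday,
    # strictly after the reference instant.
    while candidate.weekday() != target or candidate <= after:
        candidate += timedelta(days=1)
    return candidate

# "0 0 * * WED" interpreted in the schedule's time_zone:
now = datetime(2024, 1, 1, 12, 0, tzinfo=ZoneInfo("America/New_York"))
print(next_run("0 0 * * WED", now))  # midnight, Wed 2024-01-03, New York time
```

Because the arithmetic is done on the wall clock of the chosen zone, two schedules with the same cron_schedule but different time_zone values fire at different UTC instants.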

Supporting Types

DataprocParametersResponse

Cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

Cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster String

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster str

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster String

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
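The cluster URI format above can be validated with a small regex. The helper below is a hypothetical illustration, not part of the provider SDK:

```python
import re

# Matches a Dataproc cluster URI of the form
# projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
CLUSTER_RE = re.compile(
    r"^projects/(?P<project>[^/]+)/regions/(?P<region>[^/]+)/clusters/(?P<cluster>[^/]+)$"
)

def parse_cluster_uri(uri: str) -> dict:
    """Split a Dataproc cluster URI into its project/region/cluster parts."""
    m = CLUSTER_RE.match(uri)
    if not m:
        raise ValueError(f"not a Dataproc cluster URI: {uri}")
    return m.groupdict()

print(parse_cluster_uri("projects/my-proj/regions/us-central1/clusters/demo"))
# {'project': 'my-proj', 'region': 'us-central1', 'cluster': 'demo'}
```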

ExecutionResponse

CreateTime string

Time the Execution was instantiated.

Description string

A brief description of this execution.

DisplayName string

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse

Execution metadata, including name, hardware spec, region, labels, etc.

JobUri string

The URI of the external job used to execute the notebook.

Name string

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

OutputNotebookFile string

Output notebook file generated by this execution.

State string

State of the underlying AI Platform job.

UpdateTime string

Time the Execution was last updated.

CreateTime string

Time the Execution was instantiated.

Description string

A brief description of this execution.

DisplayName string

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

ExecutionTemplate ExecutionTemplateResponse

Execution metadata, including name, hardware spec, region, labels, etc.

JobUri string

The URI of the external job used to execute the notebook.

Name string

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

OutputNotebookFile string

Output notebook file generated by this execution.

State string

State of the underlying AI Platform job.

UpdateTime string

Time the Execution was last updated.

createTime String

Time the Execution was instantiated.

description String

A brief description of this execution.

displayName String

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

executionTemplate ExecutionTemplateResponse

Execution metadata, including name, hardware spec, region, labels, etc.

jobUri String

The URI of the external job used to execute the notebook.

name String

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

outputNotebookFile String

Output notebook file generated by this execution.

state String

State of the underlying AI Platform job.

updateTime String

Time the Execution was last updated.

createTime string

Time the Execution was instantiated.

description string

A brief description of this execution.

displayName string

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

executionTemplate ExecutionTemplateResponse

Execution metadata, including name, hardware spec, region, labels, etc.

jobUri string

The URI of the external job used to execute the notebook.

name string

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

outputNotebookFile string

Output notebook file generated by this execution.

state string

State of the underlying AI Platform job.

updateTime string

Time the Execution was last updated.

create_time str

Time the Execution was instantiated.

description str

A brief description of this execution.

display_name str

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

execution_template ExecutionTemplateResponse

Execution metadata, including name, hardware spec, region, labels, etc.

job_uri str

The URI of the external job used to execute the notebook.

name str

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

output_notebook_file str

Output notebook file generated by this execution.

state str

State of the underlying AI Platform job.

update_time str

Time the Execution was last updated.

createTime String

Time the Execution was instantiated.

description String

A brief description of this execution.

displayName String

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

executionTemplate Property Map

Execution metadata, including name, hardware spec, region, labels, etc.

jobUri String

The URI of the external job used to execute the notebook.

name String

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

outputNotebookFile String

Output notebook file generated by this execution.

state String

State of the underlying AI Platform job.

updateTime String

Time the Execution was last updated.

ExecutionTemplateResponse

AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

ContainerImageUri string

Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse

Parameters used in Dataproc JobType executions.

InputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

JobType string

The type of Job to be used on this execution.

KernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

Labels Dictionary<string, string>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

MasterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

OutputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

Parameters string

Parameters used within the 'input_notebook_file' notebook.

ParamsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

ScaleTier string

Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

ServiceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

Tensorboard string

The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

AcceleratorConfig SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

ContainerImageUri string

Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

DataprocParameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

InputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

JobType string

The type of Job to be used on this execution.

KernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

Labels map[string]string

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

MasterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

OutputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

Parameters string

Parameters used within the 'input_notebook_file' notebook.

ParamsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

ScaleTier string

Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

ServiceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

Tensorboard string

The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

VertexAiParameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

acceleratorConfig SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri String

Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

inputNotebookFile String

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType String

The type of Job to be used on this execution.

kernelSpec String

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Map&lt;String,String&gt;

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType String

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder String

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters String

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile String

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scaleTier String

Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

serviceAccount String

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard String

The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

acceleratorConfig SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri string

Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

inputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType string

The type of Job to be used on this execution.

kernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels {[key: string]: string}

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters string

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scaleTier string

Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

serviceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard string

The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

accelerator_config SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

container_image_uri str

Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataproc_parameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

input_notebook_file str

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

job_type str

The type of Job to be used on this execution.

kernel_spec str

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Mapping[str, str]

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

master_type str

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

output_notebook_folder str

Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks

parameters str

Parameters used within the 'input_notebook_file' notebook.

params_yaml_file str

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scale_tier str

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

service_account str

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard str

The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertex_ai_parameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.
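The input_notebook_file, output_notebook_folder, and params_yaml_file fields above all take Cloud Storage paths in the documented gs://{bucket_name}/... formats. A minimal sketch of assembling such paths (plain Python, no Pulumi runtime; the helper name and bucket are illustrative, not part of the SDK):

```python
# Sketch: build the gs:// paths that input_notebook_file,
# output_notebook_folder, and params_yaml_file expect.
# Bucket and object names below are illustrative.

def gcs_uri(bucket: str, *parts: str) -> str:
    """Join a bucket and path segments into a gs://{bucket}/{...} URI."""
    return "gs://" + "/".join([bucket, *parts])

bucket = "notebook_user"  # hypothetical bucket name
input_notebook_file = gcs_uri(bucket, "scheduled_notebooks", "sentiment_notebook.ipynb")
output_notebook_folder = gcs_uri(bucket, "scheduled_notebooks")
params_yaml_file = gcs_uri(bucket, "scheduled_notebooks", "sentiment_notebook_params.yaml")

print(input_notebook_file)
# gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
```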

acceleratorConfig Property Map

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri String

Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters Property Map

Parameters used in Dataproc JobType executions.

inputNotebookFile String

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType String

The type of Job to be used on this execution.

kernelSpec String

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Map

Labels for the execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use these fields to efficiently index between the various types of executions.

masterType String

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder String

Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks

parameters String

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile String

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scaleTier String

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

serviceAccount String

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard String

The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters Property Map

Parameters used in Vertex AI JobType executions.
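The tensorboard field above takes a fully qualified resource name in the documented projects/{project}/locations/{location}/tensorboards/{tensorboard} format. A small, hypothetical helper sketching that format (not part of the SDK):

```python
# Sketch: format a Vertex AI Tensorboard resource name in the documented
# projects/{project}/locations/{location}/tensorboards/{tensorboard} format.
# The helper name and example values are illustrative.

def tensorboard_name(project: str, location: str, tensorboard: str) -> str:
    return f"projects/{project}/locations/{location}/tensorboards/{tensorboard}"

print(tensorboard_name("my-project", "us-central1", "my-tb"))
# projects/my-project/locations/us-central1/tensorboards/my-tb
```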

SchedulerAcceleratorConfigResponse

CoreCount string

Count of cores of this accelerator.

Type string

Type of this accelerator.

CoreCount string

Count of cores of this accelerator.

Type string

Type of this accelerator.

coreCount String

Count of cores of this accelerator.

type String

Type of this accelerator.

coreCount string

Count of cores of this accelerator.

type string

Type of this accelerator.

core_count str

Count of cores of this accelerator.

type str

Type of this accelerator.

coreCount String

Count of cores of this accelerator.

type String

Type of this accelerator.

VertexAIParametersResponse

Env Dictionary<string, string>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

Network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

Env map[string]string

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

Network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Map

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network String

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env {[key: string]: string}

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Mapping[str, str]

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network str

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Map

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network String

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
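The VertexAIParameters constraints above (at most 100 unique environment variables; a network name of the form projects/{project}/global/networks/{network} with a numeric project) can be sketched as a small client-side validator. This is purely illustrative; the service enforces the real rules, and the helper name is an assumption, not part of the SDK:

```python
import re

# Sketch: client-side checks mirroring the documented VertexAIParameters
# constraints. Illustrative only; the API performs its own validation.

# projects/{project-number}/global/networks/{network}
NETWORK_RE = re.compile(r"^projects/\d+/global/networks/[^/]+$")

def validate_vertex_ai_parameters(env, network=None):
    # At most 100 environment variables; a dict already guarantees unique keys.
    if len(env) > 100:
        raise ValueError("at most 100 environment variables are allowed")
    # Network, if set, must match the documented fully qualified form.
    if network is not None and not NETWORK_RE.match(network):
        raise ValueError(f"malformed network name: {network!r}")

# The documented example values pass validation.
validate_vertex_ai_parameters(
    {"GCP_BUCKET": "gs://my-bucket/samples/"},
    "projects/12345/global/networks/myVPC",
)
```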

Package Details

Repository
https://github.com/pulumi/pulumi-google-native
License
Apache-2.0