Google Cloud Native v0.30.0, Apr 14, 2023

google-native.notebooks/v1.Execution

Creates a new Execution in a given project and location. Auto-naming is currently not supported for this resource.

Create Execution Resource

new Execution(name: string, args: ExecutionArgs, opts?: CustomResourceOptions);
@overload
def Execution(resource_name: str,
              opts: Optional[ResourceOptions] = None,
              description: Optional[str] = None,
              execution_id: Optional[str] = None,
              execution_template: Optional[ExecutionTemplateArgs] = None,
              location: Optional[str] = None,
              output_notebook_file: Optional[str] = None,
              project: Optional[str] = None)
@overload
def Execution(resource_name: str,
              args: ExecutionArgs,
              opts: Optional[ResourceOptions] = None)
func NewExecution(ctx *Context, name string, args ExecutionArgs, opts ...ResourceOption) (*Execution, error)
public Execution(string name, ExecutionArgs args, CustomResourceOptions? opts = null)
public Execution(String name, ExecutionArgs args)
public Execution(String name, ExecutionArgs args, CustomResourceOptions options)
type: google-native:notebooks/v1:Execution
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

name string
The unique name of the resource.
args ExecutionArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name str
The unique name of the resource.
args ExecutionArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name string
The unique name of the resource.
args ExecutionArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name string
The unique name of the resource.
args ExecutionArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name String
The unique name of the resource.
args ExecutionArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.
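
As a quick orientation before the property reference, the following TypeScript sketch shows one way to construct this resource. The project, bucket paths, IDs, and image are illustrative placeholders; the template values simply exercise fields documented below.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: run a notebook once as a Vertex AI job.
// All names, paths, and the container image below are hypothetical placeholders.
const execution = new google_native.notebooks.v1.Execution("nightly-report", {
    executionId: "nightly-report-0001",   // Required. User-defined unique ID of this execution.
    description: "Nightly report notebook run",
    executionTemplate: {
        scaleTier: "CUSTOM",               // Only CUSTOM is supported; it requires masterType.
        masterType: "n1-standard-4",
        jobType: "VERTEX_AI",
        containerImageUri: "gcr.io/deeplearning-platform-release/base-cu100",
        inputNotebookFile: "gs://my-bucket/notebooks/report.ipynb",
        outputNotebookFolder: "gs://my-bucket/notebook-outputs",
    },
});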

Execution Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

The Execution resource accepts the following input properties:

ExecutionId string

Required. User-defined unique ID of this execution.

Description string

A brief description of this execution.

ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateArgs

Execution metadata, including name, hardware spec, region, labels, etc.

Location string
OutputNotebookFile string

Output notebook file generated by this execution

Project string
ExecutionId string

Required. User-defined unique ID of this execution.

Description string

A brief description of this execution.

ExecutionTemplate ExecutionTemplateArgs

Execution metadata, including name, hardware spec, region, labels, etc.

Location string
OutputNotebookFile string

Output notebook file generated by this execution

Project string
executionId String

Required. User-defined unique ID of this execution.

description String

A brief description of this execution.

executionTemplate ExecutionTemplateArgs

Execution metadata, including name, hardware spec, region, labels, etc.

location String
outputNotebookFile String

Output notebook file generated by this execution

project String
executionId string

Required. User-defined unique ID of this execution.

description string

A brief description of this execution.

executionTemplate ExecutionTemplateArgs

Execution metadata, including name, hardware spec, region, labels, etc.

location string
outputNotebookFile string

Output notebook file generated by this execution

project string
execution_id str

Required. User-defined unique ID of this execution.

description str

A brief description of this execution.

execution_template ExecutionTemplateArgs

Execution metadata, including name, hardware spec, region, labels, etc.

location str
output_notebook_file str

Output notebook file generated by this execution

project str
executionId String

Required. User-defined unique ID of this execution.

description String

A brief description of this execution.

executionTemplate Property Map

Execution metadata, including name, hardware spec, region, labels, etc.

location String
outputNotebookFile String

Output notebook file generated by this execution

project String

Outputs

All input properties are implicitly available as output properties. Additionally, the Execution resource produces the following output properties:

CreateTime string

Time the Execution was instantiated.

DisplayName string

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

Id string

The provider-assigned unique ID for this managed resource.

JobUri string

The URI of the external job used to execute the notebook.

Name string

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

State string

State of the underlying AI Platform job.

UpdateTime string

Time the Execution was last updated.

CreateTime string

Time the Execution was instantiated.

DisplayName string

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

Id string

The provider-assigned unique ID for this managed resource.

JobUri string

The URI of the external job used to execute the notebook.

Name string

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

State string

State of the underlying AI Platform job.

UpdateTime string

Time the Execution was last updated.

createTime String

Time the Execution was instantiated.

displayName String

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

id String

The provider-assigned unique ID for this managed resource.

jobUri String

The URI of the external job used to execute the notebook.

name String

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

state String

State of the underlying AI Platform job.

updateTime String

Time the Execution was last updated.

createTime string

Time the Execution was instantiated.

displayName string

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

id string

The provider-assigned unique ID for this managed resource.

jobUri string

The URI of the external job used to execute the notebook.

name string

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

state string

State of the underlying AI Platform job.

updateTime string

Time the Execution was last updated.

create_time str

Time the Execution was instantiated.

display_name str

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

id str

The provider-assigned unique ID for this managed resource.

job_uri str

The URI of the external job used to execute the notebook.

name str

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

state str

State of the underlying AI Platform job.

update_time str

Time the Execution was last updated.

createTime String

Time the Execution was instantiated.

displayName String

Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

id String

The provider-assigned unique ID for this managed resource.

jobUri String

The URI of the external job used to execute the notebook.

name String

The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

state String

State of the underlying AI Platform job.

updateTime String

Time the Execution was last updated.
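
Because every input is also surfaced as an output, and the server adds fields such as state, jobUri, and the timestamps, a stack can export them directly. Continuing the illustrative TypeScript sketch from earlier (the execution variable is the hypothetical resource created there):

// Export server-populated fields for use outside the stack.
export const executionName = execution.name;     // projects/{project_id}/locations/{location}/executions/{execution_id}
export const executionState = execution.state;   // State of the underlying job.
export const notebookJobUri = execution.jobUri;  // URI of the external job that ran the notebook.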

Supporting Types

DataprocParameters

Cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

Cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster String

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster str

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster String

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
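
For executions that should run on a Dataproc cluster rather than Vertex AI, the template carries the cluster URI through this type. An illustrative TypeScript fragment follows; the project, region, and cluster name are placeholders.

// ExecutionTemplate fragment targeting an existing Dataproc cluster (hypothetical names).
const dataprocTemplate = {
    scaleTier: "CUSTOM",
    masterType: "n1-standard-4",
    jobType: "DATAPROC",
    inputNotebookFile: "gs://my-bucket/notebooks/etl.ipynb",
    outputNotebookFolder: "gs://my-bucket/notebook-outputs",
    dataprocParameters: {
        cluster: "projects/my-project/regions/us-central1/clusters/my-cluster",
    },
};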

DataprocParametersResponse

Cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

Cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster String

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster string

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster str

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

cluster String

URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

ExecutionTemplate

ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig

Configuration (count and accelerator type) for hardware running notebook execution.

ContainerImageUri string

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters

Parameters used in Dataproc JobType executions.

InputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType

The type of Job to be used on this execution.

KernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

Labels Dictionary<string, string>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

MasterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

OutputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

Parameters string

Parameters used within the 'input_notebook_file' notebook.

ParamsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

ServiceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

Tensorboard string

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters

Parameters used in Vertex AI JobType executions.

ScaleTier ExecutionTemplateScaleTier

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

AcceleratorConfig SchedulerAcceleratorConfig

Configuration (count and accelerator type) for hardware running notebook execution.

ContainerImageUri string

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

DataprocParameters DataprocParameters

Parameters used in Dataproc JobType executions.

InputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

JobType ExecutionTemplateJobType

The type of Job to be used on this execution.

KernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

Labels map[string]string

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

MasterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

OutputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

Parameters string

Parameters used within the 'input_notebook_file' notebook.

ParamsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

ServiceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

Tensorboard string

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

VertexAiParameters VertexAIParameters

Parameters used in Vertex AI JobType executions.

scaleTier ExecutionTemplateScaleTier

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

acceleratorConfig SchedulerAcceleratorConfig

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri String

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters DataprocParameters

Parameters used in Dataproc JobType executions.

inputNotebookFile String

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType ExecutionTemplateJobType

The type of Job to be used on this execution.

kernelSpec String

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Map<String,String>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType String

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder String

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters String

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile String

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

serviceAccount String

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard String

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters VertexAIParameters

Parameters used in Vertex AI JobType executions.

scaleTier ExecutionTemplateScaleTier

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

acceleratorConfig SchedulerAcceleratorConfig

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri string

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters DataprocParameters

Parameters used in Dataproc JobType executions.

inputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType ExecutionTemplateJobType

The type of Job to be used on this execution.

kernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels {[key: string]: string}

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters string

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

serviceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard string

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters VertexAIParameters

Parameters used in Vertex AI JobType executions.

scale_tier ExecutionTemplateScaleTier

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

accelerator_config SchedulerAcceleratorConfig

Configuration (count and accelerator type) for hardware running notebook execution.

container_image_uri str

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataproc_parameters DataprocParameters

Parameters used in Dataproc JobType executions.

input_notebook_file str

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

job_type ExecutionTemplateJobType

The type of Job to be used on this execution.

kernel_spec str

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Mapping[str, str]

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

master_type str

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

output_notebook_folder str

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters str

Parameters used within the 'input_notebook_file' notebook.

params_yaml_file str

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

service_account str

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard str

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertex_ai_parameters VertexAIParameters

Parameters used in Vertex AI JobType executions.

scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM"

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

acceleratorConfig Property Map

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri String

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters Property Map

Parameters used in Dataproc JobType executions.

inputNotebookFile String

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC"

The type of Job to be used on this execution.

kernelSpec String

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Map<String>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType String

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder String

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters String

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile String

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

serviceAccount String

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard String

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters Property Map

Parameters used in Vertex AI JobType executions.
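
Notebook parameterization follows the papermill convention referenced above: the notebook declares default parameters, and the execution overrides them from a YAML file in Cloud Storage. An illustrative TypeScript fragment follows; all paths, the service account, and the label are placeholders.

// Parameterized run: paramsYamlFile points at a papermill-style YAML file (hypothetical paths).
const parameterizedTemplate = {
    scaleTier: "CUSTOM",
    masterType: "n1-standard-8",
    jobType: "VERTEX_AI",
    containerImageUri: "gcr.io/deeplearning-platform-release/base-cu100",
    inputNotebookFile: "gs://my-bucket/notebooks/sentiment.ipynb",
    outputNotebookFolder: "gs://my-bucket/notebook-outputs",
    paramsYamlFile: "gs://my-bucket/notebooks/sentiment_params.yaml",
    serviceAccount: "notebook-runner@my-project.iam.gserviceaccount.com",
    labels: { team: "analytics" },
};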

ExecutionTemplateJobType

JobTypeUnspecified
JOB_TYPE_UNSPECIFIED

No type specified.

VertexAi
VERTEX_AI

Custom Job in aiplatform.googleapis.com. Default value for an execution.

Dataproc
DATAPROC

Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

ExecutionTemplateJobTypeJobTypeUnspecified
JOB_TYPE_UNSPECIFIED

No type specified.

ExecutionTemplateJobTypeVertexAi
VERTEX_AI

Custom Job in aiplatform.googleapis.com. Default value for an execution.

ExecutionTemplateJobTypeDataproc
DATAPROC

Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

JobTypeUnspecified
JOB_TYPE_UNSPECIFIED

No type specified.

VertexAi
VERTEX_AI

Custom Job in aiplatform.googleapis.com. Default value for an execution.

Dataproc
DATAPROC

Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

JobTypeUnspecified
JOB_TYPE_UNSPECIFIED

No type specified.

VertexAi
VERTEX_AI

Custom Job in aiplatform.googleapis.com. Default value for an execution.

Dataproc
DATAPROC

Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

JOB_TYPE_UNSPECIFIED
JOB_TYPE_UNSPECIFIED

No type specified.

VERTEX_AI
VERTEX_AI

Custom Job in aiplatform.googleapis.com. Default value for an execution.

DATAPROC
DATAPROC

Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

"JOB_TYPE_UNSPECIFIED"
JOB_TYPE_UNSPECIFIED

No type specified.

"VERTEX_AI"
VERTEX_AI

Custom Job in aiplatform.googleapis.com. Default value for an execution.

"DATAPROC"
DATAPROC

Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

ExecutionTemplateResponse

AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

ContainerImageUri string

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse

Parameters used in Dataproc JobType executions.

InputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

JobType string

The type of Job to be used on this execution.

KernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

Labels Dictionary<string, string>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

MasterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

OutputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

Parameters string

Parameters used within the 'input_notebook_file' notebook.

ParamsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

ScaleTier string

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

ServiceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

Tensorboard string

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

AcceleratorConfig SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

ContainerImageUri string

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

DataprocParameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

InputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

JobType string

The type of Job to be used on this execution.

KernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

Labels map[string]string

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

MasterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

OutputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

Parameters string

Parameters used within the 'input_notebook_file' notebook.

ParamsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

ScaleTier string

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

ServiceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

Tensorboard string

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

VertexAiParameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

acceleratorConfig SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri String

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

inputNotebookFile String

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType String

The type of Job to be used on this execution.

kernelSpec String

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Map<String,String>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType String

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder String

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters String

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile String

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scaleTier String

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

serviceAccount String

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard String

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

acceleratorConfig SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri string

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

inputNotebookFile string

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType string

The type of Job to be used on this execution.

kernelSpec string

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels {[key: string]: string}

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType string

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder string

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters string

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile string

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scaleTier string

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

serviceAccount string

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard string

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

accelerator_config SchedulerAcceleratorConfigResponse

Configuration (count and accelerator type) for hardware running notebook execution.

container_image_uri str

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataproc_parameters DataprocParametersResponse

Parameters used in Dataproc JobType executions.

input_notebook_file str

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

job_type str

The type of Job to be used on this execution.

kernel_spec str

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Mapping[str, str]

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

master_type str

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

output_notebook_folder str

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters str

Parameters used within the 'input_notebook_file' notebook.

params_yaml_file str

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scale_tier str

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

service_account str

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard str

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertex_ai_parameters VertexAIParametersResponse

Parameters used in Vertex AI JobType executions.

acceleratorConfig Property Map

Configuration (count and accelerator type) for hardware running notebook execution.

containerImageUri String

Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

dataprocParameters Property Map

Parameters used in Dataproc JobType executions.

inputNotebookFile String

Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

jobType String

The type of Job to be used on this execution.

kernelSpec String

Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

labels Map<String>

Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.

masterType String

Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

outputNotebookFolder String

Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

parameters String

Parameters used within the 'input_notebook_file' notebook.

paramsYamlFile String

Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

scaleTier String

Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

Deprecated:

Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

serviceAccount String

The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

tensorboard String

The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

vertexAiParameters Property Map

Parameters used in Vertex AI JobType executions.

ExecutionTemplateScaleTier

ScaleTierUnspecified
SCALE_TIER_UNSPECIFIED

Unspecified Scale Tier.

Basic
BASIC

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

Standard1
STANDARD_1

Many workers and a few parameter servers.

Premium1
PREMIUM_1

A large number of workers with many parameter servers.

BasicGpu
BASIC_GPU

A single worker instance with a K80 GPU.

BasicTpu
BASIC_TPU

A single worker instance with a Cloud TPU.

Custom
CUSTOM

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.

ExecutionTemplateScaleTierScaleTierUnspecified
SCALE_TIER_UNSPECIFIED

Unspecified Scale Tier.

ExecutionTemplateScaleTierBasic
BASIC

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

ExecutionTemplateScaleTierStandard1
STANDARD_1

Many workers and a few parameter servers.

ExecutionTemplateScaleTierPremium1
PREMIUM_1

A large number of workers with many parameter servers.

ExecutionTemplateScaleTierBasicGpu
BASIC_GPU

A single worker instance with a K80 GPU.

ExecutionTemplateScaleTierBasicTpu
BASIC_TPU

A single worker instance with a Cloud TPU.

ExecutionTemplateScaleTierCustom
CUSTOM

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.

ScaleTierUnspecified
SCALE_TIER_UNSPECIFIED

Unspecified Scale Tier.

Basic
BASIC

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

Standard1
STANDARD_1

Many workers and a few parameter servers.

Premium1
PREMIUM_1

A large number of workers with many parameter servers.

BasicGpu
BASIC_GPU

A single worker instance with a K80 GPU.

BasicTpu
BASIC_TPU

A single worker instance with a Cloud TPU.

Custom
CUSTOM

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.

ScaleTierUnspecified
SCALE_TIER_UNSPECIFIED

Unspecified Scale Tier.

Basic
BASIC

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

Standard1
STANDARD_1

Many workers and a few parameter servers.

Premium1
PREMIUM_1

A large number of workers with many parameter servers.

BasicGpu
BASIC_GPU

A single worker instance with a K80 GPU.

BasicTpu
BASIC_TPU

A single worker instance with a Cloud TPU.

Custom
CUSTOM

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.

SCALE_TIER_UNSPECIFIED
SCALE_TIER_UNSPECIFIED

Unspecified Scale Tier.

BASIC
BASIC

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

STANDARD1
STANDARD_1

Many workers and a few parameter servers.

PREMIUM1
PREMIUM_1

A large number of workers with many parameter servers.

BASIC_GPU
BASIC_GPU

A single worker instance with a K80 GPU.

BASIC_TPU
BASIC_TPU

A single worker instance with a Cloud TPU.

CUSTOM
CUSTOM

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.

"SCALE_TIER_UNSPECIFIED"
SCALE_TIER_UNSPECIFIED

Unspecified Scale Tier.

"BASIC"
BASIC

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

"STANDARD_1"
STANDARD_1

Many workers and a few parameter servers.

"PREMIUM_1"
PREMIUM_1

A large number of workers with many parameter servers.

"BASIC_GPU"
BASIC_GPU

A single worker instance with a K80 GPU.

"BASIC_TPU"
BASIC_TPU

A single worker instance with a Cloud TPU.

"CUSTOM"
CUSTOM

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.

SchedulerAcceleratorConfig

CoreCount string

Count of cores of this accelerator.

Type Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType

Type of this accelerator.

CoreCount string

Count of cores of this accelerator.

Type SchedulerAcceleratorConfigType

Type of this accelerator.

coreCount String

Count of cores of this accelerator.

type SchedulerAcceleratorConfigType

Type of this accelerator.

coreCount string

Count of cores of this accelerator.

type SchedulerAcceleratorConfigType

Type of this accelerator.

core_count str

Count of cores of this accelerator.

type SchedulerAcceleratorConfigType

Type of this accelerator.
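
As a hedged sketch (assuming the @pulumi/google-native TypeScript SDK; the GPU type, count, and paths are illustrative, and accelerator availability depends on region and machine type), a SchedulerAcceleratorConfig is attached via the execution template's acceleratorConfig field:

import * as google_native from "@pulumi/google-native";

// Sketch: attach one NVIDIA T4 to the execution's master machine.
// Note that coreCount is a string in this API, not a number.
const gpuRun = new google_native.notebooks.v1.Execution("gpu-run", {
    executionId: "gpu-run-001",
    location: "us-central1",
    executionTemplate: {
        scaleTier: "CUSTOM",
        masterType: "n1-standard-8",
        acceleratorConfig: {
            type: "NVIDIA_TESLA_T4",   // one of the SchedulerAcceleratorConfigType values
            coreCount: "1",            // count of accelerator cores, as a string
        },
        inputNotebookFile: "gs://my-bucket/notebooks/train.ipynb",
        outputNotebookFolder: "gs://my-bucket/scheduled_notebooks",
    },
});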

SchedulerAcceleratorConfigResponse

CoreCount string

Count of cores of this accelerator.

Type string

Type of this accelerator.

CoreCount string

Count of cores of this accelerator.

Type string

Type of this accelerator.

coreCount String

Count of cores of this accelerator.

type String

Type of this accelerator.

coreCount string

Count of cores of this accelerator.

type string

Type of this accelerator.

core_count str

Count of cores of this accelerator.

type str

Type of this accelerator.

coreCount String

Count of cores of this accelerator.

type String

Type of this accelerator.

SchedulerAcceleratorConfigType

SchedulerAcceleratorTypeUnspecified
SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

Unspecified accelerator type. Defaults to no GPU.

NvidiaTeslaK80
NVIDIA_TESLA_K80

Nvidia Tesla K80 GPU.

NvidiaTeslaP100
NVIDIA_TESLA_P100

Nvidia Tesla P100 GPU.

NvidiaTeslaV100
NVIDIA_TESLA_V100

Nvidia Tesla V100 GPU.

NvidiaTeslaP4
NVIDIA_TESLA_P4

Nvidia Tesla P4 GPU.

NvidiaTeslaT4
NVIDIA_TESLA_T4

Nvidia Tesla T4 GPU.

NvidiaTeslaA100
NVIDIA_TESLA_A100

Nvidia Tesla A100 GPU.

TpuV2
TPU_V2

TPU v2.

TpuV3
TPU_V3

TPU v3.

SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified
SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

Unspecified accelerator type. Defaults to no GPU.

SchedulerAcceleratorConfigTypeNvidiaTeslaK80
NVIDIA_TESLA_K80

Nvidia Tesla K80 GPU.

SchedulerAcceleratorConfigTypeNvidiaTeslaP100
NVIDIA_TESLA_P100

Nvidia Tesla P100 GPU.

SchedulerAcceleratorConfigTypeNvidiaTeslaV100
NVIDIA_TESLA_V100

Nvidia Tesla V100 GPU.

SchedulerAcceleratorConfigTypeNvidiaTeslaP4
NVIDIA_TESLA_P4

Nvidia Tesla P4 GPU.

SchedulerAcceleratorConfigTypeNvidiaTeslaT4
NVIDIA_TESLA_T4

Nvidia Tesla T4 GPU.

SchedulerAcceleratorConfigTypeNvidiaTeslaA100
NVIDIA_TESLA_A100

Nvidia Tesla A100 GPU.

SchedulerAcceleratorConfigTypeTpuV2
TPU_V2

TPU v2.

SchedulerAcceleratorConfigTypeTpuV3
TPU_V3

TPU v3.

SchedulerAcceleratorTypeUnspecified
SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

Unspecified accelerator type. Defaults to no GPU.

NvidiaTeslaK80
NVIDIA_TESLA_K80

Nvidia Tesla K80 GPU.

NvidiaTeslaP100
NVIDIA_TESLA_P100

Nvidia Tesla P100 GPU.

NvidiaTeslaV100
NVIDIA_TESLA_V100

Nvidia Tesla V100 GPU.

NvidiaTeslaP4
NVIDIA_TESLA_P4

Nvidia Tesla P4 GPU.

NvidiaTeslaT4
NVIDIA_TESLA_T4

Nvidia Tesla T4 GPU.

NvidiaTeslaA100
NVIDIA_TESLA_A100

Nvidia Tesla A100 GPU.

TpuV2
TPU_V2

TPU v2.

TpuV3
TPU_V3

TPU v3.

SchedulerAcceleratorTypeUnspecified
SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

Unspecified accelerator type. Defaults to no GPU.

NvidiaTeslaK80
NVIDIA_TESLA_K80

Nvidia Tesla K80 GPU.

NvidiaTeslaP100
NVIDIA_TESLA_P100

Nvidia Tesla P100 GPU.

NvidiaTeslaV100
NVIDIA_TESLA_V100

Nvidia Tesla V100 GPU.

NvidiaTeslaP4
NVIDIA_TESLA_P4

Nvidia Tesla P4 GPU.

NvidiaTeslaT4
NVIDIA_TESLA_T4

Nvidia Tesla T4 GPU.

NvidiaTeslaA100
NVIDIA_TESLA_A100

Nvidia Tesla A100 GPU.

TpuV2
TPU_V2

TPU v2.

TpuV3
TPU_V3

TPU v3.

SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

Unspecified accelerator type. Defaults to no GPU.

NVIDIA_TESLA_K80
NVIDIA_TESLA_K80

Nvidia Tesla K80 GPU.

NVIDIA_TESLA_P100
NVIDIA_TESLA_P100

Nvidia Tesla P100 GPU.

NVIDIA_TESLA_V100
NVIDIA_TESLA_V100

Nvidia Tesla V100 GPU.

NVIDIA_TESLA_P4
NVIDIA_TESLA_P4

Nvidia Tesla P4 GPU.

NVIDIA_TESLA_T4
NVIDIA_TESLA_T4

Nvidia Tesla T4 GPU.

NVIDIA_TESLA_A100
NVIDIA_TESLA_A100

Nvidia Tesla A100 GPU.

TPU_V2
TPU_V2

TPU v2.

TPU_V3
TPU_V3

TPU v3.

"SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED"
SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

Unspecified accelerator type. Defaults to no GPU.

"NVIDIA_TESLA_K80"
NVIDIA_TESLA_K80

Nvidia Tesla K80 GPU.

"NVIDIA_TESLA_P100"
NVIDIA_TESLA_P100

Nvidia Tesla P100 GPU.

"NVIDIA_TESLA_V100"
NVIDIA_TESLA_V100

Nvidia Tesla V100 GPU.

"NVIDIA_TESLA_P4"
NVIDIA_TESLA_P4

Nvidia Tesla P4 GPU.

"NVIDIA_TESLA_T4"
NVIDIA_TESLA_T4

Nvidia Tesla T4 GPU.

"NVIDIA_TESLA_A100"
NVIDIA_TESLA_A100

Nvidia Tesla A100 GPU.

"TPU_V2"
TPU_V2

TPU v2.

"TPU_V3"
TPU_V3

TPU v3.

VertexAIParameters

Env Dictionary<string, string>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

Network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

Env map[string]string

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

Network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Map<String,String>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network String

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env {[key: string]: string}

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Mapping[str, str]

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network str

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Map<String>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network String

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
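
A hedged TypeScript sketch of these fields in use (assuming the @pulumi/google-native SDK; the project number, network name, bucket, and variable values are placeholders, and Private Services Access must already be configured on the network):

import * as google_native from "@pulumi/google-native";

// Sketch: peer the Vertex AI job with an existing VPC and pass it
// environment variables. All values below are placeholders.
const peeredRun = new google_native.notebooks.v1.Execution("peered-run", {
    executionId: "peered-run-001",
    location: "us-central1",
    executionTemplate: {
        scaleTier: "CUSTOM",
        masterType: "n1-standard-4",
        inputNotebookFile: "gs://my-bucket/notebooks/etl.ipynb",
        outputNotebookFolder: "gs://my-bucket/scheduled_notebooks",
        vertexAiParameters: {
            env: {
                GCP_BUCKET: "gs://my-bucket/samples/",
            },
            network: "projects/12345/global/networks/myVPC",
        },
    },
});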

VertexAIParametersResponse

Env Dictionary<string, string>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

Network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

Env map[string]string

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

Network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Map<String,String>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network String

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env {[key: string]: string}

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network string

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Mapping[str, str]

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network str

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

env Map<String>

Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

network String

The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format: projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0