
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.31.1 published on Thursday, Jul 20, 2023 by Pulumi

google-native.notebooks/v1.Schedule

    Creates a new Scheduled Notebook in a given project and location. Auto-naming is currently not supported for this resource.

    Create Schedule Resource

    new Schedule(name: string, args: ScheduleArgs, opts?: CustomResourceOptions);
    @overload
    def Schedule(resource_name: str,
                 opts: Optional[ResourceOptions] = None,
                 cron_schedule: Optional[str] = None,
                 description: Optional[str] = None,
                 execution_template: Optional[ExecutionTemplateArgs] = None,
                 location: Optional[str] = None,
                 project: Optional[str] = None,
                 schedule_id: Optional[str] = None,
                 state: Optional[ScheduleState] = None,
                 time_zone: Optional[str] = None)
    @overload
    def Schedule(resource_name: str,
                 args: ScheduleArgs,
                 opts: Optional[ResourceOptions] = None)
    func NewSchedule(ctx *Context, name string, args ScheduleArgs, opts ...ResourceOption) (*Schedule, error)
    public Schedule(string name, ScheduleArgs args, CustomResourceOptions? opts = null)
    public Schedule(String name, ScheduleArgs args)
    public Schedule(String name, ScheduleArgs args, CustomResourceOptions options)
    
    type: google-native:notebooks/v1:Schedule
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    
    name string
    The unique name of the resource.
    args ScheduleArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args ScheduleArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args ScheduleArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args ScheduleArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args ScheduleArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
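
    The signatures above map directly onto a small program. The following TypeScript sketch creates a daily scheduled notebook run; the project, region, bucket paths, cron expression, and machine type are illustrative assumptions, not values required by the API.

    import * as google_native from "@pulumi/google-native";

    // A daily scheduled notebook run; all names and paths are placeholders.
    const schedule = new google_native.notebooks.v1.Schedule("daily-report", {
        scheduleId: "daily-report",      // Required: user-defined unique ID.
        location: "us-central1",         // Region hosting the schedule.
        cronSchedule: "0 6 * * *",       // Every day at 06:00.
        timeZone: "utc",                 // Interpret the cron expression in UTC.
        executionTemplate: {
            scaleTier: "CUSTOM",         // Currently the only supported tier.
            masterType: "n1-standard-4", // Required when scaleTier is CUSTOM.
            jobType: "VERTEX_AI",
            inputNotebookFile: "gs://example-bucket/notebooks/report.ipynb",
            outputNotebookFolder: "gs://example-bucket/notebook-output",
        },
    });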

    Schedule Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The Schedule resource accepts the following input properties:

    ScheduleId string

    Required. User-defined unique ID of this schedule.

    CronSchedule string

    Crontab-formatted schedule on which the job will execute. Format: minute, hour, day of month, month, day of week; e.g. 0 0 * * WED runs every Wednesday at midnight. More examples: https://crontab.guru/examples.html

    Description string

    A brief description of this schedule.

    ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplate

    Notebook Execution Template corresponding to this schedule.

    Location string
    Project string
    State Pulumi.GoogleNative.Notebooks.V1.ScheduleState
    TimeZone string

    Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

    ScheduleId string

    Required. User-defined unique ID of this schedule.

    CronSchedule string

    Crontab-formatted schedule on which the job will execute. Format: minute, hour, day of month, month, day of week; e.g. 0 0 * * WED runs every Wednesday at midnight. More examples: https://crontab.guru/examples.html

    Description string

    A brief description of this schedule.

    ExecutionTemplate ExecutionTemplateArgs

    Notebook Execution Template corresponding to this schedule.

    Location string
    Project string
    State ScheduleStateEnum
    TimeZone string

    Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

    scheduleId String

    Required. User-defined unique ID of this schedule.

    cronSchedule String

    Crontab-formatted schedule on which the job will execute. Format: minute, hour, day of month, month, day of week; e.g. 0 0 * * WED runs every Wednesday at midnight. More examples: https://crontab.guru/examples.html

    description String

    A brief description of this schedule.

    executionTemplate ExecutionTemplate

    Notebook Execution Template corresponding to this schedule.

    location String
    project String
    state ScheduleState
    timeZone String

    Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

    scheduleId string

    Required. User-defined unique ID of this schedule.

    cronSchedule string

    Crontab-formatted schedule on which the job will execute. Format: minute, hour, day of month, month, day of week; e.g. 0 0 * * WED runs every Wednesday at midnight. More examples: https://crontab.guru/examples.html

    description string

    A brief description of this schedule.

    executionTemplate ExecutionTemplate

    Notebook Execution Template corresponding to this schedule.

    location string
    project string
    state ScheduleState
    timeZone string

    Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

    schedule_id str

    Required. User-defined unique ID of this schedule.

    cron_schedule str

    Crontab-formatted schedule on which the job will execute. Format: minute, hour, day of month, month, day of week; e.g. 0 0 * * WED runs every Wednesday at midnight. More examples: https://crontab.guru/examples.html

    description str

    A brief description of this schedule.

    execution_template ExecutionTemplateArgs

    Notebook Execution Template corresponding to this schedule.

    location str
    project str
    state ScheduleState
    time_zone str

    Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).

    scheduleId String

    Required. User-defined unique ID of this schedule.

    cronSchedule String

    Crontab-formatted schedule on which the job will execute. Format: minute, hour, day of month, month, day of week; e.g. 0 0 * * WED runs every Wednesday at midnight. More examples: https://crontab.guru/examples.html

    description String

    A brief description of this schedule.

    executionTemplate Property Map

    Notebook Execution Template corresponding to this schedule.

    location String
    project String
    state "STATE_UNSPECIFIED" | "ENABLED" | "PAUSED" | "DISABLED" | "UPDATE_FAILED" | "INITIALIZING" | "DELETING"
    timeZone String

    Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
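
    As a concrete illustration of how cronSchedule and timeZone interact, a schedule meant to fire at 09:00 New York time on weekdays, with daylight saving handled by the tz database, could be configured as in this TypeScript sketch (all names are placeholders):

    // DST transitions are handled by the America/New_York tz database entry.
    const weekdayMorningArgs = {
        scheduleId: "morning-run",
        location: "us-central1",
        cronSchedule: "0 9 * * MON-FRI",
        timeZone: "America/New_York",
    };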

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Schedule resource produces the following output properties:

    CreateTime string

    Time the schedule was created.

    DisplayName string

    Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).

    Id string

    The provider-assigned unique ID for this managed resource.

    Name string

    The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

    RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse>

    The most recent execution names triggered from this schedule and their corresponding states.

    UpdateTime string

    Time the schedule was last updated.

    CreateTime string

    Time the schedule was created.

    DisplayName string

    Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).

    Id string

    The provider-assigned unique ID for this managed resource.

    Name string

    The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

    RecentExecutions []ExecutionResponse

    The most recent execution names triggered from this schedule and their corresponding states.

    UpdateTime string

    Time the schedule was last updated.

    createTime String

    Time the schedule was created.

    displayName String

    Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).

    id String

    The provider-assigned unique ID for this managed resource.

    name String

    The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

    recentExecutions List<ExecutionResponse>

    The most recent execution names triggered from this schedule and their corresponding states.

    updateTime String

    Time the schedule was last updated.

    createTime string

    Time the schedule was created.

    displayName string

    Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).

    id string

    The provider-assigned unique ID for this managed resource.

    name string

    The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

    recentExecutions ExecutionResponse[]

    The most recent execution names triggered from this schedule and their corresponding states.

    updateTime string

    Time the schedule was last updated.

    create_time str

    Time the schedule was created.

    display_name str

    Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).

    id str

    The provider-assigned unique ID for this managed resource.

    name str

    The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

    recent_executions Sequence[ExecutionResponse]

    The most recent execution names triggered from this schedule and their corresponding states.

    update_time str

    Time the schedule was last updated.

    createTime String

    Time the schedule was created.

    displayName String

    Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).

    id String

    The provider-assigned unique ID for this managed resource.

    name String

    The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

    recentExecutions List<Property Map>

    The most recent execution names triggered from this schedule and their corresponding states.

    updateTime String

    Time the schedule was last updated.
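
    Continuing the TypeScript sketch from the create example above, output properties can be read and exported like any other Pulumi outputs:

    // Provider-computed values resolve after deployment.
    export const scheduleName = schedule.name;   // projects/{project_id}/locations/{location}/schedules/{schedule_id}
    export const scheduleCreated = schedule.createTime;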

    Supporting Types

    DataprocParameters, DataprocParametersArgs

    Cluster string

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    Cluster string

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster String

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster string

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster str

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster String

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
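
    For a Dataproc execution, the template references an existing cluster by this URI. A hedged TypeScript sketch, reusing the placeholder names from the create example and assuming a pre-existing cluster:

    // Runs the notebook as a Dataproc job on an existing cluster.
    const dataprocSchedule = new google_native.notebooks.v1.Schedule("dataproc-run", {
        scheduleId: "dataproc-run",
        location: "us-central1",
        cronSchedule: "0 3 * * SUN",
        timeZone: "utc",
        executionTemplate: {
            scaleTier: "CUSTOM",   // Marked required by the API even though deprecated.
            jobType: "DATAPROC",
            dataprocParameters: {
                cluster: "projects/example-project/regions/us-central1/clusters/example-cluster",
            },
            inputNotebookFile: "gs://example-bucket/notebooks/etl.ipynb",
            outputNotebookFolder: "gs://example-bucket/notebook-output",
        },
    });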

    DataprocParametersResponse, DataprocParametersResponseArgs

    Cluster string

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    Cluster string

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster String

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster string

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster str

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    cluster String

    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    ExecutionResponse, ExecutionResponseArgs

    CreateTime string

    Time the Execution was instantiated.

    Description string

    A brief description of this execution.

    DisplayName string

    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

    ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse

    Execution metadata, including name, hardware spec, region, labels, etc.

    JobUri string

    The URI of the external job used to execute the notebook.

    Name string

    The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

    OutputNotebookFile string

    Output notebook file generated by this execution.

    State string

    State of the underlying AI Platform job.

    UpdateTime string

    Time the Execution was last updated.

    CreateTime string

    Time the Execution was instantiated.

    Description string

    A brief description of this execution.

    DisplayName string

    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

    ExecutionTemplate ExecutionTemplateResponse

    Execution metadata, including name, hardware spec, region, labels, etc.

    JobUri string

    The URI of the external job used to execute the notebook.

    Name string

    The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

    OutputNotebookFile string

    Output notebook file generated by this execution.

    State string

    State of the underlying AI Platform job.

    UpdateTime string

    Time the Execution was last updated.

    createTime String

    Time the Execution was instantiated.

    description String

    A brief description of this execution.

    displayName String

    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

    executionTemplate ExecutionTemplateResponse

    Execution metadata, including name, hardware spec, region, labels, etc.

    jobUri String

    The URI of the external job used to execute the notebook.

    name String

    The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

    outputNotebookFile String

    Output notebook file generated by this execution.

    state String

    State of the underlying AI Platform job.

    updateTime String

    Time the Execution was last updated.

    createTime string

    Time the Execution was instantiated.

    description string

    A brief description of this execution.

    displayName string

    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

    executionTemplate ExecutionTemplateResponse

    Execution metadata, including name, hardware spec, region, labels, etc.

    jobUri string

    The URI of the external job used to execute the notebook.

    name string

    The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

    outputNotebookFile string

    Output notebook file generated by this execution.

    state string

    State of the underlying AI Platform job.

    updateTime string

    Time the Execution was last updated.

    create_time str

    Time the Execution was instantiated.

    description str

    A brief description of this execution.

    display_name str

    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

    execution_template ExecutionTemplateResponse

    Execution metadata, including name, hardware spec, region, labels, etc.

    job_uri str

    The URI of the external job used to execute the notebook.

    name str

    The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

    output_notebook_file str

    Output notebook file generated by this execution.

    state str

    State of the underlying AI Platform job.

    update_time str

    Time the Execution was last updated.

    createTime String

    Time the Execution was instantiated.

    description String

    A brief description of this execution.

    displayName String

    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.

    executionTemplate Property Map

    Execution metadata, including name, hardware spec, region, labels, etc.

    jobUri String

    The URI of the external job used to execute the notebook.

    name String

    The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}

    outputNotebookFile String

    Output notebook file generated by this execution.

    state String

    State of the underlying AI Platform job.

    updateTime String

    Time the Execution was last updated.
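
    Because recentExecutions surfaces this type as an output list, the latest run states can be summarized from the earlier TypeScript sketch like so:

    // Map each recent execution to a short "name: state" summary.
    export const recentStates = schedule.recentExecutions.apply(execs =>
        execs.map(e => `${e.displayName}: ${e.state}`));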

    ExecutionTemplate, ExecutionTemplateArgs

    ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig

    Configuration (count and accelerator type) for hardware running notebook execution.

    ContainerImageUri string

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters

    Parameters used in Dataproc JobType executions.

    InputNotebookFile string

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType

    The type of Job to be used on this execution.

    KernelSpec string

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    Labels Dictionary<string, string>

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    MasterType string

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    OutputNotebookFolder string

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    Parameters string

    Parameters used within the 'input_notebook_file' notebook.

    ParamsYamlFile string

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    ServiceAccount string

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    Tensorboard string

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters

    Parameters used in Vertex AI JobType executions.

    ScaleTier ExecutionTemplateScaleTier

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    AcceleratorConfig SchedulerAcceleratorConfig

    Configuration (count and accelerator type) for hardware running notebook execution.

    ContainerImageUri string

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    DataprocParameters DataprocParameters

    Parameters used in Dataproc JobType executions.

    InputNotebookFile string

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    JobType ExecutionTemplateJobType

    The type of Job to be used on this execution.

    KernelSpec string

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    Labels map[string]string

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    MasterType string

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    OutputNotebookFolder string

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    Parameters string

    Parameters used within the 'input_notebook_file' notebook.

    ParamsYamlFile string

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    ServiceAccount string

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    Tensorboard string

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    VertexAiParameters VertexAIParameters

    Parameters used in Vertex AI JobType executions.

    scaleTier ExecutionTemplateScaleTier

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    acceleratorConfig SchedulerAcceleratorConfig

    Configuration (count and accelerator type) for hardware running notebook execution.

    containerImageUri String

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataprocParameters DataprocParameters

    Parameters used in Dataproc JobType executions.

    inputNotebookFile String

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    jobType ExecutionTemplateJobType

    The type of Job to be used on this execution.

    kernelSpec String

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels Map<String,String>

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    masterType String

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    outputNotebookFolder String

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters String

    Parameters used within the 'input_notebook_file' notebook.

    paramsYamlFile String

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    serviceAccount String

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard String

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertexAiParameters VertexAIParameters

    Parameters used in Vertex AI JobType executions.

    scaleTier ExecutionTemplateScaleTier

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    acceleratorConfig SchedulerAcceleratorConfig

    Configuration (count and accelerator type) for hardware running notebook execution.

    containerImageUri string

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataprocParameters DataprocParameters

    Parameters used in Dataproc JobType executions.

    inputNotebookFile string

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    jobType ExecutionTemplateJobType

    The type of Job to be used on this execution.

    kernelSpec string

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels {[key: string]: string}

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    masterType string

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    outputNotebookFolder string

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters string

    Parameters used within the 'input_notebook_file' notebook.

    paramsYamlFile string

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    serviceAccount string

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard string

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertexAiParameters VertexAIParameters

    Parameters used in Vertex AI JobType executions.

    scale_tier ExecutionTemplateScaleTier

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    accelerator_config SchedulerAcceleratorConfig

    Configuration (count and accelerator type) for hardware running notebook execution.

    container_image_uri str

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataproc_parameters DataprocParameters

    Parameters used in Dataproc JobType executions.

    input_notebook_file str

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    job_type ExecutionTemplateJobType

    The type of Job to be used on this execution.

    kernel_spec str

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels Mapping[str, str]

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    master_type str

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    output_notebook_folder str

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters str

    Parameters used within the 'input_notebook_file' notebook.

    params_yaml_file str

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    service_account str

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard str

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertex_ai_parameters VertexAIParameters

    Parameters used in Vertex AI JobType executions.

    scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM"

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    acceleratorConfig Property Map

    Configuration (count and accelerator type) for hardware running notebook execution.

    containerImageUri String

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataprocParameters Property Map

    Parameters used in Dataproc JobType executions.

    inputNotebookFile String

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC"

    The type of Job to be used on this execution.

    kernelSpec String

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels Map<String>

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    masterType String

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    outputNotebookFolder String

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters String

    Parameters used within the 'input_notebook_file' notebook.

    paramsYamlFile String

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    serviceAccount String

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard String

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertexAiParameters Property Map

    Parameters used in Vertex AI JobType executions.
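
    Combining several of these fields, a GPU-accelerated Vertex AI template with papermill-style parameters might look like the following TypeScript sketch; the accelerator type and count come from SchedulerAcceleratorConfig (not shown in this section), and all concrete values are assumptions:

    // An execution template for a GPU-backed Vertex AI run.
    const gpuTemplate = {
        scaleTier: "CUSTOM",
        masterType: "n1-standard-8",
        jobType: "VERTEX_AI",
        acceleratorConfig: {
            type: "NVIDIA_TESLA_T4",     // Assumed accelerator type.
            coreCount: "1",              // int64 values are passed as strings.
        },
        inputNotebookFile: "gs://example-bucket/notebooks/train.ipynb",
        outputNotebookFolder: "gs://example-bucket/runs",
        parameters: "epochs=5,learning_rate=0.01",   // Overrides notebook parameters.
        serviceAccount: "runner@example-project.iam.gserviceaccount.com",
    };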

    ExecutionTemplateJobType, ExecutionTemplateJobTypeArgs

    JobTypeUnspecified
    JOB_TYPE_UNSPECIFIED

    No type specified.

    VertexAi
    VERTEX_AI

    Custom Job in aiplatform.googleapis.com. Default value for an execution.

    Dataproc
    DATAPROC

    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

    ExecutionTemplateJobTypeJobTypeUnspecified
    JOB_TYPE_UNSPECIFIED

    No type specified.

    ExecutionTemplateJobTypeVertexAi
    VERTEX_AI

    Custom Job in aiplatform.googleapis.com. Default value for an execution.

    ExecutionTemplateJobTypeDataproc
    DATAPROC

    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

    JobTypeUnspecified
    JOB_TYPE_UNSPECIFIED

    No type specified.

    VertexAi
    VERTEX_AI

    Custom Job in aiplatform.googleapis.com. Default value for an execution.

    Dataproc
    DATAPROC

    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

    JobTypeUnspecified
    JOB_TYPE_UNSPECIFIED

    No type specified.

    VertexAi
    VERTEX_AI

    Custom Job in aiplatform.googleapis.com. Default value for an execution.

    Dataproc
    DATAPROC

    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

    JOB_TYPE_UNSPECIFIED
    JOB_TYPE_UNSPECIFIED

    No type specified.

    VERTEX_AI
    VERTEX_AI

    Custom Job in aiplatform.googleapis.com. Default value for an execution.

    DATAPROC
    DATAPROC

    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

    "JOB_TYPE_UNSPECIFIED"
    JOB_TYPE_UNSPECIFIED

    No type specified.

    "VERTEX_AI"
    VERTEX_AI

    Custom Job in aiplatform.googleapis.com. Default value for an execution.

    "DATAPROC"
    DATAPROC

    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
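
    In the TypeScript SDK these enum members compile to plain string-literal unions, so the raw strings listed above can be used directly and can even be chosen at deploy time, for example via stack configuration:

    import * as pulumi from "@pulumi/pulumi";

    // Select the job type from stack config; defaults to Vertex AI.
    const useDataproc = new pulumi.Config().getBoolean("useDataproc") ?? false;
    const jobType = useDataproc ? "DATAPROC" : "VERTEX_AI";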

    ExecutionTemplateResponse, ExecutionTemplateResponseArgs

    AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse

    Configuration (count and accelerator type) for hardware running notebook execution.

    ContainerImageUri string

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse

    Parameters used in Dataproc JobType executions.

    InputNotebookFile string

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    JobType string

    The type of Job to be used on this execution.

    KernelSpec string

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    Labels Dictionary<string, string>

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    MasterType string

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    OutputNotebookFolder string

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    Parameters string

    Parameters used within the 'input_notebook_file' notebook.

    ParamsYamlFile string

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    ScaleTier string

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    ServiceAccount string

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    Tensorboard string

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse

    Parameters used in Vertex AI JobType executions.

    AcceleratorConfig SchedulerAcceleratorConfigResponse

    Configuration (count and accelerator type) for hardware running notebook execution.

    ContainerImageUri string

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    DataprocParameters DataprocParametersResponse

    Parameters used in Dataproc JobType executions.

    InputNotebookFile string

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    JobType string

    The type of Job to be used on this execution.

    KernelSpec string

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    Labels map[string]string

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    MasterType string

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    OutputNotebookFolder string

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    Parameters string

    Parameters used within the 'input_notebook_file' notebook.

    ParamsYamlFile string

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    ScaleTier string

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    ServiceAccount string

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    Tensorboard string

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    VertexAiParameters VertexAIParametersResponse

    Parameters used in Vertex AI JobType executions.

    acceleratorConfig SchedulerAcceleratorConfigResponse

    Configuration (count and accelerator type) for hardware running notebook execution.

    containerImageUri String

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataprocParameters DataprocParametersResponse

    Parameters used in Dataproc JobType executions.

    inputNotebookFile String

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    jobType String

    The type of Job to be used on this execution.

    kernelSpec String

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels Map<String,String>

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, for an immediate execution, they will include 'nbs-immediate'. Use these fields to efficiently index executions by type.

    masterType String

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with a TPU.

    outputNotebookFolder String

    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters String

    Parameters used within the 'input_notebook_file' notebook.

    paramsYamlFile String

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    scaleTier String

    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

    serviceAccount String

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard String

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertexAiParameters VertexAIParametersResponse

    Parameters used in Vertex AI JobType executions.

    acceleratorConfig SchedulerAcceleratorConfigResponse

    Configuration (count and accelerator type) for hardware running notebook execution.

    containerImageUri string

    Container image URI of a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataprocParameters DataprocParametersResponse

    Parameters used in Dataproc JobType executions.

    inputNotebookFile string

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    jobType string

    The type of Job to be used on this execution.

    kernelSpec string

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels {[key: string]: string}

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; if it is an immediate execution, they will include 'nbs-immediate'. Use these fields to index efficiently across the various types of executions.

    masterType string

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

    outputNotebookFolder string

    Path to the notebook folder to write to. Must be a path within a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters string

    Parameters used within the 'input_notebook_file' notebook.

    paramsYamlFile string

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    scaleTier string

    Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

    serviceAccount string

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard string

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertexAiParameters VertexAIParametersResponse

    Parameters used in Vertex AI JobType executions.

    accelerator_config SchedulerAcceleratorConfigResponse

    Configuration (count and accelerator type) for hardware running notebook execution.

    container_image_uri str

    Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataproc_parameters DataprocParametersResponse

    Parameters used in Dataproc JobType executions.

    input_notebook_file str

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    job_type str

    The type of Job to be used on this execution.

    kernel_spec str

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels Mapping[str, str]

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; if it is an immediate execution, they will include 'nbs-immediate'. Use these fields to index efficiently across the various types of executions.

    master_type str

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

    output_notebook_folder str

    Path to the notebook folder to write to. Must be a path within a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters str

    Parameters used within the 'input_notebook_file' notebook.

    params_yaml_file str

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    scale_tier str

    Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

    service_account str

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard str

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertex_ai_parameters VertexAIParametersResponse

    Parameters used in Vertex AI JobType executions.

    acceleratorConfig Property Map

    Configuration (count and accelerator type) for hardware running notebook execution.

    containerImageUri String

    Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container

    dataprocParameters Property Map

    Parameters used in Dataproc JobType executions.

    inputNotebookFile String

    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb

    jobType String

    The type of Job to be used on this execution.

    kernelSpec String

    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.

    labels Map<String>

    Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; if it is an immediate execution, they will include 'nbs-immediate'. Use these fields to index efficiently across the various types of executions.

    masterType String

    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.

    outputNotebookFolder String

    Path to the notebook folder to write to. Must be a path within a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks

    parameters String

    Parameters used within the 'input_notebook_file' notebook.

    paramsYamlFile String

    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml

    scaleTier String

    Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

    Deprecated:

    Required. Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.

    serviceAccount String

    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.

    tensorboard String

    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

    vertexAiParameters Property Map

    Parameters used in Vertex AI JobType executions.
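
    To make the property tables above concrete, here is a minimal Python sketch of a Schedule whose ExecutionTemplate wires several of these inputs together. The bucket paths, machine type, service account, and resource names are illustrative assumptions, not values taken from this reference.

    import pulumi_google_native.notebooks.v1 as notebooks

    # A minimal sketch: run a notebook every Monday at 09:00 UTC.
    # All names below (bucket, service account, location) are placeholders.
    schedule = notebooks.Schedule(
        "weekly-sentiment",
        schedule_id="weekly-sentiment",
        location="us-central1",
        cron_schedule="0 9 * * MON",
        time_zone="UTC",
        execution_template=notebooks.ExecutionTemplateArgs(
            scale_tier=notebooks.ExecutionTemplateScaleTier.CUSTOM,
            master_type="n1-standard-4",
            input_notebook_file="gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb",
            output_notebook_folder="gs://notebook_user/scheduled_notebooks",
            container_image_uri="gcr.io/deeplearning-platform-release/base-cu100",
            service_account="notebook-runner@my-project.iam.gserviceaccount.com",
        ),
    )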

    ExecutionTemplateScaleTier, ExecutionTemplateScaleTierArgs

    ScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED

    Unspecified Scale Tier.

    Basic
    BASIC

    A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

    Standard1
    STANDARD_1

    Many workers and a few parameter servers.

    Premium1
    PREMIUM_1

    A large number of workers with many parameter servers.

    BasicGpu
    BASIC_GPU

    A single worker instance with a K80 GPU.

    BasicTpu
    BASIC_TPU

    A single worker instance with a Cloud TPU.

    Custom
    CUSTOM

    The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node, and this is the only required setting.

    ExecutionTemplateScaleTierScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED

    Unspecified Scale Tier.

    ExecutionTemplateScaleTierBasic
    BASIC

    A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

    ExecutionTemplateScaleTierStandard1
    STANDARD_1

    Many workers and a few parameter servers.

    ExecutionTemplateScaleTierPremium1
    PREMIUM_1

    A large number of workers with many parameter servers.

    ExecutionTemplateScaleTierBasicGpu
    BASIC_GPU

    A single worker instance with a K80 GPU.

    ExecutionTemplateScaleTierBasicTpu
    BASIC_TPU

    A single worker instance with a Cloud TPU.

    ExecutionTemplateScaleTierCustom
    CUSTOM

    The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node, and this is the only required setting.

    ScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED

    Unspecified Scale Tier.

    Basic
    BASIC

    A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

    Standard1
    STANDARD_1

    Many workers and a few parameter servers.

    Premium1
    PREMIUM_1

    A large number of workers with many parameter servers.

    BasicGpu
    BASIC_GPU

    A single worker instance with a K80 GPU.

    BasicTpu
    BASIC_TPU

    A single worker instance with a Cloud TPU.

    Custom
    CUSTOM

    The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node, and this is the only required setting.

    ScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED

    Unspecified Scale Tier.

    Basic
    BASIC

    A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

    Standard1
    STANDARD_1

    Many workers and a few parameter servers.

    Premium1
    PREMIUM_1

    A large number of workers with many parameter servers.

    BasicGpu
    BASIC_GPU

    A single worker instance with a K80 GPU.

    BasicTpu
    BASIC_TPU

    A single worker instance with a Cloud TPU.

    Custom
    CUSTOM

    The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node, and this is the only required setting.

    SCALE_TIER_UNSPECIFIED
    SCALE_TIER_UNSPECIFIED

    Unspecified Scale Tier.

    BASIC
    BASIC

    A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

    STANDARD1
    STANDARD_1

    Many workers and a few parameter servers.

    PREMIUM1
    PREMIUM_1

    A large number of workers with many parameter servers.

    BASIC_GPU
    BASIC_GPU

    A single worker instance with a K80 GPU.

    BASIC_TPU
    BASIC_TPU

    A single worker instance with a Cloud TPU.

    CUSTOM
    CUSTOM

    The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node, and this is the only required setting.

    "SCALE_TIER_UNSPECIFIED"
    SCALE_TIER_UNSPECIFIED

    Unspecified Scale Tier.

    "BASIC"
    BASIC

    A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

    "STANDARD_1"
    STANDARD_1

    Many workers and a few parameter servers.

    "PREMIUM_1"
    PREMIUM_1

    A large number of workers with many parameter servers.

    "BASIC_GPU"
    BASIC_GPU

    A single worker instance with a K80 GPU.

    "BASIC_TPU"
    BASIC_TPU

    A single worker instance with a Cloud TPU.

    "CUSTOM"
    CUSTOM

    The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node, and this is the only required setting.
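
    In practice the tier and masterType travel together: since only CUSTOM is currently accepted, a template typically pins the tier and then names a machine type explicitly. A minimal Python sketch follows; the machine type and notebook path are placeholders.

    import pulumi_google_native.notebooks.v1 as notebooks

    # CUSTOM requires masterType; it is the only additional required setting.
    template = notebooks.ExecutionTemplateArgs(
        scale_tier=notebooks.ExecutionTemplateScaleTier.CUSTOM,
        master_type="n1-highmem-8",  # placeholder; any supported machine type
        input_notebook_file="gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb",
    )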

    ScheduleState, ScheduleStateArgs

    StateUnspecified
    STATE_UNSPECIFIED

    Unspecified state.

    Enabled
    ENABLED

    The job is executing normally.

    Paused
    PAUSED

    The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.

    Disabled
    DISABLED

    The job is disabled by the system due to an error. The user cannot directly set a job to be disabled.

    UpdateFailed
    UPDATE_FAILED

    The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.

    Initializing
    INITIALIZING

    The schedule resource is being created.

    Deleting
    DELETING

    The schedule resource is being deleted.

    ScheduleStateStateUnspecified
    STATE_UNSPECIFIED

    Unspecified state.

    ScheduleStateEnabled
    ENABLED

    The job is executing normally.

    ScheduleStatePaused
    PAUSED

    The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.

    ScheduleStateDisabled
    DISABLED

    The job is disabled by the system due to an error. The user cannot directly set a job to be disabled.

    ScheduleStateUpdateFailed
    UPDATE_FAILED

    The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.

    ScheduleStateInitializing
    INITIALIZING

    The schedule resource is being created.

    ScheduleStateDeleting
    DELETING

    The schedule resource is being deleted.

    StateUnspecified
    STATE_UNSPECIFIED

    Unspecified state.

    Enabled
    ENABLED

    The job is executing normally.

    Paused
    PAUSED

    The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.

    Disabled
    DISABLED

    The job is disabled by the system due to an error. The user cannot directly set a job to be disabled.

    UpdateFailed
    UPDATE_FAILED

    The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.

    Initializing
    INITIALIZING

    The schedule resource is being created.

    Deleting
    DELETING

    The schedule resource is being deleted.

    StateUnspecified
    STATE_UNSPECIFIED

    Unspecified state.

    Enabled
    ENABLED

    The job is executing normally.

    Paused
    PAUSED

    The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.

    Disabled
    DISABLED

    The job is disabled by the system due to an error. The user cannot directly set a job to be disabled.

    UpdateFailed
    UPDATE_FAILED

    The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.

    Initializing
    INITIALIZING

    The schedule resource is being created.

    Deleting
    DELETING

    The schedule resource is being deleted.

    STATE_UNSPECIFIED
    STATE_UNSPECIFIED

    Unspecified state.

    ENABLED
    ENABLED

    The job is executing normally.

    PAUSED
    PAUSED

    The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.

    DISABLED
    DISABLED

    The job is disabled by the system due to an error. The user cannot directly set a job to be disabled.

    UPDATE_FAILED
    UPDATE_FAILED

    The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.

    INITIALIZING
    INITIALIZING

    The schedule resource is being created.

    DELETING
    DELETING

    The schedule resource is being deleted.

    "STATE_UNSPECIFIED"
    STATE_UNSPECIFIED

    Unspecified state.

    "ENABLED"
    ENABLED

    The job is executing normally.

    "PAUSED"
    PAUSED

    The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.

    "DISABLED"
    DISABLED

    The job is disabled by the system due to an error. The user cannot directly set a job to be disabled.

    "UPDATE_FAILED"
    UPDATE_FAILED

    The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.

    "INITIALIZING"
    INITIALIZING

    The schedule resource is being created.

    "DELETING"
    DELETING

    The schedule resource is being deleted.
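
    The state input accepts these enum values directly rather than raw strings. As a hedged sketch (whether the service honors a given state at creation time is determined by the API; the names below are placeholders, and the execution template is omitted for brevity):

    import pulumi_google_native.notebooks.v1 as notebooks

    # Reference the ScheduleState enum rather than a raw string.
    paused = notebooks.Schedule(
        "paused-schedule",
        schedule_id="paused-schedule",
        location="us-central1",
        cron_schedule="0 0 * * WED",
        state=notebooks.ScheduleState.PAUSED,
        # execution_template omitted for brevity
    )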

    SchedulerAcceleratorConfig, SchedulerAcceleratorConfigArgs

    CoreCount string

    Count of cores of this accelerator.

    Type Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType

    Type of this accelerator.

    CoreCount string

    Count of cores of this accelerator.

    Type SchedulerAcceleratorConfigType

    Type of this accelerator.

    coreCount String

    Count of cores of this accelerator.

    type SchedulerAcceleratorConfigType

    Type of this accelerator.

    coreCount string

    Count of cores of this accelerator.

    type SchedulerAcceleratorConfigType

    Type of this accelerator.

    core_count str

    Count of cores of this accelerator.

    type SchedulerAcceleratorConfigType

    Type of this accelerator.

    SchedulerAcceleratorConfigResponse, SchedulerAcceleratorConfigResponseArgs

    CoreCount string

    Count of cores of this accelerator.

    Type string

    Type of this accelerator.

    CoreCount string

    Count of cores of this accelerator.

    Type string

    Type of this accelerator.

    coreCount String

    Count of cores of this accelerator.

    type String

    Type of this accelerator.

    coreCount string

    Count of cores of this accelerator.

    type string

    Type of this accelerator.

    core_count str

    Count of cores of this accelerator.

    type str

    Type of this accelerator.

    coreCount String

    Count of cores of this accelerator.

    type String

    Type of this accelerator.

    SchedulerAcceleratorConfigType, SchedulerAcceleratorConfigTypeArgs

    SchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

    Unspecified accelerator type. Defaults to no GPU.

    NvidiaTeslaK80
    NVIDIA_TESLA_K80

    Nvidia Tesla K80 GPU.

    NvidiaTeslaP100
    NVIDIA_TESLA_P100

    Nvidia Tesla P100 GPU.

    NvidiaTeslaV100
    NVIDIA_TESLA_V100

    Nvidia Tesla V100 GPU.

    NvidiaTeslaP4
    NVIDIA_TESLA_P4

    Nvidia Tesla P4 GPU.

    NvidiaTeslaT4
    NVIDIA_TESLA_T4

    Nvidia Tesla T4 GPU.

    NvidiaTeslaA100
    NVIDIA_TESLA_A100

    Nvidia Tesla A100 GPU.

    TpuV2
    TPU_V2

    TPU v2.

    TpuV3
    TPU_V3

    TPU v3.

    SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

    Unspecified accelerator type. Defaults to no GPU.

    SchedulerAcceleratorConfigTypeNvidiaTeslaK80
    NVIDIA_TESLA_K80

    Nvidia Tesla K80 GPU.

    SchedulerAcceleratorConfigTypeNvidiaTeslaP100
    NVIDIA_TESLA_P100

    Nvidia Tesla P100 GPU.

    SchedulerAcceleratorConfigTypeNvidiaTeslaV100
    NVIDIA_TESLA_V100

    Nvidia Tesla V100 GPU.

    SchedulerAcceleratorConfigTypeNvidiaTeslaP4
    NVIDIA_TESLA_P4

    Nvidia Tesla P4 GPU.

    SchedulerAcceleratorConfigTypeNvidiaTeslaT4
    NVIDIA_TESLA_T4

    Nvidia Tesla T4 GPU.

    SchedulerAcceleratorConfigTypeNvidiaTeslaA100
    NVIDIA_TESLA_A100

    Nvidia Tesla A100 GPU.

    SchedulerAcceleratorConfigTypeTpuV2
    TPU_V2

    TPU v2.

    SchedulerAcceleratorConfigTypeTpuV3
    TPU_V3

    TPU v3.

    SchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

    Unspecified accelerator type. Defaults to no GPU.

    NvidiaTeslaK80
    NVIDIA_TESLA_K80

    Nvidia Tesla K80 GPU.

    NvidiaTeslaP100
    NVIDIA_TESLA_P100

    Nvidia Tesla P100 GPU.

    NvidiaTeslaV100
    NVIDIA_TESLA_V100

    Nvidia Tesla V100 GPU.

    NvidiaTeslaP4
    NVIDIA_TESLA_P4

    Nvidia Tesla P4 GPU.

    NvidiaTeslaT4
    NVIDIA_TESLA_T4

    Nvidia Tesla T4 GPU.

    NvidiaTeslaA100
    NVIDIA_TESLA_A100

    Nvidia Tesla A100 GPU.

    TpuV2
    TPU_V2

    TPU v2.

    TpuV3
    TPU_V3

    TPU v3.

    SchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

    Unspecified accelerator type. Defaults to no GPU.

    NvidiaTeslaK80
    NVIDIA_TESLA_K80

    Nvidia Tesla K80 GPU.

    NvidiaTeslaP100
    NVIDIA_TESLA_P100

    Nvidia Tesla P100 GPU.

    NvidiaTeslaV100
    NVIDIA_TESLA_V100

    Nvidia Tesla V100 GPU.

    NvidiaTeslaP4
    NVIDIA_TESLA_P4

    Nvidia Tesla P4 GPU.

    NvidiaTeslaT4
    NVIDIA_TESLA_T4

    Nvidia Tesla T4 GPU.

    NvidiaTeslaA100
    NVIDIA_TESLA_A100

    Nvidia Tesla A100 GPU.

    TpuV2
    TPU_V2

    TPU v2.

    TpuV3
    TPU_V3

    TPU v3.

    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

    Unspecified accelerator type. Defaults to no GPU.

    NVIDIA_TESLA_K80
    NVIDIA_TESLA_K80

    Nvidia Tesla K80 GPU.

    NVIDIA_TESLA_P100
    NVIDIA_TESLA_P100

    Nvidia Tesla P100 GPU.

    NVIDIA_TESLA_V100
    NVIDIA_TESLA_V100

    Nvidia Tesla V100 GPU.

    NVIDIA_TESLA_P4
    NVIDIA_TESLA_P4

    Nvidia Tesla P4 GPU.

    NVIDIA_TESLA_T4
    NVIDIA_TESLA_T4

    Nvidia Tesla T4 GPU.

    NVIDIA_TESLA_A100
    NVIDIA_TESLA_A100

    Nvidia Tesla A100 GPU.

    TPU_V2
    TPU_V2

    TPU v2.

    TPU_V3
    TPU_V3

    TPU v3.

    "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED"
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED

    Unspecified accelerator type. Defaults to no GPU.

    "NVIDIA_TESLA_K80"
    NVIDIA_TESLA_K80

    Nvidia Tesla K80 GPU.

    "NVIDIA_TESLA_P100"
    NVIDIA_TESLA_P100

    Nvidia Tesla P100 GPU.

    "NVIDIA_TESLA_V100"
    NVIDIA_TESLA_V100

    Nvidia Tesla V100 GPU.

    "NVIDIA_TESLA_P4"
    NVIDIA_TESLA_P4

    Nvidia Tesla P4 GPU.

    "NVIDIA_TESLA_T4"
    NVIDIA_TESLA_T4

    Nvidia Tesla T4 GPU.

    "NVIDIA_TESLA_A100"
    NVIDIA_TESLA_A100

    Nvidia Tesla A100 GPU.

    "TPU_V2"
    TPU_V2

    TPU v2.

    "TPU_V3"
    TPU_V3

    TPU v3.
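
    A short sketch of pairing the accelerator-type enum with a core count; note that coreCount is typed as a string in this API, not an integer. The single T4 here is an arbitrary placeholder choice:

    import pulumi_google_native.notebooks.v1 as notebooks

    # Attach one NVIDIA T4 GPU to the execution hardware.
    accelerator = notebooks.SchedulerAcceleratorConfigArgs(
        type=notebooks.SchedulerAcceleratorConfigType.NVIDIA_TESLA_T4,
        core_count="1",  # string, not int, in this API
    )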

    VertexAIParameters, VertexAIParametersArgs

    Env Dictionary<string, string>

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    Network string

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    Env map[string]string

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    Network string

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env Map<String,String>

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network String

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env {[key: string]: string}

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network string

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env Mapping[str, str]

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network str

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env Map<String>

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network String

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    VertexAIParametersResponse, VertexAIParametersResponseArgs

    Env Dictionary<string, string>

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    Network string

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    Env map[string]string

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    Network string

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env Map<String,String>

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network String

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env {[key: string]: string}

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network string

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env Mapping[str, str]

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network str

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    env Map<String>

    Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/

    network String

    The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (such as 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
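
    A brief sketch of the Vertex AI parameters shape in Python; the bucket and network values mirror the examples above and are placeholders:

    import pulumi_google_native.notebooks.v1 as notebooks

    # Environment variables plus optional VPC peering for Vertex AI executions.
    vertex_params = notebooks.VertexAIParametersArgs(
        env={"GCP_BUCKET": "gs://my-bucket/samples/"},
        network="projects/12345/global/networks/myVPC",
    )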

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0