
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.notebooks/v1.Execution


    Creates a new Execution in a given project and location. Auto-naming is currently not supported for this resource.
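
    As a quick orientation, the following TypeScript sketch supplies only a minimal set of inputs; because auto-naming is not supported, a user-defined executionId must be set explicitly. The bucket, notebook paths, and region are hypothetical placeholders, not values taken from this page.

    import * as google_native from "@pulumi/google-native";

    // Auto-naming is not supported for this resource, so executionId is set explicitly.
    const nightlyRun = new google_native.notebooks.v1.Execution("nightly-run", {
        executionId: "nightly-run-001",
        executionTemplate: {
            // Hypothetical GCS paths; replace with an existing bucket and notebook.
            inputNotebookFile: "gs://my-bucket/notebooks/train.ipynb",
            outputNotebookFolder: "gs://my-bucket/notebook-results",
        },
        location: "us-central1",
    });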

    Create Execution Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new Execution(name: string, args: ExecutionArgs, opts?: CustomResourceOptions);
    @overload
    def Execution(resource_name: str,
                  args: ExecutionArgs,
                  opts: Optional[ResourceOptions] = None)
    
    @overload
    def Execution(resource_name: str,
                  opts: Optional[ResourceOptions] = None,
                  execution_id: Optional[str] = None,
                  description: Optional[str] = None,
                  execution_template: Optional[ExecutionTemplateArgs] = None,
                  location: Optional[str] = None,
                  output_notebook_file: Optional[str] = None,
                  project: Optional[str] = None)
    func NewExecution(ctx *Context, name string, args ExecutionArgs, opts ...ResourceOption) (*Execution, error)
    public Execution(string name, ExecutionArgs args, CustomResourceOptions? opts = null)
    public Execution(String name, ExecutionArgs args)
    public Execution(String name, ExecutionArgs args, CustomResourceOptions options)
    
    type: google-native:notebooks/v1:Execution
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args ExecutionArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args ExecutionArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args ExecutionArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args ExecutionArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args ExecutionArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
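
    For example, the options bag can be used to adjust how Pulumi manages the resource, independently of its input properties. The sketch below (TypeScript) protects the execution from accidental deletion and ignores drift on one field; the paths are hypothetical placeholders.

    import * as google_native from "@pulumi/google-native";

    const run = new google_native.notebooks.v1.Execution("run", {
        executionId: "run-001",
        executionTemplate: {
            inputNotebookFile: "gs://my-bucket/notebooks/report.ipynb",   // hypothetical path
            outputNotebookFolder: "gs://my-bucket/notebook-results",      // hypothetical path
        },
        location: "us-central1",
    }, {
        protect: true,                    // refuse to delete this resource on destroy
        ignoreChanges: ["description"],   // do not diff this input on updates
    });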

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var exampleexecutionResourceResourceFromNotebooksv1 = new GoogleNative.Notebooks.V1.Execution("exampleexecutionResourceResourceFromNotebooksv1", new()
    {
        ExecutionId = "string",
        Description = "string",
        ExecutionTemplate = new GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateArgs
        {
            Labels = 
            {
                { "string", "string" },
            },
            OutputNotebookFolder = "string",
            InputNotebookFile = "string",
            JobType = GoogleNative.Notebooks.V1.ExecutionTemplateJobType.JobTypeUnspecified,
            KernelSpec = "string",
            AcceleratorConfig = new GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigArgs
            {
                CoreCount = "string",
                Type = GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
            },
            MasterType = "string",
            DataprocParameters = new GoogleNative.Notebooks.V1.Inputs.DataprocParametersArgs
            {
                Cluster = "string",
            },
            Parameters = "string",
            ParamsYamlFile = "string",
            ContainerImageUri = "string",
            ServiceAccount = "string",
            Tensorboard = "string",
            VertexAiParameters = new GoogleNative.Notebooks.V1.Inputs.VertexAIParametersArgs
            {
                Env = 
                {
                    { "string", "string" },
                },
                Network = "string",
            },
        },
        Location = "string",
        OutputNotebookFile = "string",
        Project = "string",
    });
    
    example, err := notebooks.NewExecution(ctx, "exampleexecutionResourceResourceFromNotebooksv1", &notebooks.ExecutionArgs{
    	ExecutionId: pulumi.String("string"),
    	Description: pulumi.String("string"),
    	ExecutionTemplate: &notebooks.ExecutionTemplateArgs{
    		Labels: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    		OutputNotebookFolder: pulumi.String("string"),
    		InputNotebookFile:    pulumi.String("string"),
    		JobType:              notebooks.ExecutionTemplateJobTypeJobTypeUnspecified,
    		KernelSpec:           pulumi.String("string"),
    		AcceleratorConfig: &notebooks.SchedulerAcceleratorConfigArgs{
    			CoreCount: pulumi.String("string"),
    			Type:      notebooks.SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified,
    		},
    		MasterType: pulumi.String("string"),
    		DataprocParameters: &notebooks.DataprocParametersArgs{
    			Cluster: pulumi.String("string"),
    		},
    		Parameters:        pulumi.String("string"),
    		ParamsYamlFile:    pulumi.String("string"),
    		ContainerImageUri: pulumi.String("string"),
    		ServiceAccount:    pulumi.String("string"),
    		Tensorboard:       pulumi.String("string"),
    		VertexAiParameters: &notebooks.VertexAIParametersArgs{
    			Env: pulumi.StringMap{
    				"string": pulumi.String("string"),
    			},
    			Network: pulumi.String("string"),
    		},
    	},
    	Location:           pulumi.String("string"),
    	OutputNotebookFile: pulumi.String("string"),
    	Project:            pulumi.String("string"),
    })
    
    var exampleexecutionResourceResourceFromNotebooksv1 = new Execution("exampleexecutionResourceResourceFromNotebooksv1", ExecutionArgs.builder()
        .executionId("string")
        .description("string")
        .executionTemplate(ExecutionTemplateArgs.builder()
            .labels(Map.of("string", "string"))
            .outputNotebookFolder("string")
            .inputNotebookFile("string")
            .jobType("JOB_TYPE_UNSPECIFIED")
            .kernelSpec("string")
            .acceleratorConfig(SchedulerAcceleratorConfigArgs.builder()
                .coreCount("string")
                .type("SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED")
                .build())
            .masterType("string")
            .dataprocParameters(DataprocParametersArgs.builder()
                .cluster("string")
                .build())
            .parameters("string")
            .paramsYamlFile("string")
            .containerImageUri("string")
            .serviceAccount("string")
            .tensorboard("string")
            .vertexAiParameters(VertexAIParametersArgs.builder()
                .env(Map.of("string", "string"))
                .network("string")
                .build())
            .build())
        .location("string")
        .outputNotebookFile("string")
        .project("string")
        .build());
    
    exampleexecution_resource_resource_from_notebooksv1 = google_native.notebooks.v1.Execution("exampleexecutionResourceResourceFromNotebooksv1",
        execution_id="string",
        description="string",
        execution_template=google_native.notebooks.v1.ExecutionTemplateArgs(
            labels={
                "string": "string",
            },
            output_notebook_folder="string",
            input_notebook_file="string",
            job_type=google_native.notebooks.v1.ExecutionTemplateJobType.JOB_TYPE_UNSPECIFIED,
            kernel_spec="string",
            accelerator_config=google_native.notebooks.v1.SchedulerAcceleratorConfigArgs(
                core_count="string",
                type=google_native.notebooks.v1.SchedulerAcceleratorConfigType.SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED,
            ),
            master_type="string",
            dataproc_parameters=google_native.notebooks.v1.DataprocParametersArgs(
                cluster="string",
            ),
            parameters="string",
            params_yaml_file="string",
            container_image_uri="string",
            service_account="string",
            tensorboard="string",
            vertex_ai_parameters=google_native.notebooks.v1.VertexAIParametersArgs(
                env={
                    "string": "string",
                },
                network="string",
            ),
        ),
        location="string",
        output_notebook_file="string",
        project="string")
    
    const exampleexecutionResourceResourceFromNotebooksv1 = new google_native.notebooks.v1.Execution("exampleexecutionResourceResourceFromNotebooksv1", {
        executionId: "string",
        description: "string",
        executionTemplate: {
            labels: {
                string: "string",
            },
            outputNotebookFolder: "string",
            inputNotebookFile: "string",
            jobType: google_native.notebooks.v1.ExecutionTemplateJobType.JobTypeUnspecified,
            kernelSpec: "string",
            acceleratorConfig: {
                coreCount: "string",
                type: google_native.notebooks.v1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
            },
            masterType: "string",
            dataprocParameters: {
                cluster: "string",
            },
            parameters: "string",
            paramsYamlFile: "string",
            containerImageUri: "string",
            serviceAccount: "string",
            tensorboard: "string",
            vertexAiParameters: {
                env: {
                    string: "string",
                },
                network: "string",
            },
        },
        location: "string",
        outputNotebookFile: "string",
        project: "string",
    });
    
    type: google-native:notebooks/v1:Execution
    properties:
        description: string
        executionId: string
        executionTemplate:
            acceleratorConfig:
                coreCount: string
                type: SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
            containerImageUri: string
            dataprocParameters:
                cluster: string
            inputNotebookFile: string
            jobType: JOB_TYPE_UNSPECIFIED
            kernelSpec: string
            labels:
                string: string
            masterType: string
            outputNotebookFolder: string
            parameters: string
            paramsYamlFile: string
            serviceAccount: string
            tensorboard: string
            vertexAiParameters:
                env:
                    string: string
                network: string
        location: string
        outputNotebookFile: string
        project: string
    

    Execution Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The Execution resource accepts the following input properties:

    ExecutionId string
    Required. User-defined unique ID of this execution.
    Description string
    A brief description of this execution.
    ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplate
    execute metadata including name, hardware spec, region, labels, etc.
    Location string
    OutputNotebookFile string
    Output notebook file generated by this execution
    Project string
    ExecutionId string
    Required. User-defined unique ID of this execution.
    Description string
    A brief description of this execution.
    ExecutionTemplate ExecutionTemplateArgs
    execute metadata including name, hardware spec, region, labels, etc.
    Location string
    OutputNotebookFile string
    Output notebook file generated by this execution
    Project string
    executionId String
    Required. User-defined unique ID of this execution.
    description String
    A brief description of this execution.
    executionTemplate ExecutionTemplate
    execute metadata including name, hardware spec, region, labels, etc.
    location String
    outputNotebookFile String
    Output notebook file generated by this execution
    project String
    executionId string
    Required. User-defined unique ID of this execution.
    description string
    A brief description of this execution.
    executionTemplate ExecutionTemplate
    execute metadata including name, hardware spec, region, labels, etc.
    location string
    outputNotebookFile string
    Output notebook file generated by this execution
    project string
    execution_id str
    Required. User-defined unique ID of this execution.
    description str
    A brief description of this execution.
    execution_template ExecutionTemplateArgs
    execute metadata including name, hardware spec, region, labels, etc.
    location str
    output_notebook_file str
    Output notebook file generated by this execution
    project str
    executionId String
    Required. User-defined unique ID of this execution.
    description String
    A brief description of this execution.
    executionTemplate Property Map
    execute metadata including name, hardware spec, region, labels, etc.
    location String
    outputNotebookFile String
    Output notebook file generated by this execution
    project String

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Execution resource produces the following output properties:

    CreateTime string
    Time the Execution was instantiated.
    DisplayName string
    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
    Id string
    The provider-assigned unique ID for this managed resource.
    JobUri string
    The URI of the external job used to execute the notebook.
    Name string
    The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
    State string
    State of the underlying AI Platform job.
    UpdateTime string
    Time the Execution was last updated.
    CreateTime string
    Time the Execution was instantiated.
    DisplayName string
    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
    Id string
    The provider-assigned unique ID for this managed resource.
    JobUri string
    The URI of the external job used to execute the notebook.
    Name string
    The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
    State string
    State of the underlying AI Platform job.
    UpdateTime string
    Time the Execution was last updated.
    createTime String
    Time the Execution was instantiated.
    displayName String
    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
    id String
    The provider-assigned unique ID for this managed resource.
    jobUri String
    The URI of the external job used to execute the notebook.
    name String
    The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
    state String
    State of the underlying AI Platform job.
    updateTime String
    Time the Execution was last updated.
    createTime string
    Time the Execution was instantiated.
    displayName string
    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
    id string
    The provider-assigned unique ID for this managed resource.
    jobUri string
    The URI of the external job used to execute the notebook.
    name string
    The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
    state string
    State of the underlying AI Platform job.
    updateTime string
    Time the Execution was last updated.
    create_time str
    Time the Execution was instantiated.
    display_name str
    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
    id str
    The provider-assigned unique ID for this managed resource.
    job_uri str
    The URI of the external job used to execute the notebook.
    name str
    The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
    state str
    State of the underlying AI Platform job.
    update_time str
    Time the Execution was last updated.
    createTime String
    Time the Execution was instantiated.
    displayName String
    Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
    id String
    The provider-assigned unique ID for this managed resource.
    jobUri String
    The URI of the external job used to execute the notebook.
    name String
    The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
    state String
    State of the underlying AI Platform job.
    updateTime String
    Time the Execution was last updated.
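
    Once the execution is created, the provider-populated outputs above can be exported from the stack or fed into other resources. A short TypeScript sketch, reusing the nightlyRun resource from the sketch near the top of this page:

    // All of these values are resolved by the provider after creation.
    export const executionName = nightlyRun.name;    // projects/{project_id}/locations/{location}/executions/{execution_id}
    export const executionState = nightlyRun.state;  // state of the underlying AI Platform job
    export const jobUri = nightlyRun.jobUri;         // URI of the external job used to execute the notebook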

    Supporting Types

    DataprocParameters, DataprocParametersArgs

    Cluster string
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    Cluster string
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster String
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster string
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster str
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster String
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
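
    The cluster value must be the full resource path in the format shown above. A small TypeScript sketch that assembles it from stack configuration; the config keys and example values are hypothetical:

    import * as pulumi from "@pulumi/pulumi";

    const config = new pulumi.Config();
    const project = config.require("project");               // e.g. "my-project"
    const region = config.get("region") ?? "us-central1";
    const clusterName = config.require("dataprocCluster");   // e.g. "my-cluster"

    // Matches: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    const dataprocParameters = {
        cluster: `projects/${project}/regions/${region}/clusters/${clusterName}`,
    };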

    DataprocParametersResponse, DataprocParametersResponseArgs

    Cluster string
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    Cluster string
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster String
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster string
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster str
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    cluster String
    URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

    ExecutionTemplate, ExecutionTemplateArgs

    ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig
    Configuration (count and accelerator type) for hardware running notebook execution.
    ContainerImageUri string
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters
    Parameters used in Dataproc JobType executions.
    InputNotebookFile string
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType
    The type of Job to be used on this execution.
    KernelSpec string
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    Labels Dictionary<string, string>
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    MasterType string
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    OutputNotebookFolder string
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    Parameters string
    Parameters used within the 'input_notebook_file' notebook.
    ParamsYamlFile string
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    ServiceAccount string
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    Tensorboard string
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters
    Parameters used in Vertex AI JobType executions.
    ScaleTier ExecutionTemplateScaleTier
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    AcceleratorConfig SchedulerAcceleratorConfig
    Configuration (count and accelerator type) for hardware running notebook execution.
    ContainerImageUri string
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    DataprocParameters DataprocParameters
    Parameters used in Dataproc JobType executions.
    InputNotebookFile string
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    JobType ExecutionTemplateJobType
    The type of Job to be used on this execution.
    KernelSpec string
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    Labels map[string]string
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    MasterType string
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    OutputNotebookFolder string
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    Parameters string
    Parameters used within the 'input_notebook_file' notebook.
    ParamsYamlFile string
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    ServiceAccount string
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    Tensorboard string
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    VertexAiParameters VertexAIParameters
    Parameters used in Vertex AI JobType executions.
    scaleTier ExecutionTemplateScaleTier
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    acceleratorConfig SchedulerAcceleratorConfig
    Configuration (count and accelerator type) for hardware running notebook execution.
    containerImageUri String
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataprocParameters DataprocParameters
    Parameters used in Dataproc JobType executions.
    inputNotebookFile String
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    jobType ExecutionTemplateJobType
    The type of Job to be used on this execution.
    kernelSpec String
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels Map<String,String>
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    masterType String
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    outputNotebookFolder String
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters String
    Parameters used within the 'input_notebook_file' notebook.
    paramsYamlFile String
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    serviceAccount String
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard String
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertexAiParameters VertexAIParameters
    Parameters used in Vertex AI JobType executions.
    scaleTier ExecutionTemplateScaleTier
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    acceleratorConfig SchedulerAcceleratorConfig
    Configuration (count and accelerator type) for hardware running notebook execution.
    containerImageUri string
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataprocParameters DataprocParameters
    Parameters used in Dataproc JobType executions.
    inputNotebookFile string
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    jobType ExecutionTemplateJobType
    The type of Job to be used on this execution.
    kernelSpec string
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels {[key: string]: string}
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    masterType string
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    outputNotebookFolder string
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters string
    Parameters used within the 'input_notebook_file' notebook.
    paramsYamlFile string
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    serviceAccount string
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard string
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertexAiParameters VertexAIParameters
    Parameters used in Vertex AI JobType executions.
    scale_tier ExecutionTemplateScaleTier
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    accelerator_config SchedulerAcceleratorConfig
    Configuration (count and accelerator type) for hardware running notebook execution.
    container_image_uri str
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataproc_parameters DataprocParameters
    Parameters used in Dataproc JobType executions.
    input_notebook_file str
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    job_type ExecutionTemplateJobType
    The type of Job to be used on this execution.
    kernel_spec str
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels Mapping[str, str]
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    master_type str
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    output_notebook_folder str
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters str
    Parameters used within the 'input_notebook_file' notebook.
    params_yaml_file str
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    service_account str
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard str
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertex_ai_parameters VertexAIParameters
    Parameters used in Vertex AI JobType executions.
    scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM"
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    acceleratorConfig Property Map
    Configuration (count and accelerator type) for hardware running notebook execution.
    containerImageUri String
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataprocParameters Property Map
    Parameters used in Dataproc JobType executions.
    inputNotebookFile String
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC"
    The type of Job to be used on this execution.
    kernelSpec String
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels Map<String>
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    masterType String
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    outputNotebookFolder String
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters String
    Parameters used within the 'input_notebook_file' notebook.
    paramsYamlFile String
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    serviceAccount String
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard String
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertexAiParameters Property Map
    Parameters used in Vertex AI JobType executions.
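
    Putting several of these fields together, a GPU-backed execution might look like the TypeScript sketch below. The machine type, accelerator, image, and service account are illustrative placeholders; masterType is honored when scaleTier is CUSTOM, as noted above, and the NvidiaTeslaT4 enum member name is an assumption that follows the naming pattern of the members shown in the constructor example.

    import * as google_native from "@pulumi/google-native";

    const gpuRun = new google_native.notebooks.v1.Execution("gpu-run", {
        executionId: "gpu-run-001",
        executionTemplate: {
            // masterType takes effect when scaleTier is CUSTOM (see the field notes above).
            scaleTier: google_native.notebooks.v1.ExecutionTemplateScaleTier.Custom,
            masterType: "n1-standard-8",
            acceleratorConfig: {
                // Assumed member name for NVIDIA_TESLA_T4, following the enum naming pattern above.
                type: google_native.notebooks.v1.SchedulerAcceleratorConfigType.NvidiaTeslaT4,
                coreCount: "1",
            },
            inputNotebookFile: "gs://my-bucket/notebooks/train.ipynb",            // hypothetical path
            outputNotebookFolder: "gs://my-bucket/notebook-results",              // hypothetical path
            containerImageUri: "gcr.io/deeplearning-platform-release/base-cu100",
            serviceAccount: "notebook-runner@my-project.iam.gserviceaccount.com", // hypothetical account
        },
        location: "us-central1",
    });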

    ExecutionTemplateJobType, ExecutionTemplateJobTypeArgs

    JobTypeUnspecified
    JOB_TYPE_UNSPECIFIED
    No type specified.
    VertexAi
    VERTEX_AI
    Custom Job in aiplatform.googleapis.com. Default value for an execution.
    Dataproc
    DATAPROC
    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
    ExecutionTemplateJobTypeJobTypeUnspecified
    JOB_TYPE_UNSPECIFIED
    No type specified.
    ExecutionTemplateJobTypeVertexAi
    VERTEX_AI
    Custom Job in aiplatform.googleapis.com. Default value for an execution.
    ExecutionTemplateJobTypeDataproc
    DATAPROC
    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
    JobTypeUnspecified
    JOB_TYPE_UNSPECIFIED
    No type specified.
    VertexAi
    VERTEX_AI
    Custom Job in aiplatform.googleapis.com. Default value for an execution.
    Dataproc
    DATAPROC
    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
    JobTypeUnspecified
    JOB_TYPE_UNSPECIFIED
    No type specified.
    VertexAi
    VERTEX_AI
    Custom Job in aiplatform.googleapis.com. Default value for an execution.
    Dataproc
    DATAPROC
    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
    JOB_TYPE_UNSPECIFIED
    JOB_TYPE_UNSPECIFIED
    No type specified.
    VERTEX_AI
    VERTEX_AI
    Custom Job in aiplatform.googleapis.com. Default value for an execution.
    DATAPROC
    DATAPROC
    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
    "JOB_TYPE_UNSPECIFIED"
    JOB_TYPE_UNSPECIFIED
    No type specified.
    "VERTEX_AI"
    VERTEX_AI
    Custom Job in aiplatform.googleapis.com. Default value for an execution.
    "DATAPROC"
    DATAPROC
    Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
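
    In practice, the job type is chosen together with its matching parameter block. A TypeScript sketch of a Dataproc-backed execution; the project, cluster, and notebook paths are hypothetical placeholders:

    import * as google_native from "@pulumi/google-native";

    const dataprocRun = new google_native.notebooks.v1.Execution("dataproc-run", {
        executionId: "dataproc-run-001",
        executionTemplate: {
            // DATAPROC executions require dataprocParameters pointing at an existing cluster.
            jobType: google_native.notebooks.v1.ExecutionTemplateJobType.Dataproc,
            dataprocParameters: {
                cluster: "projects/my-project/regions/us-central1/clusters/my-cluster",
            },
            inputNotebookFile: "gs://my-bucket/notebooks/etl.ipynb",
            outputNotebookFolder: "gs://my-bucket/notebook-results",
        },
        location: "us-central1",
    });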

    ExecutionTemplateResponse, ExecutionTemplateResponseArgs

    AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse
    Configuration (count and accelerator type) for hardware running notebook execution.
    ContainerImageUri string
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse
    Parameters used in Dataproc JobType executions.
    InputNotebookFile string
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    JobType string
    The type of Job to be used on this execution.
    KernelSpec string
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    Labels Dictionary<string, string>
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    MasterType string
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    OutputNotebookFolder string
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    Parameters string
    Parameters used within the 'input_notebook_file' notebook.
    ParamsYamlFile string
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    ScaleTier string
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    ServiceAccount string
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    Tensorboard string
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse
    Parameters used in Vertex AI JobType executions.
    AcceleratorConfig SchedulerAcceleratorConfigResponse
    Configuration (count and accelerator type) for hardware running notebook execution.
    ContainerImageUri string
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    DataprocParameters DataprocParametersResponse
    Parameters used in Dataproc JobType executions.
    InputNotebookFile string
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    JobType string
    The type of Job to be used on this execution.
    KernelSpec string
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    Labels map[string]string
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    MasterType string
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    OutputNotebookFolder string
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    Parameters string
    Parameters used within the 'input_notebook_file' notebook.
    ParamsYamlFile string
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    ScaleTier string
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    ServiceAccount string
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    Tensorboard string
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    VertexAiParameters VertexAIParametersResponse
    Parameters used in Vertex AI JobType executions.
    acceleratorConfig SchedulerAcceleratorConfigResponse
    Configuration (count and accelerator type) for hardware running notebook execution.
    containerImageUri String
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataprocParameters DataprocParametersResponse
    Parameters used in Dataproc JobType executions.
    inputNotebookFile String
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    jobType String
    The type of Job to be used on this execution.
    kernelSpec String
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels Map<String,String>
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    masterType String
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    outputNotebookFolder String
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters String
    Parameters used within the 'input_notebook_file' notebook.
    paramsYamlFile String
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    scaleTier String
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.

    serviceAccount String
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard String
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertexAiParameters VertexAIParametersResponse
    Parameters used in Vertex AI JobType executions.
    acceleratorConfig SchedulerAcceleratorConfigResponse
    Configuration (count and accelerator type) for hardware running notebook execution.
    containerImageUri string
    Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataprocParameters DataprocParametersResponse
    Parameters used in Dataproc JobType executions.
    inputNotebookFile string
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    jobType string
    The type of Job to be used on this execution.
    kernelSpec string
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels {[key: string]: string}
    Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
    masterType string
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    outputNotebookFolder string
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters string
    Parameters used within the 'input_notebook_file' notebook.
    paramsYamlFile string
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    scaleTier string
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.

    serviceAccount string
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard string
    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertexAiParameters VertexAIParametersResponse
    Parameters used in Vertex AI JobType executions.
    accelerator_config SchedulerAcceleratorConfigResponse
    Configuration (count and accelerator type) for hardware running notebook execution.
    container_image_uri str
    Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataproc_parameters DataprocParametersResponse
    Parameters used in Dataproc JobType executions.
    input_notebook_file str
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    job_type str
    The type of Job to be used on this execution.
    kernel_spec str
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels Mapping[str, str]
    Labels for execution. If the execution is scheduled, the included field will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and the included field will be 'nbs-immediate'. Use these fields to efficiently index between various types of executions.
    master_type str
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    output_notebook_folder str
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters str
    Parameters used within the 'input_notebook_file' notebook.
    params_yaml_file str
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    scale_tier str
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.

    service_account str
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard str
    The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertex_ai_parameters VertexAIParametersResponse
    Parameters used in Vertex AI JobType executions.
    acceleratorConfig Property Map
    Configuration (count and accelerator type) for hardware running notebook execution.
    containerImageUri String
    Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
    dataprocParameters Property Map
    Parameters used in Dataproc JobType executions.
    inputNotebookFile String
    Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
    jobType String
    The type of Job to be used on this execution.
    kernelSpec String
    Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
    labels Map<String>
    Labels for execution. If the execution is scheduled, the included field will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and the included field will be 'nbs-immediate'. Use these fields to efficiently index between various types of executions.
    masterType String
    Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
    outputNotebookFolder String
    Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
    parameters String
    Parameters used within the 'input_notebook_file' notebook.
    paramsYamlFile String
    Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
    scaleTier String
    Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.

    Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.

    serviceAccount String
    The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
    tensorboard String
    The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    vertexAiParameters Property Map
    Parameters used in Vertex AI JobType executions.
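    The following is a minimal sketch of how the template fields above might be supplied when creating an Execution with the Python SDK. It assumes the pulumi_google_native package and the ExecutionTemplateArgs input type named in this reference; the project, location, bucket paths, and service account are placeholder values.

    import pulumi_google_native.notebooks.v1 as notebooks

    # Minimal sketch; project, location, bucket paths, and service account are placeholders.
    execution = notebooks.Execution("sentiment-execution",
        execution_id="sentiment-run-001",
        location="us-central1",
        project="my-project",
        description="Nightly sentiment notebook run",
        execution_template=notebooks.ExecutionTemplateArgs(
            scale_tier=notebooks.ExecutionTemplateScaleTier.CUSTOM,  # only CUSTOM is supported
            master_type="n1-standard-4",                             # required when scale_tier is CUSTOM
            container_image_uri="gcr.io/deeplearning-platform-release/base-cu100",
            input_notebook_file="gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb",
            output_notebook_folder="gs://notebook_user/scheduled_notebooks",
            params_yaml_file="gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml",
            service_account="notebook-runner@my-project.iam.gserviceaccount.com",
        ))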

    ExecutionTemplateScaleTier, ExecutionTemplateScaleTierArgs

    ScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
    Basic
    BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
    Standard1
    STANDARD_1: Many workers and a few parameter servers.
    Premium1
    PREMIUM_1: A large number of workers with many parameter servers.
    BasicGpu
    BASIC_GPU: A single worker instance with a K80 GPU.
    BasicTpu
    BASIC_TPU: A single worker instance with a Cloud TPU.
    Custom
    CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
    ExecutionTemplateScaleTierScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
    ExecutionTemplateScaleTierBasic
    BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
    ExecutionTemplateScaleTierStandard1
    STANDARD_1: Many workers and a few parameter servers.
    ExecutionTemplateScaleTierPremium1
    PREMIUM_1: A large number of workers with many parameter servers.
    ExecutionTemplateScaleTierBasicGpu
    BASIC_GPU: A single worker instance with a K80 GPU.
    ExecutionTemplateScaleTierBasicTpu
    BASIC_TPU: A single worker instance with a Cloud TPU.
    ExecutionTemplateScaleTierCustom
    CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
    ScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
    Basic
    BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
    Standard1
    STANDARD_1: Many workers and a few parameter servers.
    Premium1
    PREMIUM_1: A large number of workers with many parameter servers.
    BasicGpu
    BASIC_GPU: A single worker instance with a K80 GPU.
    BasicTpu
    BASIC_TPU: A single worker instance with a Cloud TPU.
    Custom
    CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
    ScaleTierUnspecified
    SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
    Basic
    BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
    Standard1
    STANDARD_1: Many workers and a few parameter servers.
    Premium1
    PREMIUM_1: A large number of workers with many parameter servers.
    BasicGpu
    BASIC_GPU: A single worker instance with a K80 GPU.
    BasicTpu
    BASIC_TPU: A single worker instance with a Cloud TPU.
    Custom
    CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
    SCALE_TIER_UNSPECIFIED
    SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
    BASIC
    BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
    STANDARD1
    STANDARD_1: Many workers and a few parameter servers.
    PREMIUM1
    PREMIUM_1: A large number of workers with many parameter servers.
    BASIC_GPU
    BASIC_GPU: A single worker instance with a K80 GPU.
    BASIC_TPU
    BASIC_TPU: A single worker instance with a Cloud TPU.
    CUSTOM
    CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
    "SCALE_TIER_UNSPECIFIED"
    SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
    "BASIC"
    BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
    "STANDARD_1"
    STANDARD_1: Many workers and a few parameter servers.
    "PREMIUM_1"
    PREMIUM_1: A large number of workers with many parameter servers.
    "BASIC_GPU"
    BASIC_GPU: A single worker instance with a K80 GPU.
    "BASIC_TPU"
    BASIC_TPU: A single worker instance with a Cloud TPU.
    "CUSTOM"
    CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
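    Because CUSTOM is the only tier the service still accepts, a template fragment typically pins it explicitly. In the Python SDK the value can usually be given either as the generated enum member or as its plain string form shown above (a hedged sketch; ExecutionTemplateScaleTier is the enum name used in this reference, and the machine type is an illustrative choice):

    import pulumi_google_native.notebooks.v1 as notebooks

    # Equivalent ways to select the custom tier; master_type is the one required companion field.
    tier_from_enum = notebooks.ExecutionTemplateScaleTier.CUSTOM
    tier_from_string = "CUSTOM"

    template_fragment = notebooks.ExecutionTemplateArgs(
        scale_tier=tier_from_enum,
        master_type="n1-standard-8",
    )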

    SchedulerAcceleratorConfig, SchedulerAcceleratorConfigArgs

    CoreCount string
    Count of cores of this accelerator.
    Type Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType
    Type of this accelerator.
    CoreCount string
    Count of cores of this accelerator.
    Type SchedulerAcceleratorConfigType
    Type of this accelerator.
    coreCount String
    Count of cores of this accelerator.
    type SchedulerAcceleratorConfigType
    Type of this accelerator.
    coreCount string
    Count of cores of this accelerator.
    type SchedulerAcceleratorConfigType
    Type of this accelerator.
    core_count str
    Count of cores of this accelerator.
    type SchedulerAcceleratorConfigType
    Type of this accelerator.

    SchedulerAcceleratorConfigResponse, SchedulerAcceleratorConfigResponseArgs

    CoreCount string
    Count of cores of this accelerator.
    Type string
    Type of this accelerator.
    CoreCount string
    Count of cores of this accelerator.
    Type string
    Type of this accelerator.
    coreCount String
    Count of cores of this accelerator.
    type String
    Type of this accelerator.
    coreCount string
    Count of cores of this accelerator.
    type string
    Type of this accelerator.
    core_count str
    Count of cores of this accelerator.
    type str
    Type of this accelerator.
    coreCount String
    Count of cores of this accelerator.
    type String
    Type of this accelerator.

    SchedulerAcceleratorConfigType, SchedulerAcceleratorConfigTypeArgs

    SchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    TpuV2
    TPU_V2: TPU v2.
    TpuV3
    TPU_V3: TPU v3.
    SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
    SchedulerAcceleratorConfigTypeNvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    SchedulerAcceleratorConfigTypeNvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    SchedulerAcceleratorConfigTypeNvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    SchedulerAcceleratorConfigTypeNvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    SchedulerAcceleratorConfigTypeNvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    SchedulerAcceleratorConfigTypeNvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    SchedulerAcceleratorConfigTypeTpuV2
    TPU_V2: TPU v2.
    SchedulerAcceleratorConfigTypeTpuV3
    TPU_V3: TPU v3.
    SchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    TpuV2
    TPU_V2: TPU v2.
    TpuV3
    TPU_V3: TPU v3.
    SchedulerAcceleratorTypeUnspecified
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    TpuV2
    TPU_V2: TPU v2.
    TpuV3
    TPU_V3: TPU v3.
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
    NVIDIA_TESLA_K80
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    NVIDIA_TESLA_P100
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    NVIDIA_TESLA_V100
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    NVIDIA_TESLA_P4
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    NVIDIA_TESLA_T4
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    NVIDIA_TESLA_A100
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    TPU_V2
    TPU_V2: TPU v2.
    TPU_V3
    TPU_V3: TPU v3.
    "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED"
    SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
    "NVIDIA_TESLA_K80"
    NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
    "NVIDIA_TESLA_P100"
    NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
    "NVIDIA_TESLA_V100"
    NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
    "NVIDIA_TESLA_P4"
    NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
    "NVIDIA_TESLA_T4"
    NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
    "NVIDIA_TESLA_A100"
    NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
    "TPU_V2"
    TPU_V2: TPU v2.
    "TPU_V3"
    TPU_V3: TPU v3.
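    A GPU- or TPU-backed execution pairs one of the type values above with a core count on the accelerator config. A hedged sketch, assuming the SchedulerAcceleratorConfigArgs and SchedulerAcceleratorConfigType names used in this reference:

    import pulumi_google_native.notebooks.v1 as notebooks

    # Attach a single NVIDIA T4 to the machine running the notebook.
    accelerator = notebooks.SchedulerAcceleratorConfigArgs(
        type=notebooks.SchedulerAcceleratorConfigType.NVIDIA_TESLA_T4,
        core_count="1",  # note: core_count is a string field in this API
    )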

    VertexAIParameters, VertexAIParametersArgs

    Env Dictionary<string, string>
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    Network string
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    Env map[string]string
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    Network string
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env Map<String,String>
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network String
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env {[key: string]: string}
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network string
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env Mapping[str, str]
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network str
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env Map<String>
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network String
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
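    Putting the two fields together, here is a sketch of Vertex AI parameters with environment variables and a peered network, assuming the VertexAIParametersArgs name above; the project number and network name are the placeholder values from this reference, and private services access must already be configured for the network:

    import pulumi_google_native.notebooks.v1 as notebooks

    vertex_params = notebooks.VertexAIParametersArgs(
        # Up to 100 unique environment variables may be set.
        env={"GCP_BUCKET": "gs://my-bucket/samples/"},
        # Full network name: projects/{project number}/global/networks/{network name}.
        network="projects/12345/global/networks/myVPC",
    )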

    VertexAIParametersResponse, VertexAIParametersResponseArgs

    Env Dictionary<string, string>
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    Network string
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    Env map[string]string
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    Network string
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env Map<String,String>
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network String
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env {[key: string]: string}
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network string
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env Mapping[str, str]
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network str
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
    env Map<String>
    Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
    network String
    The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0