Configure Azure Machine Learning Compute

The azure-native:machinelearningservices:Compute resource, part of the Pulumi Azure Native provider, defines compute resources within Azure Machine Learning workspaces: managed clusters for training, development instances for data science work, and attached infrastructure like AKS. This guide focuses on four capabilities: AmlCompute clusters with autoscaling, ComputeInstance development environments with custom services, scheduled shutdowns for cost control, and AKS cluster attachment.

Compute resources belong to Azure Machine Learning workspaces and may reference virtual network subnets, container images, or existing AKS clusters. The examples are intentionally small. Combine them with your own workspace configuration, networking, and identity management.

Create an AmlCompute cluster with autoscaling

Training workloads need compute clusters that scale from zero to multiple nodes based on job demand, reducing costs during idle periods while providing capacity when needed.

import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";

const compute = new azure_native.machinelearningservices.Compute("compute", {
    computeName: "compute123",
    location: "eastus",
    properties: {
        computeType: "AmlCompute",
        properties: {
            enableNodePublicIp: true,
            isolatedNetwork: false,
            osType: azure_native.machinelearningservices.OsType.Windows,
            remoteLoginPortPublicAccess: azure_native.machinelearningservices.RemoteLoginPortPublicAccess.NotSpecified,
            scaleSettings: {
                maxNodeCount: 1,
                minNodeCount: 0,
                nodeIdleTimeBeforeScaleDown: "PT5M",
            },
            virtualMachineImage: {
                id: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myImageGallery/images/myImageDefinition/versions/0.0.1",
            },
            vmPriority: azure_native.machinelearningservices.VmPriority.Dedicated,
            vmSize: "STANDARD_NC6",
        },
    },
    resourceGroupName: "testrg123",
    workspaceName: "workspaces123",
});
import pulumi
import pulumi_azure_native as azure_native

compute = azure_native.machinelearningservices.Compute("compute",
    compute_name="compute123",
    location="eastus",
    properties={
        "compute_type": "AmlCompute",
        "properties": {
            "enable_node_public_ip": True,
            "isolated_network": False,
            "os_type": azure_native.machinelearningservices.OsType.WINDOWS,
            "remote_login_port_public_access": azure_native.machinelearningservices.RemoteLoginPortPublicAccess.NOT_SPECIFIED,
            "scale_settings": {
                "max_node_count": 1,
                "min_node_count": 0,
                "node_idle_time_before_scale_down": "PT5M",
            },
            "virtual_machine_image": {
                "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myImageGallery/images/myImageDefinition/versions/0.0.1",
            },
            "vm_priority": azure_native.machinelearningservices.VmPriority.DEDICATED,
            "vm_size": "STANDARD_NC6",
        },
    },
    resource_group_name="testrg123",
    workspace_name="workspaces123")
package main

import (
	machinelearningservices "github.com/pulumi/pulumi-azure-native-sdk/machinelearningservices/v3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := machinelearningservices.NewCompute(ctx, "compute", &machinelearningservices.ComputeArgs{
			ComputeName: pulumi.String("compute123"),
			Location:    pulumi.String("eastus"),
			Properties: &machinelearningservices.AmlComputeArgs{
				ComputeType: pulumi.String("AmlCompute"),
				Properties: &machinelearningservices.AmlComputePropertiesArgs{
					EnableNodePublicIp:          pulumi.Bool(true),
					IsolatedNetwork:             pulumi.Bool(false),
					OsType:                      pulumi.String(machinelearningservices.OsTypeWindows),
					RemoteLoginPortPublicAccess: pulumi.String(machinelearningservices.RemoteLoginPortPublicAccessNotSpecified),
					ScaleSettings: &machinelearningservices.ScaleSettingsArgs{
						MaxNodeCount:                pulumi.Int(1),
						MinNodeCount:                pulumi.Int(0),
						NodeIdleTimeBeforeScaleDown: pulumi.String("PT5M"),
					},
					VirtualMachineImage: &machinelearningservices.VirtualMachineImageArgs{
						Id: pulumi.String("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myImageGallery/images/myImageDefinition/versions/0.0.1"),
					},
					VmPriority: pulumi.String(machinelearningservices.VmPriorityDedicated),
					VmSize:     pulumi.String("STANDARD_NC6"),
				},
			},
			ResourceGroupName: pulumi.String("testrg123"),
			WorkspaceName:     pulumi.String("workspaces123"),
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;

return await Deployment.RunAsync(() => 
{
    var compute = new AzureNative.MachineLearningServices.Compute("compute", new()
    {
        ComputeName = "compute123",
        Location = "eastus",
        Properties = new AzureNative.MachineLearningServices.Inputs.AmlComputeArgs
        {
            ComputeType = "AmlCompute",
            Properties = new AzureNative.MachineLearningServices.Inputs.AmlComputePropertiesArgs
            {
                EnableNodePublicIp = true,
                IsolatedNetwork = false,
                OsType = AzureNative.MachineLearningServices.OsType.Windows,
                RemoteLoginPortPublicAccess = AzureNative.MachineLearningServices.RemoteLoginPortPublicAccess.NotSpecified,
                ScaleSettings = new AzureNative.MachineLearningServices.Inputs.ScaleSettingsArgs
                {
                    MaxNodeCount = 1,
                    MinNodeCount = 0,
                    NodeIdleTimeBeforeScaleDown = "PT5M",
                },
                VirtualMachineImage = new AzureNative.MachineLearningServices.Inputs.VirtualMachineImageArgs
                {
                    Id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myImageGallery/images/myImageDefinition/versions/0.0.1",
                },
                VmPriority = AzureNative.MachineLearningServices.VmPriority.Dedicated,
                VmSize = "STANDARD_NC6",
            },
        },
        ResourceGroupName = "testrg123",
        WorkspaceName = "workspaces123",
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.machinelearningservices.Compute;
import com.pulumi.azurenative.machinelearningservices.ComputeArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.AmlComputeArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.AmlComputePropertiesArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ScaleSettingsArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.VirtualMachineImageArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var compute = new Compute("compute", ComputeArgs.builder()
            .computeName("compute123")
            .location("eastus")
            .properties(AmlComputeArgs.builder()
                .computeType("AmlCompute")
                .properties(AmlComputePropertiesArgs.builder()
                    .enableNodePublicIp(true)
                    .isolatedNetwork(false)
                    .osType("Windows")
                    .remoteLoginPortPublicAccess("NotSpecified")
                    .scaleSettings(ScaleSettingsArgs.builder()
                        .maxNodeCount(1)
                        .minNodeCount(0)
                        .nodeIdleTimeBeforeScaleDown("PT5M")
                        .build())
                    .virtualMachineImage(VirtualMachineImageArgs.builder()
                        .id("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myImageGallery/images/myImageDefinition/versions/0.0.1")
                        .build())
                    .vmPriority("Dedicated")
                    .vmSize("STANDARD_NC6")
                    .build())
                .build())
            .resourceGroupName("testrg123")
            .workspaceName("workspaces123")
            .build());

    }
}
resources:
  compute:
    type: azure-native:machinelearningservices:Compute
    properties:
      computeName: compute123
      location: eastus
      properties:
        computeType: AmlCompute
        properties:
          enableNodePublicIp: true
          isolatedNetwork: false
          osType: Windows
          remoteLoginPortPublicAccess: NotSpecified
          scaleSettings:
            maxNodeCount: 1
            minNodeCount: 0
            nodeIdleTimeBeforeScaleDown: PT5M
          virtualMachineImage:
            id: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myImageGallery/images/myImageDefinition/versions/0.0.1
          vmPriority: Dedicated
          vmSize: STANDARD_NC6
      resourceGroupName: testrg123
      workspaceName: workspaces123

The scaleSettings property controls cluster capacity. When minNodeCount is zero, the cluster scales down completely when idle, eliminating compute costs. The nodeIdleTimeBeforeScaleDown property (in ISO 8601 duration format) determines how long nodes wait before shutting down. The vmSize property specifies the VM SKU; GPU-enabled sizes like STANDARD_NC6 support deep learning workloads.
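
For training jobs that tolerate preemption, the same cluster shape can be made cheaper with low-priority VMs and a wider scale range. The TypeScript sketch below reuses the workspace and resource group names from the example above; the cluster name, node counts, idle timeout, and OS choice are illustrative assumptions rather than values from the example.

import * as azure_native from "@pulumi/azure-native";

// Illustrative variant: a Linux cluster on low-priority VMs that scales between 0 and 4 nodes.
const spotCluster = new azure_native.machinelearningservices.Compute("spotCluster", {
    computeName: "spot-cluster",
    location: "eastus",
    properties: {
        computeType: "AmlCompute",
        properties: {
            osType: azure_native.machinelearningservices.OsType.Linux,
            // Low-priority nodes cost less but can be preempted mid-job.
            vmPriority: azure_native.machinelearningservices.VmPriority.LowPriority,
            vmSize: "STANDARD_NC6",
            scaleSettings: {
                minNodeCount: 0, // scale to zero when no jobs are queued
                maxNodeCount: 4, // cap on concurrent nodes
                nodeIdleTimeBeforeScaleDown: "PT10M", // ISO 8601: wait 10 minutes before releasing idle nodes
            },
        },
    },
    resourceGroupName: "testrg123",
    workspaceName: "workspaces123",
});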

Create a ComputeInstance with custom services

Data scientists often need development environments with custom tools like RStudio or Jupyter extensions running alongside their notebooks.

import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";

const compute = new azure_native.machinelearningservices.Compute("compute", {
    computeName: "compute123",
    location: "eastus",
    properties: {
        computeType: "ComputeInstance",
        properties: {
            applicationSharingPolicy: azure_native.machinelearningservices.ApplicationSharingPolicy.Personal,
            computeInstanceAuthorizationType: azure_native.machinelearningservices.ComputeInstanceAuthorizationType.Personal,
            customServices: [{
                docker: {
                    privileged: true,
                },
                endpoints: [{
                    name: "connect",
                    protocol: azure_native.machinelearningservices.Protocol.Http,
                    published: 8787,
                    target: 8787,
                }],
                environmentVariables: {
                    test_variable: {
                        type: azure_native.machinelearningservices.EnvironmentVariableType.Local,
                        value: "test_value",
                    },
                },
                image: {
                    reference: "ghcr.io/azure/rocker-rstudio-ml-verse:latest",
                    type: azure_native.machinelearningservices.ImageType.Docker,
                },
                name: "rstudio",
                volumes: [{
                    readOnly: false,
                    source: "/home/azureuser/cloudfiles",
                    target: "/home/azureuser/cloudfiles",
                    type: azure_native.machinelearningservices.VolumeDefinitionType.Bind,
                }],
            }],
            personalComputeInstanceSettings: {
                assignedUser: {
                    objectId: "00000000-0000-0000-0000-000000000000",
                    tenantId: "00000000-0000-0000-0000-000000000000",
                },
            },
            sshSettings: {
                sshPublicAccess: azure_native.machinelearningservices.SshPublicAccess.Disabled,
            },
            subnet: {
                id: "test-subnet-resource-id",
            },
            vmSize: "STANDARD_NC6",
        },
    },
    resourceGroupName: "testrg123",
    workspaceName: "workspaces123",
});
import pulumi
import pulumi_azure_native as azure_native

compute = azure_native.machinelearningservices.Compute("compute",
    compute_name="compute123",
    location="eastus",
    properties={
        "compute_type": "ComputeInstance",
        "properties": {
            "application_sharing_policy": azure_native.machinelearningservices.ApplicationSharingPolicy.PERSONAL,
            "compute_instance_authorization_type": azure_native.machinelearningservices.ComputeInstanceAuthorizationType.PERSONAL,
            "custom_services": [{
                "docker": {
                    "privileged": True,
                },
                "endpoints": [{
                    "name": "connect",
                    "protocol": azure_native.machinelearningservices.Protocol.HTTP,
                    "published": 8787,
                    "target": 8787,
                }],
                "environment_variables": {
                    "test_variable": {
                        "type": azure_native.machinelearningservices.EnvironmentVariableType.LOCAL,
                        "value": "test_value",
                    },
                },
                "image": {
                    "reference": "ghcr.io/azure/rocker-rstudio-ml-verse:latest",
                    "type": azure_native.machinelearningservices.ImageType.DOCKER,
                },
                "name": "rstudio",
                "volumes": [{
                    "read_only": False,
                    "source": "/home/azureuser/cloudfiles",
                    "target": "/home/azureuser/cloudfiles",
                    "type": azure_native.machinelearningservices.VolumeDefinitionType.BIND,
                }],
            }],
            "personal_compute_instance_settings": {
                "assigned_user": {
                    "object_id": "00000000-0000-0000-0000-000000000000",
                    "tenant_id": "00000000-0000-0000-0000-000000000000",
                },
            },
            "ssh_settings": {
                "ssh_public_access": azure_native.machinelearningservices.SshPublicAccess.DISABLED,
            },
            "subnet": {
                "id": "test-subnet-resource-id",
            },
            "vm_size": "STANDARD_NC6",
        },
    },
    resource_group_name="testrg123",
    workspace_name="workspaces123")
package main

import (
	machinelearningservices "github.com/pulumi/pulumi-azure-native-sdk/machinelearningservices/v3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := machinelearningservices.NewCompute(ctx, "compute", &machinelearningservices.ComputeArgs{
			ComputeName: pulumi.String("compute123"),
			Location:    pulumi.String("eastus"),
			Properties: &machinelearningservices.ComputeInstanceArgs{
				ComputeType: pulumi.String("ComputeInstance"),
				Properties: &machinelearningservices.ComputeInstancePropertiesArgs{
					ApplicationSharingPolicy:         pulumi.String(machinelearningservices.ApplicationSharingPolicyPersonal),
					ComputeInstanceAuthorizationType: pulumi.String(machinelearningservices.ComputeInstanceAuthorizationTypePersonal),
					CustomServices: machinelearningservices.CustomServiceArray{
						&machinelearningservices.CustomServiceArgs{
							Docker: &machinelearningservices.DockerArgs{
								Privileged: pulumi.Bool(true),
							},
							Endpoints: machinelearningservices.EndpointArray{
								&machinelearningservices.EndpointArgs{
									Name:      pulumi.String("connect"),
									Protocol:  pulumi.String(machinelearningservices.ProtocolHttp),
									Published: pulumi.Int(8787),
									Target:    pulumi.Int(8787),
								},
							},
							EnvironmentVariables: machinelearningservices.EnvironmentVariableMap{
								"test_variable": &machinelearningservices.EnvironmentVariableArgs{
									Type:  pulumi.String(machinelearningservices.EnvironmentVariableTypeLocal),
									Value: pulumi.String("test_value"),
								},
							},
							Image: &machinelearningservices.ImageArgs{
								Reference: pulumi.String("ghcr.io/azure/rocker-rstudio-ml-verse:latest"),
								Type:      pulumi.String(machinelearningservices.ImageTypeDocker),
							},
							Name: pulumi.String("rstudio"),
							Volumes: machinelearningservices.VolumeDefinitionArray{
								&machinelearningservices.VolumeDefinitionArgs{
									ReadOnly: pulumi.Bool(false),
									Source:   pulumi.String("/home/azureuser/cloudfiles"),
									Target:   pulumi.String("/home/azureuser/cloudfiles"),
									Type:     pulumi.String(machinelearningservices.VolumeDefinitionTypeBind),
								},
							},
						},
					},
					PersonalComputeInstanceSettings: &machinelearningservices.PersonalComputeInstanceSettingsArgs{
						AssignedUser: &machinelearningservices.AssignedUserArgs{
							ObjectId: pulumi.String("00000000-0000-0000-0000-000000000000"),
							TenantId: pulumi.String("00000000-0000-0000-0000-000000000000"),
						},
					},
					SshSettings: &machinelearningservices.ComputeInstanceSshSettingsArgs{
						SshPublicAccess: pulumi.String(machinelearningservices.SshPublicAccessDisabled),
					},
					Subnet: &machinelearningservices.ResourceIdArgs{
						Id: pulumi.String("test-subnet-resource-id"),
					},
					VmSize: pulumi.String("STANDARD_NC6"),
				},
			},
			ResourceGroupName: pulumi.String("testrg123"),
			WorkspaceName:     pulumi.String("workspaces123"),
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;

return await Deployment.RunAsync(() => 
{
    var compute = new AzureNative.MachineLearningServices.Compute("compute", new()
    {
        ComputeName = "compute123",
        Location = "eastus",
        Properties = new AzureNative.MachineLearningServices.Inputs.ComputeInstanceArgs
        {
            ComputeType = "ComputeInstance",
            Properties = new AzureNative.MachineLearningServices.Inputs.ComputeInstancePropertiesArgs
            {
                ApplicationSharingPolicy = AzureNative.MachineLearningServices.ApplicationSharingPolicy.Personal,
                ComputeInstanceAuthorizationType = AzureNative.MachineLearningServices.ComputeInstanceAuthorizationType.Personal,
                CustomServices = new[]
                {
                    new AzureNative.MachineLearningServices.Inputs.CustomServiceArgs
                    {
                        Docker = new AzureNative.MachineLearningServices.Inputs.DockerArgs
                        {
                            Privileged = true,
                        },
                        Endpoints = new[]
                        {
                            new AzureNative.MachineLearningServices.Inputs.EndpointArgs
                            {
                                Name = "connect",
                                Protocol = AzureNative.MachineLearningServices.Protocol.Http,
                                Published = 8787,
                                Target = 8787,
                            },
                        },
                        EnvironmentVariables = 
                        {
                            { "test_variable", new AzureNative.MachineLearningServices.Inputs.EnvironmentVariableArgs
                            {
                                Type = AzureNative.MachineLearningServices.EnvironmentVariableType.Local,
                                Value = "test_value",
                            } },
                        },
                        Image = new AzureNative.MachineLearningServices.Inputs.ImageArgs
                        {
                            Reference = "ghcr.io/azure/rocker-rstudio-ml-verse:latest",
                            Type = AzureNative.MachineLearningServices.ImageType.Docker,
                        },
                        Name = "rstudio",
                        Volumes = new[]
                        {
                            new AzureNative.MachineLearningServices.Inputs.VolumeDefinitionArgs
                            {
                                ReadOnly = false,
                                Source = "/home/azureuser/cloudfiles",
                                Target = "/home/azureuser/cloudfiles",
                                Type = AzureNative.MachineLearningServices.VolumeDefinitionType.Bind,
                            },
                        },
                    },
                },
                PersonalComputeInstanceSettings = new AzureNative.MachineLearningServices.Inputs.PersonalComputeInstanceSettingsArgs
                {
                    AssignedUser = new AzureNative.MachineLearningServices.Inputs.AssignedUserArgs
                    {
                        ObjectId = "00000000-0000-0000-0000-000000000000",
                        TenantId = "00000000-0000-0000-0000-000000000000",
                    },
                },
                SshSettings = new AzureNative.MachineLearningServices.Inputs.ComputeInstanceSshSettingsArgs
                {
                    SshPublicAccess = AzureNative.MachineLearningServices.SshPublicAccess.Disabled,
                },
                Subnet = new AzureNative.MachineLearningServices.Inputs.ResourceIdArgs
                {
                    Id = "test-subnet-resource-id",
                },
                VmSize = "STANDARD_NC6",
            },
        },
        ResourceGroupName = "testrg123",
        WorkspaceName = "workspaces123",
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.machinelearningservices.Compute;
import com.pulumi.azurenative.machinelearningservices.ComputeArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.AssignedUserArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeInstanceArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeInstancePropertiesArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeInstanceSshSettingsArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.CustomServiceArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.DockerArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.EndpointArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.EnvironmentVariableArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ImageArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.PersonalComputeInstanceSettingsArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ResourceIdArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.VolumeDefinitionArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var compute = new Compute("compute", ComputeArgs.builder()
            .computeName("compute123")
            .location("eastus")
            .properties(ComputeInstanceArgs.builder()
                .computeType("ComputeInstance")
                .properties(ComputeInstancePropertiesArgs.builder()
                    .applicationSharingPolicy("Personal")
                    .computeInstanceAuthorizationType("personal")
                    .customServices(CustomServiceArgs.builder()
                        .docker(DockerArgs.builder()
                            .privileged(true)
                            .build())
                        .endpoints(EndpointArgs.builder()
                            .name("connect")
                            .protocol("http")
                            .published(8787)
                            .target(8787)
                            .build())
                        .environmentVariables(Map.of("test_variable", EnvironmentVariableArgs.builder()
                            .type("local")
                            .value("test_value")
                            .build()))
                        .image(ImageArgs.builder()
                            .reference("ghcr.io/azure/rocker-rstudio-ml-verse:latest")
                            .type("docker")
                            .build())
                        .name("rstudio")
                        .volumes(VolumeDefinitionArgs.builder()
                            .readOnly(false)
                            .source("/home/azureuser/cloudfiles")
                            .target("/home/azureuser/cloudfiles")
                            .type("bind")
                            .build())
                        .build())
                    .personalComputeInstanceSettings(PersonalComputeInstanceSettingsArgs.builder()
                        .assignedUser(AssignedUserArgs.builder()
                            .objectId("00000000-0000-0000-0000-000000000000")
                            .tenantId("00000000-0000-0000-0000-000000000000")
                            .build())
                        .build())
                    .sshSettings(ComputeInstanceSshSettingsArgs.builder()
                        .sshPublicAccess("Disabled")
                        .build())
                    .subnet(ResourceIdArgs.builder()
                        .id("test-subnet-resource-id")
                        .build())
                    .vmSize("STANDARD_NC6")
                    .build())
                .build())
            .resourceGroupName("testrg123")
            .workspaceName("workspaces123")
            .build());

    }
}
resources:
  compute:
    type: azure-native:machinelearningservices:Compute
    properties:
      computeName: compute123
      location: eastus
      properties:
        computeType: ComputeInstance
        properties:
          applicationSharingPolicy: Personal
          computeInstanceAuthorizationType: personal
          customServices:
            - docker:
                privileged: true
              endpoints:
                - name: connect
                  protocol: http
                  published: 8787
                  target: 8787
              environmentVariables:
                test_variable:
                  type: local
                  value: test_value
              image:
                reference: ghcr.io/azure/rocker-rstudio-ml-verse:latest
                type: docker
              name: rstudio
              volumes:
                - readOnly: false
                  source: /home/azureuser/cloudfiles
                  target: /home/azureuser/cloudfiles
                  type: bind
          personalComputeInstanceSettings:
            assignedUser:
              objectId: 00000000-0000-0000-0000-000000000000
              tenantId: 00000000-0000-0000-0000-000000000000
          sshSettings:
            sshPublicAccess: Disabled
          subnet:
            id: test-subnet-resource-id
          vmSize: STANDARD_NC6
      resourceGroupName: testrg123
      workspaceName: workspaces123

The customServices array defines Docker containers that run on the instance. Each service specifies a container image, exposed endpoints (with protocol, published port, and target port), and volume mounts. The personalComputeInstanceSettings property assigns the instance to a specific user by objectId and tenantId. The docker.privileged flag grants elevated permissions when containers need host-level access.
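
A custom service does not need every option shown above; at minimum it is a name, an image, and one endpoint. The TypeScript sketch below shows that minimal shape. The service name, image reference, and port are placeholders, not values from the example.

import * as azure_native from "@pulumi/azure-native";

// Minimal custom service: a name, an image, and one published endpoint. All values are placeholders.
const minimalService = {
    name: "my-tool",
    image: {
        type: azure_native.machinelearningservices.ImageType.Docker,
        reference: "example.azurecr.io/my-tool:latest",
    },
    endpoints: [{
        name: "ui",
        protocol: azure_native.machinelearningservices.Protocol.Http,
        target: 8888,    // port the container listens on
        published: 8888, // port exposed from the compute instance
    }],
};
// Pass it where the example above sets customServices:
//   customServices: [minimalService],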

Schedule automatic start and stop times

Development instances can run up significant costs when left running overnight or on weekends. Scheduled shutdowns reduce waste while ensuring availability during work hours.

import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";

const compute = new azure_native.machinelearningservices.Compute("compute", {
    computeName: "compute123",
    location: "eastus",
    properties: {
        computeType: "ComputeInstance",
        properties: {
            applicationSharingPolicy: azure_native.machinelearningservices.ApplicationSharingPolicy.Personal,
            computeInstanceAuthorizationType: azure_native.machinelearningservices.ComputeInstanceAuthorizationType.Personal,
            personalComputeInstanceSettings: {
                assignedUser: {
                    objectId: "00000000-0000-0000-0000-000000000000",
                    tenantId: "00000000-0000-0000-0000-000000000000",
                },
            },
            schedules: {
                computeStartStop: [{
                    action: azure_native.machinelearningservices.ComputePowerAction.Stop,
                    cron: {
                        expression: "0 18 * * *",
                        startTime: "2021-04-23T01:30:00",
                        timeZone: "Pacific Standard Time",
                    },
                    status: azure_native.machinelearningservices.ScheduleStatus.Enabled,
                    triggerType: azure_native.machinelearningservices.ComputeTriggerType.Cron,
                }],
            },
            sshSettings: {
                sshPublicAccess: azure_native.machinelearningservices.SshPublicAccess.Disabled,
            },
            vmSize: "STANDARD_NC6",
        },
    },
    resourceGroupName: "testrg123",
    workspaceName: "workspaces123",
});
import pulumi
import pulumi_azure_native as azure_native

compute = azure_native.machinelearningservices.Compute("compute",
    compute_name="compute123",
    location="eastus",
    properties={
        "compute_type": "ComputeInstance",
        "properties": {
            "application_sharing_policy": azure_native.machinelearningservices.ApplicationSharingPolicy.PERSONAL,
            "compute_instance_authorization_type": azure_native.machinelearningservices.ComputeInstanceAuthorizationType.PERSONAL,
            "personal_compute_instance_settings": {
                "assigned_user": {
                    "object_id": "00000000-0000-0000-0000-000000000000",
                    "tenant_id": "00000000-0000-0000-0000-000000000000",
                },
            },
            "schedules": {
                "compute_start_stop": [{
                    "action": azure_native.machinelearningservices.ComputePowerAction.STOP,
                    "cron": {
                        "expression": "0 18 * * *",
                        "start_time": "2021-04-23T01:30:00",
                        "time_zone": "Pacific Standard Time",
                    },
                    "status": azure_native.machinelearningservices.ScheduleStatus.ENABLED,
                    "trigger_type": azure_native.machinelearningservices.ComputeTriggerType.CRON,
                }],
            },
            "ssh_settings": {
                "ssh_public_access": azure_native.machinelearningservices.SshPublicAccess.DISABLED,
            },
            "vm_size": "STANDARD_NC6",
        },
    },
    resource_group_name="testrg123",
    workspace_name="workspaces123")
package main

import (
	machinelearningservices "github.com/pulumi/pulumi-azure-native-sdk/machinelearningservices/v3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := machinelearningservices.NewCompute(ctx, "compute", &machinelearningservices.ComputeArgs{
			ComputeName: pulumi.String("compute123"),
			Location:    pulumi.String("eastus"),
			Properties: &machinelearningservices.ComputeInstanceArgs{
				ComputeType: pulumi.String("ComputeInstance"),
				Properties: &machinelearningservices.ComputeInstancePropertiesArgs{
					ApplicationSharingPolicy:         pulumi.String(machinelearningservices.ApplicationSharingPolicyPersonal),
					ComputeInstanceAuthorizationType: pulumi.String(machinelearningservices.ComputeInstanceAuthorizationTypePersonal),
					PersonalComputeInstanceSettings: &machinelearningservices.PersonalComputeInstanceSettingsArgs{
						AssignedUser: &machinelearningservices.AssignedUserArgs{
							ObjectId: pulumi.String("00000000-0000-0000-0000-000000000000"),
							TenantId: pulumi.String("00000000-0000-0000-0000-000000000000"),
						},
					},
					Schedules: &machinelearningservices.ComputeSchedulesArgs{
						ComputeStartStop: machinelearningservices.ComputeStartStopScheduleArray{
							&machinelearningservices.ComputeStartStopScheduleArgs{
								Action: pulumi.String(machinelearningservices.ComputePowerActionStop),
								Cron: &machinelearningservices.CronArgs{
									Expression: pulumi.String("0 18 * * *"),
									StartTime:  pulumi.String("2021-04-23T01:30:00"),
									TimeZone:   pulumi.String("Pacific Standard Time"),
								},
								Status:      pulumi.String(machinelearningservices.ScheduleStatusEnabled),
								TriggerType: pulumi.String(machinelearningservices.ComputeTriggerTypeCron),
							},
						},
					},
					SshSettings: &machinelearningservices.ComputeInstanceSshSettingsArgs{
						SshPublicAccess: pulumi.String(machinelearningservices.SshPublicAccessDisabled),
					},
					VmSize: pulumi.String("STANDARD_NC6"),
				},
			},
			ResourceGroupName: pulumi.String("testrg123"),
			WorkspaceName:     pulumi.String("workspaces123"),
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;

return await Deployment.RunAsync(() => 
{
    var compute = new AzureNative.MachineLearningServices.Compute("compute", new()
    {
        ComputeName = "compute123",
        Location = "eastus",
        Properties = new AzureNative.MachineLearningServices.Inputs.ComputeInstanceArgs
        {
            ComputeType = "ComputeInstance",
            Properties = new AzureNative.MachineLearningServices.Inputs.ComputeInstancePropertiesArgs
            {
                ApplicationSharingPolicy = AzureNative.MachineLearningServices.ApplicationSharingPolicy.Personal,
                ComputeInstanceAuthorizationType = AzureNative.MachineLearningServices.ComputeInstanceAuthorizationType.Personal,
                PersonalComputeInstanceSettings = new AzureNative.MachineLearningServices.Inputs.PersonalComputeInstanceSettingsArgs
                {
                    AssignedUser = new AzureNative.MachineLearningServices.Inputs.AssignedUserArgs
                    {
                        ObjectId = "00000000-0000-0000-0000-000000000000",
                        TenantId = "00000000-0000-0000-0000-000000000000",
                    },
                },
                Schedules = new AzureNative.MachineLearningServices.Inputs.ComputeSchedulesArgs
                {
                    ComputeStartStop = new[]
                    {
                        new AzureNative.MachineLearningServices.Inputs.ComputeStartStopScheduleArgs
                        {
                            Action = AzureNative.MachineLearningServices.ComputePowerAction.Stop,
                            Cron = new AzureNative.MachineLearningServices.Inputs.CronArgs
                            {
                                Expression = "0 18 * * *",
                                StartTime = "2021-04-23T01:30:00",
                                TimeZone = "Pacific Standard Time",
                            },
                            Status = AzureNative.MachineLearningServices.ScheduleStatus.Enabled,
                            TriggerType = AzureNative.MachineLearningServices.ComputeTriggerType.Cron,
                        },
                    },
                },
                SshSettings = new AzureNative.MachineLearningServices.Inputs.ComputeInstanceSshSettingsArgs
                {
                    SshPublicAccess = AzureNative.MachineLearningServices.SshPublicAccess.Disabled,
                },
                VmSize = "STANDARD_NC6",
            },
        },
        ResourceGroupName = "testrg123",
        WorkspaceName = "workspaces123",
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.machinelearningservices.Compute;
import com.pulumi.azurenative.machinelearningservices.ComputeArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.AssignedUserArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeInstanceArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeInstancePropertiesArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeInstanceSshSettingsArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeSchedulesArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.ComputeStartStopScheduleArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.CronArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.PersonalComputeInstanceSettingsArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var compute = new Compute("compute", ComputeArgs.builder()
            .computeName("compute123")
            .location("eastus")
            .properties(ComputeInstanceArgs.builder()
                .computeType("ComputeInstance")
                .properties(ComputeInstancePropertiesArgs.builder()
                    .applicationSharingPolicy("Personal")
                    .computeInstanceAuthorizationType("personal")
                    .personalComputeInstanceSettings(PersonalComputeInstanceSettingsArgs.builder()
                        .assignedUser(AssignedUserArgs.builder()
                            .objectId("00000000-0000-0000-0000-000000000000")
                            .tenantId("00000000-0000-0000-0000-000000000000")
                            .build())
                        .build())
                    .schedules(ComputeSchedulesArgs.builder()
                        .computeStartStop(ComputeStartStopScheduleArgs.builder()
                            .action("Stop")
                            .cron(CronArgs.builder()
                                .expression("0 18 * * *")
                                .startTime("2021-04-23T01:30:00")
                                .timeZone("Pacific Standard Time")
                                .build())
                            .status("Enabled")
                            .triggerType("Cron")
                            .build())
                        .build())
                    .sshSettings(ComputeInstanceSshSettingsArgs.builder()
                        .sshPublicAccess("Disabled")
                        .build())
                    .vmSize("STANDARD_NC6")
                    .build())
                .build())
            .resourceGroupName("testrg123")
            .workspaceName("workspaces123")
            .build());

    }
}
resources:
  compute:
    type: azure-native:machinelearningservices:Compute
    properties:
      computeName: compute123
      location: eastus
      properties:
        computeType: ComputeInstance
        properties:
          applicationSharingPolicy: Personal
          computeInstanceAuthorizationType: personal
          personalComputeInstanceSettings:
            assignedUser:
              objectId: 00000000-0000-0000-0000-000000000000
              tenantId: 00000000-0000-0000-0000-000000000000
          schedules:
            computeStartStop:
              - action: Stop
                cron:
                  expression: 0 18 * * *
                  startTime: 2021-04-23T01:30:00
                  timeZone: Pacific Standard Time
                status: Enabled
                triggerType: Cron
          sshSettings:
            sshPublicAccess: Disabled
          vmSize: STANDARD_NC6
      resourceGroupName: testrg123
      workspaceName: workspaces123

The schedules.computeStartStop array defines cron-based triggers. Each schedule specifies an action (Start or Stop), a cron expression, a startTime, and a time zone (Windows time zone names such as Pacific Standard Time). The expression “0 18 * * *” means 6 PM daily. The status property enables or disables a schedule without deleting it. This builds on the basic ComputeInstance configuration from the previous example, adding cost control.
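
A single compute can carry multiple schedules, so a start trigger can be paired with a stop trigger. The TypeScript sketch below is one possible weekday pattern; the cron expressions and times are assumptions rather than values from the example above.

import * as azure_native from "@pulumi/azure-native";

// Illustrative weekday pattern: start at 8 AM and stop at 6 PM, Monday through Friday.
const workdaySchedules = {
    computeStartStop: [
        {
            action: azure_native.machinelearningservices.ComputePowerAction.Start,
            triggerType: azure_native.machinelearningservices.ComputeTriggerType.Cron,
            status: azure_native.machinelearningservices.ScheduleStatus.Enabled,
            cron: {
                expression: "0 8 * * 1-5", // 8 AM on weekdays
                startTime: "2021-04-23T01:30:00",
                timeZone: "Pacific Standard Time",
            },
        },
        {
            action: azure_native.machinelearningservices.ComputePowerAction.Stop,
            triggerType: azure_native.machinelearningservices.ComputeTriggerType.Cron,
            status: azure_native.machinelearningservices.ScheduleStatus.Enabled,
            cron: {
                expression: "0 18 * * 1-5", // 6 PM on weekdays
                startTime: "2021-04-23T01:30:00",
                timeZone: "Pacific Standard Time",
            },
        },
    ],
};
// Assign it where the example above sets schedules:
//   schedules: workdaySchedules,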

Attach an existing AKS cluster

Organizations with existing Kubernetes infrastructure can attach AKS clusters to Azure Machine Learning workspaces for model deployment and inference.

import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";

const compute = new azure_native.machinelearningservices.Compute("compute", {
    computeName: "compute123",
    location: "eastus",
    properties: {
        computeType: "AKS",
    },
    resourceGroupName: "testrg123",
    workspaceName: "workspaces123",
});
import pulumi
import pulumi_azure_native as azure_native

compute = azure_native.machinelearningservices.Compute("compute",
    compute_name="compute123",
    location="eastus",
    properties={
        "compute_type": "AKS",
    },
    resource_group_name="testrg123",
    workspace_name="workspaces123")
package main

import (
	machinelearningservices "github.com/pulumi/pulumi-azure-native-sdk/machinelearningservices/v3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := machinelearningservices.NewCompute(ctx, "compute", &machinelearningservices.ComputeArgs{
			ComputeName: pulumi.String("compute123"),
			Location:    pulumi.String("eastus"),
			Properties: &machinelearningservices.AKSArgs{
				ComputeType: pulumi.String("AKS"),
			},
			ResourceGroupName: pulumi.String("testrg123"),
			WorkspaceName:     pulumi.String("workspaces123"),
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;

return await Deployment.RunAsync(() => 
{
    var compute = new AzureNative.MachineLearningServices.Compute("compute", new()
    {
        ComputeName = "compute123",
        Location = "eastus",
        Properties = new AzureNative.MachineLearningServices.Inputs.AKSArgs
        {
            ComputeType = "AKS",
        },
        ResourceGroupName = "testrg123",
        WorkspaceName = "workspaces123",
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.machinelearningservices.Compute;
import com.pulumi.azurenative.machinelearningservices.ComputeArgs;
import com.pulumi.azurenative.machinelearningservices.inputs.AKSArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var compute = new Compute("compute", ComputeArgs.builder()
            .computeName("compute123")
            .location("eastus")
            .properties(AKSArgs.builder()
                .computeType("AKS")
                .build())
            .resourceGroupName("testrg123")
            .workspaceName("workspaces123")
            .build());

    }
}
resources:
  compute:
    type: azure-native:machinelearningservices:Compute
    properties:
      computeName: compute123
      location: eastus
      properties:
        computeType: AKS
      resourceGroupName: testrg123
      workspaceName: workspaces123

Setting computeType to “AKS” declares an Azure Kubernetes Service compute target. To attach an existing cluster rather than create one, also set the resourceId property to the cluster's ARM resource ID; the cluster must exist before attachment. Azure Machine Learning then uses it to deploy models as web services.
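
To attach rather than create a cluster, the compute also needs the ARM ID of the existing AKS resource. The TypeScript sketch below sets the resourceId property mentioned in the FAQ; the subscription, resource group, and cluster names are placeholders.

import * as azure_native from "@pulumi/azure-native";

// Sketch of attaching an existing AKS cluster by its ARM resource ID; all IDs and names are placeholders.
const attachedAks = new azure_native.machinelearningservices.Compute("attachedAks", {
    computeName: "aks-inference",
    location: "eastus",
    properties: {
        computeType: "AKS",
        // ARM ID of the pre-existing Microsoft.ContainerService/managedClusters resource to attach.
        resourceId: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAksCluster",
    },
    resourceGroupName: "testrg123",
    workspaceName: "workspaces123",
});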

Beyond these examples

These snippets focus on specific compute features: AmlCompute clusters with autoscaling, ComputeInstance development environments with custom services and scheduled shutdowns, and AKS cluster attachment. They are intentionally minimal rather than complete machine learning infrastructure.

The examples reference pre-existing infrastructure such as Azure Machine Learning workspaces, resource groups and regions, virtual network subnets for network-isolated compute, and AKS clusters for Kubernetes-based compute. They focus on configuring the compute resource rather than provisioning the surrounding workspace and networking.

To keep things focused, common compute patterns are omitted, including:

  • Network isolation and private endpoints (subnet, vnetConfiguration)
  • User credentials and SSH access (userAccountCredentials, sshSettings)
  • Custom VM images (virtualMachineImage)
  • Identity and RBAC configuration (identity property)
  • DataFactory compute type configuration
  • Update operations (changing scale settings, agent counts)

These omissions are intentional: the goal is to illustrate how each compute type is wired, not provide drop-in machine learning environments. See the Compute resource reference for all available configuration options.

Let's configure Azure Machine Learning Compute

Get started with Pulumi Cloud, then follow our quick setup guide to deploy this infrastructure.


Frequently Asked Questions

Compute Types & Basic Configuration
What compute types are available in Azure Machine Learning?
This guide covers four: AmlCompute (managed compute clusters for training), ComputeInstance (single-node development environments), AKS (Azure Kubernetes Service for inference), and DataFactory (Azure Data Factory integration). The resource also supports other types such as Databricks, HDInsight, VirtualMachine, Kubernetes, and SynapseSpark.
What's the minimum configuration needed to create a ComputeInstance?
Only vmSize is required (e.g., STANDARD_NC6). All other properties like SSH settings, schedules, and custom services are optional.
Can I attach an existing AKS cluster instead of creating a new one?
Yes, specify resourceId pointing to your existing Microsoft.ContainerService/managedClusters resource when creating an AKS compute.
Scaling & Scheduling
How do I configure auto-scaling for AmlCompute clusters?
Use scaleSettings with minNodeCount, maxNodeCount, and nodeIdleTimeBeforeScaleDown. The idle time uses ISO 8601 duration format (e.g., PT5M for 5 minutes).
How do I schedule automatic start/stop times for a ComputeInstance?
Configure schedules.computeStartStop with cron expressions, specifying expression, startTime, timeZone, and action (Start or Stop). For example, 0 18 * * * stops the instance at 6 PM daily.
Custom Services & Networking
Can I run custom Docker containers on a ComputeInstance?
Yes, use customServices to define Docker images with custom endpoints, volumes, and environment variables. The example shows running RStudio with port 8787 exposed.
How do I configure SSH access for a ComputeInstance?
Set sshSettings.sshPublicAccess to Enabled or Disabled. When enabled, you can provide an SSH public key for authentication.
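
As a sketch, enabling SSH might look like the following TypeScript fragment; the key material is a placeholder, and adminPublicKey is assumed to be the field that carries the public key.

import * as azure_native from "@pulumi/azure-native";

// Sketch of SSH settings with public access enabled; the key is a placeholder and
// adminPublicKey is assumed to be the field that carries the public key material.
const sshSettings = {
    sshPublicAccess: azure_native.machinelearningservices.SshPublicAccess.Enabled,
    adminPublicKey: "ssh-rsa AAAAB3NzaC1yc2E... user@example.com",
};
// Use it where the ComputeInstance examples above set sshSettings.
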
How do I connect a ComputeInstance to a specific subnet?
Specify subnet.id with your subnet resource ID in the ComputeInstance properties.
Updates & Immutability
What properties can I update after creating a compute resource?
You can update scaleSettings (for AmlCompute), agentCount (for AKS), and description. Update operations are not shown in this guide; see the Compute resource reference for details.
What properties are immutable after creation?
computeName, workspaceName, and resourceGroupName cannot be changed after the resource is created.
Can I configure a ComputeInstance for personal or shared use?
Yes, set applicationSharingPolicy to Personal or Shared. Personal instances can be assigned to specific users via personalComputeInstanceSettings.assignedUser.

Using a different cloud?

Explore analytics guides for other cloud providers: