1. Kubernetes Event-driven Autoscaling (KEDA) for AI Workloads on AKS

    Kubernetes Event-driven Autoscaling (KEDA) is an extension to Kubernetes that provides event-driven scaling capabilities. It allows you to automatically scale Kubernetes workloads, including Azure Functions, based on the number of events needing to be processed. Combining KEDA with Azure Kubernetes Service (AKS) enables you to build highly responsive applications by scaling services in response to events.

    To deploy KEDA within an AKS cluster, you must perform the following tasks:

    1. Create an AKS Cluster: Provision an Azure Kubernetes Service cluster where your applications and KEDA will be hosted.
    2. Install KEDA: Deploy KEDA onto your AKS cluster. This includes setting up all necessary Kubernetes resources, such as custom resource definitions, required by KEDA to function.
    3. Deploy Your Application: Deploy your AI workload as a containerized application on the AKS cluster.
    4. Configure KEDA ScaledObject: Define a KEDA ScaledObject resource, which sets the scaling rules for your application based on events from the specified source (e.g., a queue or stream); a minimal sketch follows this list.
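
    To make the last task concrete, here is a minimal sketch of a ScaledObject defined with Pulumi's CustomResource, assuming the k8s_provider and keda_chart resources from the program further below. The Deployment name (ai-worker), queue name (inference-jobs), and environment variable name are hypothetical placeholders; the trigger metadata follows KEDA's azure-queue scaler, and your event source will likely differ.

    import pulumi
    import pulumi_kubernetes as k8s

    # A sketch of a KEDA ScaledObject: scale a (hypothetical) "ai-worker"
    # Deployment between 0 and 10 replicas based on Azure Storage queue depth.
    ai_scaler = k8s.apiextensions.CustomResource(
        "aiWorkerScaler",
        api_version="keda.sh/v1alpha1",
        kind="ScaledObject",
        metadata={"name": "ai-worker-scaler", "namespace": "default"},
        spec={
            "scaleTargetRef": {"name": "ai-worker"},  # hypothetical Deployment name
            "minReplicaCount": 0,   # scale to zero when the queue is empty
            "maxReplicaCount": 10,
            "triggers": [{
                "type": "azure-queue",
                "metadata": {
                    "queueName": "inference-jobs",  # hypothetical queue name
                    "queueLength": "5",             # target messages per replica
                    # Env var on the workload holding the storage connection string.
                    "connectionFromEnv": "STORAGE_CONNECTION_STRING",
                },
            }],
        },
        opts=pulumi.ResourceOptions(provider=k8s_provider, depends_on=[keda_chart]),
    )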

    The Pulumi program below accomplishes the first two tasks: provisioning an AKS cluster and installing KEDA. The last two vary greatly with your specific AI workload and event source, so this basic setup only sketches them.

    Let's walk through the provided Pulumi code:

    Python Program for Deploying AKS and KEDA

    import base64

    import pulumi
    import pulumi_azure_native as azure_native
    from pulumi_kubernetes import Provider
    from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

    # Step 1: Create an AKS Cluster
    # We start by defining the AKS cluster using the azure_native.containerservice.ManagedCluster class.
    # For the sake of simplicity, we use a small default node size and count in this example.
    # A system-assigned managed identity lets the cluster manage its own Azure resources.
    # For a more detailed explanation of the managed cluster arguments, visit:
    # https://www.pulumi.com/registry/packages/azure-native/api-docs/containerservice/managedcluster/
    aks_cluster = azure_native.containerservice.ManagedCluster(
        "myAksCluster",
        resource_group_name="myResourceGroup",
        agent_pool_profiles=[{
            "count": 2,
            "max_pods": 110,
            "mode": "System",
            "name": "agentpool",
            "os_type": "Linux",
            "vm_size": "Standard_DS2_v2",
        }],
        dns_prefix="myaksdns",
        enable_rbac=True,
        identity=azure_native.containerservice.ManagedClusterIdentityArgs(
            type="SystemAssigned",
        ),
    )

    # The azure-native provider does not expose a kube_config_raw property, so we
    # list the cluster's user credentials and base64-decode the embedded kubeconfig.
    creds = azure_native.containerservice.list_managed_cluster_user_credentials_output(
        resource_group_name="myResourceGroup",
        resource_name=aks_cluster.name,
    )
    kubeconfig = creds.kubeconfigs[0].value.apply(
        lambda encoded: base64.b64decode(encoded).decode("utf-8")
    )

    # Step 2: Install KEDA using the Helm Chart
    # Once the AKS cluster is created, we set up a Kubernetes provider that targets the new AKS cluster.
    # Then we use the pulumi_kubernetes.helm.v3.Chart class to install KEDA from its Helm chart.
    # KEDA Helm chart information can be found here:
    # https://artifacthub.io/packages/helm/kedacore/keda
    k8s_provider = Provider("k8sProvider", kubeconfig=kubeconfig)

    keda_chart = Chart(
        "keda",
        ChartOpts(
            chart="keda",
            version="2.4.0",  # Pin the chart version; check the repo for the latest release
            fetch_opts=FetchOpts(
                repo="https://kedacore.github.io/charts",
            ),
        ),
        opts=pulumi.ResourceOptions(provider=k8s_provider),
    )

    # Exports
    # The cluster's kubeconfig is exported (as a secret, since it contains credentials)
    # so you can configure `kubectl` on your local machine to connect to the cluster.
    pulumi.export("kubeconfig", pulumi.Output.secret(kubeconfig))

    Here’s what each part of this program is doing:

    • The AKS cluster is created with two nodes using the azure_native.containerservice.ManagedCluster class. This specifies the necessary configuration, such as the node count, the VM size, RBAC authorization, and a system-assigned managed identity, which the cluster needs to manage its own Azure resources.
    • Once the cluster is provisioned, its user credentials are fetched with list_managed_cluster_user_credentials_output, the base64-encoded kubeconfig inside them is decoded, and a Kubernetes Provider targeting the new cluster is created from it.
    • Then we use Helm to deploy KEDA into our AKS cluster. Helm is a package manager for Kubernetes, and Pulumi can use Helm charts to deploy software like KEDA.
    • Lastly, we export the kubeconfig of the AKS cluster, wrapped in pulumi.Output.secret because it contains cluster credentials. This allows you to use kubectl to interact with your cluster directly from your local machine.
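
    To try it out: run pulumi up, then pulumi stack output kubeconfig --show-secrets > kubeconfig.yaml, and finally KUBECONFIG=kubeconfig.yaml kubectl get pods. You should see the KEDA operator pods (keda-operator and similar) running in the default namespace, since the chart above does not specify one.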

    Remember to replace "myResourceGroup" with the name of the resource group you'd like to use for your AKS cluster. Also note that actual AI workloads and KEDA scaled objects need to be defined according to the specifics of your environment and application.

    This program serves as a starting point. You will still need to define your containerized AI workloads and the KEDA ScaledObject resources that encode your scaling rules, in Pulumi, raw Kubernetes manifests, or Helm charts as applicable; a minimal workload sketch follows.
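
    As one possible shape for that workload, here is a minimal sketch of a Deployment the ScaledObject above could target, again using the k8s_provider from the main program. The image name, labels, and secret reference are hypothetical placeholders for your actual AI worker.

    import pulumi
    import pulumi_kubernetes as k8s

    # A sketch of task 3: a containerized AI worker the ScaledObject can target.
    ai_worker = k8s.apps.v1.Deployment(
        "aiWorker",
        metadata={"name": "ai-worker", "namespace": "default"},
        spec={
            "replicas": 1,  # KEDA adjusts this once the ScaledObject is in place
            "selector": {"matchLabels": {"app": "ai-worker"}},
            "template": {
                "metadata": {"labels": {"app": "ai-worker"}},
                "spec": {
                    "containers": [{
                        "name": "worker",
                        "image": "myregistry.azurecr.io/ai-worker:latest",  # hypothetical image
                        "env": [{
                            # Connection string read by the azure-queue trigger
                            # via connectionFromEnv in the ScaledObject sketch.
                            "name": "STORAGE_CONNECTION_STRING",
                            "valueFrom": {"secretKeyRef": {
                                "name": "storage-conn",       # hypothetical Secret
                                "key": "connectionString",
                            }},
                        }],
                    }],
                },
            },
        },
        opts=pulumi.ResourceOptions(provider=k8s_provider),
    )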