1. Scalable Kubernetes Clusters on OCI for AI Workloads


    To create scalable Kubernetes clusters on Oracle Cloud Infrastructure (OCI) suitable for AI workloads using Pulumi, you primarily need to use OCI's Container Engine for Kubernetes (OKE), which provides a fully managed Kubernetes environment. AI workloads typically require high-performance compute capabilities and the ability to scale resources efficiently. OCI offers the necessary infrastructure to deploy such demanding applications.

    Below, I will guide you through setting up a basic Kubernetes cluster on OCI with Pulumi. This will involve the following steps:

    1. Cluster Creation: We'll start by defining a Kubernetes cluster using OCI's managed Kubernetes service.
    2. Node Pool Configuration: We will set up a node pool with the capacity and instance shapes suited for AI workloads.
    3. Scaling Options: While the initial setup will define a specific size for your node pool, you can manually adjust or automate scaling based on the workload requirements.
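
    As a rough illustration of the sizing arithmetic behind step 3, the sketch below computes how many worker nodes a pool needs for a given accelerator demand. It is a hypothetical capacity-planning helper (the function name and parameters are not part of any OCI or Pulumi API):

```python
import math

def nodes_needed(required_gpus: int, gpus_per_node: int) -> int:
    """Smallest node count whose combined GPUs cover the requirement.

    Hypothetical helper for capacity planning; not part of the Pulumi
    program shown later in this guide.
    """
    if required_gpus < 0 or gpus_per_node <= 0:
        raise ValueError("required_gpus must be >= 0 and gpus_per_node > 0")
    return math.ceil(required_gpus / gpus_per_node)

# e.g. a training job needing 10 GPUs on nodes exposing 4 GPUs each
print(nodes_needed(10, 4))  # -> 3
```

    The same arithmetic applies to CPU or memory: divide the total demand by the per-node capacity and round up.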

    To accomplish this, we will utilize the pulumi_oci Pulumi package, which interacts with OCI to provision and manage cloud resources.


    Ensure that you have the following prerequisites in place before you proceed:

    • An OCI account with the required permissions to create and manage OKE clusters.
    • Pulumi CLI installed and configured with OCI credentials. For instructions, visit the Pulumi Installation Guide.
    • Python 3 installed on your system.
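
    If you prefer not to rely on CLI-level configuration, the OCI provider can also be configured explicitly in code. The sketch below is one way to do that, assuming the pulumi_oci provider's standard credential arguments; every value shown is a placeholder you must replace:

```python
import pulumi
import pulumi_oci as oci

# Explicit provider configuration; all values below are placeholders.
oci_provider = oci.Provider("oci-provider",
    tenancy_ocid="your-tenancy-ocid",
    user_ocid="your-user-ocid",
    fingerprint="your-api-key-fingerprint",
    private_key_path="/path/to/oci_api_key.pem",
    region="us-phoenix-1",
)

# Resources then opt in to this provider via resource options, e.g.:
# oci.containerengine.Cluster("okeCluster", ...,
#     opts=pulumi.ResourceOptions(provider=oci_provider))
```

    When the provider is configured through `pulumi config` or environment variables instead, this block is unnecessary and resources pick up the default provider automatically.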

    Let's begin with the Pulumi Python program. It will:

    • Import necessary modules from the Pulumi OCI package.
    • Create a new OKE cluster.
    • Configure a node pool suitable for AI workloads with properties that optimize performance, such as specifying the number of OCPUs and memory.

    The code below demonstrates how to create a scalable Kubernetes cluster ready for AI workloads:

    import pulumi
    import pulumi_oci as oci

    # Replace the following placeholder values with your specific OCI configuration values.
    compartment_id = "your-compartment-id"
    vcn_id = "your-vcn-id"                  # The VCN where the Kubernetes cluster will be launched
    lb_subnet_id = "your-lb-subnet-id"      # A subnet in the VCN for the service load balancers
    node_subnet_id = "your-node-subnet-id"  # A subnet in the VCN for the worker nodes

    # Create an OKE Kubernetes cluster
    oke_cluster = oci.containerengine.Cluster("okeCluster",
        compartment_id=compartment_id,
        kubernetes_version="v1.26.2",  # Replace with a Kubernetes version currently supported by OKE
        options=oci.containerengine.ClusterOptionsArgs(
            # Load balancers require subnet OCIDs, not the VCN OCID
            service_lb_subnet_ids=[lb_subnet_id]
        ),
        vcn_id=vcn_id  # Specify the VCN for the cluster
    )

    # Define the node pool for the AI workloads
    ai_node_pool = oci.containerengine.NodePool("aiNodePool",
        cluster_id=oke_cluster.id,
        compartment_id=compartment_id,
        kubernetes_version="v1.26.2",  # Should match the cluster's Kubernetes version
        # Choose a shape suited to AI workloads; VM.Standard2.24 is a large CPU
        # shape, and GPU shapes such as VM.GPU3.1 are common when the workload
        # needs accelerators.
        node_shape="VM.Standard2.24",
        initial_node_labels=[oci.containerengine.NodePoolInitialNodeLabelArgs(
            key="workload-type",
            value="ai"
        )],
        node_config_details=oci.containerengine.NodePoolNodeConfigDetailsArgs(
            size=3,  # Start with 3 worker nodes; raise this value to scale out
            placement_configs=[oci.containerengine.NodePoolNodeConfigDetailsPlacementConfigArgs(
                availability_domain="your-availability-domain",  # e.g. "Uocm:PHX-AD-1"
                subnet_id=node_subnet_id  # Worker nodes also need a subnet OCID
            )]
        ),
        node_source_details=oci.containerengine.NodePoolNodeSourceDetailsArgs(
            source_type="IMAGE",
            image_id="your-node-image-ocid"  # An OKE-compatible node image OCID
        )
    )

    # Fetch and export the cluster's kubeconfig
    kubeconfig = oci.containerengine.get_cluster_kube_config_output(cluster_id=oke_cluster.id)
    pulumi.export("kubeconfig", pulumi.Output.secret(kubeconfig.content))

    # After the deployment completes, you can fetch the kubeconfig and start
    # deploying AI applications on the cluster.


    • First, we import necessary modules (pulumi for base functionality and pulumi_oci for interacting with OCI).
    • We create a new Kubernetes cluster inside a specified compartment on OCI using oci.containerengine.Cluster. This includes specifying the version of Kubernetes you want to deploy and the VCN where the cluster will reside.
    • We also define the node pool configuration using oci.containerengine.NodePool. You need to specify a node shape suited to AI workloads and set the initial node count and availability domains. VM.Standard2.24 is a large CPU shape; if your models need accelerators, a GPU shape such as VM.GPU3.1 is usually a better fit. Choose the shape and count based on the specific requirements of your AI applications.
    • The size field in node_config_details sets the number of worker nodes in the pool; increasing it, manually or via an autoscaler, is how the cluster scales out.
    • Lastly, we export the kubeconfig for the cluster, which you can use to interact with your cluster using the Kubernetes CLI kubectl.

    Deploying this program with Pulumi will set up the underlying infrastructure, and the cluster will be ready for deploying AI workloads like machine learning models, data processing jobs, or any other compute-intensive tasks.

    Remember, the provided code is for setting up a basic Kubernetes cluster. Depending on your needs, you might need to further customize the cluster configuration, such as enabling autoscaling, adding storage, or securing the cluster with additional network configurations.
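
    To make the autoscaling point concrete: OKE node pools are typically autoscaled by running the Kubernetes Cluster Autoscaler against them, and the core decision it makes is simple. The sketch below is purely illustrative (the function and its parameters are hypothetical, not an OCI or Kubernetes API) and shows the kind of scale-out rule such a controller applies:

```python
import math

def desired_pool_size(current_size: int, pending_pods: int,
                      pods_per_node: int, min_size: int = 1,
                      max_size: int = 10) -> int:
    """Illustrative scale-out rule: add enough nodes to host the pending
    pods, clamped to the pool's configured bounds. Real autoscalers such
    as the Kubernetes Cluster Autoscaler consider many more signals
    (scheduling constraints, node readiness, scale-down candidates)."""
    extra = math.ceil(pending_pods / pods_per_node) if pending_pods > 0 else 0
    return max(min_size, min(max_size, current_size + extra))

# 3 nodes with 5 unschedulable pods at 2 pods per node -> grow to 6 nodes
print(desired_pool_size(3, 5, 2))  # -> 6
```

    In the Pulumi program above, the equivalent knob is the size field of node_config_details: whatever makes the scaling decision ultimately adjusts that worker-node count.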