1. Containerized TensorFlow Serving on OCI Kubernetes


    Creating a containerized TensorFlow Serving deployment on Oracle Cloud Infrastructure (OCI) with Kubernetes involves several steps: setting up an OCI Kubernetes cluster, building a Docker image for TensorFlow Serving, creating a container registry to hold it, and deploying TensorFlow Serving to the cluster.

    Kubernetes Cluster on OCI

    The first step is setting up the Kubernetes cluster on OCI using the oci.ContainerEngine.Cluster resource. This resource defines and manages a cluster in OCI's Container Engine for Kubernetes (OKE), a fully managed, scalable, and highly available service for deploying containerized applications to the cloud. Note that the cluster resource by itself has no worker nodes; you would normally also attach a node pool, as sketched below.
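
    A minimal node pool sketch follows. It assumes the cluster, compartment_id, and kubernetes_version variables from the main program further down; the shape, sizes, OCIDs, and availability domain are placeholders you would replace with your own values:

    # Node pool sketch; assumes `cluster`, `compartment_id`, and `kubernetes_version`
    # from the main program below. All OCIDs and the availability domain are placeholders.
    node_pool = oci.containerengine.NodePool(
        "my_node_pool",
        cluster_id=cluster.id,
        compartment_id=compartment_id,
        kubernetes_version=kubernetes_version,
        node_shape="VM.Standard.E4.Flex",
        node_shape_config=oci.containerengine.NodePoolNodeShapeConfigArgs(
            ocpus=2,
            memory_in_gbs=16,
        ),
        node_config_details=oci.containerengine.NodePoolNodeConfigDetailsArgs(
            size=3,  # three worker nodes
            placement_configs=[oci.containerengine.NodePoolNodeConfigDetailsPlacementConfigArgs(
                availability_domain="<Your AD name>",
                subnet_id="<Your worker subnet OCID>",
            )],
        ),
        node_source_details=oci.containerengine.NodePoolNodeSourceDetailsArgs(
            source_type="IMAGE",
            image_id="<Your node image OCID>",
        ),
    )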

    Container Registry

    You then need a place to store your Docker images. The oci.Artifacts.ContainerRepository resource provides a repository in the OCI Container Registry (OCIR) for the TensorFlow Serving image you will deploy, as sketched below.
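
    A minimal sketch of creating such a repository; the resource name and display_name here are illustrative:

    import pulumi_oci as oci

    compartment_id = "ocid1.compartment.oc1..xxxxxx"  # replace with your compartment OCID

    # Create a private repository in OCIR to hold the TensorFlow Serving image.
    tf_repo = oci.artifacts.ContainerRepository(
        "tf_serving_repo",
        compartment_id=compartment_id,
        display_name="tensorflow-serving",  # illustrative repository name
        is_public=False,
    )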

    Kubernetes Deployment

    Once the Kubernetes cluster is set up and the Docker image is stored in the OCI Container Registry, you can deploy TensorFlow Serving using the standard Kubernetes resources provided by pulumi_kubernetes. Specifically, you define a Deployment to manage the TensorFlow Serving containers and a Service to expose them to the public or internal network. TensorFlow Serving listens on port 8500 for gRPC and 8501 for REST; the example below exposes the gRPC port.

    Below is a Python program that demonstrates the setup of these components using Pulumi:

    import pulumi
    import pulumi_oci as oci
    from pulumi_kubernetes import Provider, apps, core, meta

    # These values would typically come from Pulumi config.
    compartment_id = "ocid1.compartment.oc1..xxxxxx"  # replace with your compartment OCID
    kubernetes_version = "v1.28.2"  # use a version currently supported by OKE

    # Create an OCI Kubernetes cluster (OKE).
    cluster = oci.containerengine.Cluster(
        "my_cluster",
        compartment_id=compartment_id,
        kubernetes_version=kubernetes_version,
        vcn_id="<Your VCN OCID>",
        options=oci.containerengine.ClusterOptionsArgs(
            service_lb_subnet_ids=["<Your LB Subnet OCIDs>"],
        ),
    )

    # Fetch the kubeconfig for the new cluster; the Cluster resource itself
    # does not expose one directly.
    kubeconfig = oci.containerengine.get_cluster_kube_config_output(
        cluster_id=cluster.id,
    ).content

    # Create a Kubernetes provider instance that talks to the new cluster.
    k8s_provider = Provider("k8s_provider", kubeconfig=kubeconfig)

    # Deploy TensorFlow Serving to the cluster.
    # This assumes a TensorFlow Serving image has already been pushed to your
    # OCI Container Registry.
    tf_deploy = apps.v1.Deployment(
        "tf-deployment",
        spec=apps.v1.DeploymentSpecArgs(
            selector=meta.v1.LabelSelectorArgs(
                match_labels={"app": "tensorflow"},
            ),
            replicas=3,  # number of TensorFlow Serving replicas
            template=core.v1.PodTemplateSpecArgs(
                metadata=meta.v1.ObjectMetaArgs(
                    labels={"app": "tensorflow"},
                ),
                spec=core.v1.PodSpecArgs(
                    containers=[core.v1.ContainerArgs(
                        name="tensorflow-serving",
                        image="<Your-Container-Registry-URL>/tensorflow-serving:latest",  # use your actual image path
                        ports=[core.v1.ContainerPortArgs(container_port=8500)],  # TF Serving's gRPC port
                    )],
                ),
            ),
        ),
        opts=pulumi.ResourceOptions(provider=k8s_provider),
    )

    # Create a LoadBalancer Service to expose TensorFlow Serving.
    tf_service = core.v1.Service(
        "tf-service",
        spec=core.v1.ServiceSpecArgs(
            type="LoadBalancer",
            selector={"app": "tensorflow"},
            ports=[core.v1.ServicePortArgs(
                port=8500,
                target_port=8500,
            )],
        ),
        opts=pulumi.ResourceOptions(provider=k8s_provider),
    )

    # Export the load balancer IP used to reach TensorFlow Serving.
    pulumi.export(
        "tf_serving_endpoint",
        tf_service.status.apply(lambda status: status.load_balancer.ingress[0].ip),
    )

    In this program:

    • We create an OCI Kubernetes cluster by providing the compartment_id, kubernetes_version, vcn_id, and the subnet IDs for the load balancer.
    • We create a Kubernetes Provider to interact with the cluster, using a kubeconfig fetched for it with get_cluster_kube_config_output.
    • We define a Deployment for TensorFlow Serving, specifying the container image and the number of replicas we want.
    • We expose the Deployment through a LoadBalancer Service, which makes TensorFlow Serving reachable from outside the Kubernetes cluster.

    After defining these resources in Pulumi, you can deploy the entire setup by running pulumi up. This provisions the OCI resources and deploys the TensorFlow Serving application to the Kubernetes cluster; the Docker image is assumed to be already built and pushed to your OCI Container Registry.
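
    Once the stack is up, you can call the exported endpoint over gRPC. Below is a minimal, hypothetical client sketch; the model name "my_model" and input tensor name "input" are assumptions that must match your SavedModel's signature, <load-balancer-ip> stands in for the exported tf_serving_endpoint value, and the tensorflow-serving-api package must be installed:

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    # Connect to TensorFlow Serving's gRPC port behind the load balancer.
    channel = grpc.insecure_channel("<load-balancer-ip>:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "my_model"  # assumed model name
    request.inputs["input"].CopyFrom(     # assumed input tensor name
        tf.make_tensor_proto([[1.0, 2.0, 3.0]]))
    response = stub.Predict(request, timeout=10.0)
    print(response.outputs)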

    Please replace placeholders such as <Your VCN OCID>, <Your LB Subnet OCIDs>, and <Your-Container-Registry-URL> with the actual values from your OCI account.