1. OCI Functions for Serverless AI Model Deployment


    When deploying an AI model on Oracle Cloud Infrastructure (OCI) using serverless functions, you will typically combine OCI Functions, OCI Data Science Model Deployment, and other related services. OCI Functions lets you run code without managing servers, which makes it well suited to deploying an AI model in a serverless environment. OCI Data Science Model Deployment, by contrast, is purpose-built for deploying machine learning models and provides features such as auto-scaling, access logs, and model serving over HTTP endpoints.

    Below, we'll set up an example that creates an OCI Application and Function, which could be used to deploy an AI model. Here's a high-level overview of what the Pulumi code will do:

    • Create an OCI Application (oci.Functions.Application), which is a required container for your functions. It defines configuration shared among all functions within it.
    • With the application in place, we will then define a Function (oci.Functions.Function) with a Docker image that includes your AI model and the necessary code to invoke it.
    • For this example, we'll assume that OCI Functions serves as the vehicle for the AI model, and that the image used by the function has the model and its dependencies packaged into it.

    Let's get started with the Pulumi program in Python:

```python
import pulumi
import pulumi_oci as oci

# Note: Before running this Pulumi program, ensure you have the appropriate
# OCI configuration set up.

# Create an OCI Functions Application.
app = oci.functions.Application(
    "my-ai-app",
    compartment_id="ocid1.compartment.oc1..xxxxx",  # Replace with your compartment OCID
    display_name="my_ai_app",
    subnet_ids=["ocid1.subnet.oc1..xxxxx"],  # Replace with your subnet OCIDs
)

# Define the OCI Function for the AI model serving.
# The Docker image should contain the AI model and the code necessary
# to serve predictions.
ai_function = oci.functions.Function(
    "ai-function",
    application_id=app.id,
    display_name="ai_model_serving_function",
    image="phx.ocir.io/namespace/repo/imagename:tag",  # Replace with your Docker image
    memory_in_mbs=128,  # Adjust the memory allocation as needed
)

# Outputs.
pulumi.export("application_id", app.id)
pulumi.export("ai_function_id", ai_function.id)
pulumi.export("ai_function_name", ai_function.display_name)
```

    In this program, you first create an OCI Functions Application to contain the function. The compartment ID and subnet IDs must be replaced with information relevant to your OCI environment.

    Next, you define the function ai_function, which specifies the Docker image that contains the AI model and the necessary code to handle incoming requests and serve predictions. In a real-world scenario, this Docker image should be built from a Dockerfile that installs all the dependencies the AI model requires, as well as the application code.
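    As a rough sketch of what that serving code inside the image might look like, here is a minimal handler in the shape OCI Functions expects (the FDK runtime, configured via func.yaml, calls handler(ctx, data) with the request body). The predict function and the "features"/"score" payload shape are hypothetical stand-ins, not part of the original example:

```python
import io
import json

# Hypothetical stand-in for a real model. In practice you would load a
# serialized model (e.g. with joblib or ONNX Runtime) at import time so it
# is reused across warm invocations.
def predict(features):
    return {"score": sum(features) / len(features)}

def handler(ctx, data: io.BytesIO = None):
    # OCI Functions delivers the request body as a byte stream.
    payload = json.loads(data.getvalue())
    return json.dumps(predict(payload["features"]))
```

    In a real image, this file would sit alongside a func.yaml that points the FDK runtime at the handler entrypoint.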

    You must provide the correct path to the Docker image in the Oracle Cloud Infrastructure Registry (OCIR) or your preferred container registry. Parameters such as memory_in_mbs can be adjusted to meet the computational requirements of your AI model.

    Finally, the program exports the IDs and display names of the created resources, which can be helpful for integration with other services or for reference.

    Once deployed, you can invoke the function via its HTTPS invoke endpoint (OCI Functions requires signed requests) to serve predictions from your AI model.
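    Because invoke requests must be signed, the simplest client is the OCI Python SDK, which handles signing for you. A minimal sketch, assuming the SDK is installed (pip install oci) and ~/.oci/config is configured; the function OCID is a placeholder, and the "features" payload shape matches whatever your handler expects:

```python
import json

def build_invoke_payload(features):
    """Serialize the request body that the function's handler expects."""
    return json.dumps({"features": features})

def invoke(function_id, features):
    """Invoke a deployed OCI Function with a signed request.

    Requires the OCI Python SDK and a configured ~/.oci/config; imported
    lazily here because this performs a live HTTPS call.
    """
    import oci

    config = oci.config.from_file()
    # Each function has its own invoke endpoint; look it up first.
    mgmt = oci.functions.FunctionsManagementClient(config)
    fn = mgmt.get_function(function_id).data
    client = oci.functions.FunctionsInvokeClient(
        config, service_endpoint=fn.invoke_endpoint
    )
    resp = client.invoke_function(
        function_id, invoke_function_body=build_invoke_payload(features)
    )
    return resp.data.text

# Example (placeholder OCID):
# print(invoke("ocid1.fnfunc.oc1..xxxxx", [1.0, 2.0, 3.0]))
```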

    Remember, this is a simplified example that assumes you have already packaged your AI model into a Docker image. In a real-world scenario, you may need additional setup, such as defining environment variables (via the function's config map), setting up the right access policies, and implementing logging, monitoring, and security measures.
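    For instance, environment variables can be passed to the function through its config map in the same Pulumi program. This fragment reuses the resource names from the example above, and the MODEL_PATH and LOG_LEVEL keys are hypothetical illustrations, not required settings:

```python
# Configuration fragment (not standalone): supply environment variables to
# the function via its config map. Key names here are hypothetical examples.
ai_function = oci.functions.Function(
    "ai-function",
    application_id=app.id,
    display_name="ai_model_serving_function",
    image="phx.ocir.io/namespace/repo/imagename:tag",
    memory_in_mbs=128,
    config={
        "MODEL_PATH": "/model/model.onnx",
        "LOG_LEVEL": "INFO",
    },
)
```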