1. Serverless Model Inference with GCP Cloud Functions


    When deploying a serverless model inference service with Google Cloud Functions, the goal is to set up a cloud function that responds to events and executes code (inference, in this case) without managing servers. You'll need a pre-trained machine learning model, which is typically not bundled into the infrastructure itself but loaded by your function code, and an API through which to trigger inference.

    Google Cloud Functions allows you to execute your code in response to HTTP events (HTTP trigger) or other Google Cloud events (Pub/Sub trigger, Storage trigger, etc.). The function could be written in Python, Node.js, Go, or Java, and it would load your machine learning model to process the data and return predictions.

    Here's what we need to set up a basic inference function with GCP Cloud Functions using Pulumi:

    1. Google Cloud Function: The serverless runtime where we'll deploy our inference code.
    2. Trigger: A mechanism to activate the function, either an HTTP endpoint for synchronous calls or a Pub/Sub topic for asynchronous events.
    3. Other GCP services: Such as Cloud Storage for model files, Secret Manager for sensitive credentials, or Cloud Pub/Sub for event-based triggers.

    The code below sets up an HTTP-triggered Google Cloud Function. We'll assume you have provided the inference code along with any dependencies.

```python
import pulumi
from pulumi_gcp import cloudfunctions, storage

# Define the Google Cloud Storage bucket where our function's source code will reside
bucket = storage.Bucket("bucket",
    location="US-CENTRAL1",
)

# Archive our Cloud Function's source code into the bucket we just created
source_archive_object = storage.BucketObject("source-archive-object",
    bucket=bucket.name,
    source=pulumi.AssetArchive({
        ".": pulumi.FileArchive("./function_source")  # Path to a directory with our function's code
    })
)

# Define a Google Cloud Function, which responds to HTTP requests
function = cloudfunctions.Function("function",
    source_archive_bucket=bucket.name,
    runtime="python39",  # Choose a supported runtime compatible with your model and application
    source_archive_object=source_archive_object.name,
    entry_point="handler",     # The name of the function within your code to execute
    trigger_http=True,         # We are setting an HTTP trigger for this function
    available_memory_mb=1024,  # Adjust the memory needed for your function
)

# Export the URL so it can be easily accessed
pulumi.export("function_url", function.https_trigger_url)
```

    In this code:

    • We define a GCP Storage Bucket to store the source code archive for the Cloud Function.
    • An AssetArchive is created pointing to the directory containing the function's code, which is then uploaded as a BucketObject into the bucket.
    • We then define the Cloud Function, pointing to the bucket and object where the code is stored. Use a runtime that matches the needs of your code (e.g., python39, python310, python311); older runtimes such as python37 are deprecated.
    • The entry_point should be the name of the function you wrote in your code that will take the incoming requests and perform inference.
    • trigger_http is set to True to allow the function to be triggered via HTTP requests.
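Once deployed, the exported URL can be called with a JSON request. As a minimal sketch using only the standard library (the URL and the `{"instances": [...]}` payload shape are assumptions for illustration, not part of the setup above):

```python
import json
import urllib.request

def build_prediction_request(url, instances):
    """Package a list of feature rows as a JSON POST request for the function."""
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage once the stack is deployed and function_url is known:
# req = build_prediction_request(
#     "https://us-central1-my-project.cloudfunctions.net/function",  # placeholder URL
#     [[1.0, 2.0, 3.0]],
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```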

    Replace './function_source' with the path to your function's source code. The source code should include your Python handlers and any other resources required to perform inference.
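For reference, a minimal sketch of what such an entry point in `main.py` might look like; the payload shape and the stand-in model are purely illustrative assumptions:

```python
import json

# Placeholder for a real model: in practice you would load a trained model once
# at module import time (e.g., unpickle a file bundled with the source, or
# download it from a Cloud Storage bucket) so warm invocations reuse it.
def _predict(features):
    return sum(features)  # stand-in "model" for illustration only

def handler(request):
    """HTTP entry point. Expects JSON like {"instances": [[1.0, 2.0], ...]}."""
    payload = request.get_json(silent=True) or {}
    instances = payload.get("instances", [])
    predictions = [_predict(row) for row in instances]
    return json.dumps({"predictions": predictions})
```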

    You need to make sure you have the correct IAM permissions set up for this Cloud Function to access other Google Cloud services (e.g., Storage Buckets for model files).
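As a hedged sketch of what such IAM bindings could look like in Pulumi (the resource names, the `model_bucket` reference, and the choice of `allUsers` are assumptions to adapt; this fragment assumes `function` is defined as above):

```python
from pulumi_gcp import cloudfunctions, storage

# Allow unauthenticated HTTP callers to invoke the function (tighten in production).
invoker = cloudfunctions.FunctionIamMember("invoker",
    project=function.project,
    region=function.region,
    cloud_function=function.name,
    role="roles/cloudfunctions.invoker",
    member="allUsers",
)

# Let the function's runtime service account read model files from a bucket
# (here `model_bucket` is a hypothetical storage.Bucket defined elsewhere).
model_reader = storage.BucketIAMMember("model-reader",
    bucket=model_bucket.name,
    role="roles/storage.objectViewer",
    member=function.service_account_email.apply(
        lambda email: f"serviceAccount:{email}"
    ),
)
```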

    This setup is ideal for lightweight inference tasks where the model loads quickly and prediction is not compute-intensive, since Cloud Functions impose limits on execution time, memory, and CPU. If your inference workload is heavier or long-running, consider Google Cloud Run or Vertex AI (formerly AI Platform) predictions instead, which are not covered in this setup.