1. Autoscaling AI Inference Endpoints with Cloud Run IAM


    When setting up autoscaling AI Inference Endpoints with Google Cloud Run, you’ll likely be considering at least two things:

    1. Deployment of the inference service: This involves creating a Cloud Run service that hosts your AI model and handles incoming requests. Cloud Run automatically scales your container instances up and down with incoming request volume, so you don't have to define scaling rules yourself.

    2. Access management: This concerns how you control who or what can interact with your AI inference endpoint. For this you use Cloud Run IAM (Identity and Access Management), which lets you set permissions defining who can invoke your service.

    Autoscaling needs no extra configuration: when you deploy a service to Cloud Run, it scales the number of container instances with the volume of incoming requests, and can scale down to zero when the service is idle.

    On the other hand, IAM policies in Cloud Run can be defined at the service level, enabling you to specify who has the ability to invoke or manage the service.
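
    Once a principal has roles/run.invoker, it still has to present an identity token when it calls the endpoint. Below is a minimal sketch of how a caller could do that with the google-auth and requests libraries; the service URL, the /predict route, and the request payload are made-up placeholders for illustration.

    import google.auth.transport.requests
    import google.oauth2.id_token
    import requests

    # Hypothetical URL of the deployed Cloud Run service
    service_url = "https://ai-inference-service-xyz-uc.a.run.app"

    # Fetch an identity token for the service using Application Default Credentials
    auth_request = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_request, audience=service_url)

    # Call the inference endpoint with the bearer token (route and payload are hypothetical)
    response = requests.post(
        f"{service_url}/predict",
        headers={"Authorization": f"Bearer {token}"},
        json={"inputs": [1.0, 2.0, 3.0]},
    )
    print(response.json())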

    Now let's look at how you can use Pulumi to deploy a Cloud Run service and set its IAM policy for an AI inference endpoint.

    Here's a Pulumi program in Python that demonstrates these concepts:

    import pulumi
    import pulumi_gcp as gcp

    # Create a Cloud Run service that hosts your AI inference model
    cloud_run_service = gcp.cloudrun.Service(
        "ai-inference-service",
        location="us-central1",
        template=gcp.cloudrun.ServiceTemplateArgs(
            spec=gcp.cloudrun.ServiceTemplateSpecArgs(
                # This is where you define the container image for your AI model,
                # along with any environment variables, resources, etc.
                containers=[
                    gcp.cloudrun.ServiceTemplateSpecContainerArgs(
                        image="gcr.io/my-project/my-inference-image",
                    ),
                ],
                # Optionally, limit how many requests each container instance
                # handles at once. A value of 1 means one request per instance,
                # so Cloud Run scales out for each concurrent request.
                container_concurrency=1,
            ),
        ),
    )

    # IAM: grant the invoker role to a specific member (e.g., a service account).
    # This determines who can invoke the Cloud Run service.
    invoker_member = gcp.cloudrun.IamMember(
        "invoker",
        service=cloud_run_service.name,
        location=cloud_run_service.location,
        role="roles/run.invoker",
        member="serviceAccount:my-invoker@my-project.iam.gserviceaccount.com",
    )

    # Export the service URL
    pulumi.export("service_url", cloud_run_service.statuses.apply(lambda statuses: statuses[0].url))

    In the above code:

    • We created a Cloud Run service using gcp.cloudrun.Service, specifying the container image, which would be the Docker image containing your AI model and inference code.
    • We specified container_concurrency to control how many requests each container instance processes at the same time; with a value of 1, Cloud Run scales out one instance per concurrent request.
    • Then we granted an IAM role on that Cloud Run service using gcp.cloudrun.IamMember, which adds a member to a role without replacing the rest of the policy. The role roles/run.invoker grants permission to invoke the service, and we specified a service account as the member. You can change the service account email to match the one you wish to grant access to.
    • We exported the service URL as an output of our Pulumi stack, so you can easily retrieve it after the deployment, as sketched below.
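
    After the deployment, the simplest way to read that output is pulumi stack output service_url on the command line. If another Pulumi program needs the URL (say, a stack that configures a client or gateway), it can read it through a stack reference. A minimal sketch, where the stack name is a placeholder:

    import pulumi

    # Reference the stack that deployed the inference service (placeholder stack name)
    inference_stack = pulumi.StackReference("my-org/ai-inference/prod")

    # The exported URL comes back as an Output that can feed other resources
    service_url = inference_stack.get_output("service_url")
    pulumi.export("inference_url", service_url)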

    This program sets up the service and its access controls; Cloud Run takes care of autoscaling without any additional configuration on our side.
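
    That said, if you do want to bound the automatic scaling, for example to keep a warm instance for the model or to cap cost, Cloud Run reads minimum and maximum instance counts from annotations on the service template. Here's a minimal sketch; the values 1 and 10 are placeholders, not recommendations.

    import pulumi_gcp as gcp

    bounded_service = gcp.cloudrun.Service(
        "ai-inference-service-bounded",
        location="us-central1",
        template=gcp.cloudrun.ServiceTemplateArgs(
            metadata=gcp.cloudrun.ServiceTemplateMetadataArgs(
                annotations={
                    # Keep at least one instance warm to reduce cold starts
                    "autoscaling.knative.dev/minScale": "1",
                    # Cap scale-out to control cost
                    "autoscaling.knative.dev/maxScale": "10",
                },
            ),
            spec=gcp.cloudrun.ServiceTemplateSpecArgs(
                containers=[
                    gcp.cloudrun.ServiceTemplateSpecContainerArgs(
                        image="gcr.io/my-project/my-inference-image",
                    ),
                ],
            ),
        ),
    )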

    Please make sure you replace placeholder values (like the image URL and service account) with the actual values that are appropriate for your use case. Also, ensure that you have the right permissions and that the gcp Pulumi provider has been configured with the correct credentials.
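
    If you prefer not to rely on ambient gcloud credentials or pulumi config settings, you can also configure the provider explicitly in code and attach it to resources via resource options. A minimal sketch, with a placeholder project ID:

    import pulumi
    import pulumi_gcp as gcp

    # Explicit provider configuration (project ID and region are placeholders)
    gcp_provider = gcp.Provider(
        "gcp",
        project="my-project",
        region="us-central1",
    )

    # Resources opt in to this provider through resource options, e.g.:
    # gcp.cloudrun.Service(
    #     "ai-inference-service",
    #     ...,
    #     opts=pulumi.ResourceOptions(provider=gcp_provider),
    # )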