Throttling and Caching for Machine Learning APIs with Apigee

    When creating a machine learning (ML) API, you might want to introduce throttling to limit the number of requests a user can make within a certain period of time, and caching to temporarily store the results of expensive computations for reuse. Apigee, now part of Google Cloud, is a platform for developing and managing API proxies that provides out-of-the-box solutions for throttling, caching, and more.

    Using Apigee to manage your ML APIs gives you access to a suite of tools for monitoring, securing, and maintaining them. Throttling is typically achieved by attaching policies, such as a Quota policy, that define how many requests a consumer can make in a given interval. Similarly, caching can be configured, for example with a ResponseCache policy, to store responses for a period of time.
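
    To make these two mechanisms concrete, here is a minimal, self-contained Python sketch of their semantics: a fixed-window request counter (roughly what a Quota policy enforces) and a TTL cache for responses (roughly what a ResponseCache policy provides). This illustrates the behavior only; it is not how Apigee implements its policies, and the limits and TTL used here are arbitrary.

    import time

    class QuotaExceeded(Exception):
        pass

    class MiniGateway:
        """Illustrative only: fixed-window quota plus a TTL response cache."""

        def __init__(self, limit=10, window_seconds=60, cache_ttl=30):
            self.limit = limit              # max requests per window
            self.window = window_seconds    # quota interval in seconds
            self.cache_ttl = cache_ttl      # how long responses stay cached
            self._counts = {}               # (client, window index) -> count
            self._cache = {}                # path -> (expiry time, response)

        def handle(self, client_id, path, compute):
            now = time.time()

            # Quota: count requests per client per window, reject when over.
            window_index = int(now // self.window)
            key = (client_id, window_index)
            self._counts[key] = self._counts.get(key, 0) + 1
            if self._counts[key] > self.limit:
                raise QuotaExceeded(f"{client_id}: over {self.limit}/{self.window}s")

            # Cache: reuse a stored response until its TTL expires.
            cached = self._cache.get(path)
            if cached and cached[0] > now:
                return cached[1]
            response = compute()  # the "expensive" ML inference
            self._cache[path] = (now + self.cache_ttl, response)
            return response

    gateway = MiniGateway()
    print(gateway.handle("jdoe", "/predict", lambda: {"score": 0.97}))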

    To set this up using Pulumi and Google Cloud, you would work with resources such as ApiProduct, Developer, and DeveloperApp. Note that not all of these Apigee resource types are currently exposed in the Pulumi Registry for the pulumi_gcp provider, so treat the program below as an illustration of the intended configuration.

    Let's write a Pulumi program that assumes you already have an ML API you want to manage with Apigee. The program demonstrates how you might define an ApiProduct (a bundle of API proxies combined with a service plan) whose quota settings provide throttling; response caching is attached to the proxy itself as a policy rather than to the product. We'll also create a DeveloperApp (an application registered by a developer within the Apigee ecosystem) that uses this API product.

    First, we need to import the required modules and define our resources:

    import pulumi
    from pulumi_gcp import apigee

    # Create an Apigee API product: a bundle of API proxies combined with a
    # service plan.
    api_product = apigee.ApiProduct("ml-api-product",
        name="ml-api-product",
        approval_type="auto",
        display_name="Machine Learning API Product",
        description="A product that bundles ML APIs with throttling and caching.",
        environments=["test", "prod"],
        proxies=["ml-api-proxy"],
        # Quota settings throttle API calls to 10 requests per minute.
        quota="10",
        quota_interval="1",
        quota_time_unit="minute",
        # Assuming '/forecast' and '/predict' are the exposed endpoints.
        api_resources=["/forecast", "/predict"],
    )

    # Create a developer who will register apps within the Apigee ecosystem.
    developer = apigee.Developer("api-developer",
        email="developer@example.com",
        first_name="Jane",
        last_name="Doe",
        user_name="jdoe",
    )

    # Create a developer app: the application the developer uses to interact
    # with the packaged APIs.
    developer_app = apigee.DeveloperApp("ml-api-developer-app",
        name="ml-api-developer-app",
        api_products=[api_product.name],
        # Link the app to the developer created above.
        developer_email=developer.email,
        app_family="ML_App",
        attributes=[{
            "name": "description",
            "value": "App to access ML APIs with throttling and caching",
        }],
    )

    # Export the app's client ID.
    pulumi.export("developer_app_id", developer_app.client_id)

    In the above code:

    • ApiProduct: We define an ApiProduct that includes the machine learning API proxies we want to bundle together. Here, we apply quota restrictions to limit the number of API calls that can be made (throttling); a smoke test for this behavior follows the list.

    • Developer: We set up a developer entity that represents the developer within the Apigee ecosystem. Think of this as a user who will be deploying machine learning applications that call your API.

    • DeveloperApp: We create a DeveloperApp. This is essentially an application that the developer will use to interact with the APIs listed in the API product. The DeveloperApp is associated with the developer’s email, and it inherits the quota defined in the ApiProduct, along with whatever caching behavior is attached to the bundled proxies.
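
    To see the throttling take effect, a quick smoke test can fire more requests than the quota allows within one interval. The URL, header name, and API key below are placeholders for your own values; note also that Apigee signals a quota violation with an HTTP 500 fault by default, which proxies commonly remap to 429.

    import requests

    URL = "https://my-apigee-host.example.com/ml-api-proxy/predict"  # placeholder
    API_KEY = "REPLACE_WITH_DEVELOPER_APP_CLIENT_ID"                 # placeholder

    # The product allows 10 requests per minute, so an 11th call in the
    # same minute should be rejected with a quota-violation fault.
    for i in range(11):
        r = requests.get(URL, headers={"x-apikey": API_KEY})
        print(f"request {i + 1}: HTTP {r.status_code}")
        if r.status_code in (429, 500):
            print("quota enforced:", r.text)
            break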

    Finally, we export the App’s client id so that it can be used to interact with the API.
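
    Once the stack is deployed, the exported client ID can be read back and passed as an API key when calling the proxy. In the sketch below, the endpoint URL and the x-apikey header are placeholders: where Apigee actually looks for the key depends on how the proxy's VerifyAPIKey policy is configured.

    import subprocess
    import requests

    # Read the exported client ID from the Pulumi stack.
    client_id = subprocess.run(
        ["pulumi", "stack", "output", "developer_app_id"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    # Hypothetical proxy endpoint; replace with your environment's hostname.
    url = "https://my-apigee-host.example.com/ml-api-proxy/predict"

    response = requests.post(url, headers={"x-apikey": client_id},
                             json={"input": [1, 2, 3]})
    response.raise_for_status()
    print(response.json())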

    Please note that you would also need to build your actual ML API and define its API proxies on Apigee. That setup is typically performed in the Apigee user interface or through the Apigee management APIs, which let you upload and configure your API proxy bundle. This Pulumi program automates the management of the API product, developer, and developer app configuration around those proxies.
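
    If you want to script that step as well, the Apigee X management API accepts a zipped proxy bundle as an import. The sketch below is a rough outline under a few assumptions: an Apigee X organization (my-gcp-project here is a placeholder), a local bundle ml-api-proxy.zip containing an apiproxy/ directory, and Application Default Credentials authorized to call the Apigee API.

    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    ORG = "my-gcp-project"       # placeholder: your Apigee organization
    BUNDLE = "ml-api-proxy.zip"  # placeholder: zipped apiproxy/ directory

    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    session = AuthorizedSession(credentials)

    # Import the bundle as a new revision of the 'ml-api-proxy' proxy.
    with open(BUNDLE, "rb") as f:
        resp = session.post(
            f"https://apigee.googleapis.com/v1/organizations/{ORG}/apis",
            params={"name": "ml-api-proxy", "action": "import"},
            files={"file": f},
        )
    resp.raise_for_status()
    print("imported revision:", resp.json().get("revision"))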