1. Kubernetes Gateway API for Scalable AI Serving


    When building a scalable AI serving platform, you typically start by setting up a Kubernetes cluster, since Kubernetes offers the flexibility and scalability such workloads require. You then define the services and, for external access, an ingress or gateway layer to manage traffic into your AI services.

    For AI serving specifically, you might deploy machine learning models as services within the Kubernetes cluster, ensuring they are scalable by using Kubernetes' built-in mechanisms such as Horizontal Pod Autoscalers.

    The basics of this setup would include the following steps:

    1. Create a Kubernetes Cluster – A cluster that provides the foundation where all the resources will be deployed.
    2. Define the AI Services – The actual application logic or machine learning models you wish to serve, typically encapsulated in Docker containers running on pods within your cluster.
    3. Set up a Gateway – To manage and route external traffic to the different services in the cluster.
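    The autoscaling mentioned above (step 2 combined with a Horizontal Pod Autoscaler) can be described as a plain Kubernetes manifest. The helper below builds a minimal autoscaling/v2 HorizontalPodAutoscaler manifest as a Python dict; the deployment name, replica bounds, and CPU target are illustrative assumptions, not values prescribed by this setup:

```python
def hpa_manifest(deployment_name: str, namespace: str,
                 min_replicas: int = 2, max_replicas: int = 10,
                 cpu_target_percent: int = 70) -> dict:
    """Build an autoscaling/v2 HorizontalPodAutoscaler manifest as a dict.

    All names and thresholds here are illustrative; adjust them to match
    the Deployment that actually serves your model.
    """
    return {
        "apiVersion": "autoscaling/v2",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {
            "name": f"{deployment_name}-hpa",
            "namespace": namespace,
        },
        "spec": {
            # Point the autoscaler at the Deployment running the model.
            "scaleTargetRef": {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "name": deployment_name,
            },
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
            # Scale on average CPU utilization across the pods.
            "metrics": [{
                "type": "Resource",
                "resource": {
                    "name": "cpu",
                    "target": {
                        "type": "Utilization",
                        "averageUtilization": cpu_target_percent,
                    },
                },
            }],
        },
    }

manifest = hpa_manifest("ai-model-service", "ai-services")
```

    The same dict can be applied with `kubectl apply -f` after serializing to YAML, or passed to an infrastructure-as-code tool.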

    Pulumi offers resources to create and manage cloud infrastructure, including Kubernetes resources. The Gateway API resources (GatewayClass, Gateway, HTTPRoute, and so on) are distributed as custom resource definitions rather than as part of the core Kubernetes API, so a Pulumi program can create them through the Kubernetes provider and have the gateway handle ingress traffic and route it to the appropriate services.

    Below is a Python program that uses Pulumi to accomplish this task. Please note that this is a high-level example: it assumes you have already set up a Kubernetes cluster, installed the Gateway API CRDs along with a controller that implements them, and deployed your AI model as a service within the cluster. The example focuses on defining a Gateway and an HTTPRoute to route traffic to the AI service.

```python
import pulumi
import pulumi_kubernetes as k8s

# Configuration variables for the namespace and the service name.
# These should match your cluster's configuration and the namespace
# where the AI service is deployed.
namespace_name = "ai-services"
service_name = "ai-model-service"

# Define a Gateway to handle the incoming traffic. Gateway API resources
# are custom resources, so we create them via apiextensions.CustomResource.
gateway = k8s.apiextensions.CustomResource(
    "ai-gateway",
    api_version="gateway.networking.k8s.io/v1alpha2",
    kind="Gateway",
    metadata={
        "name": "ai-gateway",
        "namespace": namespace_name,
    },
    spec={
        # Ensure that this GatewayClass exists in your cluster.
        "gatewayClassName": "example-gatewayclass",
        "listeners": [{
            "name": "http",
            "protocol": "HTTP",
            "port": 80,
            # Allow HTTPRoutes from any namespace to attach to this listener.
            "allowedRoutes": {
                "namespaces": {"from": "All"},
            },
        }],
    },
)

# Define an HTTPRoute that attaches to the Gateway and routes the
# traffic to the actual AI service.
http_route = k8s.apiextensions.CustomResource(
    "ai-http-route",
    api_version="gateway.networking.k8s.io/v1alpha2",
    kind="HTTPRoute",
    metadata={
        "name": "ai-http-route",
        "namespace": namespace_name,
        "labels": {"app": "ai-model"},
    },
    spec={
        # Attach this route to the Gateway defined above.
        "parentRefs": [{"name": "ai-gateway"}],
        "hostnames": ["ai.example.com"],
        "rules": [{
            "matches": [{
                "path": {"type": "PathPrefix", "value": "/"},
            }],
            # Forward matched traffic to the AI service.
            "backendRefs": [{
                "name": service_name,
                "port": 80,
                "weight": 1,
            }],
        }],
    },
)

# Export the names of the created Gateway and HTTPRoute.
pulumi.export("gateway_name", gateway.metadata["name"])
pulumi.export("http_route_name", http_route.metadata["name"])
```


    • We create a Gateway object, which is part of the Kubernetes Gateway API, to handle incoming HTTP traffic. We specify a listener on port 80 (the default port for HTTP traffic).

    • We then define an HTTPRoute, which specifies how HTTP requests should be matched and routed to backend services—in this case, our deployed AI service.

    • You must ensure that the GatewayClass referenced by the Gateway exists in your cluster when you deploy this configuration. Also make sure that the namespace_name and service_name variables match your actual Kubernetes configuration.

    • The listeners property in the Gateway object and the rules in the HTTPRoute object determine how traffic gets routed to your services.

    • Finally, we export the names of the created Gateway and HTTPRoute so they can be referenced in further configuration or in your continuous deployment pipelines.
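    One subtlety worth knowing: the Gateway API's PathPrefix match type matches on path-segment boundaries, not raw string prefixes, so a rule for "/models" matches "/models" and "/models/v1" but not "/modelsv1". The real matching is done by your gateway controller; the function below is only an illustrative Python sketch of those semantics:

```python
def path_prefix_match(prefix: str, path: str) -> bool:
    """Sketch of Gateway API PathPrefix semantics: the prefix must
    match the request path on whole path-segment boundaries."""
    # "/" matches every path.
    if prefix == "/":
        return True
    # Ignore a trailing slash on the configured prefix.
    prefix = prefix.rstrip("/")
    # Match either the exact path or a longer path at a segment boundary.
    return path == prefix or path.startswith(prefix + "/")
```

    For example, `path_prefix_match("/models", "/models/v1/predict")` is true, while `path_prefix_match("/models", "/modelsv1")` is false.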

    Keep in mind that the Kubernetes Gateway API is an evolving standard, and different Kubernetes clusters may support different versions of it. Always refer to your cluster's and gateway controller's documentation for the most accurate and up-to-date information on Gateway API support.