Using kubernetes networking.gke.io with operator.knative.dev
In this guide, we'll walk through integrating Kubernetes networking resources in a Google Kubernetes Engine (GKE) environment with Knative, an open-source platform built on Kubernetes that provides components for building, deploying, and managing modern serverless workloads.
Knative's serving component offers features like scale-to-zero, autoscaling, in-cluster builds, and eventing frameworks for cloud-native applications on Kubernetes. It encapsulates the complexity of networking and autoscaling into simple custom resource definitions (CRDs).
Here's an overview of the steps involved:
- Set up Knative on your Kubernetes cluster: We need to install Knative Serving, which gives us the ability to manage our serverless workloads. It includes a networking layer that can be powered by the `networking.gke.io` resources for GKE.
- Define Knative Serving resources: We'll create Knative Serving resources like `Service` and `Route` to define how our application should behave and be exposed within the Kubernetes cluster.
Below is a Pulumi TypeScript program that assumes you already have a Kubernetes cluster running in GKE. We'll focus on adding Knative to your cluster and setting up a simple Knative Service to demonstrate how the two systems can work together.
```typescript
import * as k8s from '@pulumi/kubernetes';

// Create a Kubernetes provider instance that uses our existing GKE cluster.
const provider = new k8s.Provider('gkeK8s', {
    kubeconfig: '<YOUR_KUBECONFIG>', // Ensure that you replace this with your actual kubeconfig.
});

// Install Knative Serving
const knativeServingNamespace = new k8s.core.v1.Namespace('knative-serving-ns', {
    metadata: {
        name: 'knative-serving',
    },
}, { provider });

const knativeServingYaml = new k8s.yaml.ConfigFile('knative-serving-install', {
    file: 'https://github.com/knative/serving/releases/download/v0.24.0/serving-crds.yaml', // Replace with your desired version.
}, { provider, dependsOn: knativeServingNamespace });

// Define Knative Service
const helloWorldService = new k8s.apiextensions.CustomResource('helloworld-service', {
    apiVersion: 'serving.knative.dev/v1',
    kind: 'Service',
    metadata: {
        namespace: knativeServingNamespace.metadata.name,
        name: 'helloworld-go',
    },
    spec: {
        template: {
            spec: {
                containers: [
                    {
                        image: 'gcr.io/knative-samples/helloworld-go',
                        env: [
                            { name: 'TARGET', value: 'Go Sample v1' },
                        ],
                    },
                ],
            },
        },
    },
}, { provider, dependsOn: knativeServingYaml });

// Export the URL so we can easily access our application.
// `status` is not statically typed on CustomResource, so cast before reading it.
export const url = (helloWorldService as any).status.url;

/*
 * Please note that this is a simplified demonstration. In a real-world scenario, you will
 * want to manage the versions and configurations more carefully, and potentially include
 * domain configuration and other networking resources as necessary for your application
 * to function correctly in a production environment.
 */
```
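Note that `serving-crds.yaml` installs only the custom resource definitions. A complete installation also applies the core components (controller, activator, autoscaler, webhook), which Knative publishes as `serving-core.yaml` in the same release. As a sketch, reusing the same pattern (the version shown mirrors the CRD manifest above and should be kept in sync):

```typescript
import * as k8s from '@pulumi/kubernetes';

const provider = new k8s.Provider('gkeK8s', {
    kubeconfig: '<YOUR_KUBECONFIG>', // Replace with your actual kubeconfig.
});

// serving-core.yaml contains the Knative Serving controller, activator,
// autoscaler, and webhook. Apply it after the CRDs are in place.
const knativeServingCore = new k8s.yaml.ConfigFile('knative-serving-core', {
    file: 'https://github.com/knative/serving/releases/download/v0.24.0/serving-core.yaml',
}, { provider });
```

In a real program you would add `dependsOn: knativeServingYaml` so the CRDs are registered before the core components are applied.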
Here's a breakdown of the code:
- We start by importing the necessary Pulumi and Kubernetes packages.
- We create a `k8s.Provider` to communicate with our existing GKE cluster. The `kubeconfig` is the configuration file containing connection information for the cluster; replace `<YOUR_KUBECONFIG>` with the actual contents of your kubeconfig file.
- Next, we create a new Kubernetes namespace named `knative-serving` (Pulumi resource `knative-serving-ns`) for the Knative Serving components.
- We then apply the Knative Serving CRD manifest from the official release. This registers the custom resource definitions in our cluster; a complete installation also requires the core components and a networking layer.
- After installing the Knative Serving CRDs, we define a Knative `Service` named `helloworld-go` that deploys a sample Go application. This service will automatically have networking set up by the Knative infrastructure.
Please note that this is a simplified demonstration and doesn't include everything you would need for production. You will need to manage configurations more carefully, set up DNS for your services, and so on.
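For example, Knative resolves service domains through the `config-domain` ConfigMap in the `knative-serving` namespace. A minimal sketch of pointing all services at a custom domain (`example.com` is a placeholder; you would also need a wildcard DNS record targeting your ingress IP, and in practice you would patch the existing ConfigMap rather than create it):

```typescript
import * as k8s from '@pulumi/kubernetes';

const provider = new k8s.Provider('gkeK8s', {
    kubeconfig: '<YOUR_KUBECONFIG>', // Replace with your actual kubeconfig.
});

// Map Knative Services to a custom domain. Each key is a domain; an empty
// value applies it to all services. Point *.example.com at your ingress IP.
const domainConfig = new k8s.core.v1.ConfigMap('config-domain', {
    metadata: {
        namespace: 'knative-serving',
        name: 'config-domain',
    },
    data: {
        'example.com': '',
    },
}, { provider });
```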
This program demonstrates the interaction between Pulumi, Kubernetes networking on GKE, and Knative Serving. Once applied, it allows us to deploy scale-to-zero services, handle autoscaling, and manage all the networking aspects of the serving layer with Knative and GKE.