1. Deploy the litmus-agent helm chart on Google Kubernetes Engine (GKE)


    To deploy the Litmus Chaos Agent Helm Chart on Google Kubernetes Engine (GKE), we need to follow a few steps. First, we'll create a GKE cluster using Pulumi's GKE resources. Then, we'll deploy the Litmus Agent Helm chart to the cluster using Pulumi's Helm resources.

    Here is a high-level overview of the process:

    1. Set up a GKE cluster: We use the gcp.container.Cluster resource to create a new GKE cluster. In this example, we'll create a simple cluster with default configurations.
    2. Install the Helm chart: Once the cluster is running, we use kubernetes.helm.v3.Chart to deploy the Helm chart, specifying the Litmus chart along with any values we wish to override.

    Below you'll find a Pulumi program written in TypeScript that accomplishes these steps. The program assumes that:

    • You have Pulumi installed and set up.
    • You have already configured Pulumi to communicate with your Google Cloud account.
    • You have kubectl installed and configured to interact with your Kubernetes clusters.
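    As a quick sketch of the second assumption (the project ID and zone below are placeholders, not values from this guide), configuring a Pulumi stack for Google Cloud typically looks like:

```shell
# Authenticate the Google Cloud SDK; Pulumi's GCP provider
# picks up these application-default credentials.
gcloud auth application-default login

# Point the current Pulumi stack at your project and zone
# (replace the placeholder values with your own).
pulumi config set gcp:project my-gcp-project
pulumi config set gcp:zone us-central1-a
```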

    Let's start with the Pulumi program to create a GKE cluster and deploy the Litmus Agent Helm Chart onto it.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Create a GKE cluster.
const cluster = new gcp.container.Cluster("litmus-cluster", {
    initialNodeCount: 2,
    minMasterVersion: "latest", // Or specify your desired version
    nodeVersion: "latest",
    nodeConfig: {
        machineType: "n1-standard-1",
        oauthScopes: [
            "https://www.googleapis.com/auth/compute",
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    },
});

// Export the cluster name.
export const clusterName = cluster.name;

// Export a kubeconfig to access the cluster.
export const kubeConfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, masterAuth]) => {
        const context = `${gcp.config.project}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });

// Set up a Kubernetes provider using the kubeconfig from the newly created GKE cluster.
const k8sProvider = new k8s.Provider("k8s-provider", {
    kubeconfig: kubeConfig,
});

// Deploy the Litmus Helm chart to the GKE cluster.
const litmusChart = new k8s.helm.v3.Chart("litmus-agent", {
    chart: "litmus", // Chart name within the repository below.
    // You can pin the chart version here using the 'version' field.
    // version: "x.y.z"
    fetchOpts: {
        repo: "https://litmuschaos.github.io/litmus-helm/", // Litmus Helm repository to use.
    },
}, { provider: k8sProvider });

// Export the list of resources created by the Helm chart once they are ready.
export const litmusChartResources = litmusChart.ready;
```

    In this program:

    • We first import the required Pulumi libraries for GCP and Kubernetes.
    • We create a GKE cluster named litmus-cluster with a specified number of nodes and default configurations.
    • The cluster's kubeConfig is constructed from outputs of the newly created cluster; the Kubernetes provider uses it to communicate with the cluster, and you can also save it for use with kubectl.
    • A Kubernetes provider is instantiated, which utilizes the kubeConfig from the GKE cluster.
    • Finally, we deploy the Litmus Agent using the Helm chart with the name litmus-agent. You can specify the version of the chart and use values to customize the Helm release as needed.
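    As a sketch of that last point, here is how a customized release might look. The values keys below are hypothetical placeholders, not taken from the actual Litmus chart; consult the chart's values.yaml for the real keys. The snippet assumes the k8sProvider defined in the program above.

```typescript
import * as k8s from "@pulumi/kubernetes";

// A sketch of customizing the Helm release: 'values' mirrors the
// chart's values.yaml. The keys below are hypothetical placeholders;
// check the Litmus chart's values.yaml for the real ones.
const customizedChart = new k8s.helm.v3.Chart("litmus-agent", {
    chart: "litmus",
    // version: "x.y.z", // optionally pin a specific chart version
    fetchOpts: {
        repo: "https://litmuschaos.github.io/litmus-helm/",
    },
    values: {
        someSection: {            // hypothetical key
            someSetting: "value", // hypothetical key
        },
    },
}, { provider: k8sProvider });
```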

    Remember, to deploy this Pulumi program, you must first write the code to a file (e.g., index.ts) within a Pulumi project. After writing the program, navigate to the directory where your program is located in your terminal and run pulumi up, which initializes the deployment process. Pulumi will print out the proposed changes; if you confirm them, it will provision the resources as defined in your program.
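    A sketch of that workflow from the terminal (stack and file names are illustrative):

```shell
# From the directory containing Pulumi.yaml and index.ts:
pulumi stack init dev   # create a stack if you don't have one yet
pulumi up               # preview the changes, then confirm to deploy

# After the update completes, save the exported kubeconfig and
# check that the Litmus pods are starting (namespace may vary).
pulumi stack output kubeConfig > kubeconfig.yaml
KUBECONFIG=./kubeconfig.yaml kubectl get pods --all-namespaces
```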