1. Deploy the argocd-workflows helm chart on Google Kubernetes Engine (GKE)

    TypeScript

    To deploy the Argo Workflows Helm chart (published as argo-workflows in the Argo Helm repository) on Google Kubernetes Engine (GKE), you will need to accomplish several tasks:

    1. Create a GKE cluster.
    2. Set up a Kubernetes provider to interact with the created GKE cluster.
    3. Install Argo Workflows using its Helm chart.

    Below is a Pulumi program in TypeScript that demonstrates these steps:

    • We first create a GKE cluster using the gcp.container.Cluster resource.
    • Once the cluster is created, we configure the Pulumi Kubernetes provider to target our GKE cluster.
    • Finally, we deploy the Argo Workflows Helm chart using the kubernetes.helm.v3.Chart resource from Pulumi's Kubernetes provider.

    Let's set up the GKE cluster and deploy the chart:

    import * as pulumi from "@pulumi/pulumi";
    import * as gcp from "@pulumi/gcp";
    import * as k8s from "@pulumi/kubernetes";

    // Create a GKE cluster
    const cluster = new gcp.container.Cluster("argocd-workflows-cluster", {
        initialNodeCount: 2,
        minMasterVersion: "latest",
        nodeVersion: "latest",
        nodeConfig: {
            machineType: "e2-standard-2", // A cost-effective machine type; pick what suits your workload
            oauthScopes: [
                "https://www.googleapis.com/auth/compute",
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring",
            ],
        },
    });

    // Export the cluster name
    export const clusterName = cluster.name;

    // Manufacture a kubeconfig from the cluster's outputs. pulumi.all waits for
    // every output to resolve before the string is assembled.
    const kubeconfig = pulumi
        .all([cluster.name, cluster.endpoint, cluster.masterAuth])
        .apply(([name, endpoint, masterAuth]) => {
            const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
            return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
        });

    // Create a Kubernetes provider instance that authenticates to the GKE cluster
    // using the generated kubeconfig.
    const k8sProvider = new k8s.Provider("gke-k8s", { kubeconfig });

    // Deploy the Argo Workflows Helm chart
    const argocdWorkflowsChart = new k8s.helm.v3.Chart("argocd-workflows", {
        chart: "argo-workflows", // Chart name; adjust if the chart is relocated
        fetchOpts: {
            repo: "https://argoproj.github.io/argo-helm", // The Argo Workflows Helm repository
        },
    }, { provider: k8sProvider });

    // Export the Helm chart resource URN (the Chart component has no name output)
    export const argocdWorkflowsChartUrn = argocdWorkflowsChart.urn;

    Let's break down the code above:

    • We start by importing the Pulumi SDK, and the Google Cloud and Kubernetes providers.
    • The gcp.container.Cluster resource creates a new GKE cluster with the given node count and machine type for its compute instances.
    • After the cluster is created, we use apply on the cluster's outputs (its name, endpoint, and master auth data) to generate a kubeconfig.
    • We create an instance of the k8s.Provider, which knows how to authenticate to our GKE cluster using the kubeconfig.
    • Lastly, the k8s.helm.v3.Chart resource deploys the Argo Workflows Helm chart onto our GKE cluster.
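    The kubeconfig-building step can be isolated as a plain function over plain string inputs, which makes the template easy to inspect on its own. This is an illustrative sketch, not part of the Pulumi program: the function name is hypothetical, and it uses the cluster name directly as the context name rather than the project_zone_name convention gcloud uses.

    ```typescript
    // Hypothetical helper: render the same kubeconfig template as the Pulumi
    // program, but from already-resolved values instead of Pulumi Outputs.
    function buildKubeconfig(clusterName: string, endpoint: string, caCert: string): string {
        // For simplicity the cluster name doubles as the context name here.
        const context = clusterName;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${caCert}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    }
    ```

    The auth-provider stanza delegates token acquisition to gcloud, so kubectl refreshes credentials automatically as long as gcloud is on the PATH.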

    The fetchOpts field within the Helm chart resource should point to the repository hosting the Argo Workflows Helm chart. Please verify the repository URL and chart name, as these can change over time.
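    As an illustration, the chart arguments can be held in a plain object (mirroring the shape k8s.helm.v3.Chart accepts) and sanity-checked before use. The values key shown is an assumption about the argo-workflows chart's options; check the chart's values.yaml before relying on it.

    ```typescript
    // Plain-data sketch of the chart arguments, including a values override.
    // controller.workflowNamespaces is assumed from the argo-workflows chart;
    // verify it against the chart's values.yaml.
    const chartArgs = {
        chart: "argo-workflows",
        fetchOpts: {
            repo: "https://argoproj.github.io/argo-helm",
        },
        values: {
            controller: { workflowNamespaces: ["default"] },
        },
    };
    ```

    Pinning a chart version via a version field is also worth considering, since it makes deployments reproducible across runs of pulumi up.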

    To deploy this stack using Pulumi:

    1. Save the code above in a file called index.ts.
    2. Run pulumi up to preview and deploy the changes.

    After running pulumi up, you will have a GKE cluster with Argo Workflows installed and ready for use. You can use the generated kubeconfig to interact with your cluster using kubectl.
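    To hand the kubeconfig to kubectl, one option is to export it from the stack (export const kubeconfig = ...) and redirect pulumi stack output kubeconfig to a file. The helper below is a hypothetical sketch of the file-writing side, assuming the kubeconfig has already been resolved to a plain string:

    ```typescript
    import * as fs from "fs";

    // Hypothetical helper: persist a resolved kubeconfig string to disk so kubectl
    // can pick it up via the KUBECONFIG environment variable. In the Pulumi program
    // the kubeconfig is an Output<string>, so this would run inside .apply(...).
    function writeKubeconfig(yaml: string, path: string = "kubeconfig.yaml"): string {
        // Restrict permissions: the file holds cluster credentials.
        fs.writeFileSync(path, yaml, { mode: 0o600 });
        return path;
    }
    ```

    With the file in place, KUBECONFIG=kubeconfig.yaml kubectl get pods targets the new cluster.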