1. Deploy the datadog-apm helm chart on Google Kubernetes Engine (GKE)


    To deploy the Datadog APM Helm chart on a Google Kubernetes Engine (GKE) cluster using Pulumi, we will follow these steps:

    1. Create a GKE cluster.
    2. Install and configure the Helm release for Datadog APM.

    First, you need to have Pulumi installed and configured for use with your Google Cloud account. Additionally, since you're working with Kubernetes, you should have kubectl configured to interact with your clusters.

    Here's a Pulumi program in TypeScript that achieves the goal:

    Detailed Explanation

    • We'll start by creating a GKE cluster using the gcp.container.Cluster resource from the Pulumi Google Cloud (@pulumi/gcp) provider, which allows us to provision a GKE cluster.
    • Once we have the GKE cluster ready, we'll use the kubernetes.helm.v3.Chart resource which is part of the Pulumi Kubernetes provider to deploy the Datadog APM Helm chart.
    • Helm charts are commonly used packages in Kubernetes that contain a pre-configured set of resources for deploying applications, and Datadog APM is available as a Helm chart.
    • We will set up the necessary configuration for the Datadog APM chart such as setting the API key which you must obtain from your Datadog account.

    Below is the program code:

    import * as pulumi from "@pulumi/pulumi";
    import * as gcp from "@pulumi/gcp";
    import * as k8s from "@pulumi/kubernetes";

    // Create a GKE cluster
    const cluster = new gcp.container.Cluster("my-gke-cluster", {
        initialNodeCount: 2,
        nodeConfig: {
            preemptible: true,
            machineType: "n1-standard-1",
        },
    });

    // Export the kubeconfig. cluster.name, cluster.endpoint, and
    // cluster.masterAuth are Pulumi outputs, so we combine them with
    // pulumi.all before building the YAML string.
    export const kubeconfig = pulumi
        .all([cluster.name, cluster.endpoint, cluster.masterAuth])
        .apply(([name, endpoint, masterAuth]) => {
            const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
            return `apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: ${masterAuth.clusterCaCertificate}
        server: https://${endpoint}
      name: ${context}
    contexts:
    - context:
        cluster: ${context}
        user: ${context}
      name: ${context}
    current-context: ${context}
    kind: Config
    preferences: {}
    users:
    - name: ${context}
      user:
        auth-provider:
          name: gcp
          config:
            cmd-args: config config-helper --format=json
            cmd-path: gcloud
            expiry-key: '{.credential.token_expiry}'
            token-key: '{.credential.access_token}'`;
        });

    // Create a Kubernetes provider instance that uses our cluster from above.
    const k8sProvider = new k8s.Provider("k8s-provider", {
        kubeconfig: kubeconfig,
    });

    // Deploy the Datadog APM Helm chart
    const datadogChart = new k8s.helm.v3.Chart("datadog-apm", {
        chart: "datadog",
        version: "2.4.5", // specify the chart version you wish to deploy
        fetchOpts: {
            repo: "https://helm.datadoghq.com/",
        },
        values: {
            datadog: {
                apiKey: "YOUR_DATADOG_API_KEY", // replace with your Datadog API key
                apm: {
                    enabled: true,
                },
                // ... other Datadog configurations
            },
        },
    }, { provider: k8sProvider });

    // Export the Helm release name for the stack outputs
    export const datadogChartName = "datadog-apm";

    What the code does:

    • We define a GKE cluster named "my-gke-cluster" with 2 nodes using the n1-standard-1 machine type. Preemptible VMs are used as nodes to keep costs down, but they can be terminated at any time, so they are not recommended for production use.
    • We export the kubeconfig, the configuration kubectl needs in order to connect to the Kubernetes cluster.
    • We create a Pulumi Kubernetes provider which enables our Pulumi program to interact with the GKE cluster.
    • We deploy the Datadog APM Helm chart to the Kubernetes cluster using the Pulumi Kubernetes provider. Notice the apiKey is set to "YOUR_DATADOG_API_KEY", which you need to replace with your actual Datadog API key.
    • We export datadogChartName which can be used to track the deployed Helm chart in the Pulumi stack outputs.
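
    The kubeconfig template the program builds can be exercised on its own. Below is a minimal sketch; the buildKubeconfig helper and its plain-string parameters are hypothetical (in the real program these values come from Pulumi outputs on the cluster resource):

    ```typescript
    // Hypothetical helper mirroring the kubeconfig template in the program above.
    // Only the cluster/context/user sections are shown, to illustrate how the
    // project, zone, cluster name, endpoint, and CA certificate fit together.
    function buildKubeconfig(
        project: string,
        zone: string,
        clusterName: string,
        endpoint: string,
        caCert: string,
    ): string {
        const context = `${project}_${zone}_${clusterName}`;
        return `apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: ${caCert}
        server: https://${endpoint}
      name: ${context}
    contexts:
    - context:
        cluster: ${context}
        user: ${context}
      name: ${context}
    current-context: ${context}
    kind: Config`;
    }

    const kc = buildKubeconfig("my-project", "us-central1-a", "my-gke-cluster", "1.2.3.4", "BASE64_CA_CERT");
    console.log(kc.includes("server: https://1.2.3.4")); // true
    ```

    The context name follows the `project_zone_cluster` convention that gcloud itself uses when it writes kubeconfig entries, so the generated file coexists cleanly with entries created by gcloud container clusters get-credentials.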

    Remember to replace "YOUR_DATADOG_API_KEY" with your actual Datadog API key, which you can obtain from your Datadog account. For anything beyond a quick test, store the key as an encrypted Pulumi secret (pulumi config set --secret) rather than committing it in source.
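
    Since a hardcoded placeholder is easy to forget, a small guard can abort the deployment early. This is a sketch; the checkApiKey helper is hypothetical, not part of the Pulumi SDK or the Datadog chart:

    ```typescript
    // Hypothetical guard: throw during pulumi preview/up if the placeholder
    // Datadog API key was never replaced, instead of deploying a broken agent.
    function checkApiKey(key: string): string {
        if (key === "YOUR_DATADOG_API_KEY" || key.trim() === "") {
            throw new Error("Set a real Datadog API key before deploying");
        }
        return key;
    }

    // Usage in the chart values:
    //   apiKey: checkApiKey("YOUR_DATADOG_API_KEY")
    // would fail fast with the error above until a real key is supplied.
    ```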

    Once you’ve prepared your Pulumi code with the correct configuration, you'll deploy it by running pulumi up. This will provision the GKE cluster and deploy the Datadog Helm chart into it, setting up Datadog APM in your Kubernetes environment.