1. Deploy the postgres-instance helm chart on AWS EKS


    To deploy a Postgres instance using a Helm chart on an AWS EKS cluster, here's what we need to accomplish:

    1. Create an EKS cluster.
    2. Add the necessary IAM role for the cluster.
    3. Deploy the Postgres Helm chart to the cluster.

    We'll be using Pulumi's aws, eks, and kubernetes packages to create the EKS cluster and deploy the Helm chart.

    The eks.Cluster class from the @pulumi/eks package will be used to create and configure an EKS cluster. This class abstracts away the details of setting up an EKS cluster and is simpler to use than the lower-level aws.eks.Cluster resource.

    For deploying the Helm chart, we will use the kubernetes.helm.v3.Chart class from the @pulumi/kubernetes package which allows us to deploy Helm charts in a Kubernetes cluster managed by Pulumi.
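    The Chart class also lets you override chart values at deploy time via its values property. A minimal sketch (the value keys shown follow the Bitnami 10.x chart line and may differ in newer chart versions; the password and size are illustrative placeholders, not defaults):

    import * as k8s from "@pulumi/kubernetes";

    // Hypothetical values override for the Bitnami PostgreSQL chart (10.x line).
    // "change-me" and "8Gi" are placeholders; substitute your own settings.
    const postgresql = new k8s.helm.v3.Chart("postgresql-custom", {
        chart: "postgresql",
        fetchOpts: { repo: "https://charts.bitnami.com/bitnami" },
        values: {
            postgresqlPassword: "change-me",   // value key used by the 10.x chart line
            persistence: { size: "8Gi" },      // size of the persistent volume claim
        },
    });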

    Let's go through the Pulumi TypeScript program step by step:

    Step 1: Define EKS Cluster

    We'll begin by creating a new EKS cluster. The eks.Cluster class makes it easy to stand up an EKS cluster, including the necessary VPC and IAM resources if they are not already specified.

    Step 2: Define IAM Role for EKS

    To allow our EKS cluster to manage resources on our behalf, the cluster needs an IAM role with the appropriate policies attached. The eks.Cluster component creates this service role (and the node group's instance role) automatically when you don't supply your own, which is why the program below doesn't declare one explicitly.
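    If you prefer to manage the role yourself, here is a minimal sketch (the resource names are arbitrary; the trust policy and managed-policy ARN are the standard ones for an EKS control plane, and eks.Cluster accepts the role via its serviceRole option):

    import * as aws from "@pulumi/aws";
    import * as eks from "@pulumi/eks";

    // Trust policy allowing the EKS service to assume this role.
    const eksAssumeRolePolicy = JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "eks.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    });

    const clusterRole = new aws.iam.Role("eks-cluster-role", {
        assumeRolePolicy: eksAssumeRolePolicy,
    });

    // Attach the AWS-managed policy that EKS control planes require.
    new aws.iam.RolePolicyAttachment("eks-cluster-policy", {
        role: clusterRole.name,
        policyArn: "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
    });

    // Pass the role to the cluster instead of letting the component create one.
    const cluster = new eks.Cluster("my-cluster", { serviceRole: clusterRole });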

    Step 3: Deploy Postgres Helm Chart

    Once our EKS cluster is up and running, we'll deploy the Postgres Helm chart to the cluster. The Helm chart will create the necessary Kubernetes resources to run Postgres in our EKS cluster.

    Here is the program that accomplishes these steps:

    import * as eks from "@pulumi/eks";
    import * as k8s from "@pulumi/kubernetes";

    // Create an EKS cluster with the default configuration.
    const cluster = new eks.Cluster("my-cluster", {
        instanceType: "t2.medium",
        desiredCapacity: 2,
        minSize: 1,
        maxSize: 2,
        storageClasses: "gp2",
        deployDashboard: false,
    });

    // Export the cluster's kubeconfig.
    export const kubeconfig = cluster.kubeconfig;

    // Create a Kubernetes provider instance that uses our EKS cluster from above.
    const k8sProvider = new k8s.Provider("k8s-provider", {
        kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
    });

    // Deploy the Postgres Helm chart into the cluster.
    const postgresqlChart = new k8s.helm.v3.Chart("postgresql", {
        chart: "postgresql",
        version: "10.3.11", // Specify the version of the chart to be deployed.
        namespace: "default", // Specify the namespace for the Postgres deployment.
        fetchOpts: { repo: "https://charts.bitnami.com/bitnami" },
    }, { provider: k8sProvider });

    // Export the chart's resources so their readiness can be inspected.
    export const postgresqlStatus = postgresqlChart.ready;

    How It Works:

    • eks.Cluster creates a new EKS cluster.
    • new k8s.Provider creates a Kubernetes provider for Pulumi; this is how Pulumi communicates with the Kubernetes cluster. Note that it depends on the kubeconfig generated by the EKS cluster.
    • k8s.helm.v3.Chart is used for deploying the PostgreSQL Helm chart in our cluster. We've specified the version and the repository where the Helm chart can be found.
    • Once you have run pulumi up, Pulumi will provision these resources in AWS.
    • The kubeconfig is exported so that you can interact with your cluster using kubectl by providing it as a configuration file.
    • The status of the PostgreSQL deployment can be useful for diagnostic purposes or for automation.

    Connecting to Your Cluster:

    After deployment, you can connect to your EKS cluster using kubectl:

    pulumi stack output kubeconfig > kubeconfig.yaml
    kubectl --kubeconfig=kubeconfig.yaml get pods --namespace default

    This will show you the pods running the Postgres instance you've just deployed.
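    To connect to Postgres itself, you'll also need the password the chart generated. One way is to read it from the Kubernetes Secret the chart creates; a sketch (the secret name "postgresql" and key "postgresql-password" follow the Bitnami 10.x chart's conventions and may differ for other versions; verify them with kubectl get secrets):

    import * as pulumi from "@pulumi/pulumi";
    import * as k8s from "@pulumi/kubernetes";

    // Look up the Secret created by the chart in the "default" namespace.
    const pgSecret = k8s.core.v1.Secret.get("pg-secret", "default/postgresql", {
        provider: k8sProvider, // the Kubernetes provider created in the program above
    });

    // Secret data is base64-encoded; decode it and mark the output as secret.
    export const postgresPassword = pulumi.secret(
        pgSecret.data.apply(d => Buffer.from(d["postgresql-password"], "base64").toString()),
    );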

    Remember that you'll need the AWS CLI and the Pulumi CLI configured with the appropriate credentials and contexts to manage AWS resources and Pulumi stacks.