
Accessing Kubernetes clusters

    After the cluster is created with a Pulumi update, the stack will have outputs such as the cluster’s kubeconfig file contents and its cluster name for reference.
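
    For reference, here is a minimal sketch of how such outputs might be exported from an EKS-based Cluster Configuration stack, assuming the @pulumi/eks package (the resource name and settings are illustrative, not the exact program):

    import * as eks from "@pulumi/eks";
    
    // Create an EKS cluster (node sizing and networking settings omitted for brevity).
    const cluster = new eks.Cluster("k8s-aws-cluster");
    
    // Export the kubeconfig contents and the cluster's name for use with kubectl.
    export const kubeconfig = cluster.kubeconfig;
    export const clusterName = cluster.eksCluster.name;
    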

    The full code for this stack is on GitHub.

    Overview

    We’ll explore how to:

      - Access the Cluster
      - Query the Cluster
      - Deploy a Workload

    Access the Cluster

    In EKS, the account caller will be placed into the system:masters Kubernetes RBAC group by default. The generated kubeconfig will be specific to this primary cluster creator use case, and it must be copied and reconfigured for use with the other IAM roles the caller assumes, as demonstrated in Configure Access Control.

    As an Admin

    Authentication

    Authenticate as the admins role from the Identity stack.

    $ aws sts assume-role --role-arn `pulumi stack output adminsIamRoleArn` --role-session-name k8s-admin
    

    Kubeconfig Setup

    To access your new Kubernetes cluster using kubectl, we need to set up the kubeconfig file from the Cluster Configuration stack and export the KUBECONFIG environment variable for kubectl to use.

    Set up the KUBECONFIG environment variable.

    $ export KUBECONFIG=`pwd`/kubeconfig-admin.json
    

    Get the Admins IAM Role ARN.

    $ pulumi stack output adminsIamRoleArn
    arn:aws:iam::000000000000:role/admins-eksClusterAdmin-0627674
    

    Make a copy of the kubeconfig file, which will be edited so that admins authenticate using the adminsIamRoleArn output.

    $ pulumi stack output kubeconfig > kubeconfig-admin.json
    

    Edit kubeconfig-admin.json to use a role for authentication in the args of the aws-iam-authenticator, e.g.

    ...
    "users": [
      {
        "name": "aws",
        "user": {
          "exec": {
            "apiVersion": "client.authentication.k8s.io/v1alpha1",
            "args": [
              "token",
              "-i",
              "k8s-aws-cluster-eksCluster-1ef1afe",
              "-r",
              "arn:aws:iam::000000000000:role/admins-eksClusterAdmin-0627674"
            ],
            "command": "aws-iam-authenticator"
          }
        }
      }
    ]
    

    As a Developer

    Authentication

    Authenticate as the devs role from the Identity stack.

    $ aws sts assume-role --role-arn `pulumi stack output devsIamRoleArn` --role-session-name k8s-devs
    

    Kubeconfig Setup

    To access your new Kubernetes cluster using kubectl, we need to set up the kubeconfig file from the Cluster Configuration stack and export the KUBECONFIG environment variable for kubectl to use.

    Set up the KUBECONFIG environment variable.

    $ export KUBECONFIG=`pwd`/kubeconfig-devs.json
    

    Get the Devs IAM Role ARN.

    $ pulumi stack output devsIamRoleArn
    arn:aws:iam::000000000000:role/devs-eksClusterDeveloper-e332028
    

    Make a copy of the kubeconfig file, which will be edited so that devs authenticate using the devsIamRoleArn output.

    $ pulumi stack output kubeconfig > kubeconfig-devs.json
    

    Edit kubeconfig-devs.json to use a role for authentication in the args of the aws-iam-authenticator, e.g.

    ...
    "users": [
      {
        "name": "aws",
        "user": {
          "exec": {
            "apiVersion": "client.authentication.k8s.io/v1alpha1",
            "args": [
              "token",
              "-i",
              "k8s-aws-cluster-eksCluster-1ef1afe",
              "-r",
              "arn:aws:iam::000000000000:role/devs-eksClusterDeveloper-e332028"
            ],
            "command": "aws-iam-authenticator"
          }
        }
      }
    ]
    

    In AKS, the account caller will be placed into the system:masters Kubernetes RBAC group by default. Two kubeconfig files will be generated, one specific to the admin use case and one to the cluster user use case.

    To configure the cluster for use with IAM roles, check out Configure Access Control.
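
    For reference, here is a rough sketch of where those two kubeconfig outputs come from, assuming the classic @pulumi/azure provider (the names and node settings are illustrative, not the exact program):

    import * as azure from "@pulumi/azure";
    
    // Resource group and AKS cluster (illustrative settings).
    const resourceGroup = new azure.core.ResourceGroup("k8s-az-rg", { location: "WestUS2" });
    const cluster = new azure.containerservice.KubernetesCluster("k8s-az-cluster", {
        location: resourceGroup.location,
        resourceGroupName: resourceGroup.name,
        dnsPrefix: "k8s-az",
        defaultNodePool: { name: "default", nodeCount: 2, vmSize: "Standard_DS2_v2" },
        identity: { type: "SystemAssigned" },
    });
    
    // Export the admin credentials and the cluster user credentials as the two kubeconfigs.
    // Note: kubeAdminConfigRaw is only populated when AAD-integrated RBAC is enabled on the cluster.
    export const kubeconfigAdmin = cluster.kubeAdminConfigRaw;
    export const kubeconfig = cluster.kubeConfigRaw;
    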

    Authentication

    Authenticate as the ServicePrincipal from the Identity stack.

    $ az login --service-principal --username $ARM_CLIENT_ID --password $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
    

    Admin Kubeconfig Setup

    To access your new Kubernetes cluster using kubectl, we need to set up the kubeconfig file.

    $ pulumi stack output kubeconfigAdmin > kubeconfig-admin.json
    $ export KUBECONFIG=`pwd`/kubeconfig-admin.json
    

    Developers Kubeconfig Setup

    To access your new Kubernetes cluster using kubectl, we need to set up the kubeconfig file.

    $ pulumi stack output kubeconfig > kubeconfig-devs.json
    $ export KUBECONFIG=`pwd`/kubeconfig-devs.json
    

    In Google Cloud, the account caller will be placed into the system:masters Kubernetes RBAC group by default. The kubeconfig generated will be specific to this primary cluster creator use-case.

    Google Cloud authentication uses tokens to operate as Members, such as Users or ServiceAccounts, with the permissions detailed in Configure Access Control.
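
    For reference, here is a rough sketch of how such a token-based kubeconfig might be exported from the Cluster Configuration stack, assuming @pulumi/gcp and the gke-gcloud-auth-plugin credential helper (names and settings are illustrative, not the exact program):

    import * as pulumi from "@pulumi/pulumi";
    import * as gcp from "@pulumi/gcp";
    
    // A GKE cluster (illustrative settings; the full stack also configures node pools and versions).
    const cluster = new gcp.container.Cluster("k8s-gke-cluster", {
        initialNodeCount: 2,
    });
    
    // Build a kubeconfig whose user entry defers to gcloud for tokens, so whichever
    // Member is currently authenticated (the admins or devs ServiceAccount) is used.
    export const kubeconfig = pulumi.secret(pulumi
        .all([cluster.name, cluster.endpoint, cluster.masterAuth])
        .apply(([name, endpoint, auth]) => JSON.stringify({
            apiVersion: "v1",
            kind: "Config",
            "current-context": name,
            clusters: [{
                name: name,
                cluster: {
                    "certificate-authority-data": auth.clusterCaCertificate,
                    server: `https://${endpoint}`,
                },
            }],
            contexts: [{ name: name, context: { cluster: name, user: name } }],
            users: [{
                name: name,
                user: {
                    exec: {
                        apiVersion: "client.authentication.k8s.io/v1beta1",
                        command: "gke-gcloud-auth-plugin",
                    },
                },
            }],
        }, null, 2)));
    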

    Admin Authentication

    Authenticate as the admins ServiceAccount from the Identity stack.

    $ pulumi stack output adminsIamServiceAccountSecret > k8s-admin-sa-key.json
    $ gcloud auth activate-service-account --key-file k8s-admin-sa-key.json
    

    Developer Authentication

    Authenticate as the devs ServiceAccount from the Identity stack.

    $ pulumi stack output devsIamServiceAccountSecret > k8s-devs-sa-key.json
    $ gcloud auth activate-service-account --key-file k8s-devs-sa-key.json
    

    Kubeconfig Setup

    To access your new Kubernetes cluster using kubectl, we need to set up the kubeconfig file and export the KUBECONFIG environment variable for kubectl to use.

    $ pulumi stack output --show-secrets kubeconfig > kubeconfig.json
    $ export KUBECONFIG=`pwd`/kubeconfig.json
    

    Query the Cluster

    Get cluster information.

    $ kubectl version
    $ kubectl cluster-info
    

    Get the Nodes.

    $ kubectl get nodes -o wide --show-labels
    

    Get all Pods in the cluster, and show output attributes.

    $ kubectl get pods --all-namespaces -o wide --show-labels
    

    Get all Pods in the designated developer Namespace, and show output attributes.

    $ kubectl get pods -n `pulumi stack output appsNamespaceName` -o wide --show-labels
    

    Get the ConfigMaps of the kube-system Namespace.

    $ kubectl get cm -n kube-system
    

    Deploy a Workload

    Imperatively deploy an NGINX Pod and public load-balanced Service:

    $ kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80 --expose --service-overrides='{"spec":{"type":"LoadBalancer"}}'
    

    Once it is deployed, after a few moments visit the load balancer URL.

    If the load balancer is exposed as a hostname (as on EKS):

    $ if ING_LB=$((kubectl get svc nginx -o template --template='{{(index .status.loadBalancer.ingress 0).hostname}}') 2>&1) ; then echo "http://$ING_LB"; else echo "LB is not ready yet."; fi
    

    If the load balancer is exposed as an IP address (as on AKS and GKE):

    $ if ING_LB=$((kubectl get svc nginx -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}') 2>&1) ; then echo "http://$ING_LB"; else echo "LB is not ready yet."; fi
    

    Delete the pod and service.

    $ kubectl delete pod/nginx svc/nginx
    

    Declaratively deploy an NGINX Pod and public load-balanced Service:

    import * as pulumi from "@pulumi/pulumi";
    import * as k8s from "@pulumi/kubernetes";
    
    const name = "nginx";
    
    // The cluster's kubeconfig, e.g. supplied as a (secret) stack config value.
    const config = new pulumi.Config();
    const kubeconfig = config.requireSecret("kubeconfig");
    
    // Expose a k8s provider instance of the cluster.
    const provider = new k8s.Provider("provider", {kubeconfig: kubeconfig});
    
    // Create an NGINX Pod.
    const nginx = new k8s.core.v1.Pod(name,
        {
            metadata: {labels: {app: "nginx"}},
            spec: {
                containers: [
                    {
                        name: name,
                        image: "nginx:latest",
                        ports: [{ name: "http", containerPort: 80 }]
                    }
                ],
            }
        }, {provider: provider}
    );
    
    // Create a public LoadBalancer Service that fronts the NGINX Pod.
    const service = new k8s.core.v1.Service(name,
        {
            metadata: {labels: {app: "nginx"}},
            spec: {
                type: "LoadBalancer",
                ports: [{ port: 80, targetPort: "http" }],
                selector: {app: "nginx"},
            },
        }, {provider: provider}
    );
    
    // Export the Service name and public LoadBalancer Endpoint
    export const serviceName = service.metadata.name;
    export const serviceHostname = service.status.loadBalancer.ingress[0].hostname;
    

    If the Service is exposed as a hostname (as on EKS), visit the load balancer listed in the serviceHostname output after a few moments.

    $ curl `pulumi stack output serviceHostname`
    

    If the Service is exposed as an IP address (as on AKS and GKE), export serviceIp instead:

    // Export the Service name and public LoadBalancer Endpoint
    export const serviceName = service.metadata.name;
    export const serviceIp = service.status.loadBalancer.ingress[0].ip;
    

    After a few moments, visit the load balancer listed in the serviceIp output.

    $ curl `pulumi stack output serviceIp`
    

    To tear down NGINX, delete its definition in the Pulumi program and run a Pulumi update.

    Learn More

    See the official Kubernetes Basics tutorial for more details.
