Access Control for Pods on Amazon EKS

Amazon EKS clusters can use IAM roles and policies to give Pods fine-grained access to AWS services. These AWS IAM entities map into Kubernetes RBAC to configure the permissions of Pods that work with AWS services.

Together, AWS IAM and Kubernetes RBAC enable least-privileged access for your apps, scoped to the appropriate policies and user requirements.


In <100 lines of code we’ll demonstrate how EKS Pods can use AWS IAM to create fine-grained permissions for apps that integrate with other AWS services.

Pod Access Control

AWS EKS supports assigning IAM entities to a Pod's Service Account by leveraging an OIDC provider connected to the Kubernetes cluster.

Continuing with the example from the AWS blog post, when an S3 app written in Go pushes an object to a bucket with the AWS SDK, it will need write access to S3.

When a Pod is launched with a particular Service Account, the OIDC provider works with Kubernetes to verify the Pod's identity, and in turn works with the AWS Security Token Service (STS) to grant the Pod temporary credentials to use with the IAM role.
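To make that exchange concrete, here is a minimal sketch of the request an AWS SDK credential provider assembles inside the Pod. The env var names (`AWS_ROLE_ARN`, `AWS_WEB_IDENTITY_TOKEN_FILE`) are the ones EKS injects into the Pod; the helper function itself is illustrative, not part of any AWS API.

```typescript
// Sketch: the EKS webhook injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE
// into the Pod; an SDK credential provider reads them and calls
// sts:AssumeRoleWithWebIdentity with the projected token. This helper is
// illustrative only, not an AWS SDK API.
interface WebIdentityEnv {
  AWS_ROLE_ARN: string;                // IAM role the Pod may assume
  AWS_WEB_IDENTITY_TOKEN_FILE: string; // path of the projected OIDC token
}

function buildAssumeRoleRequest(env: WebIdentityEnv, token: string, sessionName: string) {
  return {
    RoleArn: env.AWS_ROLE_ARN,
    RoleSessionName: sessionName,
    WebIdentityToken: token, // contents of the projected token file
    DurationSeconds: 3600,   // STS returns short-lived credentials
  };
}

const req = buildAssumeRoleRequest(
  {
    AWS_ROLE_ARN: 'arn:aws:iam::123456789012:role/s3',
    AWS_WEB_IDENTITY_TOKEN_FILE: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token',
  },
  '<jwt read from the token file>',
  's3-app',
);
console.log(req.RoleArn); // "arn:aws:iam::123456789012:role/s3"
```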

Create an OIDC provider

Creating an OIDC provider is as simple as toggling the createOidcProvider option in the definition of your EKS cluster.

When enabled, the OIDC provider will be created and associated with the cluster’s OIDC provider URL.

import * as eks from '@pulumi/eks'

// Create an EKS cluster with default settings.
// Create and attach an OIDC provider to the cluster.
const cluster = new eks.Cluster('myCluster', {
  createOidcProvider: true,
});

Create IAM for an S3 app

We’ll use the OIDC provider's URL and Amazon Resource Name (ARN) to compose the AssumeRoleWithWebIdentity and S3 IAM policies that will be attached to a new S3 IAM role.

After the role is configured, a Service Account for the S3 Pod will be created, and annotated with the ARN of the S3 role to bind the two together.

import * as aws from '@pulumi/aws'
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'

// Create a Pulumi Kubernetes provider using the cluster's kubeconfig.
const k8sProvider = new k8s.Provider('k8s', {
  kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Create a k8s namespace in the cluster.
const appsNamespace = new k8s.core.v1.Namespace('apps', undefined, { provider: k8sProvider });
const appsNamespaceName = appsNamespace.metadata.name;

// Get the cluster's OIDC provider.
const clusterOidcProvider = cluster.core.oidcProvider;

// Create the new IAM policy for the Service Account using the AssumeRoleWithWebIdentity action.
const saName = 's3';
const saAssumeRolePolicy = pulumi
  .all([clusterOidcProvider.url, clusterOidcProvider.arn, appsNamespaceName])
  .apply(([url, arn, namespace]) =>
    aws.iam.getPolicyDocument({
      statements: [{
        actions: ['sts:AssumeRoleWithWebIdentity'],
        conditions: [{
          test: 'StringEquals',
          values: [`system:serviceaccount:${namespace}:${saName}`],
          variable: `${url.replace('https://', '')}:sub`,
        }],
        effect: 'Allow',
        principals: [{ identifiers: [arn], type: 'Federated' }],
      }],
    }),
  );

// Create a new IAM role that assumes the AssumeRoleWithWebIdentity policy.
const saRole = new aws.iam.Role(saName, {
  assumeRolePolicy: saAssumeRolePolicy.json,
});

// Attach the IAM role to an AWS S3 access policy.
const saS3Rpa = new aws.iam.RolePolicyAttachment(saName, {
  policyArn: 'arn:aws:iam::aws:policy/AmazonS3FullAccess',
  role: saRole,
});

// Create a Service Account with the IAM role annotated to use with the Pod.
const sa = new k8s.core.v1.ServiceAccount(saName, {
  metadata: {
    namespace: appsNamespaceName,
    name: saName,
    annotations: {
      'eks.amazonaws.com/role-arn': saRole.arn,
    },
  },
}, { provider: k8sProvider });
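The trust policy hinges on the `sub` condition matching the Pod's Service Account exactly. As a hypothetical helper (the function name here is illustrative, not part of the Pulumi or AWS APIs), the condition can be expressed and sanity-checked in isolation:

```typescript
// Illustrative helper (not a Pulumi/AWS API): compose the StringEquals
// condition used in the trust policy by stripping the scheme from the OIDC
// issuer URL and building the expected service-account subject claim.
function oidcSubjectCondition(issuerUrl: string, namespace: string, saName: string) {
  const issuerHost = issuerUrl.replace('https://', '');
  return {
    variable: `${issuerHost}:sub`, // e.g. "oidc.eks.<region>.amazonaws.com/id/<ID>:sub"
    value: `system:serviceaccount:${namespace}:${saName}`,
  };
}

// A Pod using the "s3" Service Account in the "apps" namespace must present
// a token whose subject claim is exactly this value, or STS refuses the role.
const cond = oidcSubjectCondition(
  'https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E',
  'apps',
  's3',
);
console.log(cond.value); // "system:serviceaccount:apps:s3"
```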

Deploy an S3 app

We’ll deploy the S3 app to use the new IAM-backed Service Account.

Once the Pod is running, the Service Account annotation is processed automatically by a Kubernetes mutating admission webhook that EKS runs on your behalf.

The AWS EKS webhook manages Pod identity, and injects STS credentials into the Pod to use with the S3 role.

With the credentials, the app can successfully upload data to S3.

import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Create an S3 bucket for the Pod to write to.
const bucket = new aws.s3.Bucket("pod-irsa-job-bucket");
const bucketName = bucket.id;
const regionName = pulumi.output(aws.getRegion({}, { async: true })).name;

// Deploy the S3 app in a Pod that runs under the IAM-backed Service Account.
const podName = "s3-echoer";
const labels = { app: podName };
const s3Pod = new k8s.core.v1.Pod(podName, {
    metadata: { labels: labels, namespace: appsNamespaceName },
    spec: {
        serviceAccountName: sa.metadata.name,
        containers: [{
            name: podName,
            image: "amazonlinux:2018.03",
            command: ["sh", "-c",
                // Download the s3-echoer test binary and pipe a message into the bucket.
                pulumi.interpolate`curl -sL -o /s3-echoer https://github.com/mhausenblas/s3-echoer/releases/latest/download/s3-echoer-linux && chmod +x /s3-echoer && echo This is an in-cluster test | /s3-echoer ${bucketName} && sleep 3600`,
            ],
            env: [
                { name: "AWS_DEFAULT_REGION", value: regionName },
                { name: "ENABLE_IRP", value: "true" },
            ],
        }],
    },
}, { provider: k8sProvider });


Leveraging AWS IAM for Pod workloads is a secure and effective means of limiting privileged execution, and provides a native experience for users.

Pod IAM can be extended further by also using the Kubernetes RBAC system. This allows configuring permissions for Kubernetes API resources, and handles scenarios such as limiting the namespaces an IAM role can use, and which resources can be managed in each namespace.
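As a sketch of that extension (the resource and group names here are illustrative assumptions, and the group is presumed to be mapped to an IAM role via the cluster's aws-auth ConfigMap), a Pulumi Role and RoleBinding can scope an IAM-mapped group down to read-only access in a single namespace:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch (illustrative names): restrict a group mapped from an IAM role to
// read-only access on core resources within the "apps" namespace.
const readOnlyRole = new k8s.rbac.v1.Role("apps-read-only", {
    metadata: { namespace: "apps" },
    rules: [{
        apiGroups: [""],
        resources: ["pods", "services", "configmaps"],
        verbs: ["get", "list", "watch"],
    }],
}, { provider: k8sProvider });

const readOnlyBinding = new k8s.rbac.v1.RoleBinding("apps-read-only", {
    metadata: { namespace: "apps" },
    roleRef: {
        apiGroup: "rbac.authorization.k8s.io",
        kind: "Role",
        name: readOnlyRole.metadata.name,
    },
    // Assumption: "apps-read-only-group" is mapped to an IAM role in the
    // cluster's aws-auth ConfigMap.
    subjects: [{ kind: "Group", name: "apps-read-only-group" }],
}, { provider: k8sProvider });
```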

Next Steps

Learn more about how Pulumi works with Kubernetes, and get started if you’re new.

Check out code examples for the S3 app referenced in this post, along with other access control scenarios for EKS.

Watch the video below for more details on how OIDC and Kubernetes RBAC work in EKS. We demonstrate how to deploy fluentd-cloudwatch with IAM to forward Pod logs to AWS CloudWatch.

You can also follow us on Twitter, subscribe to PulumiTV on YouTube, or join our Community Slack channel if you have any questions.