Master Kubernetes Secrets with Pulumi ESC + Secrets Store CSI Driver
Welcome to the second blog post of the Pulumi ESC and Kubernetes secrets series. If you haven’t had the chance to read the first blog post, go ahead and read it here.
In the previous blog post, we learned how to manage secrets with Pulumi ESC and the External Secrets Operator. While the External Secrets Operator is a great tool for managing secrets in a cloud-native way, it still creates Kubernetes secrets in the cluster. Depending on your security requirements, you might want to avoid Kubernetes secrets in your cluster altogether. This is the point where you hit the limits of the External Secrets Operator.
But don’t worry, we also have a solution for this problem. In this blog post, we will introduce the Secrets Store CSI Driver.
TL;DR?
Jump straight to the comparison of the External Secrets Operator and the Secrets Store CSI Driver here.
Refresher: Why Should You Avoid Using Kubernetes Secrets?
Before we dive into the Secrets Store CSI Driver, let’s quickly recap why you might want to avoid using native Kubernetes secrets. I wrote in depth about this topic in the previous blog post, but here is a quick summary:
- By default, etcd stores them merely base64-encoded, not encrypted, so anyone with access to etcd or the API can read them.
- Developers create them manually with kubectl commands or via manifest files, which makes them hard to manage at scale.
- Secrets are hard to manage and synchronise across different environments and clusters.
- There is no built-in way to rotate secrets automatically.
Point 1 is the most critical one; the Kubernetes documentation itself carries a warning about it.
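To see why base64 offers no protection, here is a quick shell demonstration (the password value is hypothetical):

```shell
# base64 is an encoding, not encryption: anyone who can read the
# Secret object can reverse it with a single command.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"   # prints aHVudGVyMg==

# Decoding recovers the plaintext immediately, which is exactly what
# `kubectl get secret ... -o jsonpath='{.data.<key>}' | base64 -d` does.
printf '%s' "$encoded" | base64 -d   # prints hunter2
```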
What is the Secrets Store CSI Driver?
The first thing you may notice is that the name of the Secrets Store CSI Driver references the Container Storage Interface (CSI) standard.
What is CSI, and why is it used in the Secrets Store CSI Driver?
Container Storage Interface (CSI)
The Container Storage Interface (CSI) is a standard that unifies the interface between container orchestrators (like Kubernetes) and storage vendors (like NetApp, Ceph, etc.). It guarantees that a CSI driver written for a storage vendor works with every orchestrator that supports CSI.
Before CSI, a volume plugin had to be written for every combination of orchestrator and storage vendor. Volume plugin development was coupled to the Kubernetes release cycle, and a bug in a volume plugin could break Kubernetes components rather than just the plugin itself. On top of that, volume plugins had full privileges on Kubernetes components such as the kubelet.
Combining CSI, Kubernetes and Secrets = Secrets Store CSI Driver
The Secrets Store CSI Driver is a CSI driver that mounts secrets, certificates, and keys from external secret stores into Kubernetes pods as volumes. After the volume is attached, its contents are mounted into the container's file system.
The benefit of the Secrets Store CSI Driver is that you manage the lifecycle of your secrets outside of Kubernetes while still giving your pods a Kubernetes-native way to consume them.
The Architecture of the Secret Store CSI Driver
The Secrets Store CSI Driver runs as a DaemonSet and communicates with the kubelet on every node; it talks to the configured provider over gRPC. A SecretProviderClass custom resource defines which external secret store to use. On pod start, the driver mounts a tmpfs volume into the pod and injects the secrets into it. When you delete the pod, the volume is cleaned up and the secrets are removed from the tmpfs volume.
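Conceptually, a pod consumes such a volume through the csi volume type. Here is a minimal sketch (the pod name and the secretProviderClass value are placeholders; the full, working example follows later in this post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:latest
      volumeMounts:
        - name: secrets
          mountPath: /run/secrets   # secrets appear here as plain files
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: my-provider-class   # placeholder name
```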
How to Use the Secrets Store CSI Driver with Pulumi ESC
Before we start, make sure you have the following prerequisites:
- A Pulumi Cloud account. If you don’t have one, you can create one for free.
- A Kubernetes cluster (I will be using a local KinD cluster, but you can use any Kubernetes cluster)
- Pulumi CLI installed. You can also use the standalone ESC CLI to manage secrets and configurations.
- kubectl CLI installed for some debugging
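If you want to follow along with KinD, a minimal cluster configuration is enough; this sketch assumes the demo needs no special node settings:

```yaml
# kind-config.yaml -- single control-plane node is sufficient for this demo
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```

Create the cluster with `kind create cluster --config kind-config.yaml`.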
Step 1: Deploy the Secrets Store CSI Driver and Pulumi ESC CSI Provider
We will use Pulumi with the pulumi-kubernetes provider to deploy the Secrets Store CSI Driver and the Pulumi ESC CSI Provider to our Kubernetes cluster.
Here we also use the Pulumi ESC integration with Pulumi IaC to supply the Pulumi access token securely.
First, we need to create a new Pulumi project:
# Choose your favorite Pulumi supported language
pulumi new kubernetes-<your-programming-language> --name <your-project-name>
Before we dig into the code, let's head to the Pulumi Cloud Console and create a new Pulumi ESC project with the name esc-secrets-store-csi-driver-demo, and in it create the environment dev:
In the editor, add the following YAML into the Environment definition:
values:
pulumiConfig:
pulumi-pat:
fn::secret: <your-pulumi-pat>
If you prefer to use the Pulumi CLI, you can create the environment by running:
pulumi env init <your-org>/esc-secrets-store-csi-driver-demo/dev
And set the configuration by running the env edit command and copying the above YAML into the editor:
pulumi env edit <your-org>/esc-secrets-store-csi-driver-demo/dev
Now, we need to link the Pulumi ESC environment to the Pulumi IaC project. To do this, add the following to your Pulumi.dev.yaml:
environment:
- esc-secrets-store-csi-driver-demo/dev
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";
const secretsStoreCSIDriver = new k8s.helm.v4.Chart("secrets-store-csi-driver", {
chart: "secrets-store-csi-driver",
namespace: "kube-system",
repositoryOpts: {
repo: "https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts",
},
values: {
nodeSelector: {
"kubernetes.io/os": "linux",
},
},
});
const secretsStoreCSIPulumiESCProvider = new k8s.helm.v4.Chart(
"secrets-store-csi-pulumi-esc-provider",
{
chart: "oci://ghcr.io/pulumi/helm-charts/pulumi-esc-csi-provider",
namespace: "kube-system",
values: {
nodeSelector: {
"kubernetes.io/os": "linux",
},
},
},
{ dependsOn: secretsStoreCSIDriver },
);
const config = new pulumi.Config();
const mySecret = new k8s.core.v1.Secret(
"my-secret",
{
metadata: {
namespace: "default",
name: "pulumi-access-token",
},
stringData: {
"pulumi-access-token": config.require("pulumi-pat"),
},
type: "Opaque",
},
{ dependsOn: secretsStoreCSIPulumiESCProvider },
);
"use strict";
const pulumi = require("@pulumi/pulumi");
const k8s = require("@pulumi/kubernetes");
const secretsStoreCSIDriver = new k8s.helm.v4.Chart("secrets-store-csi-driver", {
chart: "secrets-store-csi-driver",
namespace: "kube-system",
repositoryOpts: {
repo: "https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts",
},
values: {
nodeSelector: {
"kubernetes.io/os": "linux",
},
},
});
const secretsStoreCSIPulumiESCProvider = new k8s.helm.v4.Chart(
"secrets-store-csi-pulumi-esc-provider",
{
chart: "oci://ghcr.io/pulumi/helm-charts/pulumi-esc-csi-provider",
namespace: "kube-system",
values: {
nodeSelector: {
"kubernetes.io/os": "linux",
},
},
},
{ dependsOn: secretsStoreCSIDriver },
);
const config = new pulumi.Config();
const mySecret = new k8s.core.v1.Secret(
"my-secret",
{
metadata: {
namespace: "default",
name: "pulumi-access-token",
},
stringData: {
"pulumi-access-token": config.require("pulumi-pat"),
},
type: "Opaque",
},
{ dependsOn: secretsStoreCSIPulumiESCProvider },
);
import pulumi
import pulumi_kubernetes as k8s
secrets_store_csi_driver = k8s.helm.v4.Chart(
"secrets-store-csi-driver",
k8s.helm.v4.ChartArgs(
chart="secrets-store-csi-driver",
namespace="kube-system",
repository_opts=k8s.helm.v4.RepositoryOptsArgs(
repo="https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts",
),
values={
"nodeSelector": {
"kubernetes.io/os": "linux",
},
},
),
)
secrets_store_csi_pulumi_esc_provider = k8s.helm.v4.Chart(
"secrets-store-csi-pulumi-esc-provider",
k8s.helm.v4.ChartArgs(
chart="oci://ghcr.io/pulumi/helm-charts/pulumi-esc-csi-provider",
namespace="kube-system",
values={
"nodeSelector": {
"kubernetes.io/os": "linux",
},
},
),
opts=pulumi.ResourceOptions(depends_on=[secrets_store_csi_driver]),
)
config = pulumi.Config()
my_secret = k8s.core.v1.Secret(
"my-secret",
metadata=k8s.meta.v1.ObjectMetaArgs(
namespace="default", name="pulumi-access-token"
),
string_data={
"pulumi-access-token": config.require("pulumi-pat"),
},
type="Opaque",
opts=pulumi.ResourceOptions(depends_on=[secrets_store_csi_pulumi_esc_provider]),
)
package main
import (
k8s "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes"
"github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/apiextensions"
appv1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/apps/v1"
corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/core/v1"
chartv4 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/helm/v4"
metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/meta/v1"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
secretsStoreCsiDriver, err := chartv4.NewChart(ctx, "secrets-store-csi-driver", &chartv4.ChartArgs{
Chart: pulumi.String("secrets-store-csi-driver"),
Namespace: pulumi.String("kube-system"),
RepositoryOpts: chartv4.RepositoryOptsArgs{
Repo: pulumi.String("https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts"),
},
Values: pulumi.Map{
"nodeSelector": pulumi.Map{
"kubernetes.io/os": pulumi.String("linux"),
},
},
})
if err != nil {
return err
}
secretsStoreCsiPulumiEscProvider, err := chartv4.NewChart(ctx, "secrets-store-csi-pulumi-esc-provider", &chartv4.ChartArgs{
Chart: pulumi.String("oci://ghcr.io/pulumi/helm-charts/pulumi-esc-csi-provider"),
Namespace: pulumi.String("kube-system"),
Values: pulumi.Map{
"nodeSelector": pulumi.Map{
"kubernetes.io/os": pulumi.String("linux"),
},
},
}, pulumi.DependsOn([]pulumi.Resource{secretsStoreCsiDriver}))
if err != nil {
return err
}
pulumiPAT := config.Get(ctx, "pulumi-pat")
mySecret, err := corev1.NewSecret(ctx, "my-secret", &corev1.SecretArgs{
Metadata: &metav1.ObjectMetaArgs{
Namespace: pulumi.String("default"),
Name: pulumi.String("pulumi-access-token"),
},
StringData: pulumi.StringMap{
"pulumi-access-token": pulumi.String(pulumiPAT),
},
Type: pulumi.String("Opaque"),
}, pulumi.DependsOn([]pulumi.Resource{secretsStoreCsiPulumiEscProvider}))
if err != nil {
return err
}
return nil
})
}
using Pulumi;
using Pulumi.Kubernetes.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Pulumi.Kubernetes.Helm.V3;
using Pulumi.Kubernetes.Helm;
using Pulumi.Kubernetes.Types.Inputs.Meta.V1;
using Pulumi.Kubernetes.ApiExtensions;
using System.Collections.Generic;
return await Deployment.RunAsync(() =>
{
var secretsStoreCsiDriver = new Release("secrets-store-csi-driver", new()
{
Chart = "secrets-store-csi-driver",
Namespace = "kube-system",
RepositoryOpts = new Pulumi.Kubernetes.Types.Inputs.Helm.V3.RepositoryOptsArgs
{
Repo = "https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts",
},
Values = new Dictionary<string, object>
{
{ "nodeSelector", new Dictionary<string, object>
{
{ "kubernetes.io/os", "linux" },
} },
},
});
var secretsStoreCsiPulumiEscProvider = new Release("secrets-store-csi-pulumi-esc-provider", new()
{
Chart = "oci://ghcr.io/pulumi/helm-charts/pulumi-esc-csi-provider",
Namespace = "kube-system",
Values = new Dictionary<string, object>
{
{ "nodeSelector", new Dictionary<string, object>
{
{ "kubernetes.io/os", "linux" },
} },
},
}, new CustomResourceOptions { DependsOn = { secretsStoreCsiDriver } });
var config = new Config();
var pulumiPAT = config.Require("pulumi-pat");
var mySecret = new Secret("my-secret", new SecretArgs
{
Metadata = new ObjectMetaArgs
{
Namespace = "default",
Name = "pulumi-access-token",
},
StringData =
{
{ "pulumi-access-token", pulumiPAT }
},
Type = "Opaque",
}, new CustomResourceOptions { DependsOn = { secretsStoreCsiPulumiEscProvider } });
});
class SecretProviderClassArgs : CustomResourceArgs
{
public SecretProviderClassArgs(): base("secrets-store.csi.x-k8s.io/v1", "SecretProviderClass")
{
}
[Input("spec")]
public Input<SecretProviderClassSpecArgs>? Spec { get; set; }
}
class SecretProviderClassSpecArgs : ResourceArgs
{
[Input("provider")]
public Input<string>? Provider { get; set; }
[Input("parameters")]
public Input<InputMap<object>>? Parameters { get; set; }
}
class SecretProviderParametersArgs : ResourceArgs
{
[Input("apiUrl")]
public Input<string>? ApiUrl { get; set; }
[Input("organization")]
public Input<string>? Organization { get; set; }
[Input("project")]
public Input<string>? Project { get; set; }
[Input("environment")]
public Input<string>? Environment { get; set; }
[Input("authSecretName")]
public Input<string>? AuthSecretName { get; set; }
[Input("authSecretNamespace")]
public Input<string>? AuthSecretNamespace { get; set; }
[Input("secrets")]
public Input<string>? Secrets { get; set; }
}
Deploy the stack by running:
pulumi up
And you should see that the secret was created in the Kubernetes cluster and that the Secrets Store CSI Driver and the Pulumi ESC CSI Provider were deployed successfully. You can verify the secret by decoding it:
kubectl get secret pulumi-access-token -o jsonpath='{.data.pulumi-access-token}' | base64 -d
Step 2: Create a SecretProviderClass
Now, we will create a secret in the Pulumi ESC project and make it available to the Kubernetes cluster by creating a SecretProviderClass:
Create a new ESC environment called csi-secrets-store-app in the esc-secrets-store-csi-driver-demo project:
values:
app:
hello: world
hello-secret:
fn::secret: world
If you prefer to use the Pulumi CLI, you can create the environment by running:
pulumi env init <your-org>/esc-secrets-store-csi-driver-demo/csi-secrets-store-app
And set the configuration by running the env edit command and copying the above YAML into the editor:
pulumi env edit <your-org>/esc-secrets-store-csi-driver-demo/csi-secrets-store-app
Either way, you should see the following environment configuration in the Pulumi Cloud Console:
We can now create the SecretProviderClass in the Kubernetes cluster:
const secretProviderClass = new k8s.apiextensions.CustomResource(
"example-provider-pulumi-esc",
{
apiVersion: "secrets-store.csi.x-k8s.io/v1",
kind: "SecretProviderClass",
metadata: {
name: "example-provider-pulumi-esc",
namespace: "default",
},
spec: {
provider: "pulumi",
parameters: {
apiUrl: "https://api.pulumi.com/api/esc",
organization: "dirien",
project: "esc-secrets-store-csi-driver-demo",
environment: "csi-secrets-store-app",
authSecretName: mySecret.metadata.name,
authSecretNamespace: mySecret.metadata.namespace,
secrets: `- secretPath: "/"
fileName: "hello"
secretKey: "app.hello"
`,
},
},
},
{ dependsOn: secretsStoreCSIPulumiESCProvider },
);
const secretProviderClass = new k8s.apiextensions.CustomResource(
"example-provider-pulumi-esc",
{
apiVersion: "secrets-store.csi.x-k8s.io/v1",
kind: "SecretProviderClass",
metadata: {
name: "example-provider-pulumi-esc",
namespace: "default",
},
spec: {
provider: "pulumi",
parameters: {
apiUrl: "https://api.pulumi.com/api/esc",
organization: "dirien",
project: "esc-secrets-store-csi-driver-demo",
environment: "csi-secrets-store-app",
authSecretName: mySecret.metadata.name,
authSecretNamespace: mySecret.metadata.namespace,
secrets: `- secretPath: "/"
fileName: "hello"
secretKey: "app.hello"
`,
},
},
},
{ dependsOn: secretsStoreCSIPulumiESCProvider },
);
secret_provider_class = k8s.apiextensions.CustomResource(
"example-provider-pulumi-esc",
api_version="secrets-store.csi.x-k8s.io/v1",
kind="SecretProviderClass",
metadata=k8s.meta.v1.ObjectMetaArgs(
name="example-provider-pulumi-esc", namespace="default"
),
spec={
"provider": "pulumi",
"parameters": {
"apiUrl": "https://api.pulumi.com/api/esc",
"organization": "dirien",
"project": "esc-secrets-store-csi-driver-demo",
"environment": "csi-secrets-store-app",
"authSecretName": my_secret.metadata["name"],
"authSecretNamespace": my_secret.metadata["namespace"],
"secrets": '- secretPath: "/"\n fileName: "hello"\n secretKey: "app.hello"\n',
},
},
opts=pulumi.ResourceOptions(depends_on=[secrets_store_csi_pulumi_esc_provider]),
)
secretProviderClass, err := apiextensions.NewCustomResource(ctx, "example-provider-pulumi-esc", &apiextensions.CustomResourceArgs{
ApiVersion: pulumi.String("secrets-store.csi.x-k8s.io/v1"),
Kind: pulumi.String("SecretProviderClass"),
Metadata: &metav1.ObjectMetaArgs{
Name: pulumi.String("example-provider-pulumi-esc"),
Namespace: pulumi.String("default"),
},
OtherFields: k8s.UntypedArgs{
"provider": pulumi.String("pulumi"),
"parameters": pulumi.Map{
"apiUrl": pulumi.String("https://api.pulumi.com/api/esc"),
"organization": pulumi.String("dirien"),
"project": pulumi.String("esc-secrets-store-csi-driver-demo"),
"environment": pulumi.String("csi-secrets-store-app"),
"authSecretName": mySecret.Metadata.Name().Elem(),
"authSecretNamespace": mySecret.Metadata.Namespace().Elem(),
"secrets": pulumi.String(`- secretPath: "/"
fileName: "hello"
secretKey: "app.hello"
`),
},
},
}, pulumi.DependsOn([]pulumi.Resource{secretsStoreCsiPulumiEscProvider}))
if err != nil {
return err
}
var secretProviderClass = new Pulumi.Kubernetes.ApiExtensions.CustomResource("example-provider-pulumi-esc", new SecretProviderClassArgs
{
Metadata = new ObjectMetaArgs
{
Name = "example-provider-pulumi-esc",
Namespace = "default",
},
Spec = new SecretProviderClassSpecArgs
{
Provider = "pulumi",
Parameters = new InputMap<object>
{
{ "apiUrl", "https://api.pulumi.com/api/esc" },
{ "organization", "dirien" },
{ "project", "esc-secrets-store-csi-driver-demo" },
{ "environment", "csi-secrets-store-app" },
{ "authSecretName", mySecret.Metadata.Apply(metadata => metadata.Name) },
{ "authSecretNamespace", mySecret.Metadata.Apply(metadata => metadata.Namespace) },
{ "secrets", "- secretPath: \"/\"\n fileName: \"hello\"\n secretKey: \"app.hello\"\n" }
},
},
}, new CustomResourceOptions { DependsOn = { secretsStoreCsiPulumiEscProvider } });
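For reference, the Pulumi programs above produce a manifest roughly equivalent to this raw YAML (a sketch; the organization and auth secret values mirror the code above):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: example-provider-pulumi-esc
  namespace: default
spec:
  provider: pulumi
  parameters:
    apiUrl: https://api.pulumi.com/api/esc
    organization: dirien
    project: esc-secrets-store-csi-driver-demo
    environment: csi-secrets-store-app
    authSecretName: pulumi-access-token
    authSecretNamespace: default
    # Maps the ESC value app.hello to a file named "hello" in the mounted volume.
    secrets: |
      - secretPath: "/"
        fileName: "hello"
        secretKey: "app.hello"
```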
We can check that the SecretProviderClass was created successfully by running:
kubectl get secretproviderclasses example-provider-pulumi-esc
Step 3: Deploy an Application and Mount the Secret
Now, we can deploy an application that mounts the secret through the SecretProviderClass. I am going to use a busybox container that lists and prints the mounted secret files:
const deployment = new k8s.apps.v1.Deployment("example-provider-pulumi-esc", {
metadata: {
name: "example-provider-pulumi-esc",
namespace: "default",
labels: {
app: "example-provider-pulumi-esc",
},
},
spec: {
replicas: 1,
selector: {
matchLabels: {
app: "example-provider-pulumi-esc",
},
},
template: {
metadata: {
labels: {
app: "example-provider-pulumi-esc",
},
},
spec: {
containers: [
{
name: "client",
image: "busybox:latest",
command: ["sh", "-c"],
args: [
`set -eux
ls /run/secrets
find /run/secrets/ -mindepth 1 -maxdepth 1 -not -name '.*' | xargs -t -I {} sh -c 'echo "$(cat "{}")"'
tail -f /dev/null`,
],
volumeMounts: [
{
name: "data",
mountPath: "/run/secrets",
},
],
},
],
volumes: [
{
name: "data",
csi: {
driver: "secrets-store.csi.k8s.io",
readOnly: true,
volumeAttributes: {
secretProviderClass: secretProviderClass.metadata.name,
},
},
},
],
},
},
},
});
export const deploymentName = deployment.metadata.name;
const deployment = new k8s.apps.v1.Deployment("example-provider-pulumi-esc", {
metadata: {
name: "example-provider-pulumi-esc",
namespace: "default",
labels: {
app: "example-provider-pulumi-esc",
},
},
spec: {
replicas: 1,
selector: {
matchLabels: {
app: "example-provider-pulumi-esc",
},
},
template: {
metadata: {
labels: {
app: "example-provider-pulumi-esc",
},
},
spec: {
containers: [
{
name: "client",
image: "busybox:latest",
command: ["sh", "-c"],
args: [
`set -eux
ls /run/secrets
find /run/secrets/ -mindepth 1 -maxdepth 1 -not -name '.*' | xargs -t -I {} sh -c 'echo "$(cat "{}")"'
tail -f /dev/null`,
],
volumeMounts: [
{
name: "data",
mountPath: "/run/secrets",
},
],
},
],
volumes: [
{
name: "data",
csi: {
driver: "secrets-store.csi.k8s.io",
readOnly: true,
volumeAttributes: {
secretProviderClass: secretProviderClass.metadata.name,
},
},
},
],
},
},
},
});
exports.deploymentName = deployment.metadata.name;
deployment = k8s.apps.v1.Deployment(
"example-provider-pulumi-esc",
metadata=k8s.meta.v1.ObjectMetaArgs(
namespace="default",
name="example-provider-pulumi-esc",
labels={
"app": "example-provider-pulumi-esc",
},
),
spec=k8s.apps.v1.DeploymentSpecArgs(
replicas=1,
selector=k8s.meta.v1.LabelSelectorArgs(
match_labels={
"app": "example-provider-pulumi-esc",
},
),
template=k8s.core.v1.PodTemplateSpecArgs(
metadata=k8s.meta.v1.ObjectMetaArgs(
labels={
"app": "example-provider-pulumi-esc",
},
),
spec=k8s.core.v1.PodSpecArgs(
containers=[
k8s.core.v1.ContainerArgs(
name="client",
image="busybox:latest",
command=["sh", "-c"],
args=[
"set -eux\nls /run/secrets\nfind /run/secrets/ -mindepth 1 -maxdepth 1 -not -name '.*' | xargs -t -I {} sh -c 'echo \"$(cat \"{}\")\"'\ntail -f /dev/null",
],
volume_mounts=[
k8s.core.v1.VolumeMountArgs(
name="data",
mount_path="/run/secrets",
),
],
),
],
volumes=[
k8s.core.v1.VolumeArgs(
name="data",
csi=k8s.core.v1.CSIVolumeSourceArgs(
driver="secrets-store.csi.k8s.io",
read_only=True,
volume_attributes={
"secretProviderClass": secret_provider_class.metadata[
"name"
],
},
),
),
],
),
),
),
)
pulumi.export("deploymentName", deployment.metadata["name"])
deployment, err := appv1.NewDeployment(ctx, "example-provider-pulumi-esc", &appv1.DeploymentArgs{
Metadata: &metav1.ObjectMetaArgs{
Name: pulumi.String("example-provider-pulumi-esc"),
Namespace: pulumi.String("default"),
Labels: pulumi.StringMap{
"app": pulumi.String("example-provider-pulumi-esc"),
},
},
Spec: &appv1.DeploymentSpecArgs{
Replicas: pulumi.Int(1),
Selector: &metav1.LabelSelectorArgs{
MatchLabels: pulumi.StringMap{
"app": pulumi.String("example-provider-pulumi-esc"),
},
},
Template: &corev1.PodTemplateSpecArgs{
Metadata: &metav1.ObjectMetaArgs{
Labels: pulumi.StringMap{
"app": pulumi.String("example-provider-pulumi-esc"),
},
},
Spec: &corev1.PodSpecArgs{
Containers: corev1.ContainerArray{
&corev1.ContainerArgs{
Name: pulumi.String("client"),
Image: pulumi.String("busybox:latest"),
Command: pulumi.StringArray{
pulumi.String("sh"),
pulumi.String("-c"),
},
Args: pulumi.StringArray{
pulumi.String(`set -eux
ls /run/secrets
find /run/secrets/ -mindepth 1 -maxdepth 1 -not -name '.*' | xargs -t -I {} sh -c 'echo "$(cat "{}")"'
tail -f /dev/null`),
},
VolumeMounts: &corev1.VolumeMountArray{
&corev1.VolumeMountArgs{
Name: pulumi.String("data"),
MountPath: pulumi.String("/run/secrets"),
},
},
},
},
Volumes: &corev1.VolumeArray{
&corev1.VolumeArgs{
Name: pulumi.String("data"),
Csi: corev1.CSIVolumeSourceArgs{
Driver: pulumi.String("secrets-store.csi.k8s.io"),
ReadOnly: pulumi.Bool(true),
VolumeAttributes: pulumi.StringMap{
"secretProviderClass": secretProviderClass.Metadata.Name().Elem(),
},
},
},
},
},
},
},
}, pulumi.DependsOn([]pulumi.Resource{secretProviderClass}))
if err != nil {
return err
}
var deployment = new Pulumi.Kubernetes.Apps.V1.Deployment("example-provider-pulumi-esc", new Pulumi.Kubernetes.Types.Inputs.Apps.V1.DeploymentArgs
{
Metadata = new ObjectMetaArgs
{
Name = "example-provider-pulumi-esc",
Namespace = "default",
Labels =
{
{ "app", "example-provider-pulumi-esc" },
},
},
Spec = new Pulumi.Kubernetes.Types.Inputs.Apps.V1.DeploymentSpecArgs
{
Replicas = 1,
Selector = new Pulumi.Kubernetes.Types.Inputs.Meta.V1.LabelSelectorArgs
{
MatchLabels =
{
{ "app", "example-provider-pulumi-esc" },
},
},
Template = new Pulumi.Kubernetes.Types.Inputs.Core.V1.PodTemplateSpecArgs
{
Metadata = new ObjectMetaArgs
{
Labels =
{
{ "app", "example-provider-pulumi-esc" },
},
},
Spec = new Pulumi.Kubernetes.Types.Inputs.Core.V1.PodSpecArgs
{
Containers =
{
new Pulumi.Kubernetes.Types.Inputs.Core.V1.ContainerArgs
{
Name = "client",
Image = "busybox:latest",
Command =
{
"sh",
"-c",
},
Args =
{
"set -eux\nls /run/secrets\nfind /run/secrets/ -mindepth 1 -maxdepth 1 -not -name '.*' | xargs -t -I {} sh -c 'echo \"$(cat \"{}\")\"'\ntail -f /dev/null",
},
VolumeMounts =
{
new Pulumi.Kubernetes.Types.Inputs.Core.V1.VolumeMountArgs
{
Name = "data",
MountPath = "/run/secrets",
},
},
},
},
Volumes =
{
new Pulumi.Kubernetes.Types.Inputs.Core.V1.VolumeArgs
{
Name = "data",
Csi = new Pulumi.Kubernetes.Types.Inputs.Core.V1.CSIVolumeSourceArgs
{
Driver = "secrets-store.csi.k8s.io",
ReadOnly = true,
VolumeAttributes =
{
{ "secretProviderClass", secretProviderClass.Metadata.Apply(metadata => metadata.Name) },
},
},
},
},
},
},
},
}, new CustomResourceOptions { DependsOn = { secretProviderClass } });
After deploying the stack, you can check the logs of the busybox pod to see that the secret was successfully mounted:
NAME=$(kubectl get pods -o name | grep example-provider-pulumi-esc | cut -d'/' -f2)
kubectl logs $NAME
You should see the following output:
+ ls /run/secrets
hello
+ find /run/secrets/ -mindepth 1 -maxdepth 1 -not -name '.*'
+ xargs -t -I '{}' sh -c 'echo "$(cat "{}")"'
sh -c echo "$(cat "/run/secrets/hello")"
world
+ tail -f /dev/null
Or you can exec into the pod and check the content of the mounted secret:
kubectl exec -it $NAME -- cat /run/secrets/hello
You should see the following output:
world
Step 4: Clean Up
After you are done with the demo, you can clean up the resources by running:
pulumi destroy
Conclusion
The Secrets Store CSI Driver is a great option for managing secrets in a cloud-native way when the use of Kubernetes secrets is not an option due to enhanced security requirements.
Below is a quick comparison of the External Secrets Operator and the Secrets Store CSI Driver to help you decide which one is the best fit for your use case. As always, the best way to find out is to try both solutions and see which one fits your requirements best.
The good part is, whatever you choose, you can use Pulumi ESC to manage your secrets and avoid any secret sprawl in your organization as both solutions are supported by Pulumi ESC.
Comparison of the External Secrets Operator and the Secrets Store CSI Driver
| Feature/Aspect | External Secrets Operator | Secrets Store CSI Driver |
| --- | --- | --- |
| Primary Use Case | Synchronizing external secrets into Kubernetes as native secrets. | Mounting secrets directly to pods as files or environment variables. |
| Integration with External Secret Stores | Supports multiple external secret stores like AWS Secrets Manager, Google Secret Manager, Vault, Azure Key Vault, etc. | Primarily supports integrations defined by the CSI specification (e.g., Vault, AWS, Azure Key Vault, Pulumi ESC). |
| Mechanism for Secrets Delivery | Secrets are pulled and stored as Kubernetes Secret resources. | Secrets are mounted directly into pods as files or injected as environment variables. |
| Kubernetes Resource Requirements | Requires a custom Kubernetes resource (ExternalSecret) and the External Secrets Operator. | Relies on the Kubernetes CSI (Container Storage Interface) standard and requires a specific CSI driver. |
| Secret Rotation | Supports automatic rotation by polling the external secret store at defined intervals. | Supports rotation, but the application needs to handle reloading mounted secrets. |
| Security Considerations | Secrets are stored temporarily in Kubernetes as native secrets, which may pose a risk if not properly managed. | Secrets are mounted directly and not stored as native Kubernetes secrets, reducing exposure but relying on file permissions. |
| Customization and Features | Highly customizable with fine-grained mappings and transformation options. | Limited customization, focused on mounting secrets as-is. |
| Application Compatibility | Applications access secrets as Kubernetes secrets (e.g., via env variables or volumes). | Applications access secrets directly via mounted files or injected env variables. |
| Performance Considerations | Potential slight delay due to syncing secrets to Kubernetes secrets. | Faster access as secrets are directly mounted. |
| Community and Ecosystem | Supported by a growing community; integrates well with diverse secret management solutions. | Part of the Kubernetes ecosystem; supports CSI-compliant secret storage solutions. |
| Best Use Cases | Ideal for scenarios where secrets need to be accessed as Kubernetes native secrets. | Best for applications requiring direct access to secrets without intermediate storage in Kubernetes secrets. |
You can find detailed information about all currently available Pulumi ESC Kubernetes integrations in the Pulumi ESC documentation.
Let us know what you think about the Secrets Store CSI Driver and how you are managing secrets in your Kubernetes clusters. We would love to hear your feedback and experiences.