Kubernetes cluster defaults
With a vanilla cluster running, create any desired resources and logically segment the cluster as needed.
The full code for this stack is on GitHub.
Overview
We’ll examine how to create:

- Namespaces to logically segment the cluster for cluster services, app services, and apps.
- Resource quotas to cap aggregate resource consumption per namespace.
- Pod Security Standards labels to enforce pod security at the namespace level.
Prerequisites
On AWS, authenticate as the admins role from the Identity stack.

```bash
$ aws sts assume-role --role-arn `pulumi stack output adminsIamRoleArn` --role-session-name k8s-admin
$ export KUBECONFIG=`pwd`/kubeconfig-admin.json
```

On Azure, log in as the service principal from the Identity stack.

```bash
$ az login --service-principal --username $ARM_CLIENT_ID --password $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
$ export KUBECONFIG=`pwd`/kubeconfig-admin.json
```

On GCP, authenticate as the admins ServiceAccount from the Identity stack.

```bash
$ gcloud auth activate-service-account --key-file k8s-admin-sa-key.json
$ export KUBECONFIG=`pwd`/kubeconfig.json
```
Namespaces
Create namespaces for typical stacks:
- Cluster Services: Deploy cluster-scoped services, such as logging and monitoring.
- App Services: Deploy application-scoped services, such as ingress or DNS management.
- Apps: Deploy applications and workloads.
```bash
cat > namespaces.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-svcs
---
apiVersion: v1
kind: Namespace
metadata:
  name: app-svcs
---
apiVersion: v1
kind: Namespace
metadata:
  name: apps
EOF

$ kubectl apply -f namespaces.yaml
```
```typescript
import * as k8s from "@pulumi/kubernetes";

// Create Kubernetes namespaces.
const clusterSvcsNamespace = new k8s.core.v1.Namespace("cluster-svcs", undefined, { provider: cluster.provider });
export const clusterSvcsNamespaceName = clusterSvcsNamespace.metadata.name;

const appSvcsNamespace = new k8s.core.v1.Namespace("app-svcs", undefined, { provider: cluster.provider });
export const appSvcsNamespaceName = appSvcsNamespace.metadata.name;

const appsNamespace = new k8s.core.v1.Namespace("apps", undefined, { provider: cluster.provider });
export const appsNamespaceName = appsNamespace.metadata.name;
```
Quotas
Create quotas to restrict the number of resources that can be consumed across all Pods in a namespace.
```bash
cat > quota.yaml << EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: "1Gi"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    services: "5"
EOF

$ kubectl apply -f quota.yaml
```
```typescript
import * as k8s from "@pulumi/kubernetes";

// Create a resource quota in the apps namespace.
const quotaAppNamespace = new k8s.core.v1.ResourceQuota("apps", {
    metadata: { namespace: appsNamespaceName },
    spec: {
        hard: {
            cpu: "20",
            memory: "1Gi",
            pods: "10",
            replicationcontrollers: "20",
            resourcequotas: "1",
            services: "5",
        },
    },
}, {
    provider: cluster.provider,
});
```
Track the quota usage in the namespace using kubectl and Pulumi output.
```bash
$ kubectl describe quota -n `pulumi stack output appsNamespaceName`
Name:                   apps-tb8bxlvb
Namespace:              apps-x1z818eg
Resource                Used  Hard
--------                ----  ----
cpu                     0     20
memory                  0     1Gi
pods                    0     10
replicationcontrollers  0     20
resourcequotas          1     1
services                0     5
```
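Note that once a quota tracks cpu and memory, every new Pod in the namespace must declare requests for those resources, or the API server rejects it. A minimal sketch of a Pod that counts against the quota above (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo        # illustrative name
  namespace: apps
spec:
  containers:
    - name: app
      image: nginx:1.25   # illustrative image
      resources:
        requests:
          cpu: "500m"     # counts toward the cpu quota of 20
          memory: "128Mi" # counts toward the memory quota of 1Gi
```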
Pod Security Standards
Kubernetes replaced PodSecurityPolicy (PSP) with the Pod Security Admission controller, which became stable in Kubernetes 1.25. PSP was removed in Kubernetes 1.25, and is no longer available in current versions of EKS, AKS, or GKE.
Pod Security Admission enforces Pod Security Standards at the namespace level using namespace labels. There are three policy levels:
| Level | Description |
|---|---|
| privileged | Unrestricted; allows known privilege escalations |
| baseline | Minimally restrictive; prevents known privilege escalations |
| restricted | Heavily restricted; follows pod hardening best practices |
Each level can be applied in three modes:
| Mode | Description |
|---|---|
| enforce | Policy violations cause the pod to be rejected |
| warn | Policy violations trigger a user-facing warning but do not block |
| audit | Policy violations are recorded in the audit log |
Apply Pod Security Standards to a namespace
Apply pod security standards to a namespace by adding labels. For most application
workloads, enforce the restricted level:
```bash
cat > apps-namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
EOF

$ kubectl apply -f apps-namespace.yaml
```
```typescript
import * as k8s from "@pulumi/kubernetes";

const appsNamespace = new k8s.core.v1.Namespace("apps", {
    metadata: {
        labels: {
            "pod-security.kubernetes.io/enforce": "restricted",
            "pod-security.kubernetes.io/warn": "restricted",
            "pod-security.kubernetes.io/audit": "restricted",
        },
    },
}, { provider: cluster.provider });
```
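The three modes can also mix levels on a single namespace, which is useful for rolling out a stricter policy gradually: enforce a lenient level while warning and auditing against the stricter one. A sketch, with an illustrative namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps-staging     # illustrative name
  labels:
    # Block only baseline violations for now...
    pod-security.kubernetes.io/enforce: baseline
    # ...but surface warnings and audit-log entries for restricted violations.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```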
For workloads that require elevated permissions, such as ingress controllers, use the
privileged or baseline level on those specific namespaces instead:
```bash
cat > ingress-nginx-namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    pod-security.kubernetes.io/enforce: privileged
EOF

$ kubectl apply -f ingress-nginx-namespace.yaml
```
```typescript
import * as k8s from "@pulumi/kubernetes";

const ingressNamespace = new k8s.core.v1.Namespace("ingress-nginx", {
    metadata: {
        name: "ingress-nginx",
        labels: {
            "pod-security.kubernetes.io/enforce": "privileged",
        },
    },
}, { provider: cluster.provider });
```
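Under the restricted level, Pods must explicitly opt in to the hardening settings or they are rejected at admission. A minimal sketch of a Pod spec that passes the restricted checks (the Pod name and image are illustrative; the image must run as a non-root user):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo                       # illustrative name
  namespace: apps
spec:
  securityContext:
    runAsNonRoot: true                        # required by restricted
    seccompProfile:
      type: RuntimeDefault                    # required by restricted
  containers:
    - name: app
      image: nginxinc/nginx-unprivileged:1.25 # illustrative non-root image
      securityContext:
        allowPrivilegeEscalation: false       # required by restricted
        capabilities:
          drop: ["ALL"]                       # required by restricted
```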
For more details, see the Kubernetes Pod Security Standards documentation.