We often need answers to simple questions about Kubernetes resources. Questions like: How many distinct versions of MySQL are running in my cluster? Which Pods are scheduled on nodes with high memory pressure? Which Pods are publicly exposed to the internet via a load-balanced Service? Each of these questions would normally be answered by invoking kubectl multiple times to list resources of each type, and manually parsing the output to join it together into a single report.
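As a sketch of the kind of join involved, here is one of those questions ("how many distinct versions of MySQL are running?") answered in Python over data shaped like `kubectl get pods -o json` output. The Pod data below is hypothetical sample input, not from a real cluster:

```python
import json

# Hypothetical sample shaped like `kubectl get pods -o json` output.
pods_json = """
{
  "items": [
    {"spec": {"containers": [{"image": "mysql:5.7"}]}},
    {"spec": {"containers": [{"image": "mysql:8.0"}]}},
    {"spec": {"containers": [{"image": "mysql:5.7"}, {"image": "nginx:1.15"}]}}
  ]
}
"""

def distinct_mysql_versions(pods: dict) -> set:
    """Collect the set of distinct mysql image tags across all containers."""
    versions = set()
    for pod in pods["items"]:
        for container in pod["spec"]["containers"]:
            image = container["image"]
            if image.startswith("mysql:"):
                versions.add(image.split(":", 1)[1])
    return versions

print(sorted(distinct_mysql_versions(json.loads(pods_json))))  # ['5.7', '8.0']
```

Answering the other questions means repeating this pattern across resource types (Pods joined to Nodes, Services joined to Pods by selector), which is exactly the manual report-building described above.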
The Helm community is one of the brightest spots in the infrastructure ecosystem: collectively, it has accumulated person-decades of operational expertise to produce Kubernetes manifests that “just work.”
But for many users, it is not feasible to run *everything* in Kubernetes, and the community is just starting to develop answers to questions like: what happens when a Helm Chart needs to interface with a managed service, such as AWS RDS or Azure CosmosDB?
Pulumi is a cloud native development platform designed to express any cloud native infrastructure as code, in a natural and intentional manner, using familiar languages. The most natural way to solve this challenge is to stand up an instance of AWS RDS, populate a Kubernetes Secret with the connection details, and then simply let the application use these newly available resources. Pulumi gives users the primitives they need to achieve tasks like this effectively.
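A minimal sketch of that workflow in Pulumi's TypeScript SDKs might look like the following. This is an illustrative configuration, not code from the post: the resource names, instance size, and credential handling are assumptions, and in real code the password would come from Pulumi's secret configuration rather than a literal.

```typescript
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

// Stand up a small managed MySQL instance on AWS RDS (illustrative sizing).
const db = new aws.rds.Instance("app-db", {
    engine: "mysql",
    instanceClass: "db.t3.micro",
    allocatedStorage: 20,
    dbName: "app",
    username: "admin",
    password: "REPLACE_ME", // in practice, use Pulumi config secrets
    skipFinalSnapshot: true,
});

// Publish the connection details into the cluster as a Kubernetes Secret,
// so application Pods can consume them via env vars or volume mounts.
const dbConn = new k8s.core.v1.Secret("app-db-conn", {
    stringData: {
        host: db.address,
        port: db.port.apply(p => String(p)),
        username: db.username,
        password: "REPLACE_ME",
    },
});
```

The point of the sketch is the shape of the program: the RDS outputs flow directly into the Secret, so the application never needs to know whether its database lives inside or outside the cluster.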
What is happening when a Deployment rolls out a change to your app? What does it actually do when a Pod crashes or is killed? What happens when a Pod is re-labeled so that it's no longer targeted by the Deployment?

Deployment is probably the most complex resource type in Kubernetes: a Deployment specifies how changes should be rolled out over ReplicaSets, which themselves specify how Pods should be replicated in a cluster.

In this post we continue our exploration of the Kubernetes API, cracking the Deployment open using kubespy, a small tool we developed to observe Kubernetes resources in real time.
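For example, assuming a Deployment named `nginx` in the current namespace (the name is hypothetical), a session observing a rollout might look like this:

```shell
# Stream live status updates for the Deployment as it rolls out.
kubespy status apps/v1 Deployment nginx

# Or print each change made to the Deployment object as a diff.
kubespy changes apps/v1 Deployment nginx
```

Running either command while you `kubectl apply` a change shows the API server's view of the rollout as it happens.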
Why isn't my Pod getting any traffic?

An experienced ops team running on GKE might assemble the following checklist to help answer this question:

1. Does a Service exist? Does that Service have a .spec.selector that matches some number of Pods?
2. Are the Pods alive, and has their readiness probe passed?
3. Did the Service allocate an Endpoints object that specifies one or more Pods to direct traffic to?
4. Is the Service reachable via DNS? When you SSH into a Pod and you use curl to poke the Service hostname, do you get a response? (If not, does any Service have a DNS entry?)
5. Is the Service reachable via IP? When you SSH into a Node and you use curl to poke the Service IP, do you get a response?
6. Is kube-proxy up? Is it writing iptables rules? Is it proxying to the Service?
This question might have the highest complexity-to-sentence-length ratio of any question in the Kubernetes ecosystem. Unfortunately, it’s also a question that every user finds themselves asking at some point. And when they do, it usually means their app is down.
To help answer questions like this, we've been developing a small tool called kubespy. In this post we'll look at the new kubespy trace command, which is broadly aimed at automating questions 1, 2, and 3, and at providing "hints" about 4 and 5.
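Assuming a Service named `nginx` (a hypothetical name for illustration), a trace session is started like this:

```shell
# Watch the Service, its Endpoints object, and the Pods it targets,
# updating in real time as the cluster converges.
kubespy trace service nginx
```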
One of the most popular features of the recent v0.15.2 release of Pulumi is fine-grained status updates for Kubernetes resources, reported live on the CLI as a deployment runs.
Kubernetes is a powerful container orchestrator for cloud native applications that can run on any cloud (AWS, Azure, GCP) as well as in hybrid and on-premises environments. Its CLI, kubectl, offers basic built-in support for performing deployments, but intentionally stops short there. In particular, it doesn't offer diffs and previews, the ability to know when a deployment has succeeded or failed (and why), or sophisticated deployment orchestration.
In this post, we'll see how Pulumi, an open source cloud native development platform, not only lets you express Kubernetes programs in familiar programming languages, like TypeScript, instead of endless YAML templates, but also delivers simple, reproducible, yet powerful Kubernetes deployment workflows.
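As a sketch of what "Kubernetes in TypeScript" looks like, here is a stock nginx Deployment expressed with Pulumi's Kubernetes SDK. The resource name, label, and image tag are illustrative choices, not taken from the post:

```typescript
import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "nginx" };

// The same object you would describe in YAML, as a typed TypeScript value.
const nginx = new k8s.apps.v1.Deployment("nginx", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 2,
        template: {
            metadata: { labels: appLabels },
            spec: { containers: [{ name: "nginx", image: "nginx:1.15" }] },
        },
    },
});

// `pulumi up` previews the diff, applies it, and reports per-resource status.
export const deploymentName = nginx.metadata.name;
```

Because this is an ordinary program, you get type checking, loops, functions, and package reuse for free, and `pulumi up` supplies the diffs, previews, and success/failure reporting that kubectl leaves out.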