kubespy trace: a real-time view into a Kubernetes Service

This post is part 3 in a series on the Kubernetes API. Part 1 focused on the lifecycle of a
Pod, and Part 2 detailed how Kubernetes Deployments work.
Why isn’t my Pod getting any traffic?
An experienced ops team running on GKE might assemble the following checklist to help answer this question:
- Does a `Service` exist? Does that `Service` have a `.spec.selector` that matches some number of `Pod`s?
- Are the `Pod`s alive, and have their readiness probes passed?
- Did the `Service` create an `Endpoints` object that specifies one or more `Pod`s to direct traffic to?
- Is the `Service` reachable via DNS? When you `kubectl exec` into a `Pod` and use `curl` to poke the `Service` hostname, do you get a response? (If not, does any `Service` have a DNS entry?)
- Is the `Service` reachable via IP? When you SSH into a `Node` and use `curl` to poke the `Service` IP, do you get a response?
- Is `kube-proxy` up? Is it writing iptables rules? Is it proxying to the `Service`?
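Items 1–3 of this checklist boil down to label selection and readiness. As a rough illustration of the rule the control plane applies (the Pod data and helper names here are hypothetical; the real logic lives in Kubernetes' endpoints controller), the decision can be sketched as:

```python
# Sketch of how a Service's .spec.selector is matched against Pod labels,
# and how the address list in an Endpoints object is derived from the
# ready Pods. All names and data below are illustrative, not the real
# controller code.

def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A selector matches a Pod when every key/value pair in the
    selector is present among the Pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

def ready_pod_ips(selector: dict, pods: list) -> list:
    """Return the IPs of matching Pods whose readiness probe passed --
    these are the addresses the Endpoints object would carry."""
    return [
        p["ip"]
        for p in pods
        if selector_matches(selector, p["labels"]) and p["ready"]
    ]

# Hypothetical cluster state:
pods = [
    {"ip": "10.0.0.4", "labels": {"app": "nginx"}, "ready": True},
    {"ip": "10.0.0.5", "labels": {"app": "nginx"}, "ready": False},  # probe failing
    {"ip": "10.0.0.6", "labels": {"app": "redis"}, "ready": True},   # not selected
]

# Only the matching, ready Pod receives traffic.
print(ready_pod_ips({"app": "nginx"}, pods))
```

If this list comes back empty, traffic has nowhere to go, no matter how healthy DNS and `kube-proxy` are — which is why the selector and readiness checks come first.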
This question might have the highest complexity-to-sentence-length ratio of any question in the Kubernetes ecosystem. Unfortunately, it’s also a question that every user finds themselves asking at some point. And when they do, it usually means their app is down.
To help answer questions like this, we’ve been developing a small
diagnostic tool, kubespy. In this post we’ll look at the new
kubespy trace command, which is broadly aimed at automating checks
1, 2, and 3, and providing “hints” about 4 and 5.





