Description
Debugging pods by creating a copy (specifically, changing the container image or the command) can lead to the copy being restarted via liveness & startup health check failures.
For example, if I have an HTTP service with an `httpGet` liveness probe configured, a debug container started via `kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh`
won't be listening on the port, so it will fail the liveness checks and be restarted, killing my debug shell.
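Until something like this exists, a manual workaround is to export the pod manifest, strip the probes, and create the copy yourself. A minimal sketch of the probe-stripping step in Python, assuming a pod dict shaped like `kubectl get pod -o json` output (the helper name and sample manifest are illustrative):

```python
import copy

def strip_probes(pod, keep=("readinessProbe",)):
    """Return a copy of a pod manifest with unwanted probes removed.

    `pod` is a dict shaped like `kubectl get pod -o json` output; only
    the probe fields named in `keep` survive on each container.
    """
    pod = copy.deepcopy(pod)
    for container in pod.get("spec", {}).get("containers", []):
        for field in ("livenessProbe", "readinessProbe", "startupProbe"):
            if field not in keep:
                container.pop(field, None)
    return pod

# Example: a pod with all three probe types configured on one container.
pod = {
    "metadata": {"name": "myapp"},
    "spec": {"containers": [{
        "name": "myapp",
        "livenessProbe": {"httpGet": {"path": "/healthz", "port": 8080}},
        "readinessProbe": {"httpGet": {"path": "/ready", "port": 8080}},
        "startupProbe": {"httpGet": {"path": "/healthz", "port": 8080}},
    }]},
}

debug_pod = strip_probes(pod)
# liveness & startup probes are gone; the readiness probe remains
print(sorted(debug_pod["spec"]["containers"][0].keys()))
```

The stripped manifest can then be fed back to `kubectl create -f -` (after also clearing server-populated fields like `status` and `metadata.uid`).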
What would you like to be added:
An easy way of disabling liveness & startup probes for debug pod copies, and doing it by default. Longer discussion on readiness probes follows below.
Why is this needed:
Debug shells dying every few seconds are a bit of a hindrance to debugging. 😄
Readiness Probes:
These prevent pods from receiving traffic, so they're IMO conceptually useful to keep: if I start the HTTP service in my debug pod with verbose logging enabled, having it receive traffic could be a good idea. Ctrl-C it and the pod goes back to being unready and stops receiving traffic. Not copying readiness probes would mark the pod as always ready.
(Labels aren't copied to debug pods by default, so to get any service traffic routed I'd need to add labels to the running debug copy anyway.)
Alternative viewpoint: if I were making and stepping through test requests with the daemon running in a debugger, Kubernetes calling `GET /healthz` every 3 seconds could be extremely annoying.
Ideas:
1. Don't copy liveness & startup probes when using `--copy-to=`; only copy readiness probes.
2. Don't copy any probes when using `--copy-to=`.
3. Add a `--copy-probes=all|none|(liveness|readiness|startup,...)` flag, with `readiness` as the default.
4. Same as (3), but with `none` as the default.
(2) is probably the simplest approach that covers the most use cases.
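To make idea (3)/(4) concrete, here is a sketch of how such a flag's value could be parsed into the set of probes to copy. This is purely illustrative — the flag name, accepted values, and error handling are assumptions, not kubectl's actual behavior:

```python
ALL_PROBES = ("liveness", "readiness", "startup")

def probes_to_copy(flag_value: str) -> set:
    """Parse a hypothetical --copy-probes value into probe names to keep.

    Accepts "all", "none", or a comma-separated subset of probe types.
    """
    if flag_value == "all":
        return set(ALL_PROBES)
    if flag_value == "none":
        return set()
    chosen = {name.strip() for name in flag_value.split(",")}
    unknown = chosen - set(ALL_PROBES)
    if unknown:
        raise ValueError(f"unknown probe type(s): {sorted(unknown)}")
    return chosen

print(probes_to_copy("all"))        # every probe type is copied
print(probes_to_copy("readiness"))  # idea (3)'s proposed default
print(probes_to_copy("none"))       # idea (4)'s proposed default: empty set
```

The copy logic would then drop any probe field whose type isn't in the returned set when building the `--copy-to=` pod spec.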