How can you see the logs from a Pod that crashed in Kubernetes? That was the question I needed to answer when I took a look at the output from watching the running Pods in my namespace:
watch -n 1 kubectl -n my-environment get pods
Every 1.0s: kubectl -n my-environment get pods
NAME                     READY   STATUS    RESTARTS        AGE
myenv-123abc-987xy       1/1     Running   4 (44m ago)     3d21h
myenv-123abc-987mn       1/1     Running   7 (44m ago)     3d11h
myenv-api-456asd-654op   1/1     Running   0               5d1h
myenv-api-456asd-654lm   1/1     Running   1 (3d10h ago)   3d10h
From the above I could see that several of the pods had been regularly restarting. That was unexpected.
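Before reaching for the logs at all, a quick way to see why a pod restarted is to ask for the container's last terminated state, which records the exit code and a reason such as OOMKilled. A sketch, using one of the pod names from the output above:

```shell
# Show the last terminated state (exit code, reason, finish time)
kubectl -n my-environment describe pod myenv-123abc-987xy | grep -A 5 'Last State'

# Or pull out just the recorded reason via jsonpath
kubectl -n my-environment get pod myenv-123abc-987xy \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```

If the reason is something like OOMKilled, that alone may answer the question; otherwise the logs are the next stop.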
The problem is that in order to get the logs, normally you would do something like:
# one time / static
kubectl logs -n my-environment myenv-123abc-987xy
# real-time / regularly updated
kubectl logs -n my-environment myenv-123abc-987xy --follow
And that kicks out the logs from the currently running container.
All good.
But what that doesn’t show you is why the pod crashed.
Now, if you know how or why the pod is crashing, you could just sit and watch the terminal, then wait for the specific thing to happen which causes the pod to crash, et voilà, you have the log you need.
However, if you aren’t sure why the pod is crashing / restarting, or you simply don’t want to wait for the problem to recur, a better way is to view the logs from the previous container.
# see why the previous run of this container crashed
kubectl logs -n my-environment myenv-123abc-987xy --previous
And that should spit out a whole raft of stuff, provided your app is set up to log helpful messages, of course.
20:34:09.943Z ERROR MyApp: Error in some important process
node:internal/process/promises:288
triggerUncaughtException(err, true /* fromPromise */);
^
// ... lots of stuff
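One caveat: if the pod runs more than one container, `kubectl logs` needs to be told which one you mean via `-c`. A sketch, where the container name `my-app` is an assumption:

```shell
# List the container names defined in the pod, one per line
kubectl -n my-environment get pod myenv-123abc-987xy \
  -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'

# Then target a specific container's previous run
kubectl -n my-environment logs myenv-123abc-987xy -c my-app --previous
```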
Hopefully that’s plenty of data to get you started debugging what might be causing your container(s) to ‘randomly’ restart.
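And if several pods are restarting, as in my case, you can grab the previous logs from all of them in one go. A rough sketch: the `awk` column index assumes the default `kubectl get pods` layout shown earlier, where RESTARTS is the fourth field.

```shell
# For every pod with a non-zero restart count, save the previous
# container's logs to a local file
for pod in $(kubectl -n my-environment get pods --no-headers \
    | awk '$4 > 0 {print $1}'); do
  kubectl -n my-environment logs "$pod" --previous > "${pod}-previous.log" 2>/dev/null \
    || echo "no previous logs for $pod"
done
```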