
I am running a containerized Spring Boot application in Kubernetes, but the application keeps exiting and restarting with exit code 143 and the error message "Error".

I am not sure how to identify the reason for this error.

My first idea was that Kubernetes stopped the container because of excessive resource usage, as described here, but I can’t find the corresponding kubelet logs.

Is there any way to identify the cause/origin of the SIGTERM? Maybe from Spring Boot itself, or from the JVM?

2 Answers


  1. Exit Code 143

    1. It denotes that the process was terminated by an external signal.

    2. The number 143 is the sum of two numbers: 128 + x, where x is the number of the signal sent to the process that caused it to terminate.

    3. In this case, x equals 15, which is the number of the SIGTERM signal, meaning the process was terminated externally with SIGTERM (a graceful-shutdown request, unlike SIGKILL). The sketch below demonstrates this.
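
    You can reproduce the 128 + x arithmetic outside Kubernetes. A minimal sketch (the class name is illustrative): the JVM's default SIGTERM handler runs any registered shutdown hooks and then exits with status 143.

    public class ExitCodeDemo {
        public static void main(String[] args) throws InterruptedException {
            // Runs when the JVM begins shutting down, e.g. after receiving SIGTERM.
            Runtime.getRuntime().addShutdownHook(new Thread(() ->
                    System.out.println("Shutdown hook running - process was signalled")));
            Thread.sleep(Long.MAX_VALUE); // stay alive until an external signal arrives
        }
    }

    If you start this with java ExitCodeDemo & and then run kill -TERM $!; wait $!; echo $?, the shell prints 143.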

    Hope this helps.

  2. I’ve just run into this exact same problem. I was able to track down the origin of exit code 143 by looking at the logs on the Kubernetes nodes (note: the logs on the node, not the pod). (I use Lens as an easy way to get a node shell, but there are other ways.)

    If you then search /var/log/messages for terminated, you’ll see something like this:

    Feb  2 11:52:27 np-26992252-3 kubelet[23125]: I0202 11:52:27.541751   23125 kubelet.go:2214] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="default/app-compute-deployment-56ccffd87f-8s78v"
    Feb  2 11:52:27 np-26992252-3 kubelet[23125]: I0202 11:52:27.541920   23125 kubelet.go:2214] "SyncLoop (probe)" probe="readiness" status="" pod="default/app-compute-deployment-56ccffd87f-8s78v"
    Feb  2 11:52:27 np-26992252-3 kubelet[23125]: I0202 11:52:27.543274   23125 kuberuntime_manager.go:707] "Message for Container of pod" containerName="app" containerStatusID={Type:containerd ID:c3426d6b07fe3bd60bcbe675bab73b6b4b3619ef4639e1c23bca82692633765e} pod="default/app-compute-deployment-56ccffd87f-8s78v" containerMessage="Container app failed liveness probe, will be restarted"
    Feb  2 11:52:27 np-26992252-3 kubelet[23125]: I0202 11:52:27.543374   23125 kuberuntime_container.go:723] "Killing container with a grace period" pod="default/app-compute-deployment-56ccffd87f-8s78v" podUID=89fdc1a2-3a3b-4d57-8a4d-ab115e52dc85 containerName="app" containerID="containerd://c3426d6b07fe3bd60bcbe675bab73b6b4b3619ef4639e1c23bca82692633765e" gracePeriod=30
    Feb  2 11:52:27 np-26992252-3 containerd[22741]: time="2023-02-02T11:52:27.543834687Z" level=info msg="StopContainer for "c3426d6b07fe3bd60bcbe675bab73b6b4b3619ef4639e1c23bca82692633765e" with timeout 30 (s)"
    Feb  2 11:52:27 np-26992252-3 containerd[22741]: time="2023-02-02T11:52:27.544593294Z" level=info msg="Stop container "c3426d6b07fe3bd60bcbe675bab73b6b4b3619ef4639e1c23bca82692633765e" with signal terminated"
    

    The bit to look out for is containerMessage="Container app failed liveness probe, will be restarted"
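
    Since the question also asks whether the shutdown can be observed from Spring Boot itself: a minimal sketch, assuming a standard Spring Boot app (the ShutdownLogger name is made up). Spring Boot registers a JVM shutdown hook that closes the application context, so a listener for ContextClosedEvent fires when the SIGTERM-triggered shutdown begins:

    import org.springframework.context.ApplicationListener;
    import org.springframework.context.event.ContextClosedEvent;
    import org.springframework.stereotype.Component;

    @Component
    public class ShutdownLogger implements ApplicationListener<ContextClosedEvent> {
        @Override
        public void onApplicationEvent(ContextClosedEvent event) {
            // Fires when the context starts closing, e.g. after SIGTERM from the kubelet.
            System.out.println("Spring context closing at " + java.time.Instant.now());
        }
    }

    Correlating that timestamp with the kubelet log above tells you whether the restart came from a probe failure or something else.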
