I'm practicing Kubernetes example 6.1, "A pod with two containers sharing the same volume" (fortune-pod.yaml), from the book Kubernetes in Action, which covers the volumes concept. My pod contains two containers, and one of them is not running. Please guide me on where I'm going wrong so I can get the pod running successfully.
On checking the logs of the container, I get the following:
Defaulted container "fortune-cont" out of: fortune-cont, web-server
Whereas in the pod description, the events look like this:
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  40m                  default-scheduler  Successfully assigned book/vol-1-fd556f5dc-8ggj6 to minikube
  Normal   Pulled     40m                  kubelet            Container image "nginx:alpine" already present on machine
  Normal   Created    40m                  kubelet            Created container web-server
  Normal   Started    40m                  kubelet            Started container web-server
  Normal   Created    39m (x4 over 40m)    kubelet            Created container fortune-cont
  Normal   Started    39m (x4 over 40m)    kubelet            Started container fortune-cont
  Normal   Pulled     38m (x5 over 40m)    kubelet            Container image "xxxx/fortune:v1" already present on machine
  Warning  BackOff    25s (x188 over 40m)  kubelet            Back-off restarting failed container
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vol-1
  namespace: book
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fortune-vol-1
      type: volume
  template:
    metadata:
      labels:
        name: fortune-vol-1
        type: volume
    spec:
      containers:
      - image: ****/fortune:v1
        name: fortune-cont
        volumeMounts:
        - name: html
          mountPath: /var/htdocs
      - image: nginx:alpine
        name: web-server
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
        ports:
        - containerPort: 80
          protocol: TCP
      volumes:
      - name: html
        emptyDir: {}
Here is my pod description (containers section):
Containers:
  fortune-cont:
    Container ID:   docker://3959e47a761b670ee826b2824efed09d8f5d6dfd6451c4c9840eebff018a3586
    Image:          prav33n/fortune:v1
    Image ID:       docker-pullable://prav33n/fortune@sha256:671257f6387a1ef81a293f8aef27ad7217e4281e30b777a7124b1f6017a330f8
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 24 Nov 2022 02:05:26 +0530
      Finished:     Thu, 24 Nov 2022 02:05:26 +0530
    Ready:          False
    Restart Count:  17
    Environment:    <none>
    Mounts:
      /var/htdocs from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spdq4 (ro)
  web-server:
    Container ID:   docker://37d831a2f7e97abadb548a21ecb20b5c784b5b3d6102cf8f939f2c13cdfd08c0
    Image:          nginx:alpine
    Image ID:       docker-pullable://nginx@sha256:455c39afebd4d98ef26dd70284aa86e6810b0485af5f4f222b19b89758cabf1e
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 24 Nov 2022 01:02:55 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from html (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spdq4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  html:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-spdq4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Warning  BackOff  4m20s (x281 over 64m)  kubelet  Back-off restarting failed container
2 Answers
Your Pod named vol-1 has two containers: fortune-cont and web-server.

If you run kubectl logs vol-1, Kubernetes doesn't know which container you're asking about, so it has to pick one and tells you so:

Defaulted container "fortune-cont" out of: fortune-cont, web-server

You can select a container explicitly with the -c option:
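For example, using the pod name and namespace shown in the events above (adjust them to your own pod):

kubectl logs vol-1-fd556f5dc-8ggj6 -n book -c fortune-cont
kubectl logs vol-1-fd556f5dc-8ggj6 -n book -c web-server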
Your fortune container terminates immediately after it has started, with exit code 0. Without knowing what it is expected to do, it is hard to tell what is going wrong.
Exit code 0 usually indicates a normal exit without errors.
In Kubernetes, that is usually expected from init containers. So is your Pod spec wrong, in that fortune should be an init container? (See the sketch below.)
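If that really is the intent (fortune only needs to generate the page once before nginx serves it), a hypothetical variant of your pod template's spec would move it under initContainers. This is only a sketch reusing the names from your manifest; the book's example expects fortune to keep running, so only change this if you deliberately want a run-once design:

spec:
  # Hypothetical: only if fortune is meant to run once and exit.
  initContainers:
  - image: ****/fortune:v1
    name: fortune-cont
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  containers:
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html
    emptyDir: {}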
If not, you can show the log output of the previously terminated container by using the -p flag:
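For example (again with the pod name and namespace from the events above):

kubectl logs vol-1-fd556f5dc-8ggj6 -n book -c fortune-cont -p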
Maybe this gives you a hint why it is exiting.
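For context: in the book, the fortune image is built around a fortuneloop.sh script that is supposed to loop forever, writing a fresh fortune into the shared volume every 10 seconds. If your copy of the script exits after one iteration (or the image's ENTRYPOINT never enters the loop), the container finishes with exit code 0 and the kubelet keeps restarting it, which matches the CrashLoopBackOff above. A rough sketch of what the script is expected to look like, reconstructed from the book's listing, so double-check it against your own image:

#!/bin/bash
# Sketch of fortuneloop.sh as used in Kubernetes in Action (assumption: your
# image follows the book). The endless loop is the important part -- if the
# script returns after one pass, the container exits with code 0 and
# Kubernetes restarts it over and over.
trap "exit" SIGINT
mkdir -p /var/htdocs
while true
do
  echo "$(date) Writing fortune to /var/htdocs/index.html"
  /usr/games/fortune > /var/htdocs/index.html
  sleep 10
done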