I have created a GCE disk, created a PersistentVolume backed by that disk, and bound a PVC to it successfully. But when I deploy the pod, it gives me an error. The details are below.
$ gcloud compute disks list
NAME LOCATION LOCATION_SCOPE SIZE_GB TYPE STATUS
test-kubernetes-disk asia-southeast1-a zone 200 pd-standard READY
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /test-pd
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 200Gi
  storageClassName: fast
  gcePersistentDisk:
    pdName: test-kubernetes-disk
    fsType: ext4
pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: fast
Below are the events of the pod.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/mypod to worker-0
Warning FailedMount 9m6s kubelet, worker-0 MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-r4b3f35b2b0354f26ba64375388054054.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
Warning FailedMount 6m52s kubelet, worker-0 MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-ra8fb00a02d6145fa9c54e88adf81e942.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
Warning FailedMount 5m52s (x2 over 8m9s) kubelet, worker-0 Unable to attach or mount volumes: unmounted volumes=[mypd], unattached volumes=[default-token-s82xz mypd]: timed out waiting for the condition
Warning FailedMount 4m35s kubelet, worker-0 MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-rf86d063bc5e44878831dc2734575e9cf.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
Warning FailedMount 2m18s kubelet, worker-0 MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-rb9edbe05f62449d0aa0d5ed8bedafb29.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
Warning FailedMount 80s (x3 over 10m) kubelet, worker-0 Unable to attach or mount volumes: unmounted volumes=[mypd], unattached volumes=[mypd default-token-s82xz]: timed out waiting for the condition
Warning FailedAttachVolume 8s (x5 over 11m) attachdetach-controller AttachVolume.NewAttacher failed for volume "pv-gce" : Failed to get GCE GCECloudProvider with error <nil>
Warning FailedMount 3s kubelet, worker-0 MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-r5290d9f978834d4681966a40c3f535fc.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-gce 200Gi RWO Retain Bound default/myclaim fast 23m
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim Bound pv-gce 200Gi RWO fast 22m
Please kindly help with this.
4 Answers
@Emon, here is the output for disk describe.
You are missing the claimRef spec in the pv. You need to add a claimRef field to the pv, which will bind the pv to the desired pvc. Also make sure that the pv and the pod are in the same zone.
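For example, a claimRef pre-binding the pv above to the pvc could look like this (a sketch; the pvc is assumed to live in the default namespace):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 200Gi
  storageClassName: fast
  claimRef:            # pre-binds this PV to the PVC named myclaim
    name: myclaim
    namespace: default
  gcePersistentDisk:
    pdName: test-kubernetes-disk
    fsType: ext4
```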
GCE Persistent Disks are a zonal resource, so the pod can only request a Persistent Disk that is in its zone. Try to apply these:
pv.yml
pvc.yml
The storage class should be like this:
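(The original manifest is not shown here; as a sketch, a zonal storage class for the in-tree GCE provisioner could look like this, assuming pd-standard disks pinned to asia-southeast1-a:)

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: asia-southeast1-a   # keep the zone matching the disk and the nodes
```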
And the pod should be like this:
Can you retry? Just delete everything.
Follow these steps:
Then apply this one:
pod.yaml
Here you don't need to bother about the pv and the pvc.
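(The referenced pod.yaml is not shown; presumably it mounts the disk directly via a gcePersistentDisk volume, which needs no pv or pvc — a sketch:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /test-pd
      name: mypd
  volumes:
  - name: mypd
    gcePersistentDisk:      # in-tree GCE PD volume, no PV/PVC required
      pdName: test-kubernetes-disk
      fsType: ext4
```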
@Emon, the issue still exists. I just deleted everything (the disk, the pods, the pv, the pvc, and the storage class), created a new disk, and applied only the provided pod.yml.
BTW, are you sure that I don't need to specify the cloud-provider flag?
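(For context: the FailedAttachVolume message "Failed to get GCE GCECloudProvider" in the events above usually means the cluster components were not started with the GCE cloud provider enabled. On a manually built cluster this is typically set with flags roughly like the following — a sketch; exact paths and flags depend on the setup:)

```
# kube-controller-manager and kubelet flags (sketch)
--cloud-provider=gce
--cloud-config=/etc/kubernetes/gce.conf   # optional GCE config file, if one is used
```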