I have what seems like a straightforward PV and PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pvc
spec:
  storageClassName: ""
  volumeName: www-pv
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"
```
For some reason these do not bind to each other and the PVC stays "pending" forever:
```
$ kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM      STORAGECLASS   REASON   AGE
persistentvolume/www-pv   1Mi        ROX            Retain           Available   /www-pvc                           107m

NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-pvc   Pending   www-pv   0                                        107m
```
How can I debug the matching? Which service does the matching in k3s? Would I be looking in the log of the k3s binary (running as a service under Debian)?
Answers
I think the problem is that the PVC is trying to get a PV of size `1Gi`, but your PV is of size `1Mi`, so the bind is failing. You can fix this by either increasing the PV size or reducing the PVC size. Use `kubectl describe pvc` to get more information about events and the reason for the failure.

To clarify further: a PVC is a request for storage, so if you ask for `1Gi` of storage in the claim but only provision `1Mi` of actual storage, the PVC is going to stay in the `Pending` state. Based on this, the size defined in the PVC must always be less than or equal to the PV size.

The PV size cannot be smaller than the PVC size. In other words, a PVC requesting 1Gi cannot bind to a PV offering only 1Mi. Please update the PV and PVC sizes.
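To put the mismatch in numbers, here is a small sketch (plain Python, not part of any Kubernetes tooling; `to_bytes` is a made-up helper name) that converts the binary suffixes Kubernetes uses into bytes:

```python
# Kubernetes binary suffixes Ki, Mi, Gi are powers of 1024
def to_bytes(quantity: str) -> int:
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain number means bytes

pv = to_bytes("1Mi")   # what the PV offers
pvc = to_bytes("1Gi")  # what the PVC requests
print(pvc // pv)       # prints 1024 -- the claim asks for 1024x what the volume provides
```

Since the binder only considers PVs whose capacity is at least the requested size, a 1Mi volume can never satisfy a 1Gi claim.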
Relevant information can be found in the Kubernetes documentation about Persistent Volumes, in particular its Binding section. In the OpenShift documentation – Volume and Claim Pre-binding – you can find information that when you use pre-binding, some of the normal matching checks are skipped.

Issue 1
In your PV configuration you set `spec.capacity.storage: 1Mi`, which means you have storage of 1Mi (~1.05 MB). Your PVC was configured to request 1Gi (~1.07 GB). Your PV didn't fulfill your PVC's request.

You can have many PVs with, for example, `5Gi` of storage, but none of them will be bound if the PVC request is higher than `5Gi`, like `6Gi`. But if the PV storage is higher, e.g. `6Gi`, and the PVC request is lower, like `5Gi`, it will be bound; however, `1Gi` will be wasted.

Issue 2
If you describe your PVC you will find a `Warning` event explaining the failure.

In your configuration you are using something called pre-binding, as you have specified `volumeName` in the PVC and `claimRef` in the PV.

This scenario is well described in the OpenShift documentation – Using Persistent Volumes. In your current setup you've used `claimRef.name` but you didn't specify `claimRef.namespace`; once you add `claimRef.namespace` it will work. You should specify the PVC's namespace in your PV's `spec.claimRef.namespace`, as a PVC is a namespaced resource.

Solution
In your PV, change `spec.capacity.storage` to `1Gi`.

In your PV, add `spec.claimRef.namespace: default` (or whichever namespace your PVC actually lives in).

Please let me know if you were able to bind the PV and PVC.
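Putting both fixes together, the PV would look like this (assuming the PVC is in the `default` namespace):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
    namespace: default   # PVCs are namespaced, so the claimRef needs a namespace
  capacity:
    storage: 1Gi         # must be >= the PVC's requested 1Gi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"
```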
This is an addition to the answers provided above (the PV/PVC size correction).

You should make sure you have the `nfs-common` package installed and that you can mount that NFS export on the node itself.

Since `storageClassName` is empty in your definition, I can advise looking into https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
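For example, on a Debian node you could verify the export is mountable by hand before letting k3s try (a sketch; assumes the server and path from the PV above, and that `/mnt` is free):

```shell
# Install the NFS client tools, then try the mount manually on the node
sudo apt-get install -y nfs-common
sudo mount -t nfs 192.168.1.100:/www /mnt
ls /mnt          # the export's contents should be visible here
sudo umount /mnt
```

If the manual mount fails, the pod mount will fail too, so this isolates NFS problems from PV/PVC binding problems.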