
I have what seems like a straightforward PV and PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pvc
spec:
  storageClassName: ""
  volumeName: www-pv
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"

For some reason these do not bind to each other and the PVC stays "pending" forever:

$ kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM      STORAGECLASS   REASON   AGE
persistentvolume/www-pv   1Mi        ROX            Retain           Available   /www-pvc                           107m

NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-pvc   Pending   www-pv   0                                        107m

How can I debug the matching? Which service does the matching in k3s? Would I be looking in the log of the k3s binary (running as a service under Debian)?

4 Answers


  1. I think the problem is that the PVC is requesting a volume of size 1Gi, but your PV only offers 1Mi.

    So the bind is failing. You can fix this by either increasing the PV size or reducing the PVC size.

    Use kubectl describe pvc to get more information about the events and the reason for the failure.

    To further clarify: a PVC is a request for storage, so if you ask for 1Gi in the claim but only provision 1Mi of actual storage, the PVC will stay in the Pending state. The size requested in a PVC must always be less than or equal to the PV's capacity.
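    As a quick debugging sketch (using the object names from the question; the event output will depend on your cluster), describing both sides shows why binding fails:

```shell
# Show binding events and failure reasons recorded for the claim
kubectl describe pvc www-pvc

# The PV side shows its capacity, access modes, and any claim it is reserved for
kubectl describe pv www-pv
```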

  2. A PV's capacity cannot be smaller than the PVC's request.

    In other words, a PVC requesting 1Gi cannot bind to a PV that offers only 1Mi.

    Please update the PV and PVC sizes so that the PV is at least as large as the claim.
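    As a minimal sketch of that fix (keeping the names and sizes from the question), the PV's capacity can simply be raised to satisfy the claim:

```yaml
# PV capacity must be >= the PVC request for binding to succeed
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  capacity:
    storage: 1Gi   # was 1Mi; now satisfies the 1Gi request in www-pvc
```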

  3. In the Kubernetes documentation about Persistent Volumes you can find the following:

    A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

    A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.

    In the Binding section you can find:

    Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.

    The OpenShift documentation – Volume and Claim Pre-binding explains that when you use pre-binding, you skip some of the normal matching checks.

    If you know exactly what PersistentVolume you want your PersistentVolumeClaim to bind to, you can specify the PV in your PVC using the volumeName field. This method skips the normal matching and binding process. The PVC will only be able to bind to a PV that has the same name specified in volumeName. If such a PV with that name exists and is Available, the PV and PVC will be bound regardless of whether the PV satisfies the PVC’s label selector, access modes, and resource requests.

    Issue 1

    In your PV configuration you set

      capacity:
        storage: 1Mi
    

    which means that your volume offers 1Mi, which is ~1.05 MB.

    Your PVC, however, was configured to request 1Gi, which is ~1.07 GB:

      resources:
        requests:
          storage: 1Gi
    

    Your PV therefore didn’t fulfill your PVC’s request.

    You can have many PVs of, for example, 5Gi, but none of them will bind if the PVC requests more, such as 6Gi. Conversely, if the PV offers 6Gi and the PVC requests only 5Gi, they will bind, but 1Gi is effectively wasted.

    Issue 2

    If you describe your PVC, you will find the Warning below:

    Events:
      Type     Reason         Age               From                         Message
      ----     ------         ----              ----                         -------
      Warning  FailedBinding  2s (x2 over 17s)  persistentvolume-controller  volume "www-pv" already bound to a different claim.
    

    In your configuration you are using something called pre-binding, as you have specified volumeName in the PVC and claimRef in the PV.

    This example is well described in the OpenShift documentation – Using Persistent Volumes. In your current setup you used claimRef.name but didn’t specify claimRef.namespace:

    $ kubectl get pv,pvc
    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM      STORAGECLASS   REASON   AGE
    persistentvolume/www-pv   1Gi        ROX            Retain           Available   /www-pvc                           4m28s
    
    NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/www-pvc   Pending   www-pv   0                                        4m28s
    

    Once you add claimRef.namespace, the PV and PVC bind:

    $ kubectl get pv,pvc
    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
    persistentvolume/www-pv   1Gi        ROX            Retain           Bound    default/www-pvc                           7m3s
    
    NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/www-pvc   Bound    www-pv   1Gi        ROX                           7m3s
    

    You should specify the PVC’s namespace in the PV’s spec.claimRef.namespace, as a PVC is a namespaced resource while a PV is cluster-scoped:

    $ kubectl api-resources | grep pv
    persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
    persistentvolumes                 pv                                          false        PersistentVolume
    

    Solution

    In your PV change spec.capacity.storage to 1Gi.

    In your PV add spec.claimRef.namespace: default like on the example below:

    spec:
      storageClassName: ""
      claimRef:
        name: www-pvc
        namespace: default        # added namespace: default
      capacity:
        storage: 1Gi              # changed storage size
    

    Please let me know if you were able to bind the PV and PVC.

  4. This is an addition to the answers provided above (the PV/PVC size correction):

    You should make sure the nfs-common package is installed on the node and that you can mount that NFS export on the node itself.

    Since storageClassName is empty in your definition, I can also advise looking into
    https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
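    A quick way to check this on a Debian node, as a sketch (the server address and export path are taken from the question; the mount point /mnt is just an example):

```shell
# Install the NFS client utilities needed for nfs volume mounts
sudo apt-get install -y nfs-common

# List the exports offered by the NFS server
showmount -e 192.168.1.100

# Try mounting the export manually, then clean up
sudo mount -t nfs 192.168.1.100:/www /mnt && sudo umount /mnt
```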
