
I am trying to understand what these settings do.
PV – spec.capacity.storage
PVC – spec.resources.requests.storage

I am trying to limit the amount of storage space that can be consumed by a pod, i.e. impose a fixed size. Both of these settings take a value like "10Gi". Everything I have tried so far does not appear to impose any limit.

Can someone explain these settings or how I can limit the storage space used?

Thanks.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/Storage/nfs-test
    server: ip_address
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  volumes:
    - name: nginx-pv-storage
      persistentVolumeClaim:
        claimName: nfs-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "nginx-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-pv-storage

$ kubectl describe pv nfs-pv
Name:            nfs-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs
Status:          Bound
Claim:           default/nfs-pvc
Reclaim Policy:  Recycle
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    ip_address
    Path:      /mnt/Storage/nfs-test
    ReadOnly:  false
Events:        <none>

$ kubectl describe pvc nfs-pvc
Name:          nfs-pvc
Namespace:     default
StorageClass:  nfs
Status:        Bound
Volume:        nfs-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       nginx-pv-pod
Events:        <none>

So I am guessing the "capacity" setting does nothing recognizable.

2 Answers


  1. Chosen as BEST ANSWER

    What I have gleaned and learned:

    A PersistentVolume (PV) lets Kubernetes register a piece of disk space that can then be claimed by a PersistentVolumeClaim (PVC). If you create a PV of say 10GB, a PVC can claim up to 10GB from it. The binding is one-to-one: once a PVC binds a PV, that PV is taken, even if the PVC requested less than the PV's full capacity. If you try to create a PVC that requests more than the size of any available PV, the claim will fail to bind and the pods that use the PVC will be marked "Pending".
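
    For example, assuming only the 10Gi nfs-pv above exists (and no dynamic provisioner backs the nfs storage class), a claim that asks for more will never bind; the claim name here is hypothetical:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-pvc-too-big   # hypothetical name
    spec:
      storageClassName: nfs
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 20Gi   # more than the 10Gi PV offers, so the claim stays Pending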

    Creating a 10GB PV only constrains what the PVCs can claim. If you create a PVC and mount it into a pod, the pod can still write files larger than 10GB. In fact, the PVC imposes no file size limit at all!


  2. I am trying to limit the amount of storage space that can be consumed by a pod

    There are essentially three things here. One is the size of the image itself; there’s no way to limit this. A second is volumes you mount into the container; this is for example the PersistentVolume you show, and by analogy you might think of it like plugging in an external USB disk that doesn’t "count against" your internal hard drive storage. The third is disk space used up by the container process while it’s running, for things like temporary files or on-disk logs.

    When you say "limit the storage space of a container" I think of that third option. Kubernetes calls this ephemeral storage and you can assign resource limits to it the same way you do for memory and CPU:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              ephemeral-storage: 10Gi

    If the container writes more than that limit's worth of ephemeral data, the kubelet evicts the pod.

    For the example you show, you probably do not need an additional volume. It can be a little tricky to copy content in and out of a volume, so you don’t usually want to use a volume to hold static content, like the HTML here, that could just be baked into the image.

    A more typical example of these would be a database container. You want persistent storage for the database data itself, that’s capable of following the database pod around the cluster if it gets deleted and recreated. A typical sequence there is:

    1. The operator creates a StatefulSet to run the database
    2. Kubernetes creates a PersistentVolumeClaim for each replica from the template in the StatefulSet, including the PVC’s requested storage size
    3. A persistent volume provisioner running as cluster infrastructure gets appropriate storage (for example, an AWS EBS volume) and records its details in a PersistentVolume

    So with this flow you don’t directly create the PersistentVolume; but conversely you do specify the size you want that storage to be when you create the PersistentVolumeClaim.
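
    As a sketch of that flow, a StatefulSet can declare a volumeClaimTemplate; the names, image, and storage class below are hypothetical and assume a provisioner-backed class exists in the cluster:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db                # hypothetical name
    spec:
      serviceName: db
      replicas: 2
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: postgres
              image: postgres:15
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: standard   # hypothetical provisioner-backed class
            resources:
              requests:
                storage: 10Gi

    Kubernetes then creates one PVC per replica (data-db-0, data-db-1), and the provisioner supplies a matching PV for each.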

    Your setup uses NFS as a storage backend. That’s fine if you have an NFS server, but the metadata in the PersistentVolume/Claim doesn’t imply any sort of disk quota on the NFS backend. The PVC requests that at least 10 GiB of storage is available and the PV asserts it’s there, but in reality you can probably use the full disk of the NFS server, however large it is.
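
    If the goal is to cap how much storage the PVCs in a namespace can request (as opposed to the bytes actually written, which only a quota on the NFS server itself could enforce), one option is a namespace ResourceQuota; the name here is hypothetical:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: storage-quota   # hypothetical name
    spec:
      hard:
        requests.storage: 20Gi        # total storage all PVCs in the namespace may request
        persistentvolumeclaims: "5"   # optional: cap the number of PVCs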
