
I am having a problem in my Kubernetes cluster. I am currently running my Laravel application in Kubernetes successfully. Now I am trying to turn the storage folder in my app into a persistent volume, because it is used to store uploaded images and other files. My deployment looks like this now:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-api-app
  namespace: my-project
  labels:
    app.kubernetes.io/name: laravel-api-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-api-app
  template:
    metadata:
      labels:
        app: laravel-api-app
    spec:
      containers:
        - name: laravel-api-app
          image: me/laravel-api:v1.0.0
          ports:
            - name: laravel
              containerPort: 8080
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: laravel-api-secret
            - configMapRef:
                name: laravel-api-config
          volumeMounts:
            - name: storage
              mountPath: /var/www/html/storage
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: laravel-api-persistant-volume-claim

As you can see, my claim is mounted at the /var/www/html/storage folder. In my Dockerfile I set the owner of all files and folders to the user nobody like this:

USER nobody
COPY --chown=nobody . /var/www/html
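
Worth noting: COPY --chown only affects the image's own filesystem layers. When a PersistentVolume is later mounted at /var/www/html/storage, the mount shadows that directory, so a fresh volume shows its own ownership (typically root:root) regardless of what the image set. A commented sketch of the same Dockerfile lines, assuming nothing beyond what is shown above:

```dockerfile
# run the app as the unprivileged nobody user (uid/gid 65534)
USER nobody
# ownership set here only applies to the image's filesystem layers;
# a volume mounted over /var/www/html/storage at runtime keeps the
# volume's own owner (root:root for a freshly provisioned PVC)
COPY --chown=nobody:nobody . /var/www/html
```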

However, using this results in the following permissions in my pod (ls -la):

drwxrwxrwx    1 www-data www-data      4096 Mar 14 18:24 .
drwxr-xr-x    1 root     root          4096 Feb 26 17:43 ..
-rw-rw-rw-    1 nobody   nobody          48 Mar 12 22:27 .dockerignore
-rw-rw-rw-    1 nobody   nobody         220 Mar 12 22:27 .editorconfig
-rw-r--r--    1 nobody   nobody         718 Mar 14 18:22 .env
-rw-rw-rw-    1 nobody   nobody         660 Mar 14 18:22 .env.example
-rw-rw-rw-    1 nobody   nobody         718 Mar 14 12:10 .env.pipeline
-rw-rw-rw-    1 nobody   nobody         111 Mar 12 22:27 .gitattributes
-rw-rw-rw-    1 nobody   nobody         171 Mar 14 12:10 .gitignore
drwxrwxrwx    2 nobody   nobody        4096 Mar 14 12:30 .gitlab-ci-scripts
-rw-rw-rw-    1 nobody   nobody        2336 Mar 14 01:13 .gitlab-ci.yml
-rw-rw-rw-    1 nobody   nobody         174 Mar 12 22:27 .styleci.yml
-rw-rw-rw-    1 nobody   nobody         691 Mar 14 10:02 Makefile
drwxrwxrwx    6 nobody   nobody        4096 Mar 12 22:27 app
-rwxrwxrwx    1 nobody   nobody        1686 Mar 12 22:27 artisan
drwxrwxrwx    1 nobody   nobody        4096 Mar 12 22:27 bootstrap
-rw-rw-rw-    1 nobody   nobody        1476 Mar 12 22:27 composer.json
-rw-rw-rw-    1 nobody   nobody      261287 Mar 12 22:27 composer.lock
drwxrwxrwx    2 nobody   nobody        4096 Mar 14 12:10 config
drwxrwxrwx    5 nobody   nobody        4096 Mar 12 22:27 database
drwxrwxrwx    5 nobody   nobody        4096 Mar 13 09:45 docker
-rw-rw-rw-    1 nobody   nobody         569 Mar 14 12:27 docker-compose-test.yml
-rw-rw-rw-    1 nobody   nobody         584 Mar 14 12:27 docker-compose.yml
-rw-rw-rw-    1 nobody   nobody        1013 Mar 14 18:24 package.json
-rw-rw-rw-    1 nobody   nobody        1405 Mar 12 22:27 phpunit.xml
drwxrwxrwx    5 nobody   nobody        4096 Mar 14 18:23 public
-rw-rw-rw-    1 nobody   nobody        3496 Mar 12 22:27 readme.md
drwxrwxrwx    6 nobody   nobody        4096 Mar 12 22:27 resources
drwxrwxrwx    2 nobody   nobody        4096 Mar 12 22:27 routes
drwxrwxrwx    2 nobody   nobody        4096 Mar 12 22:27 scripts
-rw-rw-rw-    1 nobody   nobody         563 Mar 12 22:27 server.php
drwxr-xr-x    2 root     root          4096 Mar 14 18:18 storage
drwxrwxrwx    4 nobody   nobody        4096 Mar 12 22:27 tests
drwxr-xr-x   38 nobody   nobody        4096 Mar 14 18:22 vendor
-rw-rw-rw-    1 nobody   nobody         538 Mar 12 22:27 webpack.mix.js

As you can see, my storage folder is owned by root/root, while I want it to be owned by nobody/nobody. I thought about creating an initContainer like this:

initContainers:
  - name: setup-storage
    image: busybox
    command: ['sh', '-c', '/path/to/setup-script.sh']
    volumeMounts:
      - name: storage
        mountPath: /path/to/storage/directory

With setup-script.sh containing:

#!/bin/sh

# give the app user (nobody) ownership of the mounted volume
chown -R nobody:nobody /path/to/storage/directory
# owner: rwx, group/others: r-x
chmod -R 755 /path/to/storage/directory

But I have a feeling that there should be (or is) something much simpler to get the result I want.
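
For what it's worth, the init-container idea can be made self-contained by inlining the commands (so no script has to exist inside the busybox image, a common cause of exit code 127) and running it explicitly as root, which chown requires. A sketch, reusing the mount path from the deployment above:

```yaml
initContainers:
  - name: setup-storage
    image: busybox
    # chown needs root, even if the main container runs as nobody
    securityContext:
      runAsUser: 0
    # inline the commands instead of calling a script that does not
    # exist inside the busybox image
    command:
      - sh
      - -c
      - chown -R 65534:65534 /var/www/html/storage && chmod -R 755 /var/www/html/storage
    volumeMounts:
      - name: storage
        mountPath: /var/www/html/storage
```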

I already tried adding a securityContext with uid/gid 65534 (nobody) like so:

securityContext:
  runAsUser: 65534
  runAsGroup: 65534
  fsGroup: 65534

But that resulted in the same root/root owner/group. The last thing I tried was creating an initContainer like this:

initContainers:
  - name: laravel-api-init
    image: me/laravel-api:v1.0.0
    args:
      - /bin/bash
      - -c
      - cp -Rnp /var/www/html/storage/* /mnt
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: laravel-api-secret
      - configMapRef:
          name: laravel-api-config
    volumeMounts:
      - name: storage
        mountPath: /mnt

This "should" copy all the content to /mnt, which is the mounted location for the storage, and then start the real deployment, which mounts the copied data into the app. Unfortunately this fails with Init:ExitCode:127, which is weird, because both of those locations do exist. Another concern with this approach (I don't know whether it will happen) is that when the volume already contains data from a previous session (for example after a server reboot), the copy must not tamper with the app's existing data.

In short

So after this explanation and my attempts, here is what I am trying to achieve. I want my Laravel application to have a persistent volume (the storage folder), so that I can limit the developers of that Laravel app to a given amount of storage. For instance, when I create a PV of 5GB, they cannot store more than 5GB of data for their application. This storage has to be persistent, so that after a server reboot the data is still there!
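
The 5 GB cap described here maps directly onto the claim's storage request. A minimal sketch, reusing the claim name and namespace from the deployment above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-api-persistant-volume-claim
  namespace: my-project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # the app cannot store more than this on the volume
      storage: 5Gi
```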

Update

Here is the updated yaml with security context:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-api-app
  namespace: my-project
  labels:
    app.kubernetes.io/name: laravel-api-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-api-app
  template:
    metadata:
      labels:
        app: laravel-api-app
    spec:
      containers:
        - name: laravel-api-init
          image: docker.argoplan.nl/clients/opus-volvere/laravel-api/production:v1.0.0
          args:
            - /bin/sh
            - -c
            - cp -Rnp /var/www/html/storage/* /mnt
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: laravel-api-secret
            - configMapRef:
                name: laravel-api-config
          volumeMounts:
            - name: storage
              mountPath: /mnt
          securityContext:
            fsGroup: 65534
            fsGroupChangePolicy: "OnRootMismatch"
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: laravel-api-persistant-volume-claim

For debugging purposes I copied my initContainer as a regular container, so I can see the container logs in ArgoCD. If it is an initContainer, I can't see any logs. Using the yaml above, I see this in the logs:

cp: can't create directory '/mnt/app': Permission denied
cp: can't create directory '/mnt/framework': Permission denied

This is the live manifest, which apparently does not contain the new security context, even though I generated the app just now:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 0a4ce0e873c92442fdaf1ac8a1313966bd995ae65471b34f70b9de2634edecf9
    cni.projectcalico.org/podIP: 10.1.10.55/32
    cni.projectcalico.org/podIPs: 10.1.10.55/32
  creationTimestamp: '2023-03-17T09:17:58Z'
  generateName: laravel-api-app-74b7d9584c-
  labels:
    app: laravel-api-app
    pod-template-hash: 74b7d9584c
  name: laravel-api-app-74b7d9584c-4dc9h
  namespace: my-project
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: laravel-api-app-74b7d9584c
      uid: d2e2ab4d-0916-43fc-b294-3e5eb2778c0d
  resourceVersion: '4954636'
  uid: 12327d67-cdf9-4387-afe8-3cf536531dd2
spec:
  containers:
    - args:
        - /bin/sh
        - '-c'
        - cp -Rnp /var/www/html/storage/* /mnt
      envFrom:
        - secretRef:
            name: laravel-api-secret
        - configMapRef:
            name: laravel-api-config
      image: 'me/laravel-api:v1.0.0'
      imagePullPolicy: Always
      name: laravel-api-init
      resources: {}
      securityContext: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /mnt
          name: storage
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-8cfg8
          readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
    - name: regcred
  nodeName: tohatsu
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: laravel-api-persistant-volume-claim
    - name: kube-api-access-8cfg8
      projected:
        defaultMode: 420
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
                - key: ca.crt
                  path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      status: 'True'
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      message: 'containers with unready status: [laravel-api-init]'
      reason: ContainersNotReady
      status: 'False'
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      message: 'containers with unready status: [laravel-api-init]'
      reason: ContainersNotReady
      status: 'False'
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      status: 'True'
      type: PodScheduled
  containerStatuses:
    - containerID: >-
        containerd://eaf8e09f0e2aceec6cb26e09406518a5d9851f94dfb8f8be3ce3e65ee47e282c
      image: 'me/laravel-api:v1.0.0'
      imageID: >-
        me/laravel-api@secret
      lastState:
        terminated:
          containerID: >-
            containerd://eaf8e09f0e2aceec6cb26e09406518a5d9851f94dfb8f8be3ce3e65ee47e282c
          exitCode: 1
          finishedAt: '2023-03-17T09:20:53Z'
          reason: Error
          startedAt: '2023-03-17T09:20:53Z'
      name: laravel-api-init
      ready: false
      restartCount: 5
      started: false
      state:
        waiting:
          message: >-
            back-off 2m40s restarting failed container=laravel-api-init
            pod=laravel-api-app-74b7d9584c-4dc9h_my-project(12327d67-cdf9-4387-afe8-3cf536531dd2)
          reason: CrashLoopBackOff
  hostIP: 192.168.1.8
  phase: Running
  podIP: 10.1.10.55
  podIPs:
    - ip: 10.1.10.55
  qosClass: BestEffort
  startTime: '2023-03-17T09:17:58Z'


Answers


  1. You didn’t mention your k8s version. My answer might not be suitable for you if you are running a k8s version below v1.23.

    Kubernetes can set up the permissions for you. Use fsGroup and fsGroupChangePolicy and k8s will take over the job for you.

    # this part is new; note that fsGroup is a pod-level setting, so it
    # belongs in the pod's securityContext, not in the container's
    securityContext:
      # 65534 is the uid/gid of the nobody user/group
      fsGroup: 65534
      fsGroupChangePolicy: "OnRootMismatch"
    containers:
      - name: laravel-api-app
        image: me/laravel-api:v1.0.0
        ports:
          - name: laravel
            containerPort: 8080
        imagePullPolicy: Always
        envFrom:
          - secretRef:
              name: laravel-api-secret
          - configMapRef:
              name: laravel-api-config
        volumeMounts:
          - name: storage
            mountPath: /var/www/html/storage


    Related configuration specs from k8s

    • For dynamic provisioning of a persistent volume of the required size, use a storageClass in your laravel-api-persistant-volume-claim definition and request storage of a specific size via resources.requests.storage (limits are not used for PVC storage). For example:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: jenkins-pvc
      labels:
        app: jenkins-pvc
    spec:
      storageClassName: storage-class
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi


    You may also omit storageClassName; in that case the default storage class of your Kubernetes cluster will be used (e.g. your cloud provider’s default). Setting it to the empty string "" instead disables dynamic provisioning, so the claim only binds to pre-created PersistentVolumes.
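
    A minimal StorageClass to go with the claim above might look like this; the provisioner here is an assumption and must match what your cluster actually runs:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class               # referenced by storageClassName in the PVC
provisioner: kubernetes.io/aws-ebs  # assumption: substitute your cluster's provisioner
reclaimPolicy: Retain               # keep the data when the PVC is deleted
allowVolumeExpansion: true
```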

    • Your approach of setting the permissions on the storage folder in an initContainer is correct. Otherwise you would need to provision the storage, create data folders of the required size there and change the permissions of the folders manually before deploying the pod, which contradicts the whole point of dynamic storage provisioning on Kubernetes.

    Note that the init container has to run as root. You can see a real-world example of using an init container for changing ownership and permissions of the Jenkins data folder below.

    This is an excerpt from the values.yaml of the Jenkins helm chart, but you can take the relevant parts and put them into your own Kubernetes manifests.

    customInitContainers:
      - name: fix-jenkins-home-permissions
        image: "alpine"
        securityContext:
          runAsUser: 0
        volumeMounts:
          - name: jenkins-home
            mountPath: /var/jenkins_home
        command:
          - sh
          - -c
          - (chmod 0775 /var/jenkins_home; chown -R 1000:1000 /var/jenkins_home)
    

    Disclaimer: I wrote the linked articles.
