
I have a microservice that works on my laptop, where I am using Docker Compose. I am now deploying it to a Kubernetes cluster that I have already set up, but I am stuck on making the data persistent. For example, here is my MongoDB service in docker-compose:

systemdb:
    container_name: system-db
    image: mongo:4.4.1
    restart: always
    ports:
      - '9000:27017'
    volumes:
      - ./system_db:/data/db
    networks:
      - backend

Since it is an on-premises solution, I went with an NFS server. I have created a PersistentVolume and a PersistentVolumeClaim (pvc-nfs-pv1), which seem to work well when tested with nginx. However, I don't know how to deploy a MongoDB StatefulSet that uses the PVC. I am not implementing a replica set.

Here is my YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 1
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongod-container
          image: mongo
          resources:
            requests:
              cpu: "0.2"
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: pvc-nfs-pv1
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: pvc-nfs-pv1
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 500Mi

How do I do it?

4 Answers


  1. volumeClaimTemplates are used for dynamic volume provisioning: you define one volume claim template, and it is used to create a PersistentVolumeClaim for each pod.

    The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.

    So for your use case you would need to create a StorageClass with an NFS provisioner. The NFS Subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes PersistentVolumes via PersistentVolumeClaims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.
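
    The provisioner runs in the cluster as a Deployment that mounts the NFS export and creates PersistentVolumes on demand. A trimmed sketch based on the project's sample manifest follows; the NFS server address and export path are placeholders, the image tag may need updating, and the ServiceAccount and RBAC objects the provisioner requires are omitted:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs          # must match the StorageClass provisioner below
                - name: NFS_SERVER
                  value: 192.168.1.100           # placeholder: your NFS server
                - name: NFS_PATH
                  value: /srv/nfs                # placeholder: your export path
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.1.100            # placeholder: your NFS server
                path: /srv/nfs                   # placeholder: your export path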

    Here's an example of how to define the storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
    parameters:
      pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # uses the nfs.io/storage-path annotation; if it is not set, it is treated as an empty string.
      onDelete: delete
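
    Once the StorageClass exists, your StatefulSet's volumeClaimTemplates only needs to reference it via spec.storageClassName (the volume.beta.kubernetes.io/storage-class annotation you are using is deprecated in favour of that field). A minimal sketch reusing the names from your question:

    volumeClaimTemplates:
      - metadata:
          name: pvc-nfs-pv1
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: managed-nfs-storage
          resources:
            requests:
              storage: 500Mi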
    
  2. Your question is how the mongo StatefulSet is going to use the PVC you have created? By default it won't. It will automatically create new PVCs (one per replica) via the volumeClaimTemplates, and they will be named pvc-nfs-pv1-mongod-0, pvc-nfs-pv1-mongod-1, and so on.
    So if you want to use the PVC you created, rename it to match pvc-nfs-pv1-mongod-0, something like this:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        role: mongo
      name: pvc-nfs-pv1-mongod-0
      namespace: default
    spec:
    ...
      volumeName: nfs-pv1
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    ...
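
    For this claim to bind, the PV referenced by volumeName must exist. A statically provisioned NFS PersistentVolume named nfs-pv1 would look roughly like this (the server address and export path are placeholders):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv1
    spec:
      capacity:
        storage: 500Mi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: 192.168.1.100     # placeholder: your NFS server
        path: /srv/nfs/system_db  # placeholder: your export path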
    

    However, I don't recommend this method: if you have many other replica sets, you would have to create all the PVCs and their corresponding PVs manually. Similar questions have been asked here and here; I recommend using dynamic NFS provisioning instead.

    Hope this helps.

  3. I do not use NFS but volumes at hetzner.com, where my dev server is running, and I have exactly the same problem: since it is my dev system, I destroy and rebuild it regularly, and I want the data on my volumes to survive the deletion of the whole cluster. When I rebuild the cluster, all the volumes should be mounted to the right pods again.

    For my Postgres this works just fine, but using the MongoDB Kubernetes operator I am not able to get it running. The single MongoDB pod stays in the "Pending" state forever because the PVC I created and bound manually to the volume is already bound to a volume, or so it seems to me.

    I am thankful for any help,
    Tobias

    The exact message I can see is:

    0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims
    

    PVC and PV:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-volume-system-mongodb-0
      labels:
        app: moderetic
        type: mongodb
    spec:
      storageClassName: hcloud-volumes
      volumeName: mongodb-data-volume
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongodb-data-volume
      labels:
        app: moderetic
        type: mongodb
    spec:
      storageClassName: hcloud-volumes
      claimRef:
        name: data-volume-system-mongodb-0
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      csi:
        volumeHandle: "11099996"
        driver: csi.hetzner.cloud
        fsType: ext4
    

    And the MongoDB StatefulSet:

    apiVersion: mongodbcommunity.mongodb.com/v1
    kind: MongoDBCommunity
    metadata:
      name: system-mongodb
      labels:
        app: moderetic
        type: mongodb
    spec:
      members: 1
      type: ReplicaSet
      version: "4.2.6"
      security:
        authentication:
          modes: ["SCRAM"]
      users:
        - name: moderetic
          db: moderetic
          passwordSecretRef:
            name: mongodb-secret
          roles:
            - name: clusterAdmin
              db: moderetic
            - name: userAdminAnyDatabase
              db: moderetic
          scramCredentialsSecretName: moderetic-scram-secret
      additionalMongodConfig:
        storage.wiredTiger.engineConfig.journalCompressor: zlib
      persistent: true
      statefulSet:
        spec:
          template:
            spec:
              containers:
                - name: mongod
                  resources:
                    requests:
                      cpu: 1
                      memory: 1Gi
                    limits:
                      memory: 8Gi
                - name: mongodb-agent
                  resources:
                    requests:
                      memory: 50Mi
                    limits:
                      cpu: 500m
                      memory: 256Mi
          volumeClaimTemplates:
            - metadata:
                name: data-volume
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: hcloud-volumes
                resources:
                  requests:
                    storage: 10Gi
            - metadata:
                name: logs-volume
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: hcloud-volumes
                resources:
                  requests:
                    storage: 10Gi
    
    
  4. OK, I have a solution. It works by selecting the volume with a matchLabels selector in the volumeClaimTemplates.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongodb-data-volume
      labels:
        app: moderetic
        type: mongodb
        role: data
    spec:
      storageClassName: hcloud-volumes
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      csi:
        volumeHandle: "11099996"
        driver: csi.hetzner.cloud
        fsType: ext4
    
    ---
    apiVersion: mongodbcommunity.mongodb.com/v1
    kind: MongoDBCommunity
    metadata:
      name: system-mongodb
      labels:
        app: moderetic
        type: mongodb
    spec:
      members: 1
      type: ReplicaSet
      version: "4.2.6"
      logLevel: INFO
      security:
        authentication:
          modes: ["SCRAM"]
      users:
        - name: moderetic
          db: moderetic
          passwordSecretRef:
            name: mongodb-secret
          roles:
            - name: clusterAdmin
              db: moderetic
            - name: userAdminAnyDatabase
              db: moderetic
          scramCredentialsSecretName: moderetic-scram-secret
      additionalMongodConfig:
        storage.wiredTiger.engineConfig.journalCompressor: zlib
      persistent: true
      statefulSet:
        spec:
          template:
            spec:
              containers:
                - name: mongod
                  resources:
                    requests:
                      cpu: 1
                      memory: 1Gi
                    limits:
                      memory: 8Gi
                - name: mongodb-agent
                  resources:
                    requests:
                      memory: 50Mi
                    limits:
                      cpu: 500m
                      memory: 256Mi
          volumeClaimTemplates:
            - metadata:
                name: data-volume
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: hcloud-volumes
                resources:
                  requests:
                    storage: 10Gi
                selector:
                  matchLabels:
                    app: moderetic
                    type: mongodb
                    role: data
            - metadata:
                name: logs-volume
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: hcloud-volumes
                resources:
                  requests:
                    storage: 10Gi
                selector:
                  matchLabels:
                    app: moderetic
                    type: mongodb
                    role: logs
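
    Note that the logs-volume claim template selects role: logs, so a second PersistentVolume carrying that label is needed as well. A sketch along the lines of the data volume above (the volumeHandle is a placeholder for the second Hetzner volume's ID):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongodb-logs-volume
      labels:
        app: moderetic
        type: mongodb
        role: logs
    spec:
      storageClassName: hcloud-volumes
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      csi:
        volumeHandle: "00000000"   # placeholder: the ID of the second hcloud volume
        driver: csi.hetzner.cloud
        fsType: ext4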
    
    