
I have several microservices, each with its own MongoDB deployment. I would like to start by getting my auth service working with a persistent volume. I have watched courses where PostgreSQL is used and have read a lot of the Kubernetes docs, but I am having trouble getting this to work for MongoDB.

When I run skaffold dev, the PVC is created with no errors. kubectl shows the PVC in Bound status, and describing the PVC shows my mongo deployment as the user.
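
For reference, this is roughly how I checked that (resource names match the manifests below):

kubectl get pvc auth-mongo-pvc        # STATUS shows Bound
kubectl describe pvc auth-mongo-pvc   # "Used By" lists the auth-mongo pod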

However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem. But if I restart skaffold, which deletes and recreates the containers, my data is gone and I have to sign up again.

Here are my files:
auth-mongo-depl.yaml

# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

auth-depl.yaml

# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000

api/users portion of my ingress-srv.yaml

          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-ip-srv
                port:
                  number: 3000

My client fires off a POST request to /api/users/auth, with which I can successfully sign up or sign in as long as I don't restart skaffold.
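
For illustration only, that request is roughly equivalent to the following (the host and the body fields here are placeholders, not my actual values):

curl -X POST http://<my-ingress-host>/api/users/auth \
  -H "Content-Type: application/json" \
  -d '{"email": "test@test.com", "password": "password"}'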

I even used kubectl to get a shell into my mongo deployment and queried the database; the new user account was there, as it should be. But of course it is gone after restarting skaffold.
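
The shell session looked roughly like this (the pod name is whatever kubectl get pods reports, the users collection is just an example of what my auth service writes, and older mongo images ship mongo instead of mongosh):

kubectl get pods
kubectl exec -it auth-mongo-depl-<pod-id> -- mongosh
# inside the mongo shell:
use auth
db.users.find()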

I am on Windows 10 but am running everything through WSL2 (Ubuntu).

Thanks for any help.

2 Answers


  1. Chosen as BEST ANSWER

    The solution, as pointed out by raghu_manne, was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.

    Also, here is a YouTube video I just found that explains StatefulSets and volumeClaimTemplates quite well:

    How to run MongoDB with StatefulSets in Kubernetes

    auth-mongo-depl.yaml

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: auth-mongo-depl
    spec:
      replicas: 1
      serviceName: auth-mongo
      selector:
        matchLabels:
          app: auth-mongo
      template:
        metadata:
          labels:
            app: auth-mongo
        spec:
          containers:
            - name: auth-mongo
              image: mongo
              ports:
                - containerPort: 27017
              volumeMounts:
                - name: auth-mongo-data
                  mountPath: /data/db
      volumeClaimTemplates:
        - metadata:
            name: auth-mongo-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 500Mi
    ---
    # ClusterIp Service
    apiVersion: v1
    kind: Service
    metadata:
      name: auth-mongo-ip-srv
    spec:
      selector:
        app: auth-mongo
      type: ClusterIP
      ports:
        - name: auth-mongo-db
          protocol: TCP
          port: 27017
          targetPort: 27017
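
    One note on why this fixes the problem, as far as I understand it: the PVC is now created by the StatefulSet controller from the volumeClaimTemplate (named <template>-<statefulset>-<ordinal>, so auth-mongo-data-auth-mongo-depl-0 here). It is not one of the manifests skaffold deploys, so it does not get deleted when skaffold dev shuts down. You can check that it survives restarts with:

    kubectl get pvc
    # NAME                                STATUS   VOLUME   ...
    # auth-mongo-data-auth-mongo-depl-0   Bound    pvc-...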
    
    

  2. It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, there is no guarantee that the new pod will be attached to the same PV, and hence you lose the data.
    Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
