I want to deploy a MongoDB chart using Helm on my local dev environment.
I found all the possible values on Bitnami, but it is overwhelming!
How can I configure something like this:
template:
  metadata:
    labels:
      app: mongodb
  spec:
    containers:
      - name: mongodb
        image: mongo
        ports:
          - containerPort: 27017
        volumeMounts:
          - name: mongo-data
            mountPath: /data/db/
    volumes:
      - name: mongo-data
        hostPath:
          path: /app/db
using a values.yaml configuration file?
2 Answers
You first need to create a PersistentVolumeClaim; a PersistentVolume will only be created and bound when it is actually needed by a specific deployment (here, your MongoDB Helm chart).
For example (or check https://kubernetes.io/docs/concepts/storage/persistent-volumes/):
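A minimal claim for a local dev cluster might look like the following; the claim name, access mode, and requested size are placeholders you should adapt:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi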
Check that your volume was created correctly:
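For instance, these standard kubectl commands list the claim and the volume it is bound to:

kubectl get pvc
kubectl get pv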
Now, even if you delete your MongoDB deployment, your volume will persist and can be reused by any other deployment.
The best approach here is to deploy something like the Bitnami MongoDB chart that you reference in the question with its default options.
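For a local dev environment that can be as simple as the following (the release name my-mongo is arbitrary):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mongo bitnami/mongodb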
The chart will create a PersistentVolumeClaim for you, and a standard piece of Kubernetes called the persistent volume provisioner will create the corresponding PersistentVolume. The actual storage will be "somewhere inside Kubernetes", but for database storage there’s little you can do with the actual files directly, so this isn’t usually a practical problem.
If you can’t use this approach, then you need to manually create the storage and then tell the chart to use it. You need to create a matched pair of a PersistentVolumeClaim and a PersistentVolume, for example as shown in the start of Kubernetes Persistent Volume and hostpath, and manually submit these using:
kubectl apply -f pv-pvc.yaml
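A hedged sketch of what pv-pvc.yaml could contain, assuming a single-node dev cluster and the /app/db host path from your question (the names, storage class, and size are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data-pv
spec:
  storageClassName: manual   # keeps the claim below from being dynamically provisioned
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /app/db            # fixed path on whichever node the pod lands on
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi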
You then need to tell the Bitnami chart about that PersistentVolumeClaim; a sketch of the values override is at the end of this answer.

I’d avoid this sequence in a non-development environment. The cluster should normally have a persistent volume provisioner set up, so you shouldn’t need to manually create PersistentVolumes, and host-path volumes are unreliable in multi-node environments (they refer to a fixed path on whichever node the pod happens to be running on, so data can get misplaced if a pod is rescheduled on a different node).
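As referenced above, a minimal values override could look like the following. This assumes the chart exposes a persistence.existingClaim setting (check the chart’s documentation for the version you install) and reuses the claim name from the sketch above:

persistence:
  existingClaim: mongo-data-pvc

You would then install with something like helm install my-mongo bitnami/mongodb -f values.yaml.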