I am looking to deploy a Redis instance to a Kubernetes cluster.
On the official Kubernetes website, they use the following YAML configuration as an example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
```
Is 100m CPU and 100Mi of memory a good figure to aim for when using a Redis instance for caching purposes? What should the limit be set to?
Answers
It depends on your scenario. If Redis is used heavily and has to cache a lot of data, 100Mi of memory may well be too little.
You can use monitoring tools to see what it really needs. kubectl top pod shows you the current usage of the pod; give it a stress test and you will see how much it needs. Better still, use something like Prometheus or another monitoring tool, which can also give you stats about evictions and other behaviour.
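For example (assuming metrics-server is installed in the cluster; the label selector matches the app: redis label from the Deployment above):

```shell
# Current CPU/memory usage of the Redis pod(s); requires metrics-server.
kubectl top pod -l app=redis
# Run it again during a stress test to see how far usage climbs.
```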
Further, I would go for the Redis Helm chart from Bitnami. It allows you to configure Redis in an easy way by setting values or supplying a custom values file, and the charts are production proven.
https://github.com/bitnami/charts/tree/master/bitnami/redis
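As a sketch, a minimal custom values file for that chart could pin the Redis resources like this (the key names follow the chart's master.resources convention and the numbers are placeholders to tune, so verify them against the chart's own values.yaml):

```yaml
# Hypothetical values.yaml for the Bitnami Redis chart.
architecture: standalone   # single master, no replicas, for a simple cache
master:
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:                # placeholders; adjust after monitoring real usage
      cpu: 500m
      memory: 256Mi
```

You would then install it with something like helm install my-redis bitnami/redis -f values.yaml, where my-redis is an arbitrary release name.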
There is no single right answer to your question; everything depends on your needs in this scenario.
You have to consider the resources on your node, your resource requests/limits, whether you will use HPA or VPA, whether it is a local or a cloud environment, how many pods will be deployed on this node, what you will cache, etc.
Background
In your Deployment you set only requests, so the Redis pod will always have at least CPU: 100m and RAM: 100Mi allocated. As no limits were specified, it might use more and more resources until errors occur due to lack of resources (for example, Kubernetes can terminate pods, starting with those that have no requests specified). Please keep in mind that other pods deployed in the cluster, like kube-proxy or kube-dns, also have their own specific requests/limits.
To check the current resource usage of your nodes you can use the command kubectl top nodes (it requires metrics-server).
If you describe a node using kubectl describe node <NodeName>, you can check the node's total resources in the Capacity section and the portion available to pods in Allocatable (Capacity minus Allocatable is reserved for system components). The Allocated resources section of the same output shows how much has already been requested by running pods, so Allocatable minus those allocated requests is roughly what is free at this specific moment (it changes as pods with requests/limits come and go). In addition, you can check an article about Redis requests/limits.
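Putting those commands together (here <NodeName> is a placeholder for one of your own nodes):

```shell
# List the nodes in the cluster, then inspect one of them.
kubectl get nodes
kubectl describe node <NodeName>
# The relevant sections of the describe output are:
#   Capacity:             total resources of the node
#   Allocatable:          what is left for pods after system reservations
#   Allocated resources:  sum of requests/limits of pods already scheduled here
```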
Conclusion
The Redis pod can be deployed with those requests: 100m and 100Mi. As no limits are set, the Redis pod might use more and more resources, which can lead to the termination of other pods. If this node will be dedicated only to the Redis pod, you can let it use close to the node's Allocatable value. A reasonable start is to set the limit to half of the node's resources and then adjust it depending on the results: if the pod keeps reaching its limits, raise them; if it stays well below, lower them.
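As a starting sketch, the container's resources section could then look like this. The limit values are placeholders to adjust against your monitoring, and the memory limit should stay in line with Redis' own maxmemory setting so Redis evicts keys before Kubernetes OOM-kills the pod:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: 500m      # placeholder: raise it if the pod gets CPU-throttled
    memory: 256Mi  # placeholder: keep it above Redis' maxmemory
```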