
I am looking to deploy a Redis instance to Kubernetes cluster.

On the official Kubernetes website here, the following YAML configuration is used as an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Is 100m CPU and 100Mi of memory a good figure to aim for when using a Redis instance for caching purposes? What should the limit be set to?

2 Answers


  1. It depends on your scenario. If Redis is used heavily and needs to cache a lot, 100Mi of memory may well be too little.

    You can use some tools to monitor your Redis instance and see what it really needs.

    kubectl top pods
    

    shows you the current usage of the pod. Give it a stress test and you will see how much it needs. It is even better if you have something like Prometheus or another monitoring tool, which can also give you stats about evictions and other behaviour.

    Furthermore, I would go for the Redis Helm chart from Bitnami. It lets you configure your Redis instance easily by setting values or supplying a custom values file, and it is production proven.

    https://github.com/bitnami/charts/tree/master/bitnami/redis
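
    For instance, a minimal install could look like the following (a sketch assuming Helm 3; my-redis and my-values.yaml are placeholder names, and the available chart values depend on the chart version):

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    helm install my-redis bitnami/redis -f my-values.yaml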

  2. There is no single correct answer to your question. Everything depends on your needs in this scenario.

    You have to consider the resources on your node, your resource requests/limits, whether you will use HPA or VPA, whether it is a local or a cloud environment, how many pods will be deployed on this node, what you will cache, etc.

    Background

    In your Deployment you set only requests, so the Redis pod will always have at least CPU: 100m and RAM: 100Mi allocated. As no limits were specified, it might use more and more resources until errors occur due to a lack of resources (for example, Kubernetes may terminate pods that have no requests specified).
    Please keep in mind that other pods deployed by Kubernetes, such as kube-proxy or kube-dns, also have their own requests/limits.

    To check the current resource usage of your nodes, you can use this command:

    $ kubectl top nodes
    NAME                                       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    gke-cluster-1-default-pool-55c97a92-m2bb   61m          6%     718Mi           25%
    gke-cluster-1-default-pool-55c97a92-s907   108m         11%    748Mi           26%
    

    If you describe the node using kubectl describe node <NodeName>, you can check the node's total resources in the Capacity section and the portion available to pods (total minus what Kubernetes reserves for system components) in Allocatable. The Allocated resources section shows how much of that is already requested by pods, so Allocatable minus the allocated requests is roughly what is free at this specific moment (it changes as requests/limits change).

    In addition, you can check this article about Redis requests/limits.

    Conclusion

    The Redis pod can be deployed with those requests: 100m and 100Mi. As no limits are set, the Redis pod might use more and more resources, which can lead to the termination of other pods. If this node will be dedicated only to the Redis pod, you can use the maximum available resources, i.e. the value of Allocatable. For a start, you can set the limit to half of the node's resources and later adjust it depending on the results: if the pod keeps hitting its limits, raise them; if it stays well below, lower them. A sketch of what this could look like is shown below.
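
    As a rough sketch, assuming you settled on limits of 500m CPU and 1Gi of memory (placeholder values to be tuned against your monitoring data), the container spec from the question could be extended like this:

    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 500m
        memory: 1Gi

    When Redis is used purely as a cache, it is also common to set Redis' own maxmemory somewhat below the container memory limit, so that Redis evicts keys instead of being OOM-killed by Kubernetes.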
