
I’m running the redis chart (https://artifacthub.io/packages/helm/bitnami/redis/15.7.0) as a dependency of a custom chart. I enabled sentinel, so each pod runs two containers (redis and sentinel).
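For context, the relevant part of the values looks roughly like this (the redis dependency alias is an assumption based on my Chart.yaml, and in chart v15.x the replica count lives under replica.replicaCount):

redis:
  sentinel:
    enabled: true
  replica:
    replicaCount: 4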
Apart from that I’m using the default values for the chart, with 4 replicas. The cluster has 10 nodes, and I noticed that three of the redis-sentinel pods run on a single node while only one runs on another node:

myapp-redis-node-0    2/2    Running    8d     ip    k8s-appname-ctw9v
myapp-redis-node-1    2/2    Running    34d    ip    k8s-appname-ctw9v
myapp-redis-node-2    2/2    Running    34d    ip    k8s-appname-ctw9v
myapp-redis-node-3    2/2    Running    34d    ip    k8s-appname-crm3k

This is the affinity section for the pods:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: node
              app.kubernetes.io/instance: myapp
              app.kubernetes.io/name: redis
          namespaces:
          - test
          topologyKey: kubernetes.io/hostname
        weight: 1

How can I get each pod scheduled on a different node?

Thanks!

3 Answers


  1. Chosen as BEST ANSWER

    Thank you all for your answers. I finally solved it with:

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/component: node
                app.kubernetes.io/instance: myapp
                app.kubernetes.io/name: redis
            namespaces:
            - test
            topologyKey: kubernetes.io/hostname
    

    BTW, the chart generates this automatically when you set the following under both the master and replica sections of the values (I'm using v15.7.0):

    podAntiAffinityPreset: hard
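
    Since the chart is a dependency of the custom chart, these end up nested under the dependency alias, roughly like this (the redis alias is again an assumption; v15.x uses master and replica sections):

    redis:
      master:
        podAntiAffinityPreset: hard
      replica:
        podAntiAffinityPreset: hard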
    

  2. You need to update the podAntiAffinity section of the pod template to add a certain key/value pair. This ensures that if a node already runs a pod with that key/value pair, the scheduler will attempt to place the new pod on another node that doesn’t have such a pod. I say attempt because preferredDuringSchedulingIgnoredDuringExecution rules are soft: if no suitable node is available, the pod will still be scheduled on a node that violates the anti-affinity. Details are in the Kubernetes documentation on inter-pod affinity and anti-affinity: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity

    Try patching the template as:

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: <ADD_LABEL_HERE>
                  operator: In
                  values:
                  - <ADD_VALUE_HERE>
              # topologyKey is required; kubernetes.io/hostname spreads pods across nodes
              topologyKey: kubernetes.io/hostname
    
  3. There’s a dedicated resource for this kind of availability concern, called a PodDisruptionBudget:

    https://kubernetes.io/docs/tasks/run-application/configure-pdb/

    On its own it doesn’t spread pods across nodes, but it guarantees that a minimum number of pods stays available during voluntary disruptions, which helps when you drain or replace a node.
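
    A minimal sketch for the pods in the question (the PDB name and the minAvailable value are illustrative assumptions; the labels come from the affinity section above):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: myapp-redis-pdb   # hypothetical name
      namespace: test
    spec:
      minAvailable: 3         # with 4 replicas, allow at most one voluntary disruption at a time
      selector:
        matchLabels:
          app.kubernetes.io/name: redis
          app.kubernetes.io/instance: myapp
          app.kubernetes.io/component: node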
