
I want multiple containers in the deployment template, so I am iterating over the entries in values.yaml and have put the container configuration inside a range loop.

values.yaml

images:
  image1:
    name: "hjdsh"
    repository: nginx
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
    initialDelaySeconds: 5
    resources:
      requests:
        memory: "128Mi"
        cpu: "128m"
      limits:
        memory: "128Mi"
        cpu: "128m"
  image2:
    name: "kjbjk"
    repository: nginx
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
    initialDelaySeconds: 5
    resources:
      requests:
        memory: "128Mi"
        cpu: "128m"
      limits:
        memory: "128Mi"
        cpu: "128m"

_deployment.yaml


{{- define "common-helm.deployment.tpl" -}}
{{- $requiredMsg := include "common.default-check-required-msg" . -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "common.name"  .}}
  labels:
    {{- include "common.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas:  {{ .Values.deploymentReplicas }}
  {{- end }}
  minReadySeconds: {{ .Values.deployment.minReadySeconds | default 0 }}
  strategy: {}
  selector:
    matchLabels:
      app: {{- include "common.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{- include "common.labels" . | nindent 8 }}
    spec:
      {{- if .Values.gitlab.auth.enabled }}
      imagePullSecrets:
        - name: {{ include "common.fullname" . }}-gitlab-auth
      {{- end }}
      serviceAccountName: {{ .Values.serviceAccount.name | quote }}
      securityContext:
        fsGroup: {{ .Values.deployment.runAsUser | default 1000 }}
        runAsUser: {{ .Values.deployment.runAsUser | default 1000 }}
        runAsNonRoot: {{ .Values.deployment.runAsNonRoot | default true }}
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - {{ include "common.fullname" . }}
                topologyKey: "kubernetes.io/hostname"
        containers:

{{- range $key, $value := .Values.images}}
          -name: {{ $value.name }}_{{ $key }}
          securityContext:
            runAsNonRoot: true
            runAsUser: 1200
          image: {{ $value.repository }}
          imagePullPolicy: ""
          ports:
            -containerPort: 8080
          resources:
            requests:
              memory: {{ $value.resources.requests.memory }}
              cpu: {{ $value.resources.requests.cpu }}
            limits:
              memory: {{ $value.resources.limits.memory }}
              cpu: {{ $value.resources.limits.cpu }}
          readinessProbe:
            httpGet:
              path: /v1/healthCheck
              port: 8080
          initialDelaySeconds: ""
          periodSeconds: 10

  {{- end }}
{{- end  }}
{{- define "common-helm.deployment" -}}
{{- include "common-helm.util.merge" (append . "common-helm.deployment.tpl") -}}
{{- end -}}

Outcome: only a container for the last image is created.


I expected it to create two containers, one for each image.

2 Answers


  1. I would advise you to use an array of images instead of a map of objects, and to follow the Helm documentation example closely: https://helm.sh/docs/chart_template_guide/control_structures/#looping-with-the-range-action

    images:
    - name: wqqwd1
      container: nginx
    - name: wqqwd2
      container: nginx
    
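    With the array form, the loop no longer needs the map key. A minimal sketch of the matching template (field names assumed from the values above; this is an illustration, not the asker's actual template):

    containers:
    {{- range .Values.images }}
    - name: {{ .name }}
      image: {{ .container }}
    {{- end }}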
  2. It looks like _deployment.yaml contains formatting and indentation issues (for example, the missing space after the dash in -name: and the indentation of the list items inside the range loop). You need to fix those first. Also, follow the official Helm debugging instructions to verify the rendered output. For debugging, you may also use the print function.
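    For example, rendering the chart locally shows exactly what the loop produces (standard Helm commands; the chart path is assumed to be the current directory):

    # Render all templates without installing anything
    helm template . --debug

    # Or simulate an install and inspect the generated manifests
    helm install myrelease . --dry-run --debug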

    Here’s a small example tested online and it works fine:

    template.yaml

    images:
    {{- range $k, $v := .Values.images }}
    - name: {{ $v.name }}_{{ $k }}
      image: {{ $v.repository }}
    {{- end }}
    

    values.yaml

    images:
      image1:
        name: i1
        repository: r1
      image2:
        name: i2
        repository: r2
    

    Output

    images:
    - name: i1_image1
      image: r1
    - name: i2_image2
      image: r2
    

    UPDATE

    After reducing the updated config (as text) to the bare minimum, it works fine. See it online here.

    template.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: name
    spec:
      {{- if not .Values.autoscaling.enabled }}
      replicas: {{ .Values.deploymentReplicas }}
      {{- end }}
      minReadySeconds: {{ .Values.deployment.minReadySeconds | default 0 }}
      strategy: {}
      selector:
        matchLabels:
      template:
        metadata:
          labels:
        spec:
          {{- if .Values.gitlab.auth.enabled }}
          imagePullSecrets:
          {{- end }}
          serviceAccountName: {{ .Values.serviceAccount.name | quote }}
          securityContext:
            fsGroup: {{ .Values.deployment.runAsUser | default 1000 }}
            runAsUser: {{ .Values.deployment.runAsUser | default 1000 }}
            runAsNonRoot: {{ .Values.deployment.runAsNonRoot | default true }}
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: app
                          operator: In
                          values:
                    topologyKey: "kubernetes.io/hostname"
            containers:
            {{- range $key, $value := .Values.images }}
            - name: {{ $value.name }}_{{ $key }}
              securityContext:
                runAsNonRoot: true
                runAsUser: 1200
              image: {{ $value.repository }}
              imagePullPolicy: ""
              ports:
                - containerPort: 8080
              resources: {{ toYaml $value.resources | nindent 10 }}
              readinessProbe:
                httpGet:
                  path: /v1/healthCheck
                  port: 8080
              initialDelaySeconds: ""
              periodSeconds: 10
            {{- end }}
    

    values.yaml

    images:
      image1:
        name: "hjdsh"
        repository: nginx
        pullPolicy: IfNotPresent
        tag: ""
        initialDelaySeconds: 5
        resources:
          requests:
            memory: "128Mi"
            cpu: "128m"
          limits:
            memory: "128Mi"
            cpu: "128m"
      image2:
        name: "kjbjk"
        repository: nginx
        pullPolicy: IfNotPresent
        tag: ""
        initialDelaySeconds: 5
        resources:
          requests:
            memory: "128Mi"
            cpu: "128m"
          limits:
            memory: "128Mi"
            cpu: "128m"
    

    Output

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: name
    spec:
      replicas: <no value>
      minReadySeconds: 0
      strategy: {}
      selector:
        matchLabels:
      template:
        metadata:
          labels:
        spec:
          serviceAccountName: 
          securityContext:
            fsGroup: 1000
            runAsUser: 1000
            runAsNonRoot: true
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: app
                          operator: In
                          values:
                    topologyKey: "kubernetes.io/hostname"
            containers:
            - name: hjdsh_image1
              securityContext:
                runAsNonRoot: true
                runAsUser: 1200
              image: nginx
              imagePullPolicy: ""
              ports:
                - containerPort: 8080
              resources: 
              limits:
                cpu: 128m
                memory: 128Mi
              requests:
                cpu: 128m
                memory: 128Mi
              readinessProbe:
                httpGet:
                  path: /v1/healthCheck
                  port: 8080
              initialDelaySeconds: ""
              periodSeconds: 10
            - name: kjbjk_image2
              securityContext:
                runAsNonRoot: true
                runAsUser: 1200
              image: nginx
              imagePullPolicy: ""
              ports:
                - containerPort: 8080
              resources: 
              limits:
                cpu: 128m
                memory: 128Mi
              requests:
                cpu: 128m
                memory: 128Mi
              readinessProbe:
                httpGet:
                  path: /v1/healthCheck
                  port: 8080
              initialDelaySeconds: ""
              periodSeconds: 10
    

    Your issue might be coming from somewhere else. You need to debug the rest of the template and fix any intermediate issues that are causing this. Since the template also relies on several include helpers, it cannot be reproduced in its entirety without them.
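    For reference, here is a sketch of the container loop with the formatting issues fixed: a space after each dash, consistent list-item indentation, and a nindent value that places the toYaml output under the resources: key rather than beside it (compare the misaligned limits/requests in the output above). The hyphen in the name is deliberate, since Kubernetes container names must be valid RFC 1123 labels and cannot contain underscores. This is an assumption-laden illustration, not the asker's full template:

    containers:
    {{- range $key, $value := .Values.images }}
      - name: {{ $value.name }}-{{ $key }}
        image: {{ $value.repository }}
        imagePullPolicy: {{ $value.pullPolicy }}
        resources: {{- toYaml $value.resources | nindent 10 }}
    {{- end }}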
