
I have run into a problem like this:

First, I use Helm to create a release named nginx:

helm upgrade --install --namespace test nginx bitnami/nginx --debug

LAST DEPLOYED: Wed Jul 22 15:17:50 2020
NAMESPACE: test
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                DATA  AGE
nginx-server-block  1     2s

==> v1/Deployment
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
nginx  0/1    1           0          2s

==> v1/Pod(related)
NAME                    READY  STATUS             RESTARTS  AGE
nginx-6bcbfcd548-kdf4x  0/1    ContainerCreating  0         1s

==> v1/Service
NAME   TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)                     AGE
nginx  LoadBalancer  10.219.6.148  <pending>    80:30811/TCP,443:31260/TCP  2s


NOTES:
Get the NGINX URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace test -w nginx'

  export SERVICE_IP=$(kubectl get svc --namespace test nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo "NGINX URL: http://$SERVICE_IP/"

Kubernetes creates a Deployment with only 1 pod:

# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-6.0.2
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Tiller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: nginx
  replicas: 1
  ...

Second, I use kubectl to edit the Deployment and scale it up to 2 pods:

kubectl -n test  edit deployment nginx

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-07-22T08:17:51Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-6.0.2
  name: nginx
  namespace: test
  resourceVersion: "128636260"
  selfLink: /apis/extensions/v1beta1/namespaces/test/deployments/nginx
  uid: d63b0f05-cbf3-11ea-99d5-42010a8a00f1
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  ...

After saving, I check the status and see that the Deployment has scaled up to 2 pods:

kubectl -n test get deployment

NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           7m50s
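
For reference, the same out-of-band change could also be made without the interactive editor, e.g. with kubectl scale:

kubectl -n test scale deployment nginx --replicas=2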

Finally, I use Helm to upgrade the release. I expected Helm to override the Deployment back to 1 pod, as in the first step, but instead the Deployment keeps the value replicas: 2 no matter what number I set in the chart's values.yaml.
I have also tried the --recreate-pods option of the helm command:

helm upgrade --install --namespace test nginx bitnami/nginx --debug --recreate-pods
Release "nginx" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Jul 22 15:31:24 2020
NAMESPACE: test
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                DATA  AGE
nginx-server-block  1     13m

==> v1/Deployment
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
nginx  0/2    2           0          13m

==> v1/Pod(related)
NAME                    READY  STATUS             RESTARTS  AGE
nginx-6bcbfcd548-b4bfs  0/1    ContainerCreating  0         1s
nginx-6bcbfcd548-bzhf2  0/1    ContainerCreating  0         1s
nginx-6bcbfcd548-kdf4x  0/1    Terminating        0         13m
nginx-6bcbfcd548-xfxbv  1/1    Terminating        0         6m16s

==> v1/Service
NAME   TYPE          CLUSTER-IP    EXTERNAL-IP    PORT(S)                     AGE
nginx  LoadBalancer  10.219.6.148  34.82.120.134  80:30811/TCP,443:31260/TCP  13m


NOTES:
Get the NGINX URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace test -w nginx'

  export SERVICE_IP=$(kubectl get svc --namespace test nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo "NGINX URL: http://$SERVICE_IP/"

Result: after I edit the replicas of the Deployment manually, I cannot use Helm to override this replicas value. I can still change the image and other fields; only replicas will not change.
I ran with --debug and Helm still renders the Deployment with replicas: 1:

# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-6.0.2
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Tiller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        helm.sh/chart: nginx-6.0.2
        app.kubernetes.io/instance: nginx
        app.kubernetes.io/managed-by: Tiller
    spec:      
      containers:
        - name: nginx
          image: docker.io/bitnami/nginx:1.19.1-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: http
              containerPort: 8080
            
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            tcpSocket:
              port: http
            timeoutSeconds: 5
            
          readinessProbe:
            initialDelaySeconds: 5
            periodSeconds: 5
            tcpSocket:
              port: http
            timeoutSeconds: 3
            
          resources:
            limits: {}
            requests: {}
            
          volumeMounts:
            - name: nginx-server-block-paths
              mountPath: /opt/bitnami/nginx/conf/server_blocks
      volumes:
        - name: nginx-server-block-paths
          configMap:
            name: nginx-server-block
            items:
              - key: server-blocks-paths.conf
                path: server-blocks-paths.conf

But the Kubernetes Deployment keeps the manually edited value, replicas: 2.

As far as I know, the output of the helm command is the Kubernetes YAML that gets applied, so why can I not use Helm to override the specific replicas value in this case?

Thanks in advance!

P.S.: I just want to understand what the behavior is here. Thanks.

Helm version

Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}

4 Answers


  1. Please use the replicaCount field from helm to manage replicas.

    I see it as an option here.
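
    For example, a minimal sketch reusing the release name and namespace from the question (replicaCount is the value the bitnami/nginx chart maps to spec.replicas):

    helm upgrade --install --namespace test nginx bitnami/nginx --set replicaCount=2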

  2. Let me know which Helm version you are using. There is also a known bug where replicas are not upgraded; check the link:
    https://github.com/helm/helm/issues/4654

  3. Please take a look at Supported Version Skew.

    When a new version of Helm is released, it is compiled against a particular minor version of Kubernetes. For example, Helm 3.0.0 interacts with Kubernetes using the Kubernetes 1.16.2 client, so it is compatible with Kubernetes 1.16.

    As of Helm 3, Helm is assumed to be compatible with n-3 versions of Kubernetes it was compiled against. Due to Kubernetes’ changes between minor versions, Helm 2’s support policy is slightly stricter, assuming to be compatible with n-1 versions of Kubernetes.

    For example, if you are using a version of Helm 3 that was compiled against the Kubernetes 1.17 client APIs, then it should be safe to use with Kubernetes 1.17, 1.16, 1.15, and 1.14. If you are using a version of Helm 2 that was compiled against the Kubernetes 1.16 client APIs, then it should be safe to use with Kubernetes 1.16 and 1.15.

    It is not recommended to use Helm with a version of Kubernetes that is newer than the version it was compiled against, as Helm does not make any forward compatibility guarantees.

    If you choose to use Helm with a version of Kubernetes that it does not support, you are using it at your own risk.

    I have tested this behavior using Kubernetes 1.17.9 with Helm 3.2, and all of the approaches to updating the deployment mentioned below work as expected.

    helm upgrade --install nginx bitnami/nginx
    helm fetch bitnami/nginx --untar    # then change the replicaCount parameter in ./nginx/values.yaml and save it
    helm upgrade --install nginx ./nginx -f ./nginx/values.yaml
    helm upgrade --install nginx ./nginx -f ./nginx/values.yaml --set replicaCount=2

    Note: Values Files

    values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
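
    A quick sketch of that precedence, assuming the untarred ./nginx chart from above with its edited values.yaml:

    # the values file asks for 2 replicas, but --set wins, so the release is rendered with 3
    helm upgrade --install nginx ./nginx -f ./nginx/values.yaml --set replicaCount=3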

    So my advice is to keep your tools up to date.

    Note:
    Helm 2 support plan.

    For Helm 2, we will continue to accept bug fixes and fix any security issues that arise, but no new features will be accepted. All feature development will be moved over to Helm 3.

    6 months after Helm 3’s public release, Helm 2 will stop accepting bug fixes. Only security issues will be accepted.

    12 months after Helm 3’s public release, support for Helm 2 will formally end.

  4. Follow the official documentation from Helm: Helm | Docs

    Helm 2 used a two-way strategic merge patch. During an upgrade, it compared the most recent chart’s manifest against the proposed chart’s manifest (the one supplied during helm upgrade). It compared the differences between these two charts to determine what changes needed to be applied to the resources in Kubernetes. If changes were applied to the cluster out-of-band (such as during a kubectl edit), those changes were not considered. This resulted in resources being unable to roll back to its previous state: because Helm only considered the last applied chart’s manifest as its current state, if there were no changes in the chart’s state, the live state was left unchanged.

    This is improved in Helm v3: because Helm v3 removed Tiller and performs a three-way merge that takes the live state into account, your values are applied exactly to the Kubernetes resources, and the values in Helm and in Kubernetes stay consistent.

    ==> The result is that you will not hit this problem again if you use Helm version 3.
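
    To see the divergence that Helm 2's two-way merge ignores, you can compare Helm's record of the last rendered manifest with the live object (exact output will vary, but roughly):

    # what Helm 2 diffs against on upgrade -- the manifest it last applied
    helm get manifest nginx | grep 'replicas:'
    #   replicas: 1
    # the live Deployment, including the out-of-band kubectl edit
    kubectl -n test get deployment nginx -o jsonpath='{.spec.replicas}'
    #   2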
