
Sorry for the long post, but I needed to describe this in detail.

I am building a very simple project with an index.html file. My goal with this project is to improve my Kubernetes skills, so I created a cluster and a VM on Google Cloud Platform. I used Kubernetes Engine to create the cluster, and to update the cluster automatically I use a .gitlab-ci.yml pipeline.

All of the pipelines run with no problems. The first time I pushed my changes and the pipeline ran, everything was deployed successfully; I just had to create a firewall rule in GCP with this command: gcloud compute firewall-rules create cd-cd-kube --allow tcp:30005
When I ran it, I opened my external IP on port 30005 and everything worked fine. My problem appeared when I made a change to the HTML file and pushed it again. All of the pipelines ran fine and the file in the cluster changed too. Everything was supposed to be working, but it wasn't: when I tried to access the external IP on that port, the change hadn't been applied. I also tried to run: kubectl apply -f deployment.yml
But it said the resources were unchanged, with nothing to apply. I also ran the deployment locally with minikube and everything worked fine. Apparently it is all working, but the page just doesn't update. How can I solve this?

HTML file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>App version 2.1</title>
</head>
<body>
    <h1>App 2.1</h1>
</body>
</html>

.gitlab-ci.yml file:

stages:
  - build
  - deploy_gcp

build_images: 
  stage: build
  image: docker:20.10.16

  services:
    - docker:20.10.16-dind
  
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS 

  script:
    - docker build -t guirms/app-cicd-dio:1.0 app/.
    - docker push guirms/app-cicd-dio:1.0

deploy_gcp: 
  stage: deploy_gcp

  before_script:
    - chmod 400 $SSH_KEY  
    
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY gcp@$SSH_SERVER "sudo rm -Rf ./ci-cd-kubernetes/ && sudo git clone https://gitlab.com/guirms/ci-cd-kubernetes.git && cd ci-cd-kubernetes && sudo chmod +x ./script.sh && ./script.sh" 

deployment.yml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: guirms/app-cicd-dio:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30005
  selector:
    app: app

script.sh file:

#!/bin/bash

kubectl apply -f deployment.yml

dockerfile file:

FROM httpd:latest

WORKDIR /usr/local/apache2/htdocs/

COPY index.html /usr/local/apache2/htdocs/

EXPOSE 80

2 Answers


  1. The problem here is that you are building the image with the same tag every time.

    guirms/app-cicd-dio:1.0
    

    Since the deployment YAML always references the same image tag, there is no change in the manifest: even if you deploy again, kubectl sees an identical deployment YAML and applies nothing.

    To fetch the latest image you have to restart the pods. Since you set "imagePullPolicy: Always", Kubernetes will pull the image again on restart instead of using the cached image on the node, so that part is good.

    You need to run the following to restart the pods:

    kubectl rollout restart deployment app
    

    This restarts the pods, which fetch the new image and create new containers.

    Change your script.sh to this:

    #!/bin/bash

    # If the deployment already exists, restart it so the pods pull the
    # new image; otherwise create it from the manifest.
    if kubectl get deployment app > /dev/null 2>&1; then
      kubectl rollout restart deployment app
    else
      kubectl apply -f deployment.yml
    fi
    

    This creates the deployment the first time; on subsequent deployments it restarts the pods, and because the pull policy is "imagePullPolicy: Always", every restart fetches the image again.

    General suggestion: if you want to stick to a fixed tag, it is better to tag your image "latest" instead of always reusing "1.0".

    If you want to maintain versions, you should increment the version with each build: "1.0.0", "1.0.1", and so on.
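
    A hedged sketch of that versioning approach (assumptions: this runs in GitLab CI, which provides the predefined variable CI_COMMIT_SHORT_SHA; the image and deployment names are taken from the question). Tagging each build with the commit SHA means the pod template genuinely changes between deploys, so Kubernetes rolls the pods on its own:

    ```shell
    # Derive a unique image tag per commit; fall back to "dev" outside CI.
    TAG="${CI_COMMIT_SHORT_SHA:-dev}"
    IMAGE="guirms/app-cicd-dio:${TAG}"

    # The build job would then use the unique tag:
    #   docker build -t "$IMAGE" app/.
    #   docker push "$IMAGE"
    # And the deploy step would point the Deployment at it:
    #   kubectl set image deployment/app app="$IMAGE"
    echo "$IMAGE"
    ```

    Because "kubectl set image" updates the pod template, it triggers a rolling update by itself, so no separate rollout restart is needed.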

  2. The problem is caching-related; try reading up on how Kubernetes caches images when a fixed tag is reused.
