
We’re using GitLab for CI/CD. I’ll include the script from our GitLab CI/CD file:

services:
  - docker:19.03.11-dind
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage" || ($CI_COMMIT_BRANCH =~ /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i)
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage" || ($CI_COMMIT_BRANCH !~ /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i)
      when: never
stages:
  - build
  - Publish
  - deploy
cache:
  paths:
    - .m2/repository
    - target

build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script: 
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar

docker_build_dev:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind      
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script: 
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - developer

docker_build_stage:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind   
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script: 
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG   
  only:
    - stage

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: development  
  before_script:
    - apt-get update
    - apt-get install -y gettext-base
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml     
    - mkdir -p  $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config 
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f - -n ${KUBE_NAMESPACE_DEV}
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - developer

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: stage
  before_script:
    - apt-get update
    - apt-get install -y gettext-base
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml    
    - mkdir -p  $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config 
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f - -n ${KUBE_NAMESPACE_STAGE}
  only:
    - stage

As the script shows, we merged the pipelines so that the stage and development environments don’t conflict during deployment. Previously we had a separate Dockerfile for each environment (stage and developer). Now I have merged the Dockerfile and the Kubernetes YAML file as well, but the container fails to start: after the pipeline succeeds, Kubernetes shows the warning "back-off restarting failed container". I don’t know how to clear this warning. I’ll enclose the merged Dockerfile and YAML file for your reference.

Kubernetes YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: patient-app
  labels:
    app: patient-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: patient-app
  template:
    metadata:
      labels:
        app: patient-app
    spec:
      containers:
      - name: patient-app
        image: registry.gitlab.com/stella-center/backend-services/patient-service:<VERSION>
        imagePullPolicy: Always
        ports:
          - containerPort: 8094
        env:
        - name: ENV_VAR_NAME
          value: "${ENV_VAR_NAME}"          
      imagePullSecrets:
        - name: gitlab-registry-token-auth

---

apiVersion: v1
kind: Service
metadata:
  name: patient-service
spec:
  type: NodePort
  selector:
    app: patient-app
  ports:
  - port: 8094
    targetPort: 8094

Dockerfile

FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/patient-service-*.jar /app/patient-service.jar
ENV PORT 8094
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active=$ENV_VAR_NAME","-jar","/app/patient-service.jar"]

Previously, instead of the last line of this Dockerfile, each environment had its own ENTRYPOINT:

 ENTRYPOINT ["java","-Dspring.profiles.active=development","-jar","/app/patient-service.jar"]  # developer Dockerfile
 ENTRYPOINT ["java","-Dspring.profiles.active=stage","-jar","/app/patient-service.jar"]        # stage Dockerfile

At that time it was working fine and I wasn’t facing any issues on Kubernetes. I just added an environment variable to select development or stage. I don’t know why the warning is happening. Please help me sort this out. Thanks in advance.

Output of kubectl describe pods:

Name:         patient-app-6cd8c88d6-s7ldt
Namespace:    stellacenter-dev
Priority:     0
Node:         ip-192-168-49-35.us-east-2.compute.internal/192.168.49.35
Start Time:   Wed, 25 May 2022 20:09:23 +0530
Labels:       app=patient-app
              pod-template-hash=6cd8c88d6
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           192.168.50.146
IPs:
  IP:  192.168.50.146
Controlled By:  ReplicaSet/patient-app-6cd8c88d6
Containers:
  patient-app:
    Container ID:   docker://2d3431a015a40f551e51285fa23e1d39ad5b257bfd6ba75c3972f422b94b12be
    Image:          registry.gitlab.com/stella-center/backend-services/patient-service:96e21d80
    Image ID:       docker-pullable://registry.gitlab.com/stella-center/backend-services/patient-service@sha256:3f9774efe205c081de4df5b6ee22cba9940f974311b0942a8473ee02b9310b43
    Port:           8094/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 25 May 2022 20:09:24 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxbzc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-sxbzc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

2 Answers


  1. Your Dockerfile uses the exec form of ENTRYPOINT. This form doesn’t expand environment variables: Spring literally receives the string $ENV_VAR_NAME as the profile name, and fails as a result.
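
    For illustration only (not the recommended fix), the shell form of ENTRYPOINT would have expanded the variable, because Docker runs it through /bin/sh -c:

    # Shell form: the shell expands $ENV_VAR_NAME when the container starts
    ENTRYPOINT java -Dspring.profiles.active=$ENV_VAR_NAME -jar /app/patient-service.jar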

    Spring knows how to set properties from environment variables, though. Rather than building that setting into the Dockerfile, you can use an environment variable to set the profile name at deploy time.

    # Dockerfile: do not set `-Dspring.profiles.active`
    ENTRYPOINT ["java", "-jar", "/app/patient-service.jar"]
    
    # Deployment YAML: do set `$SPRING_PROFILES_ACTIVE`
    env:
      - name: SPRING_PROFILES_ACTIVE
        value: "${ENV_VAR_NAME}" # Helm: {{ quote .Values.environment }}
    

    However, with this approach you still need to keep deployment-specific settings in your src/main/resources/application-*.yml files, and any change means rebuilding the jar, rebuilding the Docker image, and redeploying. That doesn’t make sense for most settings, since you can supply them as environment variables instead: if one of these values needs to change, you just update the Kubernetes configuration and redeploy, without recompiling anything.

    # Deployment YAML: don't use Spring profiles; directly set variables instead
    env:
      - name: SPRING_DATASOURCE_URL
        value: "jdbc:postgresql://postgres-dev/database"
    
  2. Run the following command to see why your pod crashes:

    kubectl describe pod -n <your-namespace> <your-pod>

    Additionally, the output of kubectl get pod -o yaml -n <your-namespace> <your-pod> has a status section that holds the reason for restarts. You might have to look up the exit code; for example, 137 means the container was killed, typically because it ran out of memory (OOM).
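
    For example, the last exit code can be read directly with jsonpath (assuming the pod runs a single container):

    kubectl get pod -n <your-namespace> <your-pod> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'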
