
I use a GitLab Runner on an EC2 instance to build, test, and deploy Docker images to ECS.

My CI workflow currently uses a "push/pull" approach: during the build stage I build all my Docker images and push them to my GitLab registry, then I pull them again during the test stage.
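For context, that push/pull pattern looks roughly like this for one of the images (a minimal sketch; the `$CI_REGISTRY_IMAGE/backend` path is an assumption, adjust to your registry layout):

```yaml
build_backend:
  stage: build
  image: docker
  services:
    - docker:dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    # tag with the full registry path so the image can be pushed
    - docker build -t $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH ./backend
    - docker push $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH

test_backend:
  stage: test
  image: docker
  services:
    - docker:dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    # pull the image built in the previous stage
    - docker pull $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH
```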

I thought I could drastically reduce the pipeline time by keeping the images built in the build stage available to the test stage, instead of pushing and pulling them.

My gitlab-ci.yml looks like this:

stages:
  - build
  - test
  - deploy

build_backend:
  stage: build
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker build -t backend:$CI_COMMIT_BRANCH ./backend
  only:
    refs:
      - develop
      - master

build_generator:
  stage: build
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker build -t generator:$CI_COMMIT_BRANCH ./generator
  only:
    refs:
      - develop
      - master

build_frontend:
  stage: build
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker build -t frontend:$CI_COMMIT_BRANCH ./frontend
  only:
    refs:
      - develop
      - master

build_scraping:
  stage: build
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker build -t scraping:$CI_COMMIT_BRANCH ./scraping
  only:
    refs:
      - develop
      - master


test_backend:
  stage: test
  needs: ["build_backend"]
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
    - mkdir -p $DOCKER_CONFIG/cli-plugins
    - apk add curl
    - curl -SL https://github.com/docker/compose/releases/download/v2.3.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
    - chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
  script:
    - docker compose -f docker-compose-ci.yml up -d backend
    - docker exec backend pip3 install --no-cache-dir --upgrade -r requirements-test.txt
    - docker exec db sh mongo_init.sh
    - docker exec backend pytest test --junitxml=report.xml -p no:cacheprovider
  artifacts:
    when: always
    reports:
      junit: backend/report.xml
  only:
    refs:
      - develop
      - master

test_generator:
  stage: test
  needs: ["build_generator"]
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
    - mkdir -p $DOCKER_CONFIG/cli-plugins
    - apk add curl
    - curl -SL https://github.com/docker/compose/releases/download/v2.3.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
    - chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
  script:
    - docker compose -f docker-compose-ci.yml up -d generator
    - docker exec generator pip3 install --no-cache-dir --upgrade -r requirements-test.txt
    - docker exec generator pip3 install --no-cache-dir --upgrade -r requirements.txt
    - docker exec db sh mongo_init.sh
    - docker exec generator pytest test --junitxml=report.xml -p no:cacheprovider
  artifacts:
    when: always
    reports:
      junit: generator/report.xml
  only:
    refs:
      - develop
      - master
   
[...]
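As an aside, the four nearly identical build jobs above can be collapsed using a hidden template job and `extends:` (a sketch; the `.build_template` and `COMPONENT` names are illustrative):

```yaml
.build_template:
  stage: build
  image: docker
  services:
    - docker:dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    # each concrete job only sets COMPONENT
    - docker build -t $COMPONENT:$CI_COMMIT_BRANCH ./$COMPONENT
  only:
    refs:
      - develop
      - master

build_backend:
  extends: .build_template
  variables:
    COMPONENT: backend

build_generator:
  extends: .build_template
  variables:
    COMPONENT: generator
```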

gitlab-runner/config.toml:

concurrent = 5
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Docker Runner"
  url = "https://gitlab.com/"
  token = ""
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0

docker-compose-ci.yml:

services:
  backend:
    container_name: backend
    image: backend:$CI_COMMIT_BRANCH
    build:
      context: backend
    volumes:
      - ./backend:/app
    networks:
      default:
    ports:
      - 8000:8000
      - 587:587
      - 443:443
    environment:
      - ENVIRONMENT=development
    depends_on:
      - db

  generator:
    container_name: generator
    image: generator:$CI_COMMIT_BRANCH
    build:
      context: generator
    volumes:
      - ./generator:/var/task
    networks:
      default:
    ports:
      - 9000:8080
    environment:
      - ENVIRONMENT=development
    depends_on:
      - db

  db:
    container_name: db
    image: mongo
    volumes:
      - ./mongo_init.sh:/mongo_init.sh:ro
    networks:
      default:
    environment:
      MONGO_INITDB_DATABASE: DB
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    ports:
      - 27017:27017

  frontend:
    container_name: frontend
    image: frontend:$CI_COMMIT_BRANCH
    build:
      context: frontend
    volumes:
      - ./frontend:/app
    networks:
      default:
    ports:
      - 8080:8080
    depends_on:
      - backend

networks:
  default:
    driver: bridge

When I comment out context: in my docker-compose-ci.yml, Docker can't find my image; indeed, the image is not kept between jobs.

What is the best Docker approach during CI to build -> test -> deploy?
Should I save my Docker images to archives and share them between stages as artifacts? That doesn't seem like the most efficient way to do this.

I'm a bit lost about which approach I should use for such a common workflow in GitLab CI using Docker.

2 Answers


  1. Try mounting the "Docker Root Dir" as a persistent/NFS volume shared by your fleet of runners.

    Docker images are stored under the "Docker Root Dir" path. You can find your Docker root by running:

    # docker info
    ...
     Storage Driver: overlay2
     Docker Root Dir: /var/lib/docker
    ...
    

    The default paths by OS are generally:

    Ubuntu: /var/lib/docker/
    Fedora: /var/lib/docker/
    Debian: /var/lib/docker/
    Windows: C:\ProgramData\DockerDesktop
    macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/
    

    Once this directory is properly mounted on all agents, every runner will be able to access the same local Docker images.
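    With the Docker executor, such a mount would go into the volumes list in config.toml, roughly like this (a sketch; /srv/docker-root is an assumed host path, and note that sharing /var/lib/docker between concurrently running dind instances can corrupt the storage, so this needs care):

    ```toml
    [runners.docker]
      privileged = true
      # last entry maps a shared host directory onto the dind Docker root
      volumes = ["/certs/client", "/cache", "/srv/docker-root:/var/lib/docker"]
    ```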

    References:

    https://docs.gitlab.com/runner

    https://blog.nestybox.com/2020/10/21/gitlab-dind.html

  2. The best way to do this is to push the image to the registry in the build stage and pull it in the later stages where it is needed. The .gitlab-ci.yml you posted never pushes the images, so each image only exists inside that job's docker:dind service and is gone when the job ends.

    You also want to make sure you're leveraging Docker layer caching in your builds. You'll probably want to specify the cache_from: key in your compose file.

    For example:

    build:
      stage: build
      script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        # pull latest image to leverage cached layers
        - docker pull $CI_REGISTRY_IMAGE:latest || true
    
        # build and push the image to be used in subsequent stages
        - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
        - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA  # push the image
    
    test:
      stage: test
      needs: [build]
      script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        # pull the image that was built in the previous stage
        - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
        - docker-compose up # or docker run or whatever
    