
I have a strange situation where a shell launched from within the container is able to see a mounted volume, but a RUN instruction in the Dockerfile cannot see that volume for some reason.

I have the following source tree:

[~/workspace/docker-test]$ tree
.
├── Dockerfile
└── source
    └── file.txt

1 directory, 2 files

where Dockerfile is

FROM ubuntu:jammy as prereqs
RUN ls /usr/source

When I build and run the Dockerfile, it somehow can't find the mounted directory:

[~/workspace/docker-test]$ docker build -t docker-test . && docker run -it -v $(pwd)/source:/usr/source docker-test
[+] Building 1.6s (5/5) FINISHED
 => [internal] load build definition from Dockerfile                                    0.0s
 => => transferring dockerfile: 36B                                                     0.0s
 => [internal] load .dockerignore                                                       0.0s
 => => transferring context: 2B                                                         0.0s
 => [internal] load metadata for docker.io/library/ubuntu:jammy                         1.1s
 => CACHED [1/2] FROM docker.io/library/ubuntu:jammy@sha256:27cb6e6ccef575a4698b66f5de  0.0s
 => ERROR [2/2] RUN ls /usr/source                                                      0.3s
------
 > [2/2] RUN ls /usr/source:
#5 0.318 ls: cannot access '/usr/source': No such file or directory
------
executor failed running [/bin/sh -c ls /usr/source]: exit code: 2

and yet when I run a shell from within the Docker container, I'm able to find it just fine. New Dockerfile:

FROM ubuntu:jammy as prereqs
# RUN ls /usr/source
CMD ["/bin/bash"]

This launches a shell from which the mounted volume is perfectly visible.

[~/workspace/docker-test]$ docker build -t docker-test . && docker run -it -v $(pwd)/source:/usr/source docker-test
[+] Building 0.6s (5/5) FINISHED
 => [internal] load build definition from Dockerfile                                    0.0s
 => => transferring dockerfile: 111B                                                    0.0s
 => [internal] load .dockerignore                                                       0.0s
 => => transferring context: 2B                                                         0.0s
 => [internal] load metadata for docker.io/library/ubuntu:jammy                         0.6s
 => CACHED [1/1] FROM docker.io/library/ubuntu:jammy@sha256:27cb6e6ccef575a4698b66f5de  0.0s
 => exporting to image                                                                  0.0s
 => => exporting layers                                                                 0.0s
 => => writing image sha256:aa762c0645f70aea2a82508f0654abb2aadb27d2abf7e971b5eed85a7e  0.0s
 => => naming to docker.io/library/docker-test                                          0.0s

root@d8b3952e2e80:/# ls /usr/source/
file.txt

I don’t understand why it’s not visible in the Dockerfile but perfectly visible in the container shell!

2 Answers


  1. Volumes are only mounted at run-time. As you can see in your own commands, the volume specification appears only on the docker run command, not on docker build.
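
    To see the point concretely: if the files are baked into the image with COPY at build time, instead of relying on a run-time volume, the build-time ls succeeds. A minimal sketch, assuming the same docker-test layout from the question:

    ```dockerfile
    FROM ubuntu:jammy as prereqs
    # COPY runs at build time and reads from the build context,
    # so the files are available to subsequent RUN instructions
    COPY source/ /usr/source/
    RUN ls /usr/source
    ```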

  2. RUN ls /usr/source executes during docker build, while CMD executes during docker run. Volumes cannot be mounted while an image is being built, so you have to use the COPY instruction inside the Dockerfile to make external resources accessible inside the image.

    QA:

    If I were to RUN make after using COPY to make the source visible, those changes would only stay within the container. Any guidance on the right strategy here?

    The proper approach depends on your needs and environment. Here are two possible ways:

    1. If you need the built application stored locally: (a) mount the external volume and run a container that performs the build, then (b) run another container that uses the built code. It will look like:

      $ docker build -t builder-image ./builder && 
        docker run -it -v $(pwd)/source:/usr/source builder-image && 
        docker build -t docker-test ./runner && 
        docker run -it -v $(pwd)/source:/usr/source docker-test
      

      where builder-image's Dockerfile will look like:

      FROM ubuntu:jammy
      # install libraries or dependencies to be able build app
      RUN ...
      # run build (CMD will be applied on docker run command when volume is already mounted)
      CMD ["clang-tidy", "-fix-errors", "..."]
      
    2. If you do NOT need the built application stored locally: multi-stage builds are the most common practice here, though the built application will be stored directly inside the Docker image in this case (which is actually the correct behavior for most cases, especially in production). The Dockerfile will look like:

      # create builder image
      FROM ubuntu:jammy as builder
      # copy source files (COPY paths are relative to the build
      # context, so /usr/source from the container would not work here)
      COPY ./source .
      # install libraries or dependencies to be able build app
      RUN ...
      # run build inside image build process
      RUN clang-tidy -fix-errors ...
      
      # create application runner image
      FROM ubuntu:jammy as runner
      # copy built application to new image
      COPY --from=builder /dir/where/app/built ./
      # run built application on docker run
      CMD ["./built_app_exec_file"]
      

    Also, here is a good article about Docker builds that may be useful.
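
    As a further option: if the external files are only needed during a single build step, BuildKit (the default builder in recent Docker versions) also supports bind-mounting the build context into one RUN instruction, so the files never get stored in an image layer. A sketch, assuming BuildKit is enabled (DOCKER_BUILDKIT=1):

    ```dockerfile
    # syntax=docker/dockerfile:1
    FROM ubuntu:jammy
    # bind-mount the "source" directory from the build context into
    # this single RUN step; the files are visible to the command but
    # are not persisted in the image
    RUN --mount=type=bind,source=source,target=/usr/source ls /usr/source
    ```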
