
I was provided with an image that runs a service as the root user.

I need to bind mount a directory where the service running in the container creates files that will be used as input for another, non-containerized service.

The problem is well known: the generated files keep the UID and GID of the in-container user.

So, I tried with this config:

  myservice:
    image: "someonewho/liketorunwithroot:latest"
    user: "${UID}:${GID}"
    volumes:
       - type: bind
         source: /home/user/output
         target: /service/path/outputdir
       - type: bind
         source: /etc/passwd
         target: /etc/passwd
       - type: bind
         source: /etc/group
         target: /etc/group
    ports:
      - "80:80"

Doing it this way, I can force the in-container user to use the same UID and GID as the user on the Docker host, so I have no problem at all with the files generated under /service/path/outputdir.
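As an aside, Compose substitutes ${UID} and ${GID} from the environment or a .env file, and in bash UID is a shell variable that is not exported by default. A minimal sketch to make the substitution work (assuming the compose file sits in the current directory):

    # write the current user's IDs where docker compose can pick them up
    printf 'UID=%s\nGID=%s\n' "$(id -u)" "$(id -g)" > .env
    docker compose up -d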

But then the service no longer works, because the user with those IDs can’t read and write in the other folders it needs.

So I tried user namespace remapping, forcing the subuid range to one specific ID (the same as the Docker host user's).

In this case I get this error:

docker: failed to register layer: Error processing tar file (exit status 1): Container ID 100 cannot be mapped to a host ID.
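
To illustrate, forcing the subuid/subgid range to a single ID looks roughly like this (the user name and ID below are placeholders): only container UID 0 can then be mapped, so any other in-container UID, such as the UID 100 owning files in a layer being extracted, has no host counterpart and the pull fails with the error above.

    # /etc/subuid and /etc/subgid (placeholder values): start at host ID 1000, length 1
    user:1000:1

    # /etc/docker/daemon.json
    { "userns-remap": "user" }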

I can’t modify the Dockerfile or the container, because it must remain "portable".

Is there a solution?

2 Answers


  1. You should consider using Docker volumes for storing the files; you can always back up, restore, or migrate the data:

      myservice:
        image: "someonewho/liketorunwithroot:latest"
        volumes:
           - output:/service/path/outputdir
        ports:
          - "80:80"
      volumes:
        output:
    

    and you can get the volume name with:

    docker volume ls --format '{{json .}}' | jq -r '.Name' | grep "$(basename $PWD)_output"
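
    Once you have the name, you can also locate the volume's data on the host, so the non-containerized service can read the generated files from there (keep in mind that /var/lib/docker is normally readable only by root):

      docker volume inspect --format '{{ .Mountpoint }}' "$(basename $PWD)_output"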
    
  2. The best I can come up with is running this on the host

    mkdir container-files
    chmod g+s container-files
    docker run --rm -v $(pwd)/container-files:/output ubuntu bash -c "echo test > /output/test.txt"
    

    Now there’s a file in the container-files directory that was created by a container running as root:root. But on the host, the file is owned by root:your group. And since you’re in your group, you can read the file. Unfortunately, you can’t do much else with it.

    This uses the SGID bit on the container-files directory: files created inside it inherit the directory's group ownership.
    On my Linux machine, the s-bit is set by default when the directory is created, so you might not need to run chmod.
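
    If the host user also needs to write to those files, the same idea can be extended by setting a group-friendly umask in the container command. A sketch, reusing the hypothetical test command from above:

      docker run --rm -v $(pwd)/container-files:/output ubuntu \
        bash -c "umask 002 && echo test > /output/test.txt"

    With umask 002 the file is created group-writable, so combined with the SGID group inheritance, members of that group on the host can modify it, not just read it.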

    The best solution, though, is to change the container so it doesn’t need to run as root. I don’t see how running as root makes it more portable; in fact, some container environments (OpenShift, for example) don’t allow containers to run as root because of the security risks it poses.
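
    To illustrate, if the image could be changed, dropping root is usually a small addition at the end of the Dockerfile (the user name, UID, and path below are assumptions):

      # create an unprivileged user, give it the service's files, and switch to it
      RUN useradd --uid 1000 --create-home appuser \
          && chown -R appuser:appuser /service
      USER appuser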
