
Hope I’m doing this correctly…

First off, we are using docker-compose with a YAML file.
We start it with something like this:

sudo docker-compose -f docker-compose.yml up -d

In the yml file we have something similar to:

version: '3.4'
services:
  MyContainer:
    image: "MyContainer:latest"
    container_name: MyContainer
    restart: always
    environment:
        - DISPLAY=unix$DISPLAY
        - QT_X11_NO_MITSHM=1
    devices:
        - /dev/dri:/dev/dri
    volumes:
        - /tmp/.X11-unix:/tmp/.X11-unix:rw
        - /dev/dri:/dev/dri
        - /usr/lib/x86_64-linux-gnu/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
        - ~/MyFiles/:/root/Myfiles
        - ~:/root/home

Now the problem starts. The team uses two kinds of operating systems: some machines run Ubuntu, others Arch or Manjaro. As an experienced Linux user might know, this will not work on Arch, because /usr/lib/x86_64-linux-gnu is a Debian/Ubuntu-specific folder. The equivalent on Arch/Manjaro and nearly every other Linux distribution is /usr/lib or /usr/lib64.

Of course, a hack would be to symlink this folder to /usr/lib, but I don’t want to do that for every new team member/machine without Ubuntu.
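
On an Arch machine that hack would be a single symlink making the Debian-style path resolve to the native library directory, something like:

sudo ln -s /usr/lib /usr/lib/x86_64-linux-gnu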

That is all the upfront information.
My question is:

What is the best approach in your opinion to solve this problem?

I did a Google search, but either I used the wrong keywords, or people simply don’t have this problem because they design their containers smarter.

I know that named Docker volumes can be created and then used in the docker-compose file (something like the snippet below), but for that we would need to rerun the setup on all PCs, laptops, and servers we have, which I would like to avoid if possible.
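
To illustrate what I mean (the volume name hostlibs is just a placeholder): every machine would need a one-time command such as

docker volume create --driver local --opt type=none --opt o=bind --opt device=/usr/lib hostlibs

and the compose file would then reference it as an external volume:

services:
  MyContainer:
    volumes:
      - hostlibs:/usr/lib/x86_64-linux-gnu

volumes:
  hostlibs:
    external: true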

I have a lot to learn, so if you have more experience and knowledge, please be so kind as to explain my mistakes to me.

Regards,
Stefan

2 Answers


  1. The volumes section in docker-compose supports environment variable substitution. You can make use of that, and the paths become machine-specific. For example:
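
    A minimal sketch of how that could look (the variable name HOST_LIB_DIR is just an example): Compose reads variables from the shell environment or from an .env file next to the compose file, and ${VAR:-default} supplies a fallback.

    # .env on an Arch/Manjaro machine; Ubuntu machines can omit it
    HOST_LIB_DIR=/usr/lib

    # docker-compose.yml
    volumes:
        - ${HOST_LIB_DIR:-/usr/lib/x86_64-linux-gnu}/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw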

  2. If you’re trying to use the host display, host libraries, host filesystem, and host hardware devices, then the only thing you’re getting out of Docker is an inconvenient packaging mechanism that requires root privileges to run. It’d be significantly easier to build a binary and run the application directly on the host.

    If you must run this in Docker, the image should be self-contained: all of the code and libraries necessary to run the application need to be in the image, copied in or installed by the Dockerfile. Most images start FROM some Linux distribution (maybe indirectly through a language runtime), so you need to install the required libraries using its package manager.

    FROM ubuntu:18.04
    RUN apt-get update \
     && apt-get install --no-install-recommends --assume-yes \
          libxv1
    ...
    

    Bind-mounting binaries or libraries into containers leads not just to filesystem inconsistencies like the one you describe but, in some cases, to binary-compatibility issues as well. The bind mount won’t work properly on a macOS host, for instance. (Earlier recipes for using the Docker socket inside a Docker container recommended bind-mounting /usr/bin/docker into the container, but this could hit problems if a CentOS host’s Docker was built against different shared libraries than an Ubuntu container’s Docker.)
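
    With the library baked into the image this way, the host-specific library mount can simply be dropped, and the volumes section shrinks to the mounts that genuinely need the host (a sketch based on the compose file above):

    volumes:
        - /tmp/.X11-unix:/tmp/.X11-unix:rw
        - /dev/dri:/dev/dri
        - ~/MyFiles/:/root/Myfiles
        - ~:/root/home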
