
I am attempting to build some log observability for my Docker containers. I need to be able to spin up any number of containers and mount their log directories to the host system so that a Datadog agent can ship logs from all containers off of the host machine. Currently I am doing this with a Docker Compose file and the --scale flag:

docker compose -p application -f application-compose.yaml up -d --scale service=3

Because we are using Datadog, I cannot build the agent into each container and deploy it that way: Datadog's pricing is per host, so every agent-running container would be detected as a separate host on their end, which is cost-prohibitive for us. I also do not have the option of going with another log aggregator.

If it is of any consequence, I am running Alpine Linux on both the host and in the containers.

This is the solution I have arrived at. When the entrypoint script runs, I bind-mount a per-container directory under /host_logs over /var/log, using the hostname of the Docker container to separate its logs from those of the other containers:

mount --bind "/host_logs/${HOSTNAME}" /var/log
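The full entrypoint is roughly the following sketch (the directory creation and the final exec are the supporting pieces around the mount; they are illustrative, not quoted verbatim from my script):

```shell
#!/bin/sh
set -eu
# Give this replica its own directory on the shared volume, keyed by the
# container hostname (unique per replica under --scale).
mkdir -p "/host_logs/${HOSTNAME}"
# Overlay /var/log with that directory so every log written inside the
# container lands on the shared volume. This is the line that needs
# CAP_SYS_ADMIN.
mount --bind "/host_logs/${HOSTNAME}" /var/log
# Hand off to the container's main process.
exec "$@"
```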

Then, in my docker-compose file, I mount this directory onto the host with the following config:

volumes:
  - "${HOST_LOG_PATH:-/var/log/application/}:/host_logs"

This allows me to specify a HOST_LOG_PATH in the .env file, or alternatively fall back to a defined location on the host for the application running inside the container.
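Wired together, the relevant parts of my compose service look roughly like this (a sketch: the service name, image, and entrypoint path are illustrative placeholders, not values from my actual setup):

```yaml
services:
  service:
    image: application:latest        # placeholder image name
    entrypoint: ["/entrypoint.sh"]   # script that performs the bind mount
    cap_add:
      - SYS_ADMIN                    # required for `mount --bind` in the container
    volumes:
      - "${HOST_LOG_PATH:-/var/log/application/}:/host_logs"
```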

This results in the host system seeing the application logs dir mounted like so:

[screenshot: resulting mount structure on the host]

This all works well, with one issue: the bind mount requires the container to have the SYS_ADMIN capability. My understanding is that this is potentially dangerous, as it could allow code running inside the container to escape onto the host.

Because I don't have access to any per-replica variables that I can script, and therefore randomize, in the docker-compose file or the .env file, I cannot figure out another way to effectively segregate the logs on the host per container. I have looked at slightly modifying the container's default security profile to allow only the additional mount call, but this also seems potentially dangerous, and it still requires granting the capability: allowing the mount call in the security profile alone does not seem to be enough. I also do not want to do any bash scripting to dynamically set env vars and then run the compose command, as that complexity will quickly get out of hand.

Before going down the bind-mount route, I attempted to simply symlink the /var/log directory to another directory so that I could relocate all logs without a bind mount. However, a symlink alone is not enough: I would need a hard link to make sure the file contents map correctly to the new location, and hard links are only possible for files, not directories.

I could go the route of hard-linking individual files over to the new location, but then I lose the ability to capture anything new that appears in /var/log without specifying it in the entrypoint script. That doesn't work for me, because I need to dynamically pick up everything in the container's /var/log and pass it to the host, to make sure nothing else has started running inside the container without us knowing.
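The directory limitation is easy to demonstrate: the kernel refuses a hard link whose target is a directory, regardless of permissions (a quick, self-contained illustration):

```shell
# Show that hard links cannot target directories, which is why the
# symlink/hard-link approach to relocating /var/log falls apart.
demo="$(mktemp -d)"
mkdir "$demo/src"
if ln "$demo/src" "$demo/hardlink" 2>/dev/null; then
  result="directory hard link succeeded"
else
  result="directory hard link refused"
fi
echo "$result"
rm -rf "$demo"
```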

So my requirements are as follows:

  1. Make sure that all logs are segregated per container on the host in some discernible way.
  2. Be as secure as possible and do not add additional problematic capabilities to the containers.
  3. Make sure that all items inside the container in /var/log are collected whenever they show up.
  4. Require as little tinkering with the entrypoint, Dockerfile, and docker-compose file as possible.
  5. I would like to avoid another container on the host purely to collect logs from the other containers.

Is there a reasonable way to allow this functionality without the use of the SYS_ADMIN capability? Is there a better way to do this log aggregation? Or is this just the complex reality of shipping logs off of Docker containers?

2 Answers


  1. This is how I mount directories inside a docker container using php.

    docker run -d -v /var/log/application/:/host_logs k3_s3:latest

  2. Mount your different container logs to different directories on the host computer

    version: '3.8'
    
    services:
      nginx:
        build:
          context: ./nginx
        ports:
          - "80:80"
        volumes:
          - ./nginx/nginx.conf:/etc/nginx/nginx.conf
          - /path/to/your_host_nginx_log:/path/to/your_nginx_container_log
        networks:
          - mynetwork
    
      redis:
        image: redis:latest
        ports:
          - "6379:6379"
        volumes:
          - /path/to/your_host_redis_log:/path/to/your_redis_container_log
        networks:
          - mynetwork
    
    networks:
      mynetwork:
        driver: bridge
    
    