
I’m trying to make a JSON file that resides in a Docker volume of the following container available to my main Docker container, which runs a Django project.

Since I am using CapRover, my Docker Compose options are very limited, so Docker Compose is not really an option. Instead I want to just expose the JSON file over the web via a link.

Something like domain.com/folder/jsonfile.json

Can somebody tell me if this is possible with the Dockerfile below?

The base image I am using is essential to the container, so can I just add an nginx image, or do I need other changes to make this work?

Or is nginx not even necessary?

FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d

2 Answers


  1. If you just want two containers to access the same file, use a shared volume with --mount.
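
    For example, on plain Docker (without Compose) the same named volume can be attached to both containers; the volume and image names below are placeholders:

    docker volume create shared_vol
    # container that produces/owns the json file
    docker run -d --name importer --mount type=volume,src=shared_vol,dst=/voldata importer-image
    # django container gets read-only access to the same volume
    docker run -d --name django-app --mount type=volume,src=shared_vol,dst=/voldata,readonly django-image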

  2. Using the same volume in 2 containers

    docker-compose:

    volumes: 
      shared_vol:
    services:
      service1:
        volumes:
          - 'shared_vol:/path/to/file'
      service2:
        volumes:
          - 'shared_vol:/path/to/file'
    

    The mechanism above replaces volumes_from since Compose file format v3, but the following still works for v2:

    volumes: 
      shared_vol:
    services:
      service1:
        volumes:
          - 'shared_vol:/path/to/file'
      service2:
        volumes_from:
          - service1
    

    If you want to avoid unintentional modification, add :ro (read-only) to the consuming service:

      service1:
        volumes:
          - 'shared_vol:/path/to/file'
      service2:
        volumes:
          - 'shared_vol:/path/to/file:ro'
    

    http-server

    You can certainly provide the file via HTTP (or another protocol). There are two options:

    Include an HTTP service in your container (how easy this is depends on what is already available in the container). Using Node.js, for example, you can use https://www.npmjs.com/package/http-server. If image size doesn’t matter, just install it:

    RUN apt-get install -y nodejs npm
    RUN npm install -g http-server
    EXPOSE  8080
    CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
    

    docker-compose (http-server listens on port 8080 by default, so publish that port):

    existing_service:
        ports:
          - '8080:8080'
    

    Run a standalone HTTP server (nginx, Apache httpd, ...) in another container. But then you again depend on sharing the same volume between two services, so for local setups this is quite an overkill.
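
    A minimal sketch of that standalone variant, assuming the JSON file lives in a named volume called shared_vol and using the stock nginx image (which serves /usr/share/nginx/html by default):

    docker run -d --name json-server \
      --mount type=volume,src=shared_vol,dst=/usr/share/nginx/html,readonly \
      -p 8080:80 \
      nginx:alpine
    # the file is then reachable at http://<host>:8080/jsonfile.json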

    Base image

    Unless you have good reasons, I would never use something like :devel, :rolling or :latest as a base image. Stick to an LTS version instead, such as ubuntu:22.04.

    Testing the http-server setup

    Dockerfile

    FROM ubuntu:20.04
    
    ENV TZ=Etc/UTC
    RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
    RUN apt-get update
    RUN apt-get install -y nodejs npm
    RUN npm install -g http-server@13 # pin below v14: issue with JSON files in v14: https://github.com/http-party/http-server/issues/634
    COPY ./test.json /usr/wwwhttp/test.json
    EXPOSE  8080
    CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]
    
    # docker build -t test/httpserver:latest .
    # docker run -p 8080:8080 test/httpserver:latest
    

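    To verify the result, a quick check from the host (assuming the container was started with the docker run line above):

    # should print the contents of test.json
    curl http://localhost:8080/test.json
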
    Disclaimer:
    I am not that familiar with Node Docker images; this is just meant to give a quick working solution to build on from there. I’m not using Node.js in production, but I’m sure the image can be optimized from being fat to... well... being rather fat. But for quick prototyping, size doesn’t matter.
