
docker-compose.yml:

  python-api: &python-api
    build:
      context: /Users/AjayB/Desktop/python-api/
    ports:
      - "8000:8000"
    networks:
      - app-tier
    expose:
      - "8000"
    depends_on:
      - python-model
    volumes:
      - .:/python_api/
    environment:
      - PYTHON_API_ENV=development
    command: >
      sh -c "ls /python-api/ &&
             python_api_setup.sh development
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"

  python-model: &python-model
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
    networks:
      - app-tier
    environment:
      - PYTHON_API_ENV=development
    expose:
      - "8001"
    volumes:
      - .:/python_model/
    command: >
      sh -c "ls /python-model/
             python_setup.sh development
             cd /server/ &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8001"

  python-celery:
    <<: *python-api
    environment:
      - PYTHON_API_ENV=development
    networks:
      - app-tier
    links:
      - redis:redis
    depends_on:
      - redis
    command: >
      sh -c "celery -A server worker -l info"

  redis:
    image: redis:5.0.8-alpine
    hostname: redis
    networks:
      - app-tier
    expose:
      - "6379"
    ports:
      - "6379:6379"
    command: ["redis-server"]

python-celery reuses the full python-api configuration but is supposed to run as a separate container. Instead, it tries to bind the same host port as python-api, which should never happen.

The error that I’m getting is:

AjayB$ docker-compose up
Creating integrated_redis_1        ... done
Creating integrated_python-model_1 ... done
Creating integrated_python-api_1   ... 
Creating integrated_python-celery_1 ... error

Creating integrated_python-api_1    ... done

ERROR: for python-celery  Cannot start service python-celery: driver failed programming external connectivity on endpoint integrated_python-celery_1 (ab5e079dbc3a30223e16052f21744c2b5dfc56adbe1d1055165b1f85f179f69c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

on doing docker ps -a, I get this:

AjayB$ docker ps -a
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS                      PORTS                    NAMES
2ff1277fb7a7        integrated_python-celery   "sh -c 'celery -A se…"   10 seconds ago      Created                                              integrated_python-celery_1
5b60221b42a4        integrated_python-api      "sh -c 'ls /crackd-a…"   11 seconds ago      Up 9 seconds                0.0.0.0:8000->8000/tcp   integrated_python-api_1
bacd8aa3268f        integrated_python-model    "sh -c 'ls /crackd-m…"   12 seconds ago      Exited (2) 10 seconds ago                            integrated_python-model_1
9fdab833b436        redis:5.0.8-alpine         "docker-entrypoint.s…"   12 seconds ago      Up 10 seconds               0.0.0.0:6379->6379/tcp   integrated_redis_1

I tried force-removing the containers and running docker-compose up again, but I get the same error. :/ Where am I making a mistake?
I'm also doubtful about the volumes: section. Can anyone please tell me whether volumes is correct?
And please help me with this error. PS, this is my first try with Docker.
Thanks!

2 Answers


  1. This is because you re-use the full config of python-api, including its ports: section, so python-celery also tries to publish host port 8000. (By the way, expose: is redundant here, since your ports: section already exposes the port.)

    I would create a common section that can be shared by several services. In your case, it would look something like this:

    version: '3.7'
    
    x-common-python-api: &default-python-api
        build:
          context: /Users/AjayB/Desktop/python-api/
        networks:
          - app-tier
        environment:
          - PYTHON_API_ENV=development
        volumes:
          - .:/python_api/
    
    services:
    
      python-api:
        <<: *default-python-api
        ports:
          - "8000:8000"
        depends_on:
          - python-model
        command: >
          sh -c "ls /python-api/ &&
                 python_api_setup.sh development
                 python manage.py migrate &&
                 python manage.py runserver 0.0.0.0:8000"
    
      python-model: &python-model
         .
         .
         .
    
      python-celery:
        <<: *default-python-api
        links:
          - redis:redis
        depends_on:
          - redis
        command: >
          sh -c "celery -A server worker -l info"
    
      redis:
         .
         .
         .
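    As a short illustration of why the port clash happens in the first place (the service names below are invented for the example, not from the question): the YAML merge key `<<:` copies every key from the anchored mapping into the referencing service, including ports:, unless the service overrides that key itself.

```yaml
x-base: &base
  image: busybox
  ports:
    - "8000:8000"    # every service merging *base inherits this binding

services:
  api:
    <<: *base        # publishes host port 8000
  worker:
    <<: *base
    ports: []        # an explicit (empty) ports: overrides the merged one
```

    You can check what the anchors expand to by running `docker-compose config`, which prints the fully resolved file.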
    
  2. There is a lot in that docker-compose.yml file, but much of it is unnecessary. expose: in a docker-compose.yml does almost nothing; links: aren’t needed with the current networking system; Compose provides a default network for you; your volumes: try to inject code into the container that should already be present in the image. If you clean all of this up, the only part that you’d really want to reuse from one container to the other is its build: (or image:), at which point the YAML anchor syntax is unnecessary.

    This docker-compose.yml should be functionally equivalent to what you show in the question:

    version: '3'
    services:
      python-api:
        build:
          context: /Users/AjayB/Desktop/python-api/
        ports:
          - "8000:8000"
        # No networks:, use `default`
        # No expose:, use what's in the Dockerfile (or nothing)
        depends_on:
          - python-model
        # No volumes:, use what's in the Dockerfile
        # No environment:, this seems to be a required setting in the Dockerfile
        # No command:, use what's in the Dockerfile
    
      python-model:
        build:
          context: /Users/AjayB/Desktop/Python/python/
        ports:
          - "8001:8001"
    
      python-celery:
        build: # copied from python-api
          context: /Users/AjayB/Desktop/python-api/
        depends_on:
          - redis
        command: celery -A server worker -l info # one line, no sh -c wrapper
    
      redis:
        image: redis:5.0.8-alpine
        # No hostname:, it doesn't do anything
        ports:
          - "6379:6379"
        # No command:, use what's in the image
    

    Again, notice that the only thing we’ve actually copied from the python-api container to the python-celery container is the build: block; all of the other settings that would be shared across the two containers (code, exposed ports) are included in the Dockerfile that describes how to build the image.

    The flip side of this is that you need to make sure all of these settings are in fact included in your Dockerfile:

    # Copy the application code in
    COPY . .
    
    # Set the "development" environment variable
    ENV PYTHON_API_ENV=development
    
    # Document which port you'll use by default
    EXPOSE 8000
    
    # Specify the default command to run
    # (Consider writing a shell script with this content instead)
    CMD python_api_setup.sh development && \
        python manage.py migrate && \
        python manage.py runserver 0.0.0.0:8000
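    Following the comment above, the shell-script variant might look roughly like this (the file name `entrypoint.sh` and the `set -e`/`exec` details are my own choices, not from the answer):

```shell
#!/bin/sh
# entrypoint.sh -- fail fast if any setup step errors out
set -e

python_api_setup.sh development
python manage.py migrate

# exec replaces the shell, so the server becomes PID 1 and
# receives SIGTERM directly when the container stops
exec python manage.py runserver 0.0.0.0:8000
```

    The Dockerfile would then copy this script in and use `CMD ["./entrypoint.sh"]` (assuming the file is executable and in the working directory).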
    