
I have a Python project and I am trying to create a Makefile that runs specific commands, such as apt-get, and accesses variable values that are passed to the make command as arguments. Below is my Makefile:

VENV = venvs
PYTHON = $(VENV)/bin/python3
PIP = $(VENV)/bin/pip

run : $(VENV)/bin/activate 
    $(PYTHON) jobs/first_file.py

$(VENV)/bin/activate: 
    docker run -it python:3.8-buster /bin/bash
    python3 -m venv $(VENV)
    $(PIP) install --upgrade pip
    $(PIP) install -r requirements.txt

clean :
    rm -rf __pycache__
    rm -rf $(VENV)   

Now, my intention is to invoke the Docker image, run the pip commands on it, and later run all further commands inside that same Docker container. This also includes connecting to an AWS account whose credential values would be passed to the make command as arguments.

But when I run make in the project’s root directory, it just connects to the container’s bash prompt and does nothing further. What exactly am I missing here?
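As background for the variable-passing part of the question: variables given on the make command line override assignments inside the Makefile and are available as $(NAME) in recipes. A minimal sketch, using hypothetical AWS variable names (the target name run-job and the variable names are placeholders, not from the question):

```make
# Hypothetical variable names; invoke as:
#   make run-job AWS_ACCESS_KEY_ID=abc AWS_SECRET_ACCESS_KEY=xyz
run-job:
	AWS_ACCESS_KEY_ID=$(AWS_ACCESS_KEY_ID) \
	AWS_SECRET_ACCESS_KEY=$(AWS_SECRET_ACCESS_KEY) \
	$(PYTHON) jobs/first_file.py
```

Note that the command-line values win over any `NAME = value` lines in the Makefile itself.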

3 Answers


  1. I don’t think this can work as written. Just as when you want to run a series of Linux commands after an ssh call, you need to pass in a script. The easiest way, I think, would be to put all the commands in a shell script, mount it into the container, and call docker with something like this:

    docker run -it -v "$(pwd)":/app -w /app python:3.8-buster /bin/bash script.sh

    I didn’t try it, so there might be a syntax error in the exact command.

  2. Pass commands as input to another command (su, ssh, sh, etc) explains the basic problem with your syntax. The other commands will run after bash exits. A minimal fix would look like

    $(VENV)/bin/activate:
        docker run -it python:3.8-buster /bin/bash -c ' \
        python3 -m venv $(VENV); \
        $(PIP) install --upgrade pip; \
        $(PIP) install -r requirements.txt'


    However, you also need to understand that docker run creates a new container, runs the commands, and then exits the container. All your changes will be lost after that.

    If I’m able to guess your intentions correctly, these commands should simply go in your Dockerfile instead. That will create an image with those changes which you can then docker run as many times as you like.

    Alternatively, create scripts (or a Makefile if you like) inside the container, start the container once with docker run (make sure its starting point CMD is a command or script which runs forever, or until you separately tell it to shut down), and run them with docker exec containername make foo or whatever.
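
    If it helps, here is a minimal sketch of such a Dockerfile, assuming only the requirements.txt and jobs/first_file.py paths mentioned in the question (the image layout and everything else are placeholders):

```dockerfile
# Sketch: bake the dependency installation into the image
FROM python:3.8-buster
WORKDIR /app
# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY . .
CMD ["python3", "jobs/first_file.py"]
```

    You would then build it once with something like docker build -t myimage . (myimage is a placeholder name) and run it as many times as you like with docker run myimage.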

  3. You’re taking a rather interesting approach here. What you should do instead is create a Dockerfile to build your image, and put those commands into the Dockerfile itself.
