
I want to start a bunch of Docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command

docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"

in a new terminal window.

Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)

The container's CMD looks like this. It's a bash script that runs other scripts and functions in them.

CMD ["bash", "/run_pipeline.sh"]

What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so that I can investigate the problem in the script, do something to fix it and continue execution (or just exit if I cannot fix it).

if [ $? -ne 0 ]; then
  echo Investigate manually: "$REPO_NAME"
  bash
  if [ $? -ne 0 ]; then exit 33; fi
fi

I want to do this fully automatically so I don't have to manually keep track of what is going on in each script and execute docker attach... when needed, because I will be running multiple such containers simultaneously.

The problem is that this "rescue" bash process exits immediately and I don't know why. I think it has something to do with ttys and such, but I've tried a bunch of fiddling around with it and had no success.

I tried different combinations of -i, -t and -d on the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I am using PyCharm by default). Besides that, I tried to use the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.

EDIT:

Adding a minimal example of how I am starting a container (note: it does NOT reproduce the failing behaviour).

# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]

# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = f"-e REPO_NAME=test -e PROJECT=test_test"
    script = f'-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"

    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh

# do some hard work

ls non/existent/path

if [ $? -ne 0 ]; then
  echo Investigate manually: "$REPO_NAME"
  bash
  if [ $? -ne 0 ]; then exit 33; fi
fi

It seems like the problem may be in the run_pipeline.sh script itself, but I don't want to upload it here; it's a bigger mess than what I described earlier. I will say, though, that I am trying to run this thing: https://github.com/IBM/D2A.

So I just wanted some advice on the tty stuff that I am probably missing.

2 Answers


  1. Chosen as BEST ANSWER

    As I said in a comment to Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running. I think it's because some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.

    So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it.

    if [ $? -ne 0 ]; then
      echo Investigate _make_ manually: "$REPO_NAME"
      mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
      if [ "$res" -ne 0 ]; then exit 33; fi
    fi
    

    Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do it with the help of the Docker Python SDK by monitoring the output of the container, so I know when to launch the terminal.

    import docker
    from subprocess import Popen

    def monitor_container_output(container):
        line = b''
        # accumulate the streamed log output into lines
        for log in container.logs(stream=True):
            if log == b'\n':
                print(line.decode())
                # the script echoes the pipe path right before blocking on it,
                # which is the cue to open a terminal running `docker exec`
                if b'mypipe_' in line:
                    Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
                line = b''
                continue
            line += log


    client = docker.from_env()
    container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                      auto_remove=True, environment=env_vars, volumes=volumes)
    monitor_container_output(container)
    

    After I finish my investigation of the problem in that new bash process, I send a "status code of the investigation" to tell the script to continue running or to exit.

    echo 0 > "/tmp/mypipe_$githash"
    

  2. Run the initial container detached, with input and a tty.

    docker run -dit --rm {env_vars} {script} --name test_name test
    

    Monitor the container logs for the output, then attach to it.

    Here is a quick script example (without a tty in this case, only because the demo uses echo for input).

    #!/bin/bash
    
    docker run --name test_name -id debian \
      bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
    
    while ! docker logs test_name | grep reading; do
      sleep 3
    done
    
    echo "attach input" | docker attach test_name
    

    The complete output after it finishes:

    $ docker logs test_name
    start
    reading
    var=attach input
    

    The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between the Python script and Docker.
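
    For illustration, here is a rough sketch (my own assumption of how it could look, not code from this answer) of the same flow driven through docker-py: start the container detached with a tty, follow its log stream until the "reading" marker from the demo appears, then hand the interactive part back to the plain docker attach CLI.

    import subprocess
    import docker

    client = docker.from_env()

    # Same demo container as above, started via the SDK instead of the shell.
    container = client.containers.run(
        "debian",
        ["bash", "-c", 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'],
        name="test_name",
        detach=True,
        tty=True,
        stdin_open=True,
    )

    # Follow the log stream until the container prints the "reading" marker.
    buf = b""
    for chunk in container.logs(stream=True, follow=True):
        buf += chunk
        if b"reading" in buf:
            break

    # Hand the interactive part to the regular CLI; `docker attach` uses the
    # current terminal, so whatever you type is fed to the container's `read`.
    subprocess.run(["docker", "attach", "test_name"])

    (The sketch does not pass auto_remove, so you would still have to clean up the container yourself afterwards, e.g. with container.remove().)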
