I want to start a bunch of Docker containers with the help of a Python script; I am using the subprocess
library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD
looks like this. It's a bash script that runs other scripts and the functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so that I can investigate the problem, do something to fix it, and continue execution (or just exit if I cannot fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on in each script and run docker attach ...
when needed, because I will be running multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately, and I don't know why. I suspect it has something to do with ttys and such, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d in the docker command, tried to use docker attach ... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I use PyCharm by default). I also tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal example of how I am starting a container; note that it does NOT reproduce the failure (what is not working).
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen
if __name__ == '__main__':
    env_vars = "-e REPO_NAME=test -e PROJECT=test_test"
    script = '-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the actual run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described above. I will say, though, that I am trying to run this thing: https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
2 Answers
As I said in a comment to Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running; it's probably because some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it. Then I launch a terminal emulator and run docker exec in it to start a new bash process. I do this with the help of the Docker Python SDK, by monitoring the container's output so I know when to launch the terminal. After I finish my investigation of the problem in that new bash process, I send a "status code of the investigation" back to tell the script to continue running or exit.
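A minimal sketch of that monitoring side, using the Docker SDK for Python (the image and container names come from the small example above; the marker string and the FIFO path /tmp/investigate.fifo are illustrative assumptions, not part of the actual pipeline script):
# Illustrative sketch, not the actual script.
# Assumes run_pipeline.sh prints "Investigate manually: ..." and then blocks on
# something like `read status < /tmp/investigate.fifo` (hypothetical FIFO path).
import os
from subprocess import Popen

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()
container = client.containers.run(
    "test",                                  # image from the Dockerfile above
    name="test_name",
    detach=True,
    tty=True,
    stdin_open=True,
    environment={"REPO_NAME": "test", "PROJECT": "test_test"},
    volumes={os.path.abspath("run_pipeline.sh"): {"bind": "/run_pipeline.sh", "mode": "ro"}},
)

# Follow the container output and wait for the marker (each log chunk is
# treated as one line here, which is good enough for a sketch).
for raw in container.logs(stream=True, follow=True):
    line = raw.decode(errors="replace").rstrip()
    print(line)
    if line.startswith("Investigate manually:"):
        # Open a new terminal running a fresh bash inside the blocked container.
        Popen(
            f'xfce4-terminal -T {container.name} '
            f'-e="docker exec -it {container.name} bash"',
            shell=True,
        )
From that docker exec shell, the investigation is finished by writing the status back into the pipe (for example echo 0 > /tmp/investigate.fifo), which unblocks the read in run_pipeline.sh so it can either continue or exit.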
Run the initial container detached, with input and a tty.
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo for input), along with the complete output after it finishes.
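A minimal sketch of that flow, driven from Python with plain subprocess calls as in the question (the image, container name, and marker string are reused from the question's example; this only illustrates the detach/monitor/attach idea and is not the original demo script):
# Sketch: run detached but with stdin and a tty, follow the logs, then attach
# so the rescue bash in run_pipeline.sh gets a usable terminal.
from subprocess import PIPE, Popen, run

# 1. Start detached (-d) while keeping stdin open (-i) and allocating a tty (-t).
run('docker run -itd --rm -v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro '
    '--name test_name test', shell=True, check=True)

# 2. Follow the logs until the pipeline reports a failure.
logs = Popen("docker logs -f test_name", shell=True, stdout=PIPE, text=True)
for line in logs.stdout:
    print(line, end="")
    if line.startswith("Investigate manually:"):
        logs.terminate()
        break

# 3. Attach the current terminal to the container and use the rescue bash.
run("docker attach test_name", shell=True)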
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between the Python script and Docker.