I am trying to run a second Python script (service_scheduler.py) alongside the one that runs the app server (main.py). The second script is a scheduler: it schedules a job that is then called at a regular time interval. The problem is that when I add it to my bash script, the job doesn't get scheduled and the app server doesn't run.
Here is my bash script (svc_invoke.sh):
#!/bin/bash
cd /opt
/opt/env/bin/python3.11 main.py &
/opt/env/bin/python3.11 /api_topics/services/service_scheduler.py &
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start python process: $status"
    exit $status
else
    echo "Started python $status"
fi
This one works; the app server comes up and runs:
#!/bin/bash
cd /opt
/opt/env/bin/python3.11 main.py
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start python process: $status"
    exit $status
else
    echo "Started python $status"
fi
And here is my Dockerfile:
FROM python:3.11-slim
RUN apt-get update && \
    apt-get install -y gcc && \
    apt-get clean
# Add files
COPY topic_modelling/ /opt
COPY svc_invoke.sh /svc_invoke.sh
RUN chmod -R 755 /svc_invoke.sh
RUN cd /opt
RUN python3.11 -m venv /opt/env
RUN /opt/env/bin/python3.11 -m pip install --upgrade pip
RUN /opt/env/bin/python3.11 -m pip install wheel
RUN /opt/env/bin/python3.11 -m pip install -r /opt/requirements.txt
ENV PYTHONUNBUFFERED=1
# Expose Port for the continuous learning
EXPOSE 8000
# Run the service
CMD ./svc_invoke.sh
I am new to Docker and shell scripting, and I am stuck here. Any help would be highly appreciated.
Thanks
2 Answers
Try this:
When you move the Python scripts into the background with &, the main script is allowed to exit before they finish, which in turn allows Docker to exit. If you want them both to have an opportunity to finish running, then you need to wait for them.
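For example, a minimal reworking of your svc_invoke.sh (a sketch; it assumes the scheduler actually lives at /opt/api_topics/services/service_scheduler.py, so adjust that path to wherever the file really is):

#!/bin/bash
cd /opt

# Launch both processes in the background
/opt/env/bin/python3.11 main.py &
/opt/env/bin/python3.11 api_topics/services/service_scheduler.py &

# $? after a & only reports whether the job was put in the
# background, not whether the program succeeded, so the status
# check is dropped. wait blocks until both background jobs exit,
# which keeps the script (and therefore the container) alive.
wait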
Normally a Docker container only runs a single process. If you need to run multiple processes, usually the easiest path is to launch multiple containers from the same image. It is important that you launch the main container process as a foreground process: in your first script you launch two background processes and then the script completes, and when it completes the container exits.
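Concretely, that could look something like this (a sketch; my-app is a placeholder image name, and the scheduler path is an assumption based on your COPY line):

docker build -t my-app .

# Container 1: the app server, running in the foreground as PID 1
docker run -d --name app -p 8000:8000 my-app /opt/env/bin/python3.11 /opt/main.py

# Container 2: the scheduler, launched from the same image
docker run -d --name scheduler my-app /opt/env/bin/python3.11 /opt/api_topics/services/service_scheduler.py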
I’d recommend setting the image’s CMD to the most common thing you expect the container to do. You do not need a wrapper script to report its exit code (this can also interfere with some things like docker stop).
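In your Dockerfile that might look like (a sketch; it assumes the app server is the thing you run most often):

# Run the app server by default; the scheduler container can
# override this with its own command at docker run time
WORKDIR /opt
CMD ["/opt/env/bin/python3.11", "main.py"]

The JSON (exec) form also means docker stop can deliver its signal straight to the Python process instead of to a wrapper shell.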
(Based only on the Dockerfile you’ve shown, the latter container will probably fail because there is no /api_topics/ directory in the image. Note that this won’t affect the main container running, and you can debug and restart the scheduler container separately.)

There are two more changes you can make that will make this a little easier to run. You shouldn’t normally need to explicitly mention the python interpreter on the command line. Your Python scripts should start with a "shebang" line, usually

#!/usr/bin/env python3

that tells the system where to find the interpreter. If you also make the script(s) executable with chmod +x main.py (commit the permission change to source control), then you can just run ./main.py and it will use "the default" Python.
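So, once that shebang is the first line of the file (a sketch):

# make the script executable, and commit the permission change
chmod +x main.py
# the kernel reads the shebang and picks the interpreter for you
./main.py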
The other change is to make "the default" Python have your dependencies. In your setup you can make the virtual environment part of the command search path, so the python3 that gets found is the one in the virtual environment. However, a Docker image is already an isolated environment, and there’s no risk of the "system" Python interfering with other applications. In a Docker context it’s common to not use a virtual environment at all and just install your dependencies into the "system" Python, still isolated inside this image.
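Both options as Dockerfile sketches (the /opt/env path comes from your Dockerfile; everything else is illustrative):

# Option 1: put the virtual environment first on $PATH, so plain
# "python3" (and the shebang above) resolve to /opt/env/bin/python3
ENV PATH="/opt/env/bin:$PATH"

# Option 2: skip the virtual environment entirely and install the
# dependencies into the image's own Python
RUN pip install --no-cache-dir -r /opt/requirements.txt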