I am making a multi-stage Docker image that uses Python's official image for the builder stage and Google's distroless image as the base for the runner stage. Before this, I tested a multi-stage build that uses Python's official image for both the builder and runner stages, as follows.
FROM python:3.11.4-slim AS builder-image
# avoid stuck build due to user prompt
ARG DEBIAN_FRONTEND=noninteractive
# create and activate virtual environment
# using final folder name to avoid path issues with packages
RUN python3.11 -m venv /home/myuser/venv
ENV PATH="/home/myuser/venv/bin:$PATH"
# install requirements
COPY requirements.txt .
RUN pip3 install --no-cache-dir wheel
RUN pip3 install --no-cache-dir -r requirements.txt
FROM python:3.11.4-slim AS runner-image
RUN useradd botuser
COPY /app /app
RUN chmod 755 /app && mkdir /files && chmod 744 /files
USER botuser
WORKDIR /tmp
ENV APP_TMP_DATA=/tmp
# activate virtual environment
COPY --from=builder-image /home/myuser/venv /home/myuser/venv
ENV VIRTUAL_ENV=/home/myuser/venv
ENV PATH="/home/myuser/venv/bin:$PATH"
CMD python3.11 /app/script_bot.py
This worked fine, so I proceeded to create a Dockerfile for a distroless build. The following is what I came up with.
FROM python:3.11.4-slim AS builder-image
# avoid stuck build due to user prompt
ARG DEBIAN_FRONTEND=noninteractive
# create and activate virtual environment
# using final folder name to avoid path issues with packages
RUN python3.11 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# install requirements
COPY requirements.txt .
RUN pip3 install --no-cache-dir wheel
RUN pip3 install --no-cache-dir -r requirements.txt
COPY /app /app
RUN chmod 755 /app && mkdir /files && chmod 744 /files
FROM gcr.io/distroless/static-debian12:nonroot AS runner-image
# Determine chipset architecture for copying python
ARG CHIPSET_ARCH=x86_64-linux-gnu
# required by lots of packages - e.g. six, numpy, wsgi
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libz.so.1 /lib/${CHIPSET_ARCH}/
# required by google-cloud/grpcio
COPY --from=builder-image /usr/lib/${CHIPSET_ARCH}/libffi* /usr/lib/${CHIPSET_ARCH}/
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libexpat* /lib/${CHIPSET_ARCH}/
# Copy python from builder
COPY --from=builder-image /usr/local/lib/ /usr/local/lib/
COPY --from=builder-image /usr/local/bin/python3.11 /usr/local/bin/python3.11
COPY --from=builder-image /etc/ld.so.cache /etc/ld.so.cache
COPY --from=builder-image /app /app
COPY --from=builder-image /files /files
WORKDIR /tmp
ENV APP_TMP_DATA=/tmp
# activate virtual environment
COPY --from=builder-image /opt/venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="/opt/venv/bin:$PATH"
ENTRYPOINT ["/usr/local/bin/python3.11", "/app/script_bot.py"]
However, this returned the error "exec /usr/local/bin/python3.11: no such file or directory" when the container was run.
I used this article as the guide for the regular multi-stage Python build, and this article as the guide for the distroless multi-stage Python build.
I tried changing every Python reference in the Dockerfile from python3.11 to python3 and python, but to no avail. I then ran ls -l /usr/local/bin/ in a container running python:3.11.4-slim and got the following output.
total 48
lrwxrwxrwx 1 root root 9 Aug 16 05:25 2to3 -> 2to3-3.11
-rwxr-xr-x 1 root root 102 Aug 16 05:25 2to3-3.11
lrwxrwxrwx 1 root root 5 Aug 16 05:26 idle -> idle3
lrwxrwxrwx 1 root root 8 Aug 16 05:25 idle3 -> idle3.11
-rwxr-xr-x 1 root root 100 Aug 16 05:25 idle3.11
-rwxr-xr-x 1 root root 226 Aug 16 05:26 pip
-rwxr-xr-x 1 root root 226 Aug 16 05:26 pip3
-rwxr-xr-x 1 root root 226 Aug 16 05:26 pip3.11
lrwxrwxrwx 1 root root 6 Aug 16 05:26 pydoc -> pydoc3
lrwxrwxrwx 1 root root 9 Aug 16 05:25 pydoc3 -> pydoc3.11
-rwxr-xr-x 1 root root 85 Aug 16 05:25 pydoc3.11
lrwxrwxrwx 1 root root 7 Aug 16 05:26 python -> python3
lrwxrwxrwx 1 root root 14 Aug 16 05:26 python-config -> python3-config
lrwxrwxrwx 1 root root 10 Aug 16 05:25 python3 -> python3.11
lrwxrwxrwx 1 root root 17 Aug 16 05:25 python3-config -> python3.11-config
-rwxr-xr-x 1 root root 14472 Aug 16 05:25 python3.11
-rwxr-xr-x 1 root root 3005 Aug 16 05:25 python3.11-config
-rwxr-xr-x 1 root root 213 Aug 16 05:26 wheel
Here we can see that python is symlinked to python3, which in turn is symlinked to python3.11, so I don't think the issue is with the name of the executable itself (please correct me if I'm wrong, though).
I also tried using the Docker image made by the author of the distroless Python image article, with the following Dockerfile.
FROM python:3.11.4-slim AS builder-image
# avoid stuck build due to user prompt
ARG DEBIAN_FRONTEND=noninteractive
# create and activate virtual environment
# using final folder name to avoid path issues with packages
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# install requirements
COPY requirements.txt .
RUN pip3 install --no-cache-dir -Ur requirements.txt
RUN mkdir /files
FROM al3xos/python-builder:3.10-debian11 AS runner-image
COPY --chmod=755 /app /app
COPY --from=builder-image --chmod=744 /files /files
WORKDIR /tmp
ENV APP_TMP_DATA=/tmp
# activate virtual environment
COPY --from=builder-image /opt/venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["/app/script_bot.py"]
This, however, returns the error "exec /app/script_bot.py: no such file or directory". I also tried activating the virtual environment (which I also didn't do in my first multi-stage build without the distroless runner base image) as suggested here, resulting in the following Dockerfile.
FROM python:3.11.4-slim AS builder-image
# avoid stuck build due to user prompt
ARG DEBIAN_FRONTEND=noninteractive
# create and activate virtual environment
# using final folder name to avoid path issues with packages
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# install requirements
COPY requirements.txt .
RUN pip3 install --no-cache-dir -Ur requirements.txt
RUN mkdir /files
FROM al3xos/python-builder:3.10-debian11 AS runner-image
COPY --chmod=755 /app /app
COPY --from=builder-image --chmod=744 /files /files
WORKDIR /tmp
ENV APP_TMP_DATA=/tmp
# activate virtual environment
COPY --from=builder-image --chmod=755 /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
RUN /opt/venv/bin/activate
CMD ["/usr/local/bin/python", "/app/script_bot.py"]
The resulting container returned a "ModuleNotFoundError: No module named 'geopy'" error when run. I then tried adapting the author's example Dockerfile for Python code that uses pandas and came up with the following Dockerfile.
FROM python:3.11.4-slim AS builder
WORKDIR /app
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
FROM al3xos/python-builder:3.10-debian11
COPY . /app
COPY --from=builder /home/monty/.local /home/monty/.local
ENV PYTHONPATH=/home/monty/.local/lib/python3.11/site-packages
WORKDIR /app
CMD ["script_bot.py"]
I don't understand how this or the original example could actually work; it gave me a "COPY failed: stat home/monty/.local: file does not exist" error. Did the example need some other configuration not mentioned in the article?
As much as I want to use the distroless Python image published on gcr.io, I can't really use it, since it's still experimental and not recommended for production. By the way, I built the images with DOCKER_BUILDKIT=1, and I'm creating a dockerized Telegram bot, if that matters. Also, is the Docker image published by the author of the [distroless multi-stage Python build](https://alex-moss.medium.com/creating-an-up-to-date-python-distroless-container-image-e3da728d7a80) likely to be more stable and/or secure than the Python distroless Docker image published on gcr.io?
2 Answers
You’re correct that if the python3.11 executable isn’t present or isn’t working in the final distroless image, then installing packages using pip might also be problematic. The issue you’re encountering with the missing Python executable suggests there may be an issue with how you’re copying the Python interpreter or its dependencies from the builder image to the final image.
Here are some steps you can take to troubleshoot and potentially resolve the issue:
Check the Python Executable in the Builder Image:
First, ensure that in your builder image, python3.11 is indeed available and functional. You can run some basic Python commands in your builder image to verify this. For example:
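A quick sanity check, assuming you build just the builder stage and tag it myapp-builder (the tag name here is only an example), might look like this:

docker build --target builder-image -t myapp-builder .
docker run --rm myapp-builder python3.11 --version
docker run --rm myapp-builder python3.11 -c "import sys; print(sys.executable)"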
If this doesn’t work as expected, it might indicate an issue with the builder image itself.
Explicitly Specify the Python Executable:
In your Dockerfile for the runner image, you can try explicitly specifying the Python executable as /usr/local/bin/python3.11 when setting the entry point. This ensures that the exact path is used:
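For example, matching the path the interpreter is copied to in your distroless Dockerfile:

ENTRYPOINT ["/usr/local/bin/python3.11", "/app/script_bot.py"]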
Verify Copy Paths:
Ensure that the paths you are using to copy files from the builder image to the runner image match the actual paths in the builder image. You can use the ls command in the builder image to list the files and directories and make sure you’re copying from the correct locations.
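For example, using the hypothetical myapp-builder image built above, you could list the directories you copy from in one go:

docker run --rm myapp-builder ls -l /usr/local/bin /usr/local/lib /opt/venv/bin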
Consider Using a Python Base Image:
Instead of attempting to copy the Python interpreter and libraries manually, you might consider using an official Python base image for the runner image as well. This can simplify your Dockerfile and reduce potential issues related to copying.
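A minimal sketch of such a runner stage, reusing the /opt/venv layout from your distroless attempt, could be:

FROM python:3.11.4-slim AS runner-image
RUN useradd botuser
COPY /app /app
# copy the pre-built virtual environment from the builder stage
COPY --from=builder-image /opt/venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="/opt/venv/bin:$PATH"
USER botuser
CMD ["python3.11", "/app/script_bot.py"]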
This approach uses the same Python version in both the builder and runner images, reducing potential compatibility issues.
If you’re still encountering issues, it’s essential to thoroughly review the paths, file permissions, and the content of your builder image to ensure that the necessary Python components are correctly copied. Additionally, you can use debugging techniques like running containers with shell access (docker run -it) to explore the file system and identify any problems.
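For example, to get a shell inside the builder stage image built earlier:

docker run -it --rm myapp-builder /bin/bash

Note that the distroless runner image ships without a shell; the distroless project publishes :debug image tags that include a minimal busybox shell, which can be useful for this kind of inspection.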
When you know that the actual binary exists, this error typically means the kernel can't find the appropriate dynamic loader. If we boot into a python:3.11.4-slim image and run ldd to show dependencies, we see that python3.11 requires:
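The lines relevant here look roughly like this (output trimmed; exact versions and load addresses will differ):

$ ldd /usr/local/bin/python3.11
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
        /lib64/ld-linux-x86-64.so.2 (0x...)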
Your Dockerfile is failing to copy the dynamic loader (ld-linux...) as well as both libc and libm. If I add the following lines to your Dockerfile…
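(A sketch of those lines, reusing the CHIPSET_ARCH build arg from your Dockerfile; paths shown are for x86_64.)

# C library, math library and the dynamic loader required by python3.11
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libc.so.6 /lib/${CHIPSET_ARCH}/
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libm.so.6 /lib/${CHIPSET_ARCH}/
COPY --from=builder-image /lib64/ld-linux-x86-64.so.2 /lib64/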
…then I am able to successfully start the Python interpreter.
The Dockerfile I used for testing looks like this (I’ve removed the parts relating to your application and I’ve added the above lines):
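Reconstructed as a sketch (with the entrypoint reduced to printing the Python version so the container runs the interpreter and exits immediately), it would look something like:

FROM python:3.11.4-slim AS builder-image

FROM gcr.io/distroless/static-debian12:nonroot AS runner-image
ARG CHIPSET_ARCH=x86_64-linux-gnu
# shared libraries the interpreter and common packages link against
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libz.so.1 /lib/${CHIPSET_ARCH}/
COPY --from=builder-image /usr/lib/${CHIPSET_ARCH}/libffi* /usr/lib/${CHIPSET_ARCH}/
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libexpat* /lib/${CHIPSET_ARCH}/
# the pieces that were missing: libc, libm and the dynamic loader
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libc.so.6 /lib/${CHIPSET_ARCH}/
COPY --from=builder-image /lib/${CHIPSET_ARCH}/libm.so.6 /lib/${CHIPSET_ARCH}/
COPY --from=builder-image /lib64/ld-linux-x86-64.so.2 /lib64/
# the Python installation itself
COPY --from=builder-image /usr/local/lib/ /usr/local/lib/
COPY --from=builder-image /usr/local/bin/python3.11 /usr/local/bin/python3.11
COPY --from=builder-image /etc/ld.so.cache /etc/ld.so.cache
ENTRYPOINT ["/usr/local/bin/python3.11", "--version"]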
And running an image built from that Dockerfile looks like:
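With the version-printing entrypoint in the sketch above, for example:

$ docker build -t distroless-python-test .
$ docker run --rm distroless-python-test
Python 3.11.4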