This is the Dockerfile that I’m using:
FROM google/cloud-sdk:latest
COPY . /app
WORKDIR /app
# Copy your credentials file
COPY project-key.json /app/project-key.json
# Set the environment variable for the credentials
ENV GOOGLE_APPLICATION_CREDENTIALS /app/project-key.json
# Download the file from GCS using the gsutil command
RUN gsutil cp gs://project-id/file.txt /app/file.txt
RUN apt-get update && apt-get install -y python3 python3-pip git
RUN pip install -r /app/requirements.txt
EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
It works when building from Cloud Shell, but not when building with Cloud Build. I get the following error:
ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).
The command '/bin/sh -c gsutil cp gs://project-id/file.txt /app/file.txt' returned a non-zero code: 1
Which credentials/roles are missing, and where?
2 Answers
Before you run your Docker step, do this:
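For example, a separate gsutil step in cloudbuild.yaml can download the file before the Docker build runs. That step executes with the Cloud Build service account's credentials, so no key file is needed (a sketch; the image tag is a placeholder):

steps:
  - name: 'gcr.io/cloud-builders/gsutil'
    args: ['cp', 'gs://project-id/file.txt', 'file.txt']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']

The Dockerfile then replaces the RUN gsutil cp line with a plain COPY file.txt /app/file.txt, so the build itself never needs Cloud Storage credentials.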
There is another approach: interface with Cloud Storage from inside the Docker build itself, within the Cloud Build pipeline. The cloudbuild.yaml would have one docker build step; remember to add the parameter --network=cloudbuild, which lets commands running during the build reach the metadata server and authenticate as the Cloud Build service account.
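A minimal cloudbuild.yaml sketch (the image name gcr.io/$PROJECT_ID/my-app is a placeholder):

steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '--network=cloudbuild'
      - '-t'
      - 'gcr.io/$PROJECT_ID/my-app'
      - '.'
images:
  - 'gcr.io/$PROJECT_ID/my-app'

With the cloudbuild network attached, the RUN gsutil cp step inside the Dockerfile authenticates as the build's service account instead of as an anonymous caller, which is exactly what the 401 error is complaining about. The service account still needs the storage.objects.get permission on the bucket, e.g. via roles/storage.objectViewer.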
The Dockerfile would have two parent images (python:3.9 and gcr.io/cloud-builders/gsutil) in a multi-stage build, so it can work with both Python and GCS in the same environment at build time; see the sketch after this list.
Run all the commands to install Python libs etc.
Run all the commands to interface with Cloud Storage.
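A sketch of such a multi-stage Dockerfile, reusing the paths from the question (the stage layout and file locations are assumptions):

# Stage 1: fetch from GCS; this authenticates as the Cloud Build
# service account when the build runs with --network=cloudbuild
FROM gcr.io/cloud-builders/gsutil AS gcs
RUN gsutil cp gs://project-id/file.txt /file.txt

# Stage 2: the Python runtime image
FROM python:3.9
WORKDIR /app
COPY . /app
# Pull the downloaded object out of the first stage
COPY --from=gcs /file.txt /app/file.txt
RUN pip install -r /app/requirements.txt
EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]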
In my scenario I created a zip with all the Python files and library dependencies, then sent it to GCS (the zip file is used with Dataproc).
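The upload side of that scenario might look like the following RUN lines in the gsutil stage (bundle.zip, libs/ and the bucket path are placeholders):

RUN apt-get update && apt-get install -y zip
RUN zip -r /bundle.zip main.py libs/
RUN gsutil cp /bundle.zip gs://my-bucket/dataproc/bundle.zip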