I have a GitHub workflow in which I run a container, and inside that container a set of tests that require access to a private AWS S3 bucket. The trouble is that inside the container I get an error about missing credentials, while outside the container everything works fine.
Here’s how I configure the credentials:
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: "arn:aws:iam::ACCOUNTID:role/MYROLE"
    aws-region: "us-east-1"
```
Now if I access my S3 bucket from the workflow, it works just fine:
```yaml
- name: Test S3 credentials
  run: aws s3 ls s3://misery/
```
However, the same within a Docker container will result in an error:
```yaml
- name: Run tests in Docker container
  run: docker run my-image /bin/bash -c "aws s3 ls s3://misery/"
```
Reading this SO post, I thought getting the credentials should happen automagically, but apparently I am missing something. Any clues?
2 Answers
It turns out one way to pass the credentials is to pass them as environment variables. These are produced by `aws-actions/configure-aws-credentials@v4` and are available under `env`. Example:
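A minimal sketch of what such a step might look like, assuming the image and bucket names from the question. Since the action exports the credentials into the job environment, `docker run -e VAR` (with no value) is enough to forward each variable from the runner's shell into the container:

```yaml
- name: Run tests in Docker container
  run: >
    docker run
    -e AWS_ACCESS_KEY_ID
    -e AWS_SECRET_ACCESS_KEY
    -e AWS_SESSION_TOKEN
    -e AWS_REGION
    my-image /bin/bash -c "aws s3 ls s3://misery/"
```

Note that `docker run -e VAR` without an `=value` copies `VAR` from the surrounding environment, which here is the job environment populated by the `configure-aws-credentials` action.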
Importantly, reading the credentials via step outputs would NOT work: when printing `steps.creds.outputs`, it would come up empty. (By default the action does not write the credentials to its step outputs; that requires setting the `output-credentials: true` input.)

---

You can indeed use the `aws-actions/configure-aws-credentials` action with your AWS IAM role, as it can generate the credentials as outputs (`aws-access-key-id`, `aws-secret-access-key`, `aws-session-token`). Doing so, you can pass those outputs as environment variables through the `docker run` command, by referencing the `outputs` of the `Configure aws credentials` step via its id (e.g. `steps.aws-cred.outputs.aws-access-key-id`) in the `Run tests` step.

Note that if you are using a self-hosted runner with AWS credentials already managed there, you can instead pass the environment variables directly from the GitHub context. Example:
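A sketch of such a step, assuming the credentials are already present in the job environment on the self-hosted runner (the image and bucket names are taken from the question; the exact variable names depend on how the runner is configured):

```yaml
- name: Run tests in Docker container
  run: >
    docker run
    -e AWS_ACCESS_KEY_ID=${{ env.AWS_ACCESS_KEY_ID }}
    -e AWS_SECRET_ACCESS_KEY=${{ env.AWS_SECRET_ACCESS_KEY }}
    -e AWS_SESSION_TOKEN=${{ env.AWS_SESSION_TOKEN }}
    my-image /bin/bash -c "aws s3 ls s3://misery/"
```

The same shape works for the step-outputs approach: replace `env.AWS_ACCESS_KEY_ID` with `steps.aws-cred.outputs.aws-access-key-id` (and likewise for the other variables), given the configure step has the id `aws-cred` and `output-credentials: true` set.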