
I built this Docker image, which I would run like this:

docker run -e AWS_ACCESS_KEY_ID=<my-access-key> -e AWS_SECRET_ACCESS_KEY=<my-secret-access-key> -it --rm -p 9000:8080 etl_pipeline:latest

The -e parameters are what I'm mostly looking at. Running this image locally works fine, but I'm running it as a Lambda function that uses the ECR image, and there I get a permission forbidden error when reading from my S3 bucket. At first I thought something was wrong with the permissions on my bucket, but I changed those and they should be fine. Then I remembered that locally I run the container with these -e parameters, which the Lambda invocation does not get. So how can I pass these parameters to my ECR image, or should I change my Dockerfile? It currently looks like this:

ENV POETRY_VERSION=1.4.0

RUN pip install "poetry==$POETRY_VERSION"

WORKDIR ${LAMBDA_TASK_ROOT}

COPY . ${LAMBDA_TASK_ROOT}/

ENV AWS_ACCESS_KEY_ID=
ENV AWS_SECRET_ACCESS_KEY=

RUN poetry config virtualenvs.create false 
RUN poetry install --only main --no-interaction --no-ansi

CMD [ "app.handler" ]

So I would guess that because I run it on AWS that there is some way of including my access keys in there. All help is greatly appreciated!

2 Answers


  1. Chosen as BEST ANSWER

    As posted in the answer by @ctgopinaath, it is indeed not a good idea to put your AWS access keys into your Docker image as variables. If, for example, you want your Lambda function to access your S3 bucket like I needed, go to the IAM console and create a policy that grants access to your bucket. It could look like this:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListObjectsInBucket",
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::YOUR_BUCKET"
                ]
            },
            {
                "Sid": "AllObjectActions",
                "Effect": "Allow",
                "Action": "s3:*Object",
                "Resource": [
                    "arn:aws:s3:::YOUR_BUCKET/*"
                ]
            }
        ]
    }
    

    This makes sure you can perform all the needed actions on your S3 bucket (if you only need read access, don't grant full object permissions as in the example above). Then attach this policy to the execution role of your AWS Lambda function: first check which execution role your function uses, then go to the IAM console, click your newly created policy -> Attach, and search for that execution role. All should work now.
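    The console steps above can also be scripted. A minimal sketch with the AWS CLI, where the function name, role name, policy name, and account ID are placeholders you would replace with your own:

    ```shell
    # Look up the execution role attached to the Lambda function
    # (function name is a placeholder).
    aws lambda get-function-configuration \
        --function-name my-etl-function \
        --query Role --output text

    # Attach the newly created S3 policy to that execution role
    # (role name, account ID, and policy name are placeholders).
    aws iam attach-role-policy \
        --role-name my-etl-execution-role \
        --policy-arn arn:aws:iam::123456789012:policy/my-s3-bucket-policy
    ```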


  2. It is highly discouraged to use an IAM access key and secret key in the container image; anyone who obtains the image also obtains long-lived credentials, which is a serious security risk.

    Instead, use IAM roles with appropriate policies (permissions equivalent to the access key), attached at the time the container is provisioned — the Lambda execution role in this case, or the task/pod role when running through ECS or EKS.
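    Once the execution role carries the S3 permissions, the handler itself needs no keys at all — inside Lambda, boto3 picks up temporary credentials from the role automatically. A minimal sketch (bucket and key names are made-up placeholders), assuming the role grants s3:GetObject:

    ```python
    def handler(event, context):
        # boto3 ships with the Lambda Python runtime; imported lazily here
        # so this module also loads in environments without it installed.
        import boto3

        # No AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY anywhere: the default
        # credential chain finds the execution role's temporary credentials.
        s3 = boto3.client("s3")
        obj = s3.get_object(Bucket="YOUR_BUCKET", Key="input/data.csv")
        body = obj["Body"].read()
        return {"bytes_read": len(body)}
    ```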
