I am working on a .NET 6.0 application that will run in an EKS cluster, but I am developing locally using Docker Desktop and Kubernetes. Whilst I can pass in AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, etc. via the Helm chart as environment variables, the issue I am facing is that I keep seeing "The security token included in the request is expired". I can update the values and redeploy, but clearly that is a pain to do.
When this is running in EKS it will use IAM Roles etc., so this isn't an issue there, but I just wondered if anyone has a solution for doing this locally. If I run the code in VS rather than Docker, it picks up the credentials from the AWS credentials file and runs fine (even though I haven't updated the token).
In the code snippet above, the environment variable is set to local when running in my Docker Desktop Kubernetes.
Any ideas if there is a solution for this?
2 Answers
I have finally solved this; I was working on other aspects of the project and have only just come back to the problem.
As I am deploying with a Helm chart, I needed to set up the volume in the deployment. When mapping the volume, Docker Desktop requires the host folder path to be prefixed with:

/run/desktop/mnt/host/

so the YAML will look something like this:
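A sketch of the relevant deployment section, assuming a Windows host where the credentials live under `C:\Users\me\.aws` (the container name, profile name, and paths are placeholders — adjust them to your setup):

```yaml
spec:
  template:
    spec:
      containers:
        - name: myapp
          env:
            - name: AWS_PROFILE
              value: docker-local
            - name: AWS_SHARED_CREDENTIALS_FILE
              value: /aws/credentials
          volumeMounts:
            # Not read-only: the AWS SDK writes a value back to the file.
            - name: aws-credentials
              mountPath: /aws
      volumes:
        - name: aws-credentials
          hostPath:
            # Docker Desktop requires host paths to be prefixed
            # with /run/desktop/mnt/host/
            path: /run/desktop/mnt/host/c/Users/me/.aws
            type: Directory
```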
Note the volume is not read-only: it seems the AWS code writes a value back to the file. As suggested, I needed to provide the desired profile in the AWS_PROFILE environment variable, and you can also specify the credentials file location with the env var AWS_SHARED_CREDENTIALS_FILE.
Also, if your profile has any SSO values it doesn't seem to work, so it makes sense to have a profile specifically for your containers, like the following.
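A minimal example of such a credentials-file profile; the profile name and all the key values are placeholders — paste in your own temporary credentials:

```ini
# ~/.aws/credentials -- a plain profile with no SSO settings
[docker-local]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_session_token     = xxxxxxxxxxxxxxxxxxxx
```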
I still need to refresh these when the values expire, but it means I only need to do it in one place rather than modifying environment variables in each Helm chart.
Why can't you create a test user in AWS and use its key and secret for your local testing? As it's a user, and not a role that you need to assume, the key and secret for the test user will never expire.