In SSH, there is an agent forwarding option to use local credentials, e.g. for working with services like GitHub or for chaining connections onward to log in to other servers, whereby the private key stays under the protection of the local OS keychain and no trusted private key material is ever present on the "jumpbox" or remote environment. Is anything analogous possible for AWS credentials?
For example, if connecting to an EC2 Instance with AWS SSM (or EC2 Instance Connect, or SSH via an SSM proxy tunnel, etc) is there any way to automatically transfer temporary AWS credentials to the remote session (or any way to proxy AWS SDK authentication calls back to the local system)?
I’m looking for an approach that avoids having to go through the user authentication process twice (once locally and then again inside the remote session, potentially with MFA each time). Ideally it would only provide temporary credentials to the remote session, and automate the temporary credential rotation so it is transparent to applications (like how IRSA automatically rotates temporary AWS keys for AWS SDK based processes on EKS pods, except that IRSA uses k8s OIDC rather than depending on an active connection from an authenticated user). How could this be implemented?
2 Answers
Background
There is a standard chain of mechanisms that the AWS SDKs (including the CLI) try in order to resolve credentials: environment variables and the shared config files are checked first (for a static access key, or for the details of an (OIDC) identity provider such as SSO), before falling back to the EC2 or ECS metadata service.
There is also provision (the credential_process setting in the config file) for specifying an arbitrary external command to obtain the credentials.
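For illustration, a profile in ~/.aws/config can delegate credential lookup to any command that prints the expected JSON document (the profile name and helper path below are made up):

```ini
# ~/.aws/config -- hypothetical profile delegating to an external helper
[profile tunnelled]
credential_process = /usr/local/bin/fetch-remote-creds.sh
# The helper must print JSON of the form:
# {"Version": 1, "AccessKeyId": "...", "SecretAccessKey": "...",
#  "SessionToken": "...", "Expiration": "2030-01-01T00:00:00Z"}
```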
Note that SSH does have a facility for propagating some environment variables to the remote session, but simply passing a static AWS access key this way would be limiting (there would be no way to refresh short-term tokens, nor to switch interactively between multiple AWS accounts/roles).
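For reference, that propagation looks roughly like the following, and only works if the server's sshd_config has a matching AcceptEnv entry:

```sh
# Forward any AWS_* variables already set locally (static keys passed this way cannot refresh).
ssh -o "SendEnv=AWS_*" ec2-user@remote-host
```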
Reference: Generic SDK Credentials configuration docs
Solution
One approach:

1. Script a local service (e.g., a simple Flask app invoking AWS STS) that produces temporary credentials on demand.
2. Connect to the remote instance by SSH proxied via SSM (potentially using EC2 Instance Connect to temporarily upload an SSH public key), and open a reverse SSH tunnel so that the remote system can reach the local credentials service for as long as the session is active.
3. Modify the AWS config on the remote system so it requests credentials tunnelled from the local service (e.g. via curl to localhost).

To prevent other simultaneous users of the remote system from also being able to retrieve credentials, an improvement would be to have SSH pass a bearer token or nonce into the remote session (e.g. as an env var). Another enhancement would be for the local service utility to support revoking the temporary credentials and switching (interactively, from a UI on the local system) between different AWS accounts/roles.
This would allow the developer to log in just once for their organisation (leveraging MFA and SSO), then perform various work from remote interactive compute environments, retaining convenient granular control of delegated access and minimising exposure to credential leaks.
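Concretely, the pieces above might look roughly like the sketch below. Everything here is illustrative rather than definitive: the port number, the CREDS_BEARER_TOKEN and REMOTE_ROLE_ARN variables, and the file paths are invented, and it assumes the laptop is already authenticated (e.g. via aws sso login) and allowed to call sts:AssumeRole on the delegation role.

```python
# local_creds_service.py -- illustrative sketch only (names, port, and env vars are invented).
import os
from flask import Flask, abort, jsonify, request
import boto3

app = Flask(__name__)
BEARER_TOKEN = os.environ.get("CREDS_BEARER_TOKEN", "")  # nonce shared with the remote session


@app.route("/creds")
def creds():
    # Refuse callers that do not present the expected bearer token.
    if BEARER_TOKEN and request.headers.get("Authorization") != f"Bearer {BEARER_TOKEN}":
        abort(403)
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=os.environ["REMOTE_ROLE_ARN"],   # role delegated to the remote session
        RoleSessionName="remote-dev-session",
        DurationSeconds=3600,
    )
    c = resp["Credentials"]
    # This JSON shape matches the credential_process contract shown earlier.
    return jsonify(
        Version=1,
        AccessKeyId=c["AccessKeyId"],
        SecretAccessKey=c["SecretAccessKey"],
        SessionToken=c["SessionToken"],
        Expiration=c["Expiration"].isoformat(),
    )


if __name__ == "__main__":
    # Bind to loopback only; the SSH reverse tunnel is what exposes it to the remote host.
    app.run(host="127.0.0.1", port=8599)
```

The connection itself could then proxy SSH over SSM and reverse-forward the credentials port (AWS-StartSSHSession is the standard SSM document for SSH proxying; the token variable only arrives if the remote sshd has a matching AcceptEnv entry):

```sh
export CREDS_BEARER_TOKEN="$(openssl rand -hex 16)"
ssh -o ProxyCommand="sh -c 'aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p'" \
    -o SendEnv=CREDS_BEARER_TOKEN \
    -R 8599:127.0.0.1:8599 \
    ec2-user@i-0123456789abcdef0
```

On the remote instance, the config would point credential_process at a tiny wrapper that curls through the tunnel (again, paths are hypothetical):

```sh
#!/bin/sh
# /home/ec2-user/bin/fetch-tunnelled-creds.sh -- referenced from credential_process in ~/.aws/config
exec curl -sf -H "Authorization: Bearer ${CREDS_BEARER_TOKEN}" http://127.0.0.1:8599/creds
```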
You can give the EC2 instance an instance role (via an instance profile) with IAM permissions to access the required services, so that no credentials need to be transferred from your local machine at all. When you run AWS CLI commands or SDK programs on the EC2 instance, they automatically pick up that role's credentials from the instance metadata service. This is more secure than copying credentials from your local machine to the remote instance.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
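If you go this route, the setup is roughly: create a role trusted by EC2, attach the needed policy, wrap it in an instance profile, and associate that profile with the instance. A hedged boto3 sketch (role name, policy ARN, and instance ID are placeholders, and newly created profiles can take a few seconds to propagate before association succeeds):

```python
# attach_instance_role.py -- illustrative sketch only; names and IDs are placeholders.
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy letting the EC2 service assume the role on the instance's behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="dev-instance-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="dev-instance-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")  # example policy

# An instance profile is the container that actually gets attached to the instance.
iam.create_instance_profile(InstanceProfileName="dev-instance-profile")
iam.add_role_to_instance_profile(InstanceProfileName="dev-instance-profile",
                                 RoleName="dev-instance-role")

ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "dev-instance-profile"},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```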