I am trying to add the possibility of using the host machine's SSH keys to fetch private repositories, but I haven't managed to do so in a secure manner. I don't want to copy SSH keys at build time, run as root inside the container, or set the UID and GID to match the user on the host machine, because all of these pose security risks. The approach I am pursuing is SSH agent socket forwarding. I have it working, but only with sudo, which is unacceptable for me. I have been at this for ages and am getting increasingly fed up, because I can find plenty of solutions, but none of them follow good practices.
The command I use to run the container:
docker run -it --rm -v $(dirname $SSH_AUTH_SOCK):/home/slomka/ssh-agent/ -e SSH_AUTH_SOCK=/home/slomka/ssh-agent/$(basename $SSH_AUTH_SOCK) -v ~/.ssh/:/home/slomka/.ssh example-container
Then I run:
socat -dddd UNIX-LISTEN:/home/slomka/.ssh/socket,fork,user=slomka,group=build,mode=777 UNIX-CONNECT:/home/slomka/ssh-agent/
And then I set:
export SSH_AUTH_SOCK=/home/slomka/.ssh/socket
(I also tried setting the owner and group of the /ssh-agent directory and its contents to the ones used in the container.)
But I get connection refused:
2023/09/21 09:26:31 socat[196] I close(5)
2023/09/21 09:26:31 socat[196] N opening connection to AF=1 "/home/slomka/ssh-agent/"
2023/09/21 09:26:31 socat[196] I socket(1, 1, 0) -> 5
2023/09/21 09:26:31 socat[196] E connect(5, AF=1 "/home/slomka/ssh-agent/", 25): Connection refused
2023/09/21 09:26:31 socat[196] N exit(1)
2023/09/21 09:26:31 socat[196] I shutdown(6, 2)
2023/09/21 09:26:31 socat[196] I shutdown(5, 2)
2023/09/21 09:26:31 socat[165] N childdied(): handling signal 17
2023/09/21 09:26:31 socat[165] I childdied(signum=17)
2023/09/21 09:26:31 socat[165] I childdied(17): cannot identify child 196
2023/09/21 09:26:31 socat[165] I waitpid(): child 196 exited with status 1
2023/09/21 09:26:31 socat[165] I waitpid(-1, {}, WNOHANG): No child processes
2023/09/21 09:26:31 socat[165] I childdied() finished
2 Answers
Don’t do this.
I think you are mixing up containers with virtual machines, and what you really need is a VM.
A container is not a VM and shouldn't be treated as one, even if you can exec into it, get a shell, and do things as you would on a bare-metal or VM OS. A container is "just" a process (and its child processes) running natively in an isolated area on the host with the help of Linux namespaces. So you build your own image, start the container from it, and let it serve its sole purpose: running the one process it is supposed to run.
For example, create this Dockerfile:
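The original answer's Dockerfile is not shown; a minimal sketch, assuming a `node:18-alpine` base image and an `index.js` entry point:

```dockerfile
# Minimal sketch: a Node.js image whose one process is `node index.js`.
FROM node:18-alpine
WORKDIR /app
COPY index.js .
CMD ["node", "index.js"]
```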
Put an index.js with this content into the same directory as the Dockerfile:
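The original index.js is not shown either; a minimal sketch that just prints a greeting (the message text is made up for the demo):

```javascript
// index.js — minimal sketch: print a greeting so `docker logs` has output.
function greeting() {
  // Hypothetical message; any output works for the demo.
  return 'Hello from Node.js inside the container!';
}

console.log(greeting());
```

A real service would keep the process alive (e.g. an HTTP server or a `setInterval` loop); this sketch exits after one message, so the container stops immediately.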
Build the image:
docker build . --progress=plain -f Dockerfile -t hello-node:1.0.0
Start a container from the image with the --rm flag so the container is removed when you stop it with Ctrl+C:
docker run --rm hello-node:1.0.0
You can see the container's output in your terminal.
Or you can make it run in the background with the -d (or --detach) flag, naming it hello-node-container:
docker run -d --name hello-node-container hello-node:1.0.0
and follow its logs with docker logs hello-node-container -f
You never want to SSH into a container and do things there, because containers are, by design, ephemeral and must be treated as such. So if you want to modify your container, you update the image (by modifying the Dockerfile and building a new one), stop the old container, and start a new container from the new image.
You can, however, docker exec into a running container with docker exec -it hello-node-container bash and do things there, but only to debug and investigate problems. Once you know what the problem is, you modify the Dockerfile, create a new image, stop the bad container, and start a new container from the new image.

This command starts a container using the ubuntu image, passing in the SSH keys held by ssh-agent:
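The command itself is not shown in the answer; a common pattern for this, assuming a Linux host where $SSH_AUTH_SOCK points at the running agent's socket (on Docker Desktop for Mac, the host socket path differs):

```shell
# Sketch: bind-mount the host's ssh-agent socket into the container and
# point SSH_AUTH_SOCK at it, so ssh inside the container can use the
# host's keys without the keys themselves ever entering the container.
docker run -it --rm \
  -v "$SSH_AUTH_SOCK:/ssh-agent" \
  -e SSH_AUTH_SOCK=/ssh-agent \
  ubuntu bash
```

Inside the container, `ssh-add -l` should then list the host agent's keys, provided the container user has permission to access the mounted socket.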