My question is probably a Docker networking one, but I'm not entirely sure; it may be a broader AWS question.
We are migrating part of the company infrastructure to AWS, and it has been set up by an external vendor. They have set up an MS SQL RDS server that hosts our databases on an isolated subnet. So, what we need to do is port-forward port 1433 of the RDS server to the computer we are working from (whether it's our own laptop or a local VM). We do this by communicating with a bastion host EC2 instance (that runs on the same subnet) with the command:

    aws ssm start-session --region $region --target $bastionID --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters portNumber="$remoteport",localPortNumber="$localport",host="$remotehost"

Then we are able to connect to the database by simply giving localhost:1433 as the address.
The above command runs in a terminal, and the session times out after an hour, which is the default.
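(As a side note: one way around the timeout is simply to restart the session in a loop. A minimal sketch, assuming the same shell variables as in the command above:

    # Re-establish the SSM port-forwarding session whenever it times out.
    # Assumes $region, $bastionID, $remoteport, $localport and $remotehost
    # are set as in the command above.
    while true; do
        aws ssm start-session --region "$region" --target "$bastionID" \
            --document-name AWS-StartPortForwardingSessionToRemoteHost \
            --parameters portNumber="$remoteport",localPortNumber="$localport",host="$remotehost"
        echo "Session ended, reconnecting in 5 seconds..." >&2
        sleep 5
    done

)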
In order to test on AWS with all the cloud resources, we need to push to our private repository, on a branch that is part of a deployment pipeline. That pipeline then builds a Docker image, which is pushed to ECR (Elastic Container Registry). Then ECS (Elastic Container Service) spawns a new container from the new image, and we can finally see whether our code changes worked or not. All this is done on the development account/environment.
So, the question is: can we bypass this entire process by making code changes locally (laptop, on-prem Linux VM), then building the Docker image and spawning a container that somehow 'thinks' it is running on AWS instead of locally, so that it can communicate with the actual RDS instance and we can test as if it were really running on AWS?
I have already tried to run the port-forwarding command on my own laptop and then use localhost:1433 as the database address in the docker-compose.yml file, in the hopes that I could pass it as an environment variable, but that didn't work. My Docker networking interface was set to host, which I guess is the default. I also tried bridge, but neither of them worked. Most of my attempts were on an on-prem Ubuntu VM.
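For reference, this is roughly how I understand the host-network variant is supposed to be wired; a minimal sketch, assuming a Linux host and a hypothetical image name myapp whose application reads DB_HOST/DB_PORT environment variables:

    # With --network=host the container shares the host's network stack,
    # so 127.0.0.1:1433 inside the container IS the SSM tunnel on the host.
    # (Host networking works this way on Linux hosts; myapp, DB_HOST and
    # DB_PORT are hypothetical names.)
    docker run --network=host -e DB_HOST=127.0.0.1 -e DB_PORT=1433 myapp
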
Is it possible to do this? Thank you in advance for your time and any pointers.
2 Answers
Generally, all is possible. But first of all: --network=host should not be the default; it breaks the concept of isolation. Without network=host you can't run port-forward ... on your local machine and then spin up a container that can use the forwarded port.

As already written in the comments, localhost inside your Docker container doesn't point to your local machine. Docker uses an isolated network, and normally you have to use the IP of your docker0 network device (e.g. 172.17.0.1).

Anyway, on to your problem: it seems to be the same as "Cannot connect to RDS from inside a docker container, I can from the host, and I can from a local docker container".
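If you want to keep the default bridge network, one workaround is to relay the tunnel onto the docker0 address, since the session-manager-plugin binds its local port to 127.0.0.1 only. A minimal sketch with socat (the 172.17.0.1 address and port 1433 are the usual defaults, not guaranteed on your setup):

    # Listen on the docker0 address and forward each connection to the
    # SSM tunnel bound to 127.0.0.1. Containers on the default bridge
    # can then reach the database at 172.17.0.1:1433.
    socat TCP-LISTEN:1433,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:1433
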
Running in ECS, the Docker container will be in a subnet with access to the RDS subnet. Inside the container you need to connect to the database via the RDS DNS name, not via localhost. You can find the RDS DNS name in the AWS Console; it should look like mydb.<aws_account_id>.<aws_region>.rds.amazonaws.com.
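You can also look the endpoint up with the AWS CLI; a minimal sketch, assuming a hypothetical DB instance identifier mydb:

    # Print the RDS endpoint address (the DNS name to connect to).
    aws rds describe-db-instances \
        --db-instance-identifier mydb \
        --query 'DBInstances[0].Endpoint.Address' \
        --output text
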
Then, how do you build and test this Docker container locally?
There are two options:

1. Make the database host configurable (for example, via an environment variable), so that it points to the forwarded local port when running locally and to the RDS DNS name when running in ECS.

2. Add an entry to your /etc/hosts file that maps the RDS DNS name to the forwarded local port:

    127.0.0.1 mydb.<aws_account_id>.<aws_region>.rds.amazonaws.com

This allows you to use the same RDS DNS name to connect to the database from a local laptop and from ECS.

There could also be potential issues where you can connect to the database locally (using the RDS DNS name) but it doesn't work from ECS. In that case, check that the ECS task's security group is allowed by the RDS instance's security group and that the two subnets can route to each other.
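To use the hosts-entry approach (option 2) inside a locally run container without touching the image, the alias can be injected at run time. A minimal sketch, assuming Docker 20.10+ and a hypothetical image myapp; note that the tunnel must listen on an address the container can reach (see the socat relay in the first answer), since bridge containers cannot see 127.0.0.1 on the host:

    # Map the RDS DNS name to the host's gateway inside the container, so
    # the app can use the real RDS name while actually hitting the SSM
    # tunnel running on the host. Placeholders as in the hosts entry above.
    docker run \
        --add-host "mydb.<aws_account_id>.<aws_region>.rds.amazonaws.com:host-gateway" \
        myapp
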