
I’ve got a database running in a private network (say IP 1.2.3.4).

On my own computer, I can follow these steps to access the database:

  • Start a Docker container using something like docker run --privileged --sysctl net.ipv4.ip_forward=1 ...
  • Get the container IP
  • Add a routing rule, such as ip route add 1.2.3.4/32 via $container_ip

And then I’m able to connect to the database as usual.
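For reference, here's a condensed sketch of those steps as a script. The gateway image name is hypothetical; it stands in for whatever image you use that can reach 1.2.3.4 and forwards traffic:

    # Hypothetical image that can reach the private network and forwards traffic
    container_id=$(docker run -d --privileged --sysctl net.ipv4.ip_forward=1 my-vpn-gateway)

    # Get the container IP on the default bridge network
    container_ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' "$container_id")

    # Route traffic for the database IP through the container
    sudo ip route add 1.2.3.4/32 via "$container_ip"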

I wonder if there’s a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don’t know if this helps in any way.

PS: I’m aware of the sidecar pattern, but I don’t think this would be ideal for our use case, as our jobs are short-lived tasks, and we are not able to run multiple "gateway" containers at the same time.

4 Answers


  1. I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.

    You can create a GKE cluster in a fully private network and run the applications that need to be private in that cluster. Access to such a cluster is only possible when explicitly granted, much like the commands in your question, except the access is now granted through the cloud platform (e.g. VPC Service Controls, a bastion host, etc.), so there is no need to "route traffic through a specific pod in Kubernetes for certain IPs". If you have to run everything in one cluster, a fully private cluster likely won't work for you; in that case you can use a network policy to control access to your database pod.
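    A rough sketch of that network-policy approach (the labels, policy name, and database port below are assumptions, not anything from the question):

        kubectl apply -f - <<EOF
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: db-access
        spec:
          podSelector:
            matchLabels:
              app: db              # the database pod
          ingress:
          - from:
            - podSelector:
                matchLabels:
                  role: db-client  # only pods with this label may connect
            ports:
            - protocol: TCP
              port: 5432           # assumed database port
        EOF

    Note that on GKE this only takes effect if network policy enforcement is enabled on the cluster.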

  2. GKE doesn’t support the use case you mentioned, @gabriel Milan.

    What’s your requirement? Do you need to know which IP the pod will use to reach the database so you can open a firewall for it?

  3. Replying here as the comments have a limited character count.

    Unfortunately GKE doesn’t support that use case.

    However, you have a couple of options:

    • Option #1: Create a dedicated node pool with a couple of nodes and force the pods to be scheduled onto those nodes using taints and tolerations [1]. Use the IP addresses of these nodes in your firewall rules (see the sketch below the references).
    • Option #2: Install a service mesh like Istio and use its egress gateway [2] to route traffic toward your on-prem system, forcing the gateways to be deployed on a specific set of nodes so you have a known IP address. This is quite complicated as a solution.

    [1] https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

    [2] https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/
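    A minimal sketch of Option #1 (the cluster, pool, label, and image names are all made up for illustration):

        # Dedicated node pool, tainted so that only tolerating pods land on it
        gcloud container node-pools create egress-pool \
          --cluster=my-cluster --num-nodes=2 \
          --node-taints=dedicated=egress:NoSchedule \
          --node-labels=pool=egress

        # A job pod that tolerates the taint and targets those nodes
        kubectl apply -f - <<EOF
        apiVersion: v1
        kind: Pod
        metadata:
          name: db-job
        spec:
          nodeSelector:
            pool: egress
          tolerations:
          - key: dedicated
            operator: Equal
            value: egress
            effect: NoSchedule
          containers:
          - name: job
            image: my-job-image   # hypothetical short-lived job image
        EOF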

  4. I would suggest using or creating a NAT gateway instead of using a container as the gateway.

    Using a container or Istio is a good idea, but it has its own limitations: it is hard to implement and manage, and the gateway containers consume resources.

    Ultimately you want a single egress IP for your K8s cluster, instead of requests going out with the IP of whichever node the pod is scheduled on.

    Here is a Terraform module for a GKE NAT gateway which you can use:

    https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway

    The NAT gateway will forward all pod traffic through a single VM, and you can also whitelist that IP on the database.

    After implementation, there will be a single egress point in your cluster.
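    The same single-egress-IP idea can also be sketched with GCP's managed Cloud NAT instead of the VM-based module; the network, region, and resource names below are assumptions:

        # Cloud Router in the cluster's VPC
        gcloud compute routers create nat-router \
          --network=my-vpc --region=us-central1

        # Reserve a static external IP to whitelist on the database
        gcloud compute addresses create nat-ip --region=us-central1

        # NAT all egress traffic from the region's subnets through that IP
        gcloud compute routers nats create gke-nat \
          --router=nat-router --router-region=us-central1 \
          --nat-external-ip-pool=nat-ip \
          --nat-all-subnet-ip-ranges

    Note that Cloud NAT only applies to nodes without external IPs (i.e. a private cluster).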

    A click-to-deploy option is also available in the GitHub repo – GCP magic 😉
