
I am trying to set up an Express.js application running in a Kubernetes pod to connect to a MongoDB Atlas database. I’ve set up an egress droplet to control and log the outbound traffic from the cluster. The IP of the egress droplet has been whitelisted in MongoDB Atlas, but the application still can’t connect to the database. I’m looking for guidance on how to correctly configure the Express app, or the Kubernetes cluster, to route traffic through the egress droplet.

Environment Details:

Express App: Running in a Kubernetes pod.
Egress Droplet: Set up to control outbound traffic from the Kubernetes cluster.
Database: MongoDB Atlas, with the egress droplet’s IP whitelisted.

Current Configuration:

  1. Network Policy: I have a network policy in place that’s intended to allow egress traffic from the pod to the egress droplet.

Here is my allow-egress-via-gateway.yaml that is applied:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-via-gateway
spec:
  podSelector:
    matchExpressions:
      - key: app
        operator: In
        values:
          - express-app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 153.000.000.000/32  # The egress gateway IP
  # Additional rules for DNS traffic
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Here is my express-deployment.yaml, which creates the express-app pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: express-app
  template:
    metadata:
      labels:
        app: express-app
      annotations:
        kubectl.kubernetes.io/restartedAt: "$(date +%s)"
    spec:
      containers:
      - name: express-app
        image: mydockerhub/myexpress-backend:latest
        ports:
        - containerPort: 8000
        envFrom:
        - secretRef:
            name: express-env-secret
      imagePullSecrets:
      - name: my-docker-credentials

  2. IP Masquerading: On the egress droplet, I’ve set up IP masquerading rules in iptables to modify the source IP of the outgoing packets.
  3. Database Connection: The Express app is configured to connect to MongoDB Atlas using a connection string provided by Atlas.

Issues Encountered:

Despite the above setup, the Express app fails to connect to MongoDB Atlas. The logs suggest it’s a connection issue, likely related to the IP not being whitelisted, even though I’ve confirmed the egress droplet’s IP is on the whitelist.

Specific Questions:

  1. How should I configure the Express app or the Kubernetes pod to ensure it routes traffic through the egress droplet?
  2. Are there any common pitfalls or additional steps I need to consider when setting up an egress controller for a Kubernetes cluster?
  3. Is there any additional configuration required on the egress droplet to ensure it properly routes and logs the traffic from the Kubernetes pods?

Any insights, suggestions, or further diagnostic steps would be greatly appreciated!

2 Answers


  1. Make sure your network policy (allow-egress-via-gateway.yaml) correctly allows traffic to the egress droplet. The policy you provided looks correct, but double-check the egress droplet IP (the cidr part: cidr: <Egress-Droplet-IP>/32).
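    Also note that a NetworkPolicy only *permits* traffic; it does not steer it. For packets to actually transit the droplet, the worker nodes (or the VPC route table) need a route sending Atlas-bound traffic to the droplet's private IP. A rough sketch, where the CIDR and both addresses are placeholders for your environment:

    ```shell
    # On each worker node (or in the VPC route table), direct traffic
    # bound for the Atlas cluster through the droplet's private IP:
    sudo ip route add <atlas-cidr> via <egress-droplet-private-ip>

    # Confirm which route a given Atlas host address would actually take:
    ip route get <atlas-host-ip>
    ```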

    Make sure the deployment (kubectl apply -f express-deployment.yaml) is successful and the pod is running.
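    A quick way to confirm both, using the names from the question:

    ```shell
    # Is the pod running and carrying the label the podSelector matches?
    kubectl get pods -l app=express-app --show-labels

    # Does the policy exist in the pod's namespace, and which pods does it select?
    kubectl describe networkpolicy allow-egress-via-gateway
    ```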

    On the egress droplet, you might use a command like the following to set up IP masquerading:

    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    

    Confirm that IP masquerading is correctly set up on the egress droplet. That step is essential for ensuring that responses from MongoDB Atlas are routed back to the correct pod. Check that the masquerade rule is present with sudo iptables -t nat -L POSTROUTING, and look for a line similar to: MASQUERADE all -- anywhere anywhere

    Verify that the egress droplet correctly routes traffic to MongoDB Atlas and logs this traffic. Make sure no firewall or security rules on the egress droplet are blocking traffic to MongoDB Atlas.
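    As a sketch of what that can look like on the droplet (the interface assumptions and Atlas's default port 27017 are mine; adjust for your setup), IP forwarding must be enabled, and an explicit LOG rule gives you per-connection visibility:

    ```shell
    # Let the kernel forward packets between interfaces
    sudo sysctl -w net.ipv4.ip_forward=1

    # Log each new outbound connection to MongoDB's default port
    sudo iptables -A FORWARD -p tcp --dport 27017 -m state --state NEW \
      -j LOG --log-prefix "EGRESS-MONGO: "

    # Accept the forwarded traffic and its return packets
    sudo iptables -A FORWARD -p tcp --dport 27017 -j ACCEPT
    sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    ```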

    In your Express app, use environment variables to store the MongoDB connection string securely:

    const mongoUri = process.env.MONGO_URI;
    

    Make sure the Express app uses the correct MongoDB Atlas connection string. That connection string should be stored securely, for example, in a Kubernetes Secret.

    Verify that the necessary environment variables are correctly passed into the Express app via Kubernetes secrets or config maps.

    envFrom:
    - secretRef:
        name: express-env-secret
    

    Make sure to include error handling in your Express app for MongoDB connection, like:

    mongoose.connect(mongoUri, { useNewUrlParser: true, useUnifiedTopology: true })
        .then(() => console.log('MongoDB connected'))
        .catch(err => console.error('MongoDB connection error:', err));
    

    You can also implement retry logic in your Express app, in case of transient network issues:

    mongoose.connection.on('error', (err) => {
        console.log('Retrying MongoDB connection...');
        setTimeout(() => mongoose.connect(mongoUri, { useNewUrlParser: true, useUnifiedTopology: true }), 5000);
    });
    

    After all that, test the connection from within your pod to MongoDB Atlas. Note that Atlas speaks the MongoDB wire protocol rather than plain HTTP, so a TCP-level check against port 27017 (if nc is available in the image) is more meaningful than curl:

    kubectl exec -it <pod-name> -- /bin/bash
    nc -zv <mongodb-atlas-url> 27017
    

    And check the pod’s logs: kubectl logs <pod-name>

    Inside the pod, use network tools for diagnosing (keep in mind that Atlas hosts often drop ICMP, so ping and traceroute can time out even when TCP connectivity works):

    kubectl exec -it <pod-name> -- /bin/bash
    ping <mongodb-atlas-url>
    traceroute <mongodb-atlas-url>
    

    (or, more recently, tracepath or mtr)
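    If your connection string uses the mongodb+srv:// scheme, the driver also needs DNS SRV and TXT lookups to succeed from inside the pod (which is why the policy's port-53 rules matter); the hostname below is a placeholder:

    ```shell
    kubectl exec -it <pod-name> -- /bin/sh

    # The SRV record the driver resolves to discover replica set members
    nslookup -type=SRV _mongodb._tcp.<cluster-host>.mongodb.net
    ```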

  2. Based on the documentation, you should be able to connect your Express app to the database by reading the connection secret that the MongoDB Atlas Kubernetes Operator creates and referencing it from your deployment; see the sample below:

    kubectl get secret -n my-namespace my-project-cluster-name-dbadmin  -ojson | jq -r '.data | with_entries(.value |= @base64d)';
    
    containers:
     - name: test-app
       env:
         - name: "CONNECTION_STRING"
           valueFrom:
             secretKeyRef:
               name: my-project-cluster-name-dbadmin
               key: connectionStringStandardSrv
    