I have a database in AWS that I need to connect to from Kubernetes, but security settings in that database prevent this. My solution is to SSH tunnel to a proxy from within the Kubernetes pod, and connect through that to the database in AWS.

However, I’m not quite sure how to actually get this going in Kubernetes, as the sidecar container is throwing a "CrashLoopBackOff" error.

My Dockerfile is pretty thin. It’s an Alpine container that really doesn’t do anything other than copy a shell script which handles the tunneling.

Dockerfile

FROM alpine:3.14.0

COPY tunnel.sh /

RUN apk update && apk add curl \
    wget \
    nano \
    bash \
    ca-certificates \
    openssh-client

RUN chmod +x /tunnel.sh
RUN mkdir ~/.ssh

RUN ssh-keyscan -Ht ecdsa proxysql-sshtunnel.domain.com > ~/.ssh/known_hosts

CMD /bin/bash

tunnel.sh

#!/bin/bash
ssh -i /keys/sql_proxy.private -L 3306:10.0.0.229:6033 [email protected] -N
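
For context: containers in a pod share the pod’s network namespace, so once this tunnel is up, the application container can reach the database through the forwarded port on localhost. A rough sketch of that connection, assuming a MySQL client is available (the user and database names are placeholders):

# From the application container in the same pod; the sidecar's forwarded
# port 3306 is on localhost because the containers share a network namespace.
mysql -h 127.0.0.1 -P 3306 -u <db_user> -p <db_name>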

The SSH keys are mounted into the pod from a secret volume in Kubernetes.
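
For reference, a secret like that can be created from the key file with kubectl; a minimal sketch, where the local filename is an assumption:

# Create the secret that holds the private key; the local path is a placeholder.
kubectl create secret generic spoonity-sql-proxy \
  --from-file=sql_proxy.private=./sql_proxy.private \
  -n default

My deployment looks like this: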

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts-deployment
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: api-accounts
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    spec:
      containers:
      - image: gcr.io/xxxxxxxx/accounts:VERSION-2.0.6
        imagePullPolicy: Always
        name: accounts
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: accounts-keys
          readOnly: true
        - mountPath: /var/www/html/var/spool
          name: mail-spool
      - image: gcr.io/xxxxxxxx/sql-proxy:latest
        imagePullPolicy: IfNotPresent
        name: sql-proxy
        args:
          - -c
          - /tunnel.sh
        command:
          - /bin/bash
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /keys
          name: keys-sql-proxy
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: accounts-keys
        secret:
          defaultMode: 420
          secretName: accounts-keys
      - name: spoonity-sql-proxy
        secret:
          defaultMode: 420
          secretName: spoonity-sql-proxy
      - emptyDir: {}
        name: mail-spool
status:

<----- The relevant portion is here ----->

...
- image: gcr.io/xxxxxxxx/sql-proxy:latest
  imagePullPolicy: IfNotPresent
  name: sql-proxy
  args:
    - -c
    - /tunnel.sh
  command:
    - /bin/bash
  resources: {}
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
    - mountPath: /keys
      name: keys-sql-proxy
      readOnly: true
...

The only log I get from Kubernetes is: "/bin/bash: line 1: /tunnel.sh: No such file or directory"

If I try to run the container locally in Docker with docker run sql-proxy:latest /tunnel.sh, I get a different error complaining that the keys don’t exist (which is exactly what I’d expect to see).

Not sure where the issue is with this one.

EDIT: I tried rebuilding the container locally and including the keys manually, and I was able to launch it successfully. So it looks like it’s definitely a Kubernetes issue, but I’m really not sure why.
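
A closer local reproduction of the Kubernetes setup would be to mount the keys read-only rather than baking them into the image; something like this, where the local ./keys directory is an assumption:

# Mount the key directory the same way the secret volume is mounted in the pod.
docker run --rm -v "$(pwd)/keys:/keys:ro" sql-proxy:latest /tunnel.sh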

2 Answers


  1. Chosen as BEST ANSWER

    So the problem was here:

    volumes:
          - name: accounts-keys
            secret:
              defaultMode: 420
              secretName: accounts-keys
          - name: spoonity-sql-proxy
            secret:
              defaultMode: 420 #<----------- this is wrong
              secretName: spoonity-sql-proxy
    

    SSH refuses to use a private key whose permissions are too open. The defaultMode field is a decimal integer, so the correct value here is 384, which is 0600 in octal and mounts the keys with the permissions SSH expects.

    Because the permissions were wrong, every time the script tried to execute, it would fail and exit, triggering Kubernetes to try to restart it.
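
    With the permissions fixed, that block looks like this:

    volumes:
    - name: accounts-keys
      secret:
        defaultMode: 420
        secretName: accounts-keys
    - name: spoonity-sql-proxy
      secret:
        defaultMode: 384   # 0600 in octal
        secretName: spoonity-sql-proxy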

    I’m still not sure why those logs were never generated, but I found this by changing the command and args in my deployment manifest to just ping localhost continuously, so the container would at least start:

    ...
     - image: gcr.io/xxxxxxxxx/sql-proxy:latest
       command: ["ping"]
       args: ["127.0.0.1"]
    ...
    

    I then connected to the now-running pod and ran tunnel.sh by hand. Now that I could actually see why it was failing, I could fix it.
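
    That debugging step looks roughly like this (the pod name is a placeholder):

    # Open a shell in the sidecar container of the now-running pod,
    # then run the tunnel script by hand to see the real error.
    kubectl exec -it accounts-deployment-xxxxx -c sql-proxy -- /bin/bash
    bash /tunnel.sh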


  2. The problem here is that you are probably copying the file to the / directory of the container, but when the container starts, the shell starts from the ~/ directory, so it cannot find the file.

    Add a WORKDIR statement near the beginning of your Dockerfile so that when the container starts, you know which directory you are starting from.

    FROM alpine:3.14.0
    
    WORKDIR /usr/src/app
    
    COPY tunnel.sh .
    
    RUN apk update && apk add curl \
        wget \
        nano \
        bash \
        ca-certificates \
        openssh-client
    
    RUN chmod +x ./tunnel.sh
    
    RUN mkdir ~/.ssh
    
    RUN ssh-keyscan -Ht ecdsa proxysql-sshtunnel.domain.com > ~/.ssh/known_hosts
    
    CMD /bin/bash
    

    Also, it’s recommended to change CMD to the actual command you want to run, instead of passing it in from Kubernetes.
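
    For example, a minimal sketch assuming the WORKDIR above (bash is already installed by the apk step):

    # Run the tunnel script directly instead of dropping into a shell.
    CMD ["bash", "./tunnel.sh"]

    With that in place, the command and args overrides in the Deployment are no longer needed.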
