
I’m creating a kind cluster with kind create cluster --name kind and I want to access it from another Docker container, but when I try to apply a Kubernetes manifest from that container (kubectl apply -f deployment.yml) I get this error:

The connection to the server 127.0.0.1:6445 was refused - did you specify the right host or port?

Indeed, when I try to curl the kind control plane from a container, it’s unreachable.

> docker run --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
curl: (7) Failed to connect to 127.0.0.1 port 6445 after 0 ms: Connection refused

However, the kind control plane does publish the right port, but it is bound only to localhost.

> docker ps --format "table {{.Image}}\t{{.Ports}}"
IMAGE                  PORTS
kindest/node:v1.23.4   127.0.0.1:6445->6443/tcp
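
For reference, this binding seems to be configurable at cluster creation time: kind’s cluster config accepts networking.apiServerAddress, and binding to 0.0.0.0 would make the API server reachable from other containers, but the kind docs discourage that for security reasons. A sketch, untested:

# kind-config.yml (sketch: exposes the API server beyond localhost)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: 6445

> kind create cluster --name kind --config kind-config.yml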

So far, the only working solution I’ve found is to use host network mode.

> docker run --network host --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
Client sent an HTTP request to an HTTPS server.

This solution doesn’t look like the most secure one. Is there another way, like connecting my container to the kind network, or something similar that I missed?

2 Answers


  1. I don’t know exactly why you want to do this, but no problem; I think this can help you.

    First, let’s pull your Docker image:

    ❯ docker pull curlimages/curl
    

    In my kind cluster I have 3 control plane nodes and 3 worker nodes. Here are the containers that make up my kind cluster:

    ❯ docker ps
    CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS                       NAMES
    39dbbb8ca320   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   7 days ago       Up 7 days       127.0.0.1:35327->6443/tcp   so-cluster-1-control-plane
    62b5538275e9   kindest/haproxy:v20220207-ca68f7d4   "haproxy -sf 7 -W -d…"   7 days ago       Up 7 days       127.0.0.1:35625->6443/tcp   so-cluster-1-external-load-balancer
    9f189a1b6c52   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   7 days ago       Up 7 days       127.0.0.1:40845->6443/tcp   so-cluster-1-control-plane3
    4c53f745a6ce   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   7 days ago       Up 7 days       127.0.0.1:36153->6443/tcp   so-cluster-1-control-plane2
    97e5613d2080   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   7 days ago       Up 7 days       0.0.0.0:30081->30080/tcp    so-cluster-1-worker2
    0ca64a907707   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   7 days ago       Up 7 days       0.0.0.0:30080->30080/tcp    so-cluster-1-worker
    9c5d26caee86   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   7 days ago       Up 7 days       0.0.0.0:30082->30080/tcp    so-cluster-1-worker3
    


    The container that interests us here is the haproxy one (kindest/haproxy:v20220207-ca68f7d4), whose role is to load-balance the incoming traffic to the nodes (and, in our example, especially to the control plane nodes). We can see that port 35625 of our host machine is mapped to port 6443 of the haproxy container (127.0.0.1:35625->6443/tcp).
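
    If you want to double-check that mapping for any of these containers, docker port prints it directly (same numbers as in the docker ps output above):

    ❯ docker port so-cluster-1-external-load-balancer
    6443/tcp -> 127.0.0.1:35625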

    So, our cluster endpoint is https://127.0.0.1:35625. We can confirm this in our kubeconfig file (~/.kube/config):

    ❯ cat .kube/config
    apiVersion: v1
    kind: Config
    preferences: {}
    users:
      - name: kind-so-cluster-1
        user:
            client-certificate-data: <base64data>
            client-key-data: <base64data>
    clusters:
      - cluster:
            certificate-authority-data: <certificate-authority-dataBase64data>
            server: https://127.0.0.1:35625
        name: kind-so-cluster-1
    contexts:
      - context:
            cluster: kind-so-cluster-1
            user: kind-so-cluster-1
            namespace: so-tests
        name: kind-so-cluster-1
    current-context: kind-so-cluster-1
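
    As a one-liner, kubectl can also extract the endpoint for us (assuming the kind-so-cluster-1 context is the current one):

    ❯ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
    https://127.0.0.1:35625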
    

    Let’s run the curl container in the background:

    ❯ docker run -d --network host curlimages/curl sleep 3600
    ba183fe2bb8d715ed1e503a9fe8096dba377f7482635eb12ce1322776b7e2366
    

    As expected, we can’t make a plain HTTP request to an endpoint that listens on HTTPS:

    ❯  docker exec -it ba curl 127.0.0.1:35625
    Client sent an HTTP request to an HTTPS server.
    

    We can try to use the certificate from the "certificate-authority-data" field of our kubeconfig to check whether that changes something (it should).
    Let’s create a file named my-ca.crt that contains the decoded certificate:

    base64 -d <<<  <certificate-authority-dataBase64dataFromKubeConfig> > my-ca.crt 
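
    Alternatively, kubectl can produce the same file in one go (--raw is needed because kubectl config view redacts certificate data by default, and --minify keeps only the current context):

    ❯ kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > my-ca.crt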
    

    Since the working directory of the curl Docker image is "/", let’s copy our cert to that location in the container and verify that it is actually there:

    docker cp my-ca.crt ba183fe:/
    
    ❯ docker exec -it ba sh
    / $ ls my-ca.crt
    my-ca.crt
    

    Let’s try our curl request again, this time with the certificate:

    ❯ docker exec -it ba curl --cacert my-ca.crt https://127.0.0.1:35625
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
      "reason": "Forbidden",
      "details": {},
      "code": 403
    }
    

    You can get the same result by adding the "--insecure" flag to your curl request:

    ❯  docker exec -it ba curl https://127.0.0.1:35625 --insecure
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
      "reason": "Forbidden",
      "details": {},
      "code": 403
    }
    

    However, we can’t do much as the anonymous user! So let’s get a token from Kubernetes (cf. https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/):

    # Create a secret to hold a token for the default service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: default-token
      annotations:
        kubernetes.io/service-account.name: default
    type: kubernetes.io/service-account-token
    EOF
    

    Once the token controller has populated the secret with a token:

    # Get the token value
    ❯ kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode
    eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi
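
    (Side note: on Kubernetes v1.24+ you can skip the Secret entirely and request a short-lived token through the TokenRequest API; such a token expires, after one hour by default:)

    ❯ kubectl create token default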
    

    Now let’s execute the curl request directly with the token!

    ❯ docker exec -it ba curl -X GET https://127.0.0.1:35625/api --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi" --insecure
    {
      "kind": "APIVersions",
      "versions": [
        "v1"
      ],
      "serverAddressByClientCIDRs": [
        {
          "clientCIDR": "0.0.0.0/0",
          "serverAddress": "172.18.0.5:6443"
        }
      ]
    }
    

    It works!
    I still don’t know why you want to do this, but I hope it helped you.
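
    A small convenience: instead of pasting the raw token, you can capture it in a shell variable and reuse the CA cert from earlier (a sketch based on the commands above):

    ❯ TOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)
    ❯ docker exec -it ba curl --cacert my-ca.crt https://127.0.0.1:35625/api --header "Authorization: Bearer $TOKEN"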

    Since it’s not exactly what you wanted (here I used the host network), you can use this instead: How to communicate between Docker containers via "hostname", as proposed by @SergioSantiago. Thanks for your comment!

    bguess

  2. I don’t have enough rep to comment on the other answer, but I wanted to share what ultimately worked for me.

    Takeaways

    • Kind cluster running in its own bridge network, kind
    • Service with a Kubernetes client running in another container, with a mounted kubeconfig volume
    • As described above, the containers need to be in the same network, unless you want your service to run in the host network.
    • The server address for the kubeconfig is the container name + internal port, e.g. kind-control-plane:6443. Use the internal container port (6443), NOT the exposed host port (38669 in the example below; see the quick check just after this list):
      CONTAINER ID   IMAGE                                PORTS
      7f2ee0c1bd9a   kindest/node:v1.25.3                 127.0.0.1:38669->6443/tcp
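
    A quick way to verify that the name-based address resolves from inside the kind network (a sketch, assuming a default single-node cluster named kind, whose node container is kind-control-plane; /version is readable anonymously under default RBAC):

      docker run --rm --network kind curlimages/curl -k https://kind-control-plane:6443/version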
      

    Kube config for the container

    # path/to/some/kube/config
    apiVersion: v1
    clusters:
      - cluster:
          insecure-skip-tls-verify: true # Don't use in prod; equivalent of --insecure on the CLI
          server: https://<kind-control-plane container name>:6443 # NOTE port is internal container port
        name: kind-kind # or whatever
    contexts:
      - context:
          cluster: kind-kind
          user: <some-service-account>
        name: kind-kind # or whatever
    current-context: kind-kind
    kind: Config
    preferences: {}
    users:
      - name: <some-service-account>
        user:
          token: <TOKEN>
    

    Docker container stuff

    • If using docker-compose, you can add the kind network to the container like so:

      #docker-compose.yml
      services:
        foobar:
          build:
            context: ./.config
          networks:
            - kind # add this container to the kind network
          volumes:
            - path/to/some/kube/config:/somewhere/in/the/container
      networks:
        kind: # define the kind network
          external: true # specifies that the network already exists in docker
      
    • If running a new container:

      docker run --network kind -v path/to/some/kube/config:/somewhere/in/the/container <image>
      
    • Container already running?

      docker network connect kind <container name>
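
    Either way, you can confirm that the container has actually joined the network, for example:

      docker network inspect kind --format '{{range .Containers}}{{.Name}} {{end}}'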
      