I am trying to install Kubernetes on my CentOS machine. When I initialize the cluster, I get the following error.

Note that I am behind a corporate proxy. I have already configured it for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf, and Docker works fine.
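For reference, the drop-in follows the usual systemd pattern; roughly this (addresses redacted, contents illustrative):

# cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxyxxxxx.xxxx.xxx:xxxx/"
Environment="HTTPS_PROXY=http://proxyxxxxx.xxxx.xxx:xxxx/"
Environment="NO_PROXY=localhost,127.0.0.1"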

No matter how hard I look, I can’t find a solution to this problem.

Thank you for your help.

# kubeadm init
W1006 14:29:38.432071    7560 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 14:29:38.432147    7560 version.go:103] falling back to the local client version: v1.19.2
W1006 14:29:38.432367    7560 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING HTTPProxy]: Connection to "https://192.168.XXX.XXX" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1

# kubeadm config images pull
W1006 17:33:41.362395   80605 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 17:33:41.362454   80605 version.go:103] falling back to the local client version: v1.19.2
W1006 17:33:41.362685   80605 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
failed to pull image "k8s.gcr.io/kube-apiserver:v1.19.2": output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

8 Answers


  1. Maybe the root certificates on your machine are outdated, so it does not consider the certificate of k8s.gcr.io valid. The message x509: certificate signed by unknown authority hints at this.

    Try to update them: yum update ca-certificates || yum reinstall ca-certificates
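    For example, something like this should refresh the CA bundle and let you re-test the exact URL kubeadm complains about:

    sudo yum update -y ca-certificates || sudo yum reinstall -y ca-certificates
    # if the CA bundle is fixed, this should print a version string instead of an x509 error
    curl -v https://dl.k8s.io/release/stable-1.txt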

  2. I was also working with v1.19.2 and got the same error.

    It seems to be related to the issue mentioned here (and, I think, also here).

    I re-installed kubeadm on the node and ran the kubeadm init workflow again; it is now working with v1.19.3 and the errors are gone.

    All master node images are pulled successfully.

    Also verified with:

    sudo kubeadm config images pull
    

    (*) You can run kubeadm init with --kubernetes-version=X.Y.Z (1.19.3 in our case).
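    For example, to pin the version explicitly on a fresh node:

    sudo kubeadm init --kubernetes-version=1.19.3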

  3. I just ran a dig against k8s.gcr.io and added the IP it returned to /etc/hosts.

    # dig k8s.gcr.io
    
    ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.2 <<>> k8s.gcr.io
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44303
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 512
    ;; QUESTION SECTION:
    ;k8s.gcr.io.            IN  A
    
    ;; ANSWER SECTION:
    k8s.gcr.io.     21599   IN  CNAME   googlecode.l.googleusercontent.com.
    googlecode.l.googleusercontent.com. 299 IN A    64.233.168.82
    
    ;; Query time: 72 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Tue Nov 24 11:45:37 CST 2020
    ;; MSG SIZE  rcvd: 103
    
    # cat /etc/hosts
    64.233.168.82   k8s.gcr.io
    

    And now it works!

    # kubeadm config images pull
    W1124 11:46:41.297352   50730 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.4
    [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.4
    [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.4
    [config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.4
    [config/images] Pulled k8s.gcr.io/pause:3.2
    [config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
    [config/images] Pulled k8s.gcr.io/coredns:1.7.0
    
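    If you prefer to script that lookup rather than copy the address by hand, a minimal sketch (assuming dig is available) could be:

    # grab the last record dig returns (the A record behind the CNAME) and pin it in /etc/hosts
    IP=$(dig +short k8s.gcr.io | tail -n 1)
    echo "$IP k8s.gcr.io" | sudo tee -a /etc/hosts

    Keep in mind these registry IPs rotate, so a pinned /etc/hosts entry may break again later.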
  4. I had the same error. Maybe, as others say, it's because of an outdated certificate. I don't believe it's necessary to delete anything.

    A simple solution was running one of these two commands, which will re-authenticate to the container registries:

    podman login

    docker login

    Source: podman-login

  5. I had this issue on version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2" when I tried joining a second control plane node.

    error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.9.3: output: E0923 04:47:51.763983    1598 remote_image.go:242] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image "k8s.gcr.io/coredns:v1.9.3": failed to resolve reference "k8s.gcr.io/coredns:v1.9.3": k8s.gcr.io/coredns:v1.9.3: not found" image="k8s.gcr.io/coredns:v1.9.3"
    time="2022-09-23T04:47:51Z"...
    

    See #99321: it's now k8s.gcr.io/coredns/coredns:v1.9.3 instead of k8s.gcr.io/coredns:v1.9.3, and I don't know why.

    by kluevandrew, reference: https://github.com/kubernetes/kubernetes/issues/112131

    This worked for me; I am using containerd:

    crictl pull k8s.gcr.io/coredns/coredns:v1.9.3
    ctr --namespace=k8s.io image tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
    

    Docker solution:

    docker pull k8s.gcr.io/coredns/coredns:v1.9.3
    docker tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3 
    
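    To double-check the retag before re-running kubeadm, something like this should show the image under both names:

    ctr --namespace=k8s.io images ls | grep coredns
    # or, with docker:
    docker images | grep coredns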
  6. Check imageRepository in the kubeadm-config ConfigMap (or in your kubeadm config file, if you run something like kubeadm init --config=/tmp/kubeadm-config.yml).
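    On a running cluster, one way to inspect that ConfigMap is:

    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep imageRepository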

  7. The problem is described here: https://github.com/kubernetes/kubernetes/pull/114978

    They've moved the coredns image to another registry. Since I'm initializing the cluster using a kubeadm YAML file, I had to add a line under the dns field, as the last answer suggests; see here:

    dns:
      imageRepository: k8s.gcr.io/coredns
    

    This basically does what some people suggested above via manual commands.
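    For context, a minimal config file carrying that field might look like this (the apiVersion and version number are illustrative; adjust to your setup):

    cat > kubeadm-config.yml <<EOF
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.25.2
    dns:
      imageRepository: k8s.gcr.io/coredns
    EOF
    sudo kubeadm init --config=kubeadm-config.yml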

  8. The official image repository for Kubernetes has changed.

    Old one: k8s.gcr.io

    New one: registry.k8s.io


    Images started being migrated on the 20th of March, and the old registry is expected to become read-only in the first or second week of April.
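    If your kubeadm version still defaults to the old registry, you can point it at the new one explicitly, for example:

    kubeadm config images pull --image-repository registry.k8s.io
    kubeadm init --image-repository registry.k8s.io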
