I’d like to modify the etcd pod to listen on 0.0.0.0 (or the host machine IP) instead of 127.0.0.1.
I’m working on a migration from a single-master to a multi-master Kubernetes cluster, but I’ve run into an issue: after I modified /etc/kubernetes/manifests/etcd.yaml with the correct settings and restarted the kubelet and even the docker daemon, etcd is still listening on 127.0.0.1.
Inside the docker container I can still see that etcd was started with --listen-client-urls=https://127.0.0.1:2379 instead of the host IP.
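One way to verify which flags the running container actually received (a sketch; the name filter is an assumption, adjust it for your node):

# Find the etcd container started by kubelet
docker ps --filter name=etcd --format '{{.ID}} {{.Names}}'
# Print the arguments it was launched with; look for --listen-client-urls
docker inspect --format '{{.Path}} {{.Args}}' <container-id>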
cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.22.9:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.22.9:2380
    - --initial-cluster=test-master-01=https://192.168.22.9:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://192.168.22.9:2379
    - --listen-peer-urls=https://192.168.22.9:2380
    - --name=test-master-01
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
[root@test-master-01 centos]# kubectl -n kube-system get po etcd-test-master-01 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: c3eef2d48a776483adc00311df8cb940
    kubernetes.io/config.mirror: c3eef2d48a776483adc00311df8cb940
    kubernetes.io/config.seen: 2019-05-24T13:50:06.335448715Z
    kubernetes.io/config.source: file
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: 2019-05-24T14:08:14Z
  labels:
    component: etcd
    tier: control-plane
  name: etcd-test-master-01
  namespace: kube-system
  resourceVersion: "6288"
  selfLink: /api/v1/namespaces/kube-system/pods/etcd-test-master-01
  uid: 5efadb1c-7e2d-11e9-adb7-fa163e267af4
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://127.0.0.1:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://127.0.0.1:2380
    - --initial-cluster=test-master-01=https://127.0.0.1:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379
    - --listen-peer-urls=https://127.0.0.1:2380
    - --name=test-master-01
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
2 Answers
I reviewed my automation scripts step by step and found that I had made a backup of the etcd YAML in the same folder, with a .bak extension. It turns out the kubelet daemon loads every file inside the manifests folder, regardless of its extension.
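A minimal illustration of the cleanup, assuming the stray backup was named etcd.yaml.bak (the exact filename is hypothetical):

ls /etc/kubernetes/manifests/
# etcd.yaml  etcd.yaml.bak  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
# kubelet loads every file here, .bak included, so the stale copy was also applied;
# move it anywhere outside the manifests directory
mv /etc/kubernetes/manifests/etcd.yaml.bak /root/etcd.yaml.bak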
First check your kubelet option --pod-manifest-path and put your corrected YAML in that path. To make sure the etcd pod has been deleted, move the YAML file out of the pod-manifest-path and wait until the pod is gone (check with docker ps -a). Then put the corrected YAML file back into the pod-manifest-path.
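Putting those steps together, a sketch assuming the kubeadm default path /etc/kubernetes/manifests:

# 1. Confirm which directory kubelet watches for static pod manifests
ps aux | grep kubelet | grep -o -- '--pod-manifest-path=[^ ]*'
# (on newer kubelets the path may instead be staticPodPath in the kubelet config file)
# 2. Move the manifest out of that directory so kubelet tears the pod down
mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml
# 3. Wait until the etcd container has disappeared
docker ps -a | grep etcd
# 4. Put the corrected manifest back; kubelet recreates the pod with the new flags
mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml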