I have created a Kubernetes cluster with two nodes, one master node and one worker node (two different VMs).
The worker node has joined the cluster successfully, so when I run the command:
kubectl get nodes
on my master node, both nodes appear in the cluster.
However, when I run the command
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
from my worker node's terminal, in order to create a deployment on the worker node, I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Any idea what is going on here?
2 Answers
The easy way to do it is to copy the config file from the master node, usually found at /etc/kubernetes/admin.conf, to whatever node you want to use kubectl from (even the master node itself). The destination is $HOME/.kube/config.
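On the master node itself, this is the standard kubeadm-style setup; a minimal sketch:

```shell
# Run on the master node after kubeadm init.
# Copies the admin kubeconfig to the default location kubectl looks in.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```

After this, kubectl on that node no longer falls back to localhost:8080.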
Alternatively, you can run this command from the master node and target the worker by specifying a nodeSelector or label. See:
Assign Pods to Nodes
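As a sketch of that approach, assuming you first label the worker node (the label key and value here, role=worker, are made up for illustration), the nginx Deployment from the question could be pinned to the worker like this:

```yaml
# Hypothetical example: schedule the nginx Pods onto nodes labeled role=worker.
# Label the node first with:
#   kubectl label nodes <worker-node-name> role=worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        role: worker
      containers:
      - name: nginx
        image: nginx:1.14.2
```

Applied from the master node with kubectl apply -f, this runs the Pods on the worker without needing kubectl configured on the worker at all.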
It looks like you have an issue with your kubeconfig file, as localhost:8080 is the default server kubectl tries to connect to in the absence of that file. Kubernetes uses this file to store cluster authentication information and a list of contexts to which kubectl refers when running commands, which is why kubectl can't work properly without it.
To check the presence of the kubeconfig file, enter this command:
kubectl config view
Or just check for a file named config in the $HOME/.kube directory, which is the default location for the kubeconfig file. If it is absent, you will need to copy the config file from the master to your node.
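A minimal sketch of that copy, run from the worker node; user@master is a placeholder for your master node's SSH address:

```shell
# On the worker node: pull the admin kubeconfig from the master over SSH.
# Assumes you can read /etc/kubernetes/admin.conf on the master.
mkdir -p "$HOME/.kube"
scp user@master:/etc/kubernetes/admin.conf "$HOME/.kube/config"
```

Running kubectl config view afterwards should now show the cluster instead of an empty config.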
It is also possible to generate the config file yourself, instead of copying it, in a more involved way, as described here.