Problem
In our project, we want two pods acting as server and client that communicate via the Python socket library. Both containers are built locally with docker build, used from the local image cache via imagePullPolicy: IfNotPresent in the YAML files, and run on the same node of the k8s cluster (I’m running vanilla Kubernetes, if that’s important).
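For reference, a sketch of the assumed local build commands; the tags match the image: fields in the manifests below:

docker build -t server:latest .   # run in the server app's build context
docker build -t client:latest .   # run in the client app's build context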
The communication works well when
- both Python scripts are run on the command line
- both scripts are run as containers via docker build and docker run
- the server app container is deployed in the K8s cluster and the client app runs either on the command line or as a Docker container.
The communication fails when both server and client are deployed in K8s. kubectl logs client -f returns:
Traceback (most recent call last):
  File "client.py", line 7, in <module>
    client_socket.connect((IP_Server,PORT_Server))
TimeoutError: [Errno 110] Connection timed out
I suspect there’s a problem with the outgoing request from the client script when it’s deployed on the cluster, but I can’t find where the problem lies.
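One quick check (a hypothetical command, assuming the client pod is still running and its image ships Python, which it presumably does since it runs client.py) is to test name resolution and connectivity from inside the pod:

kubectl exec client -- python -c "import socket; socket.create_connection(('server', 1234), timeout=5); print('ok')"

If this also times out, the problem is in-cluster service routing/DNS rather than the client code itself.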
Code
server.py
import socket
IP = "0.0.0.0"  # listen on all interfaces
PORT = 1234
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick rebinding after a restart
server_socket.bind((IP, PORT))
server_socket.listen()
...
server.yaml
apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
    - port: 1234
      targetPort: 1234
      protocol: TCP
  selector:
    app: server
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
    - name: server
      image: server:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 1234
client.py
import socket
IP_Server = "<ClusterIP of the server service>"  # placeholder; the actual IP comes from "kubectl get svc"
PORT_Server = 1234
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect((IP_Server, PORT_Server))  # fails here with TimeoutError
...
client.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    app: client
spec:
  containers:
    - name: client
      image: client:latest
      imagePullPolicy: IfNotPresent
3 Answers
In case anyone gets here in the future: I found a solution that worked for me by executing the commands from the GitHub comment referenced below and then deleting all the coredns pods.
Reference: https://github.com/kubernetes/kubernetes/issues/86762#issuecomment-836338017
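A sketch of the pod-deletion step, assuming a standard kube-system setup where the CoreDNS pods carry the k8s-app=kube-dns label (their Deployment recreates them automatically):

kubectl -n kube-system delete pods -l k8s-app=kube-dns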
In a default setup, there shouldn’t be anything preventing you from connecting between two pods. However, you shouldn’t rely on IP addresses for communication inside your cluster. Try using the service name instead: server should typically be resolvable from all pods running in the same namespace.

There may be something wrong with the way you resolve the IP address of the server, but the Service created for the server should implicitly be accessible over DNS (e.g. as server:1234), as explained here, so maybe you can use that instead?
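A minimal sketch of client.py using the Service’s DNS name instead of a hard-coded ClusterIP, assuming both pods run in the same namespace and the Service is named server as in the question:

import socket
IP_Server = "server"  # resolved by cluster DNS; use "server.<namespace>.svc.cluster.local" across namespaces
PORT_Server = 1234
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect((IP_Server, PORT_Server))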