My Environment: Mac dev machine with latest Minikube/Docker
I built a simple Docker image locally with a simple “hello world” Django REST API. I’m running a deployment with 3 replicas. This is my YAML file defining it:
apiVersion: v1
kind: Service
metadata:
  name: myproj-app-service
  labels:
    app: myproj-be
spec:
  type: LoadBalancer
  ports:
  - port: 8000
  selector:
    app: myproj-be
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproj-app-deployment
  labels:
    app: myproj-be
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myproj-be
  template:
    metadata:
      labels:
        app: myproj-be
    spec:
      containers:
      - name: myproj-app-server
        image: myproj-app-server:4
        ports:
        - containerPort: 8000
        env:
        - name: DATABASE_URL
          value: postgres://myname:@10.0.2.2:5432/myproj2
        - name: REDIS_URL
          value: redis://10.0.2.2:6379/1
When I apply this YAML, it generates everything correctly:
– one deployment
– one service
– three pods
Deployments:
NAME READY UP-TO-DATE AVAILABLE AGE
myproj-app-deployment 3/3 3 3 79m
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 83m
myproj-app-service LoadBalancer 10.96.91.44 <pending> 8000:31559/TCP 79m
Pods:
NAME READY STATUS RESTARTS AGE
myproj-app-deployment-77664b5557-97wkx 1/1 Running 0 48m
myproj-app-deployment-77664b5557-ks7kf 1/1 Running 0 49m
myproj-app-deployment-77664b5557-v9889 1/1 Running 0 49m
The interesting thing is that when I SSH into the Minikube VM and hit the service with curl 10.96.91.44:8000, it respects the LoadBalancer type of the service and rotates between all three pods as I hit the endpoint again and again. I can see that in the returned results, because I made sure to include the pod’s HOSTNAME in the response.
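For context, the way I surface which pod answered is by echoing the hostname in the response. A minimal sketch of that kind of view (assuming Django REST Framework; the module and view names here are illustrative, not my exact code):

# views.py -- illustrative "hello world" view that echoes the pod name
import socket

from rest_framework.decorators import api_view
from rest_framework.response import Response


@api_view(["GET"])
def hello(request):
    # Inside a Kubernetes pod the hostname is the pod name, so returning it
    # shows exactly which replica handled the request.
    return Response({"message": "hello world", "hostname": socket.gethostname()})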
However, when I try to access the service from my host Mac using kubectl port-forward service/myproj-app-service 8000:8000, every time I hit the endpoint the same pod responds. It doesn’t load balance. I can see that clearly when I run kubectl logs -f <pod> against all three pods: only one of them is handling the hits, while the other two sit idle…
Is this a kubectl port-forward limitation or issue, or am I missing something bigger here?
2 Answers
The reason was that my pods were randomly crashing due to stale Python *.pyc files left in the container. That causes issues when Django is running in a multi-pod Kubernetes deployment. Once I removed the stale files and all pods ran successfully, the round-robin started working.
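For anyone hitting the same thing: the fix boiled down to making sure no stale bytecode ships in the image. A rough sketch of the kind of cleanup I mean (run before Django starts; the script name is illustrative, and setting PYTHONDONTWRITEBYTECODE=1 in the image is another way to get there):

# cleanup_pyc.py -- illustrative helper that deletes stale *.pyc files before Django starts
import pathlib


def remove_stale_bytecode(root: str = ".") -> int:
    removed = 0
    for pyc in pathlib.Path(root).rglob("*.pyc"):
        pyc.unlink()
        removed += 1
    return removed


if __name__ == "__main__":
    print(f"removed {remove_stale_bytecode()} stale .pyc files")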
kubectl port-forward looks up the first Pod from the Service information provided on the command line and forwards directly to that Pod, rather than forwarding to the ClusterIP/Service port. The cluster doesn’t get a chance to load balance the service the way it does for regular service traffic. The Kubernetes API only provides Pod port-forward operations (CREATE and GET); similar API operations don’t exist for Service endpoints.

kubectl code

Here’s a little bit of the flow from the kubectl code that seems to back that up (I’ll just add that Go isn’t my primary language).

The portforward.go Complete function is where kubectl port-forward does the first lookup for a pod from the options, via AttachablePodForObjectFn. AttachablePodForObjectFn is defined as attachablePodForObject in this interface, and then here is the attachablePodForObject function.

To my (inexperienced) Go eyes, it appears attachablePodForObject is the thing kubectl uses to look up a Pod from a Service defined on the command line. From there on, everything deals with filling in the Pod-specific PortForwardOptions (which don’t include a service), and that is what gets passed to the Kubernetes API.
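To make that behaviour concrete, here is a rough Python sketch of what the lookup amounts to, using the official kubernetes client library instead of the actual Go code (the function name and namespace are illustrative): the Service’s selector is resolved to its matching Pods, and the forward then targets just one of them.

# illustrative sketch of the "pick one pod behind the service" lookup,
# using the official kubernetes Python client (pip install kubernetes)
from kubernetes import client, config


def first_pod_behind_service(service_name: str, namespace: str = "default") -> str:
    config.load_kube_config()  # use the local kubeconfig, as kubectl does
    v1 = client.CoreV1Api()

    svc = v1.read_namespaced_service(service_name, namespace)
    # Turn the Service selector (e.g. {"app": "myproj-be"}) into a label selector string.
    selector = ",".join(f"{k}={v}" for k, v in svc.spec.selector.items())

    pods = v1.list_namespaced_pod(namespace, label_selector=selector)
    # kubectl port-forward effectively ends up bound to a single pod like this,
    # so every forwarded request hits that one pod and never gets load balanced.
    return pods.items[0].metadata.name


if __name__ == "__main__":
    print(first_pod_behind_service("myproj-app-service"))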