I currently have a website deployed using multiple pods: 1 for the client (nginx), and 4 pods for the server (node.js). But I’ve had to copy/paste the yaml for the server pods, name them differently and change their ports (3001, 3002, 3003, 3004).
I’m guessing this could be simplified by using `kind: Deployment` and `replicas: 4` in the server YAML, but I don’t know how to change the port numbers.
I currently use the following commands to get everything up and running:
```shell
podman play kube server1-pod.yaml
podman play kube server2-pod.yaml
podman play kube server3-pod.yaml
podman play kube server4-pod.yaml
podman play kube client-pod.yaml
```
Here’s my existing setup on a CentOS 8 machine with Podman 3.0.2-dev:
client-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: client-pod
  name: client-pod
spec:
  hostname: client
  containers:
  - name: client
    image: registry.example.com/client:1.2.3
    ports:
    - containerPort: 8080
      hostPort: 8080
    resources: {}
status: {}
```
server1-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: server1-pod
  name: server1-pod
spec:
  hostname: server1
  containers:
  - name: server1
    image: registry.example.com/server:1.2.3
    ports:
    - containerPort: 3000
      hostPort: 3001  # server2 uses 3002, etc.
    env:
    - name: NODE_ENV
      value: production
    resources: {}
status: {}
```
nginx.conf

```nginx
# node cluster
upstream server_nodes {
    server api.example.com:3001 fail_timeout=0;
    server api.example.com:3002 fail_timeout=0;
    server api.example.com:3003 fail_timeout=0;
    server api.example.com:3004 fail_timeout=0;
}

server {
    listen 8080;
    listen [::]:8080;
    server_name api.example.com;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    # REST API requests go to node.js
    location /api {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'Upgrade';
        proxy_read_timeout 300;
        proxy_request_buffering off;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_pass http://server_nodes;
        client_max_body_size 10m;
    }
}
```
I tried using `kompose convert` to turn the Pod into a Deployment and then setting `replicas: 4`, but since every replica gets the same `hostPort`, the first container starts on 3001 and the rest fail to start because 3001 is already taken.
server-deployment.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.7.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: server
  name: server
spec:
  replicas: 4
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: production
        image: registry.example.com/server:1.2.3
        name: server
        ports:
        - containerPort: 3000
          hostPort: 3001
        resources: {}
      restartPolicy: Always
status: {}
```
How can I specify that each subsequent replica needs to use the next port up?
3 Answers
Since `docker-compose` sounds like it suited you well, you may be interested in `podman-compose`, which is meant to be a drop-in replacement: https://github.com/containers/podman-compose

This should let you keep the original workflow that you enjoyed. Alternatively, Podman 3 includes docker-compose support natively.

In terms of incrementing the port automatically, a few rough suggestions for solving the underlying problem:

- Use compose (as above) with YAML anchors to define multiple services that share the same config but override the port.

None of these tools really support auto-incrementing port allocations: if you are manually specifying ports, you either have a smallish, simple stack (within a docker-compose file, say), or you have one special workload among a sea of (likely) auto-routed and managed services on a Kubernetes cluster.
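As a sketch of the anchors suggestion, a hypothetical compose file (service names are assumptions; the image and ports are copied from the question) might share one config block and override only the host port per service:

```yaml
# docker-compose.yml / podman-compose sketch: one shared config via a
# YAML anchor, with a different host port mapping per service.
x-server: &server                       # top-level x- keys are ignored by compose
  image: registry.example.com/server:1.2.3
  environment:
    NODE_ENV: production

services:
  server1:
    <<: *server                         # merge the shared config
    ports: ["3001:3000"]
  server2:
    <<: *server
    ports: ["3002:3000"]
  server3:
    <<: *server
    ports: ["3003:3000"]
  server4:
    <<: *server
    ports: ["3004:3000"]
```

The ports are still listed by hand, but the rest of the config lives in one place.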
Fortunately, both of these options can assign ports dynamically for you, as you may be aware when using `docker-compose` to scale a service (`docker-compose scale service-name=4`), with the caveat that you must not pin a host port in the service spec.

Hope that gives you options to think about that may help you resolve this challenge in your workflow.
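To sketch the dynamic-port route (a hypothetical file reusing the question's image; the service name is an assumption), pin only the container port and let the engine choose the host port per replica:

```yaml
# Sketch: no host port pinned, so each scaled replica gets its own
# dynamically assigned host port.
services:
  server:
    image: registry.example.com/server:1.2.3
    environment:
      NODE_ENV: production
    ports:
      - "3000"   # container port only; host port chosen at run time
# Scale with:   docker-compose up -d --scale server=4
# Then inspect: docker-compose port --index=2 server 3000
```

The trade-off is that nginx can no longer hard-code the upstream ports; you would need to discover them after each `up`.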
You can explore using a Kubernetes Service in front of a replica set.

The Service is in charge of load balancing requests across all pods matching its `selector` field. All your backend pods can then use the same port, as you are already doing with `replicas`, and you do not need to configure a different port in each pod.

Since you access the pods via the Service, you also need to modify `nginx.conf` to address the Service directly; you no longer need to list every pod on its own line. This way you also gain flexibility: if you scale the deployment up to 10 replicas, for example, you do not need to add the new servers here. The Service does this dirty work for you.

I came across this old post and would like to add another option.
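To make the Service approach from the previous answer concrete, here is a minimal sketch for a real Kubernetes cluster (podman's `play kube` does not implement Services; the Service name is an assumption, and the selector reuses the kompose-generated label):

```yaml
# Sketch: one Service in front of the server Deployment. Every replica
# listens on the same containerPort; no hostPort is needed at all.
apiVersion: v1
kind: Service
metadata:
  name: server                      # assumed name, resolvable via cluster DNS
spec:
  selector:
    io.kompose.service: server      # must match the Deployment's pod labels
  ports:
  - port: 3000                      # port the Service exposes
    targetPort: 3000                # containerPort on each replica
```

With this in place, the nginx `upstream` block collapses to a single `server server:3000;` entry (or `proxy_pass http://server:3000;` directly), and scaling the Deployment requires no nginx changes.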
Below is a snippet from an Ansible playbook which creates 4 Pods and exposes them on different ports.