The context
Let me know if I’ve gone down a rabbit hole here.
I have a simple web app with a frontend and a backend component, deployed using Docker/Helm inside a Kubernetes cluster. The frontend is served via nginx, and the backend component will run a Node.js microservice.
I had been thinking of having both run in the same pod, but I ran into some problems getting both nginx and Node to run in the background. I could try a startup script that runs both, but the Internet says it’s a best practice to have each container be responsible for running only one service – so one container to run nginx and another to run the microservice.
The problem
That’s fine, but then say the nginx server’s HTML pages need to know where to send a POST request on the backend – how can the HTML pages know what IP to hit for the backend’s container? Articles like this one come up that talk about manually creating a Docker network so the two containers can speak to one another, but how can I configure this with Helm so that the frontend container knows how to reach the backend container each time a new container is deployed, without having to manually configure any network service each time? I want the deployments to be automated.
2 Answers
You mention that your frontend is based on Nginx.
Accordingly, the frontend must hit the public URL of the backend.
Thus, the backend must be exposed by choosing the type of its Service, whether:

http://<any-node-ip>:<node-port> (NodePort)
http://<loadbalancer-external-IP>:<service-port> (LoadBalancer)
http://ingress.host.com (Ingress)

We recommend the last way, but it requires an ingress controller.
Once you have tested one of them and it works, you can extend your Helm chart to update the Service and add the Ingress resource if needed.
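For example, a minimal sketch of such an Ingress resource that could be added to the chart’s templates/ directory – the Service name (backend), its port (3000) and the host are assumptions for illustration, not values from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  rules:
    - host: api.example.com          # hypothetical host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend        # assumed name of the backend Service
                port:
                  number: 3000       # assumed port of the backend Service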
You may try to set up two containers in one pod and have them communicate via localhost (but on different ports!). A good example is here – Kubernetes multi-container pods and container communication.
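A rough sketch of that approach – the image names and ports here are assumptions, not taken from the question – would be a single Deployment whose pod runs both containers, with nginx reaching Node at http://localhost:3000:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: frontend              # nginx serving the static pages
          image: nginx:1.25           # assumed image tag
          ports:
            - containerPort: 80
        - name: backend               # Node.js microservice
          image: my-backend:latest    # hypothetical image name
          ports:
            - containerPort: 3000     # reachable from nginx as localhost:3000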
Another option is to create two separate Deployments and a Service for each. Instead of using IP addresses (they won’t be the same for every re-deployment of your app), use a DNS name for connecting to them.
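In the asker’s setup that could mean, for instance, letting nginx proxy API calls to the backend Service by its DNS name – a sketch, assuming a Service called backend on port 3000 in the same namespace (both names are made up here):

location /api/ {
    # "backend" is resolved by cluster DNS to the backend Service,
    # no matter which pod IPs are currently behind it
    proxy_pass http://backend:3000/;
}

The HTML pages then only need to POST to a relative path such as /api/..., so no IP address ever has to be configured by hand.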
Example – communication between two NGINX services.
First, create two NGINX Deployments:
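For instance (a sketch – the name nginx-one comes from the steps below, nginx-two is assumed for the second one):

# create two Deployments, each running a stock nginx image
kubectl create deployment nginx-one --image=nginx
kubectl create deployment nginx-two --image=nginx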
Let’s expose them using the kubectl expose command. It’s the same as if I had created a Service from a YAML file:
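For example (continuing with the assumed Deployment names and the default nginx port 80):

# expose each Deployment as a Service on port 80 (ClusterIP by default)
kubectl expose deployment nginx-one --port=80
kubectl expose deployment nginx-two --port=80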
Now let’s check the Services – as you can see, both of them are of the ClusterIP type:
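A quick check (output omitted here – it would list both Services with TYPE ClusterIP):

kubectl get services nginx-one nginx-two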
I will exec into a pod from the nginx-one Deployment and curl the second Service (see the sketch below). If you have problems, make sure you have a proper CNI plugin installed for your cluster – also check this article, Cluster Networking, for more details.
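A sketch of that check – the pod name is a placeholder to be filled in from the first command, and it assumes curl is available in the container image:

# find a pod belonging to the nginx-one Deployment
kubectl get pods -l app=nginx-one

# exec into it and curl the other Service by its DNS name
kubectl exec -it <nginx-one-pod-name> -- curl http://nginx-two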
Also check these: