I am a student and have to write a Bachelor thesis for my school. Is it possible to build a hybrid Kubernetes cluster, and how would I go about it?
Is there a good application I can run in this cluster to show that it works?
I have made an AKS cluster and an on-prem cluster. Then I set up an nginx load balancer and load-balanced the two, but the application isn't synced (which is logical). I tried using Rancher, but I kept getting errors while trying to create a local cluster. Is it possible to have the storage synced somehow and to control the two clusters from one place, or even to make them one cluster? I have found that you can use Azure Arc with AKS; is this a viable solution? Should I use a VPN instead?
2 Answers
Check out Cilium Cluster Mesh. You need to configure AKS with BYOCNI and install Cilium manually as described in the Cilium installation documentation (there is an AKS section).
Cilium also needs to be the CNI of your on-premises cluster.
Then you configure Cilium Cluster Mesh. Cilium also has AKS-to-AKS Cluster Mesh documentation.
PS: I never tried this in a hybrid scenario.
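Roughly, the Cluster Mesh setup with the cilium CLI looks like the sketch below. The context names (`aks`, `onprem`) are placeholders for your kubeconfig contexts, and the exact flags can differ between cilium-cli versions, so check the docs for your release:

```shell
# Each cluster needs a unique name and ID at install time
cilium install --context aks    --set cluster.name=aks    --set cluster.id=1
cilium install --context onprem --set cluster.name=onprem --set cluster.id=2

# Enable the Cluster Mesh control plane on both clusters
cilium clustermesh enable --context aks
cilium clustermesh enable --context onprem

# Connect the two clusters (the connection is set up in both directions)
cilium clustermesh connect --context aks --destination-context onprem

# Verify the mesh is healthy
cilium clustermesh status --context aks --wait
```

For this to work across a hybrid boundary, the clusters' nodes must be able to reach each other's Cluster Mesh API server endpoints, which is where a VPN or public IPs come into play.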
If by hybrid k8s cluster you mean a cluster that has nodes over different cloud providers, then yes that is entirely possible.
You can create a simple example of such a cluster by using k3s (lightweight Kubernetes) with the `--node-external-ip` flag. This tells your nodes to talk to each other via their public IPs.
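As a sketch, using the k3s install script (the IP addresses below are placeholders for your nodes' public IPs):

```shell
# On the server node (assumed public IP 203.0.113.10):
curl -sfL https://get.k3s.io | sh -s - server \
  --node-external-ip 203.0.113.10

# On each agent node (assumed public IP 203.0.113.20). The join token
# is found in /var/lib/rancher/k3s/server/node-token on the server:
curl -sfL https://get.k3s.io | \
  K3S_URL=https://203.0.113.10:6443 K3S_TOKEN=<token> sh -s - agent \
  --node-external-ip 203.0.113.20
```

Note the `<token>` stays a placeholder; you copy it from the server after the first command finishes.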
This sort of setup is described in Running in Multiple Zones in the Kubernetes documentation. You will have to configure each location where you place nodes as a separate zone.
You can handle storage on a cluster like this by using the CSI drivers for the different environments you use (AWS, GCP, Azure, etc.). When you then deploy a PVC and it creates a PV on AWS, for example, any pod that mounts this volume will always be scheduled in the zone the PV resides in; otherwise scheduling would be impossible.
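As a minimal sketch of that PVC side, assuming a cloud CSI driver is already installed and exposes a StorageClass (the name `ebs-sc` here is an assumption; substitute whatever StorageClass your environment provides):

```shell
# Assumed: the AWS EBS CSI driver is installed and provides a
# StorageClass named "ebs-sc" with WaitForFirstConsumer binding.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
EOF
# Any pod mounting demo-data will be scheduled in the zone
# where the backing volume was created.
```

WaitForFirstConsumer binding matters here: it delays volume creation until a pod is scheduled, so the volume lands in a zone where the pod can actually run.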
I personally am not running this setup in production, but I am using a technique that also suits this multiple-zones idea with regard to networking. To save money on my personal cluster, I tell my Nginx ingress controller not to create a LoadBalancer resource and to run the controller as a DaemonSet. The Nginx controller pods open a hostPort on the node they run on (since it's a DaemonSet, there won't be more than one of those pods per node), and this hostPort exposes ports 80 and 443 on the host. When you then add more nodes, every node with an ingress controller pod on it becomes an ingress entry point. Just set up your DNS records to include all of those nodes and you'll have them load balanced.
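With the ingress-nginx Helm chart, that setup corresponds roughly to these values (a sketch; value names can shift between chart versions, so check the chart's values reference):

```shell
# Install ingress-nginx as a DaemonSet with hostPorts 80/443 open on
# every node, and without creating a LoadBalancer Service.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.kind=DaemonSet \
  --set controller.hostPort.enabled=true \
  --set controller.service.enabled=false
```

After that, pointing multiple A records for your domain at the nodes' public IPs gives you simple round-robin DNS load balancing across the entry points.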