I need to expose a service to the outside world using Ingress on an EC2 instance. I have three services running in the Kubernetes cluster: one of them must be accessible from outside, and the other two only communicate internally with the first. To expose the service I am trying to use Ingress, but I am a little confused about the necessary configuration and the way Ingress works.
What I have tried:
deployment.yaml
### DEPLOYMENTS ###
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agify-deployment
  labels:
    app: agify
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agify
  template:
    metadata:
      labels:
        app: agify
    spec:
      containers:
        - name: agify
          image: myrepo/svc_agify:v1
          ports:
            - containerPort: 9010
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: genderize-deployment
  labels:
    app: genderize
spec:
  replicas: 1
  selector:
    matchLabels:
      app: genderize
  template:
    metadata:
      labels:
        app: genderize
    spec:
      containers:
        - name: genderize
          image: myrepo/svc_genderize:v1
          ports:
            - containerPort: 9020
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core-deployment
  labels:
    app: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core
  template:
    metadata:
      labels:
        app: core
    spec:
      containers:
        - name: core
          image: myrepo/svc_core:v1
          ports:
            - containerPort: 9030
---
### SERVICES ###
apiVersion: v1
kind: Service
metadata:
  name: agify-svc
spec:
  selector:
    app: agify
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9010
---
apiVersion: v1
kind: Service
metadata:
  name: genderize-svc
spec:
  selector:
    app: genderize
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9020
---
# CORE SERVICE
apiVersion: v1
kind: Service
metadata:
  name: core-svc
spec:
  selector:
    app: core
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9030
  type: NodePort
---
# INGRESS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-core
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my-service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: core-svc
                port:
                  number: 80
In my /etc/hosts file on the EC2 instance I have placed the following so that my-service.com resolves to the minikube IP address:
XXX.XXX.XXX.XXX my-service.com
If I try it from within the EC2 instance where the cluster runs, it works perfectly. But how do I consume the service from outside the EC2 instance? What am I missing?
To clarify, I already have the inbound and outbound rules configured on my EC2 instance.
I found that I need to add the following configuration to the Ingress resource, but the configuration seems to be invalid:
spec:
  ingressClassName: nginx
  loadBalancerIP: xx.xx.xx.xx # EC2 instance public IP
Answers
Since your core-svc service is currently of type NodePort, it's already accessible from outside the cluster using the public IP of your EC2 instance and the assigned NodePort port number. However, for more seamless external access, consider changing the service type to LoadBalancer. This will provision an external load balancer, like an Elastic Load Balancer on AWS, and expose your service externally. Additionally, setting loadBalancerIP in the Ingress resource might not be supported by all cloud providers and could lead to unexpected behavior.
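A rough sketch of that suggestion, with core-svc switched to type LoadBalancer (this assumes your cluster can actually provision an external load balancer; on plain minikube the external IP stays pending unless minikube tunnel is running):

# Sketch only: core-svc as a LoadBalancer service.
# Assumes the cluster can provision an external load balancer
# (e.g. an ELB on a managed AWS cluster); on minikube the
# EXTERNAL-IP stays <pending> unless `minikube tunnel` is running.
apiVersion: v1
kind: Service
metadata:
  name: core-svc
spec:
  type: LoadBalancer
  selector:
    app: core
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9030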
To expose your Kubernetes service to the outside world using Ingress on an EC2 instance:

Deploy an Ingress controller in your Kubernetes cluster. You can use a Helm chart for this purpose. Refer to the NGINX Ingress Controller documentation for details on how it works.
Ensure that the service you want to expose (core-svc) is of type ClusterIP. It should not be NodePort if you're using Ingress.
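Following that advice, core-svc would drop type: NodePort and be reached from outside only through the Ingress; a minimal sketch:

# Sketch only: core-svc as a plain ClusterIP service,
# exposed externally solely through the Ingress.
apiVersion: v1
kind: Service
metadata:
  name: core-svc
spec:
  type: ClusterIP   # also the default when no type is given
  selector:
    app: core
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9030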
To clarify your question: if you are using an Ingress to expose the service, you don't need to make it a NodePort; opening a node port as well would be a security violation. You should use only the ClusterIP. If you don't provide a service type, it will default to ClusterIP. And to answer your question: yes, no ingressClassName has been specified in the Ingress, so I have updated your Ingress YAML.
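Roughly, that update amounts to adding ingressClassName to the spec of the original Ingress; a minimal sketch, assuming the controller registers the nginx IngressClass:

# Sketch only: the original Ingress with ingressClassName added.
# Assumes the installed controller registers the `nginx` IngressClass.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-core
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: my-service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: core-svc
                port:
                  number: 80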
Make sure the indentation is correct; it works for me.