
I’ve set up an Ingress resource to route requests to a single service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    #kubernetes.io/ingress.class: nginx
    #ingress.kubernetes.io/rewrite-target: /

spec:
  defaultBackend:
    service:
      name: dashboard
      port:
        number: 80
$ kubectl get ing
NAME              CLASS    HOSTS   ADDRESS         PORTS   AGE
example-ingress   <none>   *       102.16.50.202   80      3m28s

The nginx-controller:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS      AGE
ingress-nginx-admission-create--1-gl59f     0/1     Completed   0             15h
ingress-nginx-admission-patch--1-9kbz6      0/1     Completed   0             15h
ingress-nginx-controller-54d8b558d4-2ss8f   1/1     Running     1 (13h ago)   15h

$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.233.48.242   102.16.50.202   80:31690/TCP,443:32666/TCP   15h
ingress-nginx-controller-admission   ClusterIP      10.233.17.68    <none>          443/TCP                      15h

I’m able to reach the service and get a response via the controller’s cluster IP:

$ curl -i 10.233.48.242
HTTP/1.1 200 OK
Date: Tue, 08 Feb 2022 04:50:44 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 2306
Connection: keep-alive
X-Powered-By: Express
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Tue, 25 Jan 2022 09:35:14 GMT
ETag: W/"902-17e9096e050"
...

But not via the controller’s external IP address:

$ curl -i 102.16.50.202
curl: (7) Failed to connect to 102.16.50.202 port 80: Connection refused

$ curl -i http://102.16.50.202
curl: (7) Failed to connect to 102.16.50.202 port 80: Connection refused

$ curl -i http://102.16.50.202/
curl: (7) Failed to connect to 102.16.50.202 port 80: Connection refused

I tried creating a new path (prefix), changing the service type to NodePort, and disabling the firewall, all with no success; same issue.
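One way to narrow this down is to bypass the LoadBalancer IP entirely and hit the controller’s NodePort directly on a node address (a diagnostic sketch; `<node-ip>` is a placeholder for one of your nodes, and 31690 is the HTTP NodePort shown by `kubectl get svc` above):

```shell
# If this succeeds while the external IP fails, the controller itself is fine
# and the problem is with how the LoadBalancer IP is announced/routed.
curl -i http://<node-ip>:31690
```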

Any observation or input would help a lot. Thanks.

Edit-1:

The nginx ingress controller is installed (kubectl apply) without modifying the default configuration:

#file: ingress-controller-deploy.yml
...
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

The externalTrafficPolicy: Local setting seems to be fine when using a load balancer, which in my case is MetalLB.
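One common failure mode with externalTrafficPolicy: Local is that kube-proxy only accepts traffic for the LoadBalancer IP on nodes that actually run a controller pod, so if the node announcing the IP on the LAN is a different one, connections are refused. A quick check (a sketch, not a definitive diagnosis):

```shell
# Which node runs the controller pod?
kubectl get pods -n ingress-nginx -o wide

# Which node answers for the external IP? Run from another host on the LAN;
# curl/ping the IP first so the ARP cache is populated.
arp -n 102.16.50.202
```

If the MAC belongs to a node that does not host the controller pod, switching the policy to Cluster (or rescheduling the pod) should restore connectivity.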

#file: ingress-controller-deploy.yml
apiVersion: apps/v1
kind: Deployment
...
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true

The securityContext section seems ok too.

3 Answers


  1. The Ingress object is not your issue; you have to concentrate on the ingress-controller setup. Also, which load-balancing mechanism are you using? If you are on bare metal, you need to deploy something like MetalLB, which you probably already have (otherwise a Service of type LoadBalancer would stay in the Pending state).

    Good documentation around this topic can be found at https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
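    For reference, a minimal MetalLB layer-2 setup might look like this (a sketch assuming the ConfigMap-based configuration MetalLB used at the time; the address range is hypothetical and must match your LAN):

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 102.16.50.200-102.16.50.210   # hypothetical pool containing 102.16.50.202
    ```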

  2. I was facing exactly the same issue on my bare-metal setup. The only difference was that I was using kube-vip instead of MetalLB. It was finally fixed when I changed the externalTrafficPolicy on the ingress-nginx-controller Service to Cluster.

    I’m unable to reason why, but the closest I can come up with is that the LoadBalancer IP was acquired by node-1 of my bare-metal setup, whereas the web frontend was scheduled on node-2.
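    If you want to try the same fix, the policy can be changed in place (a sketch using `kubectl patch` with a JSON merge patch):

    ```shell
    kubectl -n ingress-nginx patch svc ingress-nginx-controller \
      -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
    ```

    Note the trade-off: with Cluster, traffic may take an extra hop through kube-proxy on another node, and the original client source IP is lost to SNAT.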

  3. After reading tons of docs and GitHub issues, I came to the conclusion that on a bare-metal setup, without editing the ingress controller’s deploy.yaml (e.g. adding hostNetwork: true), you can’t access the exposed ingress IP address on port 80.
    However, if you specify the NodePort displayed in the ingress-nginx-controller Service, you can successfully access the exposed service from an external network.
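    For completeness, the hostNetwork alternative mentioned above would look roughly like this in the controller Deployment (a hedged excerpt, not the full manifest):

    ```yaml
    # Excerpt of the ingress-nginx-controller Deployment's pod template;
    # with hostNetwork the controller binds ports 80/443 directly on the node.
    spec:
      template:
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet  # so in-cluster DNS still resolves
    ```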

    My steps:

    • install ingress controller

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/baremetal/deploy.yaml

    • launch an echo server (YAML files taken from echo server)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: echo-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: echo-server
      template:
        metadata:
          labels:
            app: echo-server
        spec:
          containers:
            - name: echo-server
              image: jmalloc/echo-server
              ports:
                - name: http-port
                  containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: echo-service
    spec:
      ports:
        - name: http-port
          port: 80
          targetPort: http-port
          protocol: TCP
      selector:
        app: echo-server
    
    • install ingress
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: echo-ingress
        spec:
          rules:
          - host: echo.k8s-test
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: echo-service
                    port:
                      number: 80
          ingressClassName: nginx
    
    • While the ingress is acquiring an IP address, let’s check the ingress-nginx-controller Service:
    $ kubectl get svc ingress-nginx-controller -n ingress-nginx
    NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller   NodePort   10.105.96.55   <none>        80:30097/TCP,443:32746/TCP   82m
    

    Note port 30097, the HTTP NodePort.

    • Check ingress status
    $ kubectl get ing echo-ingress
    NAME           CLASS   HOSTS           ADDRESS           PORTS   AGE
    echo-ingress   nginx   echo.k8s-test   192.168.122.222   80      81m
    

    192.168.122.222 is the IP assigned to a NIC on one of the worker nodes.

    Now, from outside the k8s cluster, we can access it:

    $ curl http://192.168.122.222:30097 -H "Host: echo.k8s-test"
    Request served by echo-deployment-8586b5977c-pt2pp
    
    HTTP/1.1 GET /
    
    Host: echo.k8s-test
    User-Agent: curl/7.47.0
    Accept: */*
    X-Real-Ip: 10.244.1.1
    X-Forwarded-Host: echo.k8s-test
    X-Forwarded-Proto: http
    X-Forwarded-Scheme: http
    X-Scheme: http
    X-Request-Id: 188379f183fe0c3e7ddc612b85d0d76d
    X-Forwarded-For: 10.244.1.1
    X-Forwarded-Port: 80
    