Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1160.45.1.el7.x86_64

I am using an external load balancer built on HAProxy and Keepalived; my virtual IP is 172.24.16.6. If I create a Service of type NodePort, I can connect to the pod from outside, which confirms that the load balancer's IP can deliver traffic to my cluster.
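
For context, the HAProxy side follows roughly this pattern, with Keepalived holding the virtual IP on the HAProxy hosts (the worker node IPs and the NodePort below are placeholders, not values from this cluster):

frontend k8s_http
    bind 172.24.16.6:80
    mode tcp
    default_backend k8s_workers_http

backend k8s_workers_http
    mode tcp
    balance roundrobin
    # placeholder worker node IP:NodePort pairs
    server worker-01 10.0.0.11:30080 check
    server worker-02 10.0.0.12:30080 check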

I installed the NGINX Ingress Controller following these instructions: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
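
For reference, the install from that page boils down to applying the manifests from the repository's deployments folder, roughly like this (abridged; the exact file names are in the linked instructions):

$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f common/ingress-class.yaml
$ kubectl apply -f deployment/nginx-ingress.yaml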

I also ran kubectl apply -f service/loadbalancer.yaml with the following parameters:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  externalIPs:
  - 172.24.16.6
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress

As a result, it all looks like this:

$ kubectl get all -o wide -n nginx-ingress
NAME                                 READY   STATUS    RESTARTS   AGE   IP                NODE                          NOMINATED NODE   READINESS GATES
pod/nginx-ingress-768698d9df-c2wlx   1/1     Running   0          27m   192.168.105.197   srv-dev-k8s-worker-05   <none>           <none>

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
service/nginx-ingress   LoadBalancer   10.104.239.149   172.24.16.6   80:30053/TCP,443:30021/TCP   22m   app=nginx-ingress

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS      IMAGES                      SELECTOR
deployment.apps/nginx-ingress   1/1     1            1           28m   nginx-ingress   nginx/nginx-ingress:2.0.2   app=nginx-ingress

NAME                                       DESIRED   CURRENT   READY   AGE   CONTAINERS      IMAGES                      SELECTOR
replicaset.apps/nginx-ingress-6454cfbc49   0         0         0       28m   nginx-ingress   nginx/nginx-ingress:2.0.2   app=nginx-ingress,pod-template-hash=6454cfbc49
replicaset.apps/nginx-ingress-768698d9df   1         1         1       27m   nginx-ingress   nginx/nginx-ingress:2.0.2   app=nginx-ingress,pod-template-hash=768698d9df
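
To confirm that the Service really selects the controller pod, the endpoints can be checked as well (just a sketch, output omitted):

$ kubectl -n nginx-ingress get endpoints nginx-ingress
$ kubectl -n nginx-ingress describe svc nginx-ingress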

nginx-ingress pod:

$ kubectl -n nginx-ingress get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP                NODE                          NOMINATED NODE   READINESS GATES
nginx-ingress-768698d9df-c2wlx   1/1     Running   0          72m   192.168.105.197   srv-dev-k8s-worker-05   <none>           <none>

The netstat output shows that ports 80 and 443 are open and bound to 172.24.16.6:

$ netstat -tulpn
(No info could be read for "-p": geteuid()=1002 but you should be root.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 172.24.16.6:80          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:179             0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 172.24.16.6:443         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:43707         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:32000           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:30021           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:30053           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:9098          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:9099          0.0.0.0:*               LISTEN      -
tcp        0      0 172.24.25.141:2379      0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:6444            0.0.0.0:*               LISTEN      -
tcp        0      0 172.24.25.141:2380      0.0.0.0:*               LISTEN      -
tcp6       0      0 :::10256                :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 :::31231                :::*                    LISTEN      -
tcp6       0      0 :::5473                 :::*                    LISTEN      -
tcp6       0      0 :::10250                :::*                    LISTEN      -
tcp6       0      0 :::6443                 :::*                    LISTEN      -
udp        0      0 127.0.0.1:323           0.0.0.0:*                           -
udp        0      0 0.0.0.0:4789            0.0.0.0:*                           -
udp        0      0 0.0.0.0:58191           0.0.0.0:*                           -
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -
udp6       0      0 ::1:323                 :::*                                -

But iptables does not open any of these ports: https://pastebin.com/BvV32sjD
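
One way to check what kube-proxy actually programmed for the external IP and the NodePorts is to filter the rules directly (run as root on a node; just a sketch):

$ sudo iptables-save | grep -E '172\.24\.16\.6|30053|30021'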

Please help me access it from outside the cluster.

2 Answers


  1. Chosen as BEST ANSWER

    Yes, I added an Ingress in the namespace for-only-test.

    $ kubectl get all -o wide
    NAME                                    READY   STATUS    RESTARTS   AGE    IP               NODE                          NOMINATED NODE   READINESS GATES
    pod/nginx-deployment-559d658b74-6p4tb   1/1     Running   0          179m   192.168.240.70   srv-dev-k8s-worker-08   <none>           <none>
    pod/nginx-deployment-559d658b74-r96s9   1/1     Running   0          179m   192.168.240.71   srv-dev-k8s-worker-08   <none>           <none>
    
    NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
    service/nginx-deployment   ClusterIP   10.108.39.147   <none>        80/TCP    178m   app=nginx
    
    NAME                               READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
    deployment.apps/nginx-deployment   2/2     2            2           3h1m   nginx        nginx:1.16.1   app=nginx
    
    NAME                                          DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR
    replicaset.apps/nginx-deployment-559d658b74   2         2         2       179m   nginx        nginx:1.16.1   app=nginx,pod-template-hash=559d658b74
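
    The Deployment and Service behind that output are roughly the following (reconstructed from the listing above, so treat it as a sketch):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.16.1
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-deployment
    spec:
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80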
    

    Then I created the Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-for-nginx-deployment
      annotations:
    #    kubernetes.io/ingress.class: "nginx"
    #    nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      rules:
      - host: k8s.domain.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-deployment
                port:
                  number: 80
    
    $ kubectl get ingress -o wide
    NAME                           CLASS   HOSTS              ADDRESS   PORTS   AGE
    ingress-for-nginx-deployment   nginx   k8s.domain.com             80      7s
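
    With the Ingress created, it can be exercised through the virtual IP by forcing the Host header (assuming k8s.domain.com does not resolve yet; just a sketch):

    $ curl -H "Host: k8s.domain.com" http://172.24.16.6/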
    

  2. A Service of type LoadBalancer needs to be connected to an external load balancer. AWS and other cloud providers do that natively, but on an on-prem cluster you need to use an ingress controller and an Ingress for that.

    Here it seems that you don't have an external load balancer available that can serve traffic to your LoadBalancer Service. To work around that, we install the NGINX ingress controller and create an Ingress resource, which will then talk to your LoadBalancer Service.

    So just customize the Ingress resource below to your needs and deploy it; it should then work.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-wildcard-host
    spec:
      rules:
      - host: "foo.bar.com"
        http:
          paths:
          - pathType: Prefix
            path: "/bar"
            backend:
              service:
                name: service1
                port:
                  number: 80
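
    Depending on how the controller was installed, the spec may also need an explicit class (this is an assumption; it only matters if no default IngressClass is set):

    spec:
      ingressClassName: nginx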
    

    So in your on-prem cluster the traffic flow is: nginx ingress controller -> Ingress -> LoadBalancer Service

    While on AWS the traffic flow is: AWS ELB -> LoadBalancer Service

    (There, AWS auto-provisions an ELB for every LoadBalancer Service.)
