
I’m trying to add an NGINX Ingress controller to a GKE cluster that already runs an HAProxy Ingress controller (which has some problems with its rewrite rules).

First I tried exposing the controller’s Service as type LoadBalancer. Traffic reached the ingress and the backends, but it didn’t work with Managed Certificates.

So I tried using an L7 Load Balancer (URL map) to forward traffic to the GKE cluster instead, and created an Ingress object for the ingress controller itself.

The problem is that this Ingress object never seems to bind to the external IP, and requests to the domain get a "default backend – 404" response.

$ kubectl -n ingress-controller get service
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
haproxy-ingress           NodePort    172.16.xxx.xxx  <none>        80:31579/TCP,443:31769/TCP   595d
ingress-default-backend   ClusterIP   172.16.xxx.xxx  <none>        8080/TCP                     595d
nginx-ingress-svc         NodePort    172.16.xxx.xxx  <none>        80:32416/TCP,443:31299/TCP   2d17h

$ kubectl -n ingress-controller get ing
NAME                CLASS    HOSTS   ADDRESS          PORTS   AGE
haproxy-l7-ing      <none>   *       34.xxx.xxx.aaa   80      594d
ingress-nginx-ing   nginx    *       172.xxx.xxx.xxx  80      2d16h

$ gcloud compute addresses list --global --project my-project
NAME                    ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
my-ext-ip               34.xxx.xxx.aaa  EXTERNAL                                    IN_USE
my-test-ext-ip          34.xxx.xxx.bbb  EXTERNAL                                    IN_USE

In this case, I’d expect ingress-nginx-ing to be bound to 34.xxx.xxx.bbb (my-test-ext-ip), just as haproxy-l7-ing is bound to 34.xxx.xxx.aaa (my-ext-ip), but it isn’t.
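(For anyone hitting the same symptom: a way to see why an Ingress isn’t getting an address is to look at its events. The commands below just use the object names from the output above.)

```shell
# The controller usually records in the Ingress events why it did not
# provision a forwarding rule or bind the static IP.
kubectl -n ingress-controller describe ingress ingress-nginx-ing

# Compare against the working HAProxy Ingress for reference.
kubectl -n ingress-controller describe ingress haproxy-l7-ing
```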

Load Balancers:

$ gcloud compute forwarding-rules list --global --project my-project
NAME                              REGION  IP_ADDRESS      IP_PROTOCOL  TARGET
haproxy-http-fwdrule                      34.xxx.xxx.aaa  TCP          haproxy-http-proxy
haproxy-https-fwdrule                     34.xxx.xxx.aaa  TCP          haproxy-https-proxy
nginx-http-fwdrule                        34.xxx.xxx.bbb  TCP          nginx-http-proxy
nginx-https-fwdrule                       34.xxx.xxx.bbb  TCP          nginx-https-proxy

$ gcloud compute target-http-proxies list --global --project my-project
NAME                URL_MAP
haproxy-http-proxy  haproxy-http-urlmap
nginx-http-proxy    nginx-https-urlmap

$ gcloud compute target-https-proxies list --global --project my-project
NAME                                  SSL_CERTIFICATES                    URL_MAP
haproxy-https-proxy                   default-cert,mcrt-xxxxxx-xxxxxx     haproxy-https-urlmap
nginx-https-proxy                     mcrt-xxxxxx-xxxxxx                  nginx-https-urlmap

$ gcloud compute url-maps list --global --project my-project
NAME                      DEFAULT_SERVICE
haproxy-https-urlmap      backendServices/k8s-be-xxxxxx--xxxxxx
haproxy-http-urlmap
nginx-https-urlmap        backendServices/nginx-lb-backendservice

$ gcloud compute backend-services list --global --project my-project
NAME                            BACKENDS                                         PROTOCOL
k8s-be-xxxxxx--xxxxxx           asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx  HTTP
nginx-lb-backendservice         asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx  HTTP

Backend: asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx points to GKE cluster.

The Kubernetes YAML looks like this:

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  namespace: ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-ingress-svc
  namespace: ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
      appProtocol: http
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
      appProtocol: https

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-ing
  namespace: ingress-controller
  labels:
    app: ingress-nginx
    tier: ingress
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # kubernetes.io/ingress.allow-http: 'false'
    kubernetes.io/ingress.global-static-ip-name: 'my-test-ext-ip'
    ingress.kubernetes.io/url-map: nginx-https-urlmap
    networking.gke.io/managed-certificates: 'my-managed-cert'
    ingress.gcp.kubernetes.io/pre-shared-cert: 'default-cert'
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: nginx-ingress-svc
      port:
        number: 80

Any idea what I might be missing here?
Thanks!


UPDATE

I’ve tweaked some of the load balancer config, creating my own backend service and health check like this:

$ gcloud compute backend-services describe nginx-lb-backendservice --global
affinityCookieTtlSec: 0
backends:
- balancingMode: RATE
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/my-project/zones/asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx
  maxRatePerInstance: 1.0
cdnPolicy:
  cacheKeyPolicy:
    includeHost: true
    includeProtocol: true
    includeQueryString: false
  cacheMode: USE_ORIGIN_HEADERS
  negativeCaching: false
  requestCoalescing: true
  serveWhileStale: 0
  signedUrlCacheMaxAgeSec: '0'
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2022-01-07T00:48:38.900-08:00'
description: '{"kubernetes.io/service-name":"ingress-controller/nginx-ingress-svc","kubernetes.io/service-port":"80"}'
enableCDN: true
fingerprint: ****
healthChecks:
- https://www.googleapis.com/compute/v1/projects/mtb-development-project/global/healthChecks/nginx-lb-backend-healthcheck
id: '7699213954898870409'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
logConfig:
  enable: true
  sampleRate: 1.0
name: nginx-lb-backendservice
port: 31579
portName: port31579
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/nginx-lb-backendservice
sessionAffinity: NONE
timeoutSec: 30
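For anyone reproducing this manual setup: the backend service above could be created roughly like this. The names and the NodePort 31579 mirror the describe output; the flags are a sketch, not the exact commands from my history, and the named port (port31579) must also be set on the instance group.

```shell
# Health check against the ingress controller's NodePort.
# /healthz is an assumption; use whatever path your controller serves.
gcloud compute health-checks create http nginx-lb-backend-healthcheck \
  --port 31579 --request-path /healthz

# Global backend service wired to that health check and named port.
gcloud compute backend-services create nginx-lb-backendservice \
  --global --protocol HTTP --port-name port31579 \
  --health-checks nginx-lb-backend-healthcheck

# Attach the GKE instance group as the backend.
gcloud compute backend-services add-backend nginx-lb-backendservice \
  --global --instance-group k8s-ig--xxxxxx \
  --instance-group-zone asia-southeast1-a \
  --balancing-mode RATE --max-rate-per-instance 1
```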

Then I added this annotation to the Ingress ingress-nginx-ing:

ingress.kubernetes.io/url-map: nginx-https-urlmap

The backend status is HEALTHY, but ingress-nginx-ing still won’t bind to the reserved external IP.

Also, unlike HAProxy’s Ingress, it has none of these annotations attached: ingress.kubernetes.io/backends, ingress.kubernetes.io/https-forwarding-rule, ingress.kubernetes.io/https-target-proxy.

Sending HTTP(S) requests to myhost.mydomain/whatever (which resolves to 34.xxx.xxx.bbb) still gets "default backend – 404" responses.

UPDATE#2 (WORKED!)

I tried boredabdel’s answer below: removing ingressClassName: nginx from ingress-nginx-ing, and it worked.

After deleting the manually created LB objects (as prompted by the new warnings) and tweaking the auto-generated health check, traffic reaches the APIs as expected.

(The confusion came from examples that mix the kubernetes.io/ingress.class annotation with ingressClassName.)

3 Answers


  1. Managed Certificates only work with L7 (HTTP) load balancers, not with TCP ones.

    My understanding is that you want to use NGINX as an Ingress controller on GKE, but expose it behind an L7 load balancer so you can use Google Managed Certificates?

  2. Yeah, so the issue I see in your YAML files is that you are trying to expose the NGINX ingress itself using the nginx IngressClass; that won’t work.

    What you have to do is expose NGINX using GKE’s default IngressClass, called gce. If you omit the class on the Ingress object, gce is the default. So your objects will roughly look like this:

    HTTP LB (via Ingress with the gce IngressClass) -> nginx Service -> NGINX pods -> app Service -> app pods

    We do have an example here.

    There are, however, a few things to keep in mind. The NGINX Ingress controller does pretty much the same thing as GKE’s default Ingress controller: both set up an HTTP(S) load balancer in front of your app. In the setup you are trying to achieve, you will end up with two load balancers: the Google HTTP(S) LB provisioned via the Ingress, and the NGINX one. That means traffic is terminated twice, which could add latency. Just something to keep in mind.
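    Applied to the objects in the question, the fix is roughly an Ingress with no nginx class. This sketch reuses the names from the question and is not a tested manifest:

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-nginx-ing
      namespace: ingress-controller
      annotations:
        kubernetes.io/ingress.global-static-ip-name: 'my-test-ext-ip'
        networking.gke.io/managed-certificates: 'my-managed-cert'
    spec:
      # No ingressClassName: the GKE "gce" controller handles it by default.
      defaultBackend:
        service:
          name: nginx-ingress-svc
          port:
            number: 80
    ```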

  3. In my case it was publishService in the NGINX Helm chart values being set to false. Of course, you also need the annotation:
    kubernetes.io/ingress.class: nginx
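    In the official ingress-nginx Helm chart, that flag is controller.publishService.enabled; with it on, the controller publishes its Service address onto the Ingress objects it manages. Enabling it at install time might look like this (release and namespace names are placeholders):

    ```shell
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-controller \
      --set controller.publishService.enabled=true
    ```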
