
How can I get the real client IP from the Nginx ingress load balancer in GKE? Following online resources, I have configured External Traffic Policy: Local and also added the use-proxy-protocol: "true" property.

But I still see the GKE node IP/interface in the log, not the real client IP.

My load balancer service ->

Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/version=0.41.2
                          helm.sh/chart=ingress-nginx-3.10.1
Annotations:              networking.gke.io/load-balancer-type: Internal
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       xx.xxx.xx.xx
IPs:                      xx.xx.xxx.xx
LoadBalancer Ingress:     xx.xx.xx.xx
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32118/TCP
Endpoints:                xx.x.xx.xx:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31731/TCP
Endpoints:                xx.x.xx.xxx:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30515

My config map ->

apiVersion: v1
data:
  access-log-path: /var/log/nginx-logs/access.log
  compute-full-forwarded-for: "true"
  enable-real-ip: "true"
  enable-underscores-in-headers: "true"
  error-log-path: /var/log/nginx-logs/error.log
  large-client-header-buffers: 4 64k
  log-format-upstream: $remote_addr - $request_id - [$proxy_add_x_forwarded_for] -
    $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer"
    "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr
    $upstream_response_length $upstream_response_time $upstream_status
  proxy-read-timeout: "240"
  proxy-send-timeout: "240"
  real-ip-header: proxy_protocol
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
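
A quick way to check what is actually being logged (a hedged sketch: the label selector assumes the standard ingress-nginx chart labels, and the log path comes from the access-log-path setting above):

# Find the controller pod, then tail its access log to see which address
# lands in the $remote_addr field of each request line.
POD=$(kubectl -n ingress-nginx get pods \
  -l app.kubernetes.io/component=controller \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n ingress-nginx exec "$POD" -- tail -n 20 /var/log/nginx-logs/access.log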

2 Answers


  1. You also need to set this in the Kubernetes Service spec:

    externalTrafficPolicy: Local
    

    For instance,

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
    spec:
      selector:
        app: example
      ports:
        - port: 8765
          targetPort: 9376
      externalTrafficPolicy: Local
      type: LoadBalancer
    

    Then you can use the following in the Nginx settings:

    use-proxy-protocol: "false"
    enable-real-ip: "true"
    use-forwarded-headers: "true"
    proxy-real-ip-cidr: "YOUR-LB-IP/32"
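
    If the controller was installed with the official ingress-nginx Helm chart, the same settings can be applied through chart values. A minimal sketch, assuming the release name ingress-nginx and a YOUR-LB-IP placeholder:

    # --set-string keeps the true/false values as the quoted strings the
    # controller ConfigMap expects.
    helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx \
      --set controller.service.externalTrafficPolicy=Local \
      --set-string controller.config.use-proxy-protocol=false \
      --set-string controller.config.enable-real-ip=true \
      --set-string controller.config.use-forwarded-headers=true \
      --set-string controller.config.proxy-real-ip-cidr=YOUR-LB-IP/32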
    

    In addition, if you want to force HTTPS redirection, you can set the following in the Nginx settings:

    force-ssl-redirect: "true"
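
    With that enabled, a plain-HTTP request should be redirected; ingress-nginx answers with a 308 by default. A hedged check (YOUR-LB-IP is a placeholder):

    # Expect "308 Permanent Redirect" and a Location: https://... header.
    curl -sI http://YOUR-LB-IP/ | head -n 3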
    

    Read further at https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters#externalTrafficPolicy

  2. I tried to use the following ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.5.1
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      allow-snippet-annotations: "true"
      enable-real-ip: "true"
      use-forwarded-headers: "true"
      proxy-real-ip-cidr: "<pods_cidr>,<services_cidr>,<load_balance_ip>/32"
      use-proxy-protocol: "false"
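
    On GKE, one way to fill in the <pods_cidr> and <services_cidr> placeholders is to read them off the cluster. A hedged sketch (CLUSTER_NAME and ZONE are placeholders):

    # clusterIpv4Cidr is the pod range, servicesIpv4Cidr the Service range.
    gcloud container clusters describe CLUSTER_NAME --zone ZONE \
      --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"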
    

    And added externalTrafficPolicy: Local to the Service that provisions the load balancer:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.5.1
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      externalTrafficPolicy: Local
      ipFamilies:
        - IPv4
      ipFamilyPolicy: SingleStack
      ports:
        - appProtocol: https
          name: https
          port: 443
          protocol: TCP
          targetPort: https
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      type: LoadBalancer
      loadBalancerIP: <load_balance_ip>
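
    A stable <load_balance_ip> can come from a reserved static address. A hedged sketch (the address name and region are placeholders; for an internal load balancer, as in the question's annotation, reserve the address in the cluster's subnet with --subnet):

    # Reserve a regional static IP, then read back the address to use as
    # loadBalancerIP above.
    gcloud compute addresses create ingress-nginx-ip --region us-central1
    gcloud compute addresses describe ingress-nginx-ip \
      --region us-central1 --format="value(address)"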
    

    That alone didn't work. Then I also tried configuring ip-masq-agent with the following ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    data:
      config: |
        nonMasqueradeCIDRs:
          - <load_balance_ip>/32
          - <pods_cidr>
          - <services_cidr>
        masqLinkLocal: false
        resyncInterval: 30s
    

    Then I deleted the ip-masq-agent DaemonSet, and it was automatically recreated.
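
    For reference, that apply-and-recreate step looks roughly like this (a hedged sketch; the manifest file name is a placeholder, and GKE recreates the managed DaemonSet automatically after deletion):

    # Apply the ConfigMap above, then delete the DaemonSet so it comes back
    # reading the new configuration.
    kubectl apply -f ip-masq-agent-configmap.yaml
    kubectl -n kube-system delete daemonset ip-masq-agent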

    After that, my cluster worked as expected.

    You can find more information about ip-masq-agent at https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
