How can I get the real client IP behind an Nginx ingress load balancer in GKE? Following online resources, I have configured External Traffic Policy: Local and also added the use-proxy-protocol: "true" property.
But I still see the GKE node IP/interface in the logs, not the real client IP.
My load balancer service ->
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=0.41.2
helm.sh/chart=ingress-nginx-3.10.1
Annotations: networking.gke.io/load-balancer-type: Internal
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: xx.xxx.xx.xx
IPs: xx.xx.xxx.xx
LoadBalancer Ingress: xx.xx.xx.xx
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32118/TCP
Endpoints: xx.x.xx.xx:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31731/TCP
Endpoints: xx.x.xx.xxx:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30515
My config map ->
apiVersion: v1
data:
  access-log-path: /var/log/nginx-logs/access.log
  compute-full-forwarded-for: "true"
  enable-real-ip: "true"
  enable-underscores-in-headers: "true"
  error-log-path: /var/log/nginx-logs/error.log
  large-client-header-buffers: 4 64k
  log-format-upstream: $remote_addr - $request_id - [$proxy_add_x_forwarded_for] -
    $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer"
    "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr
    $upstream_response_length $upstream_response_time $upstream_status
  proxy-read-timeout: "240"
  proxy-send-timeout: "240"
  real-ip-header: proxy_protocol
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
2 Answers
You also need to set externalTrafficPolicy: Local on the Kubernetes Service spec, so that the client source IP is preserved instead of being SNATed when traffic is forwarded to another node.
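For instance, a sketch of the Service manifest — the metadata, annotation, selector, and ports below are taken from the Service described in the question; only externalTrafficPolicy: Local is the addition being recommended:

```yaml
# Sketch: values mirror the question's Service; the key line is
# externalTrafficPolicy: Local, which keeps traffic on the node that
# received it so the client source IP is not rewritten.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    networking.gke.io/load-balancer-type: Internal
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```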
With that in place, the forwarded client address is available to the controller, and you can enable it in the Nginx settings.
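A sketch of the relevant keys in the ingress-nginx controller ConfigMap (these are the same keys already present in the question's ConfigMap; note that use-proxy-protocol should only be enabled if the load balancer in front actually prepends PROXY protocol headers, otherwise connections will fail to parse):

```yaml
# data section of the ingress-nginx controller ConfigMap (sketch)
use-forwarded-headers: "true"        # trust incoming X-Forwarded-* headers
compute-full-forwarded-for: "true"   # append $remote_addr to X-Forwarded-For
use-proxy-protocol: "true"           # expect PROXY protocol from the LB
real-ip-header: proxy_protocol       # take the client IP from the PROXY header
```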
Additionally, if you want to force an HTTPS redirect, you can set that in the Nginx settings as well.
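A minimal sketch using the ingress-nginx ConfigMap keys for global HTTPS redirection (per-Ingress annotations are also possible; this global form is one option):

```yaml
# data section of the ingress-nginx controller ConfigMap (sketch)
ssl-redirect: "true"         # redirect HTTP to HTTPS for TLS-enabled Ingresses
force-ssl-redirect: "true"   # force the redirect even without a TLS section
```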
Read further at https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters#externalTrafficPolicy
I first tried tuning the nginx ConfigMap (real-IP and forwarded-header settings similar to those in the question) and added
externalTrafficPolicy: Local
to the nginx Service that provisions the load balancer, without success. Then I also configured the ip-masq-agent ConfigMap, so that in-cluster traffic is not masqueraded to the node IP.
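The exact ConfigMap was not preserved in this post; a typical ip-masq-agent configuration, in the format the agent documents, looks like the sketch below. The CIDRs shown are the conventional RFC 1918 defaults and are an assumption — adjust them to your VPC and pod ranges:

```yaml
# Sketch of an ip-masq-agent ConfigMap; the agent reads the "config"
# key and skips masquerading for traffic to the listed CIDRs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16
    masqLinkLocal: false
    resyncInterval: 60s
```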
Then I deleted the ip-masq-agent DaemonSet, and GKE automatically recreated it, picking up the new ConfigMap.
After that, my GKE cluster worked as expected.
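The recreate step can be done with kubectl; the DaemonSet name and namespace below are the GKE defaults, and the manifest filename is hypothetical:

```
# Apply the ConfigMap, then delete the DaemonSet so GKE's addon
# manager recreates it with the new configuration.
kubectl apply -f ip-masq-agent-configmap.yaml
kubectl delete daemonset ip-masq-agent -n kube-system

# Verify it has been recreated:
kubectl get daemonset ip-masq-agent -n kube-system
```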
You can find more information about ip-masq-agent on GKE at https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent