
I installed the NGINX ingress controller with the YAML manifest:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml

After deploying, I can see that the endpoints/externalIPs of the Service default to the IPs of all my nodes.

But I only want a single external IP to be accessible for my applications.

I tried setting bind-address (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address) in a ConfigMap and applied it, but it doesn't work. My ConfigMap:

apiVersion: v1
data:
  bind-address: "192.168.30.16"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller

I also tried kubectl edit svc/ingress-nginx-controller -n ingress-nginx to add externalIPs to the Service, but that doesn't work either.
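For reference, the Service spec I was editing toward looked roughly like this (externalIPs is a standard field in the Kubernetes Service spec; the address is one of my node IPs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalIPs:
    - 192.168.30.16   # the single node IP I want exposed
```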


The only thing the ingress-nginx documentation mentions is https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips. I tried editing the Service as described there: right after my change it was set to a single IP, but later the node IPs were re-added. It seems like ingress-nginx has some mechanism that automatically updates the external IPs?

Is there any way to set the nginx ingress external IP to only one of the node IPs? I'm running out of options for googling this. Hope someone can help me.

2 Answers


  1. Depending on whether there is a LoadBalancer implementation for your cluster, that might be working as intended.

    If you want to expose the controller on a specific node, use type: NodePort:

    https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

    It might then also be useful to use a nodeSelector so you can control which node the nginx controller gets scheduled to, for DNS reasons.
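    A minimal sketch of that approach, assuming the default names from the deploy manifest (Service ingress-nginx-controller in namespace ingress-nginx) and an example node hostname of node-1:

```yaml
# Switch the controller Service to NodePort so it is reached via
# <node-ip>:<nodePort> rather than externalIPs on every node.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
      nodePort: 30080   # example port in the default NodePort range (30000-32767)
---
# Partial Deployment patch: pin the controller pod to one node.
# "node-1" is an example value; use the hostname label of your node.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-1
```

    With this, clients reach the applications only through the chosen node's IP and port.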

  2. but I only want 1 external IP to be accessible to my applications

    If you wish to "control" who can access your service(s), and from which IP/subnet/namespace etc., you should use a NetworkPolicy:


    https://kubernetes.io/docs/concepts/services-networking/network-policies/

    The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:

    1. Other pods that are allowed (exception: a pod cannot block access to itself)
    2. Namespaces that are allowed.
    3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)

    When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.

    Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          role: db
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - from:
            - ipBlock:
                cidr: 172.17.0.0/16
                except:
                  - 172.17.1.0/24
            - namespaceSelector:
                matchLabels:
                  project: myproject
            - podSelector:
                matchLabels:
                  role: frontend
          ports:
            - protocol: TCP
              port: 6379
      egress:
        - to:
            - ipBlock:
                cidr: 10.0.0.0/24
          ports:
            - protocol: TCP
              port: 5978
    
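    Adapted to this question, here is a sketch of a policy that allows ingress traffic to the controller pods only from one subnet. The pod labels below are the ones the official deploy manifest applies to the controller, and the CIDR is an example value; note also that NetworkPolicy is only enforced if your CNI plugin supports it.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-nginx-allow-subnet
  namespace: ingress-nginx
spec:
  # Select the ingress-nginx controller pods.
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/component: controller
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.30.0/24   # example: only this subnet may connect
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
```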

