
I have a K8S cluster with an Nginx ingress controller, linkerd, etc.

I want to apply strict network policies, like blocking ingress and egress connections in the entire namespace.

This works, but some services need access to the Kubernetes API server. Since NetworkPolicy rules cannot reference the service DNS name kubernetes.default.svc.cluster.local, I must provide the API server's IP as a CIDR block.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kube-server
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 1.2.3.4/32 # K8S API Endpoint
      ports:
        - port: 443

I got that IP thanks to this question
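For reference, the in-cluster API endpoint (the value to put in cidr) and the port it actually listens on (often 6443 rather than 443) can be read from the default kubernetes Endpoints object. This assumes a single-endpoint control plane:

```shell
# IP of the API server behind the default "kubernetes" Service
kubectl get endpoints kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}'

# Port the API server listens on (put this in the policy's ports list)
kubectl get endpoints kubernetes -o jsonpath='{.subsets[0].ports[0].port}'
```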

Now, this cluster is not running 24/7; it is shut down and restarted multiple times a week. This causes the K8S API IP to change on each restart, breaking my network policies, so I need to update the rules manually.

Is there any way to solve this issue, or do I need to start thinking about implementing some automation to update the policies after the restart?
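If automation does turn out to be necessary, a small startup hook may be enough. The sketch below assumes kubectl access from wherever the hook runs, and the policy and namespace names (allow-kube-server, my-namespace) are placeholders; it reads the current API endpoint and patches the CIDR in place:

```shell
#!/bin/sh
# Sketch: re-point the egress rule at the current API server IP after a restart.
# "allow-kube-server" and "my-namespace" are placeholders for your policy/namespace.
API_IP=$(kubectl get endpoints kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}')
kubectl patch networkpolicy allow-kube-server -n my-namespace --type=json \
  -p "[{\"op\":\"replace\",\"path\":\"/spec/egress/0/to/0/ipBlock/cidr\",\"value\":\"${API_IP}/32\"}]"
```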

2 Answers


  1. You can restrict the traffic based on either namespace or pod label selectors, as shown in the official documentation. Just specify the labels of the kube-apiserver Pods.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          role: db
      policyTypes:
        - Ingress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  project: myproject
            - podSelector:
                matchLabels:
                  role: frontend
          ports:
            - protocol: TCP
              port: 6379
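
    Applied to the question's egress use case, a selector-based variant could look like the sketch below. One caveat: this only works when the kube-apiserver runs as labeled Pods reachable through the pod network (on many managed clusters the API server lives outside it, so selectors cannot match it). The component: kube-apiserver label is the one kubeadm sets; adjust it to your cluster.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-apiserver
    spec:
      podSelector: {}
      policyTypes:
        - Egress
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: kube-system
              podSelector:
                matchLabels:
                  component: kube-apiserver # kubeadm's label on API server Pods
          ports:
            - protocol: TCP
              port: 6443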
    
  2. It appears that your current environment is a test environment and that you are trying out various features and plugins for deploying your application on Kubernetes:

    “Now, this cluster is not running 24/7, so it’s shut down and
    restarted multiple times a week. This causes the K8S API IP to change
    on each restart, breaking my network policies, and I need to update
    the rules manually.”

    Adding to the answer provided by TAM, you can simply reserve your IP pools and assign them statically to your pods using the Calico plugin, since it is just a test environment.

    There are certain prerequisites for following this process: you need to install calico-ipam and reserve a pool of IPs for static assignment. Once these prerequisites are fulfilled, you can configure static IP pools in the Calico plugin and assign IPs manually to your pods.

    Follow the official Calico documentation for more information on how to reserve IPs and statically assign them.
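
    As a sketch of what the static assignment looks like (the address and names below are placeholders; it requires calico-ipam as the configured IPAM plugin and an IPPool that covers the reserved address):

    apiVersion: v1
    kind: Pod
    metadata:
      name: static-ip-pod # placeholder name
      annotations:
        # Honored by calico-ipam; the address must come from a configured IPPool.
        cni.projectcalico.org/ipAddrs: "[\"10.0.0.82\"]" # placeholder address
    spec:
      containers:
        - name: app
          image: nginx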
