
I would like a script that produces the Kubernetes manifests to deploy a bare nginx container, with a Service on port 80 and an Ingress for the host example.nginx.com. I will deploy it into an EKS cluster. Can someone give me a clue?

2 Answers


  1. You must have the nginx-ingress controller deployed to your cluster. The script after the install note below deploys a bare nginx container with a Service on port 80 and an Ingress for the host example.nginx.com; run it at your command prompt.
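
    If the controller is not installed yet, one option (a sketch, assuming you want the ingress-nginx AWS provider manifest from the controller-v1.0.1 release referenced in the other answer) is:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.1/deploy/static/provider/aws/deploy.yaml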

    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:alpine
            imagePullPolicy: IfNotPresent
            ports:
            - name: http
              containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      selector:
        app: nginx
      ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: example.nginx.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
    EOF
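
    Once applied, a quick way to verify (assuming the ingress-nginx controller already exposes a load balancer address, shown here as the placeholder <LB-ADDRESS>):

    kubectl get deployment,service,ingress nginx
    curl -H "Host: example.nginx.com" http://<LB-ADDRESS>/

    The Host header bypasses DNS; in real use you would point example.nginx.com at that address.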
    
  2. Recently I started configuring an NLB with the Nginx ingress controller on EKS, so I am documenting the complete flow together with the script you need.
    I tried other approaches, such as the cloud-provider-based Nginx deployment, but it did not work as expected (instead of an NLB it created a Classic Load Balancer).
    Ref: https://github.com/kubernetes/ingress-nginx/issues/6292

    In short, the approach below is the best one so far.

    1. Install the Nginx ingress controller. This creates a Deployment and a NodePort Service; say the assigned NodePorts are 31848 for HTTP and 30099 for HTTPS.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.1/deploy/static/provider/baremetal/deploy.yaml
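
    The NodePorts assigned on your cluster will differ; to see them, check the controller Service (names below assume the default install):

    kubectl get svc ingress-nginx-controller -n ingress-nginx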

    2. Create the production Deployment, Service, and Ingress resources.
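
    The manifests below use the "app" namespace; if it does not exist yet, create it first:

    kubectl create namespace app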
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: production
      labels:
        app: production
      namespace: app 
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: production
      template:
        metadata:
          labels:
            app: production
        spec:
          containers:
          - name: production
            image: mirrorgooglecontainers/echoserver:1.10
            ports:
            - containerPort: 8080
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
              - name: POD_IP
                valueFrom:
                  fieldRef:
                    fieldPath: status.podIP
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: production
      labels:
        app: production
      namespace: app
    spec:
      ports:
      - port: 80
        targetPort: 8080
        protocol: TCP
        name: http
      selector:
        app: production
    ---
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: production
      annotations:
        kubernetes.io/ingress.class: nginx
      namespace: app
    spec:
      rules:
      - http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: production
                  port:
                    number: 80
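
    Assuming you saved the three manifests above as production.yaml, apply them with:

    kubectl apply -f production.yaml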
    
    3. Create the canary Deployment, Service, and Ingress. The canary annotations below send roughly 30% of the requests to the canary Service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: canary
      labels:
        app: canary
      namespace: app 
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: canary
      template:
        metadata:
          labels:
            app: canary
        spec:
          containers:
          - name: canary
            image: mirrorgooglecontainers/echoserver:1.10
            ports:
            - containerPort: 8080
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
              - name: POD_IP
                valueFrom:
                  fieldRef:
                    fieldPath: status.podIP
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: canary
      labels:
        app: canary
      namespace: app
    spec:
      ports:
      - port: 80
        targetPort: 8080
        protocol: TCP
        name: http
      selector:
        app: canary
    ---
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: canary
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "30"
      namespace: app
    spec:
      rules:
      - http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: canary
                  port:
                    number: 80
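
    Likewise, assuming the canary manifests above are saved as canary.yaml:

    kubectl apply -f canary.yaml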
    
    4. Create an NLB-type load balancer on EKS; choose "internet-facing" if it should be reachable from the internet (a rough AWS CLI sketch for steps 4-8 follows step 8).

    5. Create a target group with "Target type" set to instance and port/health-check port 31848 (HTTP), i.e. the HTTP NodePort from step 1.

    6. Attach the target group to the worker nodes' Auto Scaling group.

    7. Create a listener on the NLB (TLS, i.e. secure TCP) and forward it to the target group.

    8. Although the worker nodes are launched in private subnets, we still need to open port 31848 to all IPs in the node security group; this is how the NLB can reach the EC2 instances.
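
    A rough AWS CLI sketch for steps 4-8 (every name, ID and ARN below is a placeholder to substitute with your own values; the TLS listener also needs an ACM certificate):

    # 4. Internet-facing NLB in the cluster's public subnets
    aws elbv2 create-load-balancer --name my-nlb --type network --scheme internet-facing --subnets subnet-aaa subnet-bbb

    # 5. Target group with target type "instance", traffic and health checks on NodePort 31848
    aws elbv2 create-target-group --name nginx-http --protocol TCP --port 31848 --target-type instance --vpc-id vpc-xxx --health-check-protocol HTTP --health-check-port 31848

    # 6. Attach the target group to the worker-node Auto Scaling group
    aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name my-node-asg --target-group-arns <target-group-arn>

    # 7. TLS listener on the NLB forwarding to the target group
    aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TLS --port 443 --certificates CertificateArn=<acm-cert-arn> --default-actions Type=forward,TargetGroupArn=<target-group-arn>

    # 8. Open the NodePort in the worker-node security group
    aws ec2 authorize-security-group-ingress --group-id <node-sg-id> --protocol tcp --port 31848 --cidr 0.0.0.0/0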

    Hope this gives you a clear idea. Please let me know if you face any issues.
