
We have an EKS cluster in AWS. We pointed our kubeconfig at the cluster using the following command:

aws eks --region us-east-1 update-kubeconfig --name cluster-name
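
As a quick sanity check that the kubeconfig update took effect (not part of the original setup, just a suggested verification):

kubectl config current-context   # should print the context for the EKS cluster
kubectl get nodes                # confirms the credentials actually work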

Then we deployed nginx to that cluster using the following shell script.

file: 1_cert_manager.sh

### Nginx
# This script installs nginx and cert-manager using helm install.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux

sleep 60
kubectl get service nginx-ingress-ingress-nginx-controller


###########
#Cert-manager
##########


# Create the cert-manager namespace and label it to disable resource validation
kubectl create namespace cert-manager
kubectl label namespace cert-manager cert-manager.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart into the namespace labeled above
helm install \
  cert-manager \
  --namespace cert-manager \
  --version v0.16.1 \
  --set installCRDs=true \
  --set nodeSelector."beta\.kubernetes\.io/os"=linux \
  jetstack/cert-manager

We ran the above script using

chmod +x ./1_cert_manager.sh

sh ./1_cert_manager.sh

After installing nginx, we could see the nginx welcome page when we hit the DNS name provided by the AWS load balancer.

kubectl get services gave us the DNS name of the load balancer.
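
The hostname can also be read straight off the controller service with jsonpath (a small sketch; the service name comes from the Helm release above):

kubectl get service nginx-ingress-ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'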

The page is served over plain HTTP. To enable HTTPS support, we installed cert-manager.

We then created a ClusterIssuer named letsencrypt-issuer.

File: 2_cluster-issuer.yaml

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-issuer
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux

We applied the ClusterIssuer using the following command.

kubectl apply -f 2_cluster-issuer.yaml
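
Before requesting certificates, it is worth confirming the issuer registered with the ACME server (a suggested check, not part of the original flow):

kubectl get clusterissuer letsencrypt-issuer
kubectl describe clusterissuer letsencrypt-issuer   # Status should show the ACME account registered and Ready=True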

Then we deployed a sample hello-world service.

file: 3_service.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
spec:
  selector:
    matchLabels:
      app: console
      tier: console
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: console
        tier: console
        track: stable
    spec:
      containers:
        - name: console
          image: "gcr.io/google-samples/hello-go-gke:1.0"
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: console
spec:
  selector:
    app: console
    tier: console
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

kubectl apply -f 3_service.yaml

We have 2-3 services which will run on different ports. For testing purposes, we have installed only one service.

We verified that the service was deployed successfully using

kubectl get pods

and

kubectl get services
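
As an extra sanity check that the pod answers before the ingress is involved, the service can be port-forwarded locally (a sketch; 8080 is an arbitrary local port):

kubectl port-forward service/console 8080:80
# then, from a second terminal:
curl http://localhost:8080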

Finally, we deployed the ingress YAML file to provide the host details and routing information.

file: 4_ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nandha-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    nginx.ingress.kubernetes.io/cors-allow-headers: "Content-Type"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "16m"
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
spec:
  tls:
    - hosts:
        - a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com
      secretName: tls-secret
  rules:
    - host: a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 80
            path: /(.*)

kubectl apply -f 4_ingress.yaml

If the previous command executed successfully, our tls-secret certificate should be ready. (On GCP, this worked correctly.)

We debugged using

kubectl get certificates

kubectl describe certificates tls-secret

For the describe command, we got the following error:

Failed to create Order: 400 urn:ietf:params:acme:error:rejectedIdentifier:
NewOrder request did not include a SAN short enough to fit in CN
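
cert-manager also records the intermediate ACME objects, which is where this kind of issuance failure surfaces first. A sketch of how to trace it (the order name is a placeholder; use the one from the list output):

kubectl get certificaterequests
kubectl get orders.acme.cert-manager.io
kubectl get challenges.acme.cert-manager.io
kubectl describe order.acme.cert-manager.io <order-name>   # repeats the rejectedIdentifier error from the ACME server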

When we searched for the error, we found that the issue is caused by the length of the DNS name: the AWS DNS name is longer than 64 characters.

Current workaround:
We created a CNAME mapping for the AWS DNS name and used that short mapped URL in the 4th step instead of the actual URL.
This works for now, but we need to enable SSL for the actual DNS name as well.
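
For reference, the workaround amounts to pointing a short CNAME at the ELB and using it as the ingress host. A fragment of how 4_ingress.yaml changes (console.example.com is a hypothetical domain we control):

# DNS: console.example.com  CNAME  a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com
spec:
  tls:
    - hosts:
        - console.example.com
      secretName: tls-secret
  rules:
    - host: console.example.com   # rest of the rule unchanged from 4_ingress.yaml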

How can we enable SSL for the actual AWS DNS name?

This (a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com) was our host when our EKS cluster was up. The cluster has since been terminated.

2 Answers


  1. Chosen as BEST ANSWER

    When we checked with the cloud support team, we got the following response. They suggested creating a custom domain mapping and using it, which is what we are already doing.

    The DNS name is generated by combining the load balancer name + a random string + the region + 'elb.amazonaws.com'.

    If we can give the load balancer a custom name from the Helm installation, we can solve our problem. We are currently trying this step; a sketch of the idea follows.
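
    If the load balancer is provisioned by the AWS Load Balancer Controller (v2.2 or later) rather than the legacy in-tree controller, its name can be set via a service annotation. A hedged sketch of passing that through Helm (short-nginx-lb is a made-up name; verify the annotation applies to your controller setup):

    helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
      --reuse-values \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-name"=short-nginx-lb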

    The cloud support team's response is attached below.

    In compliance with RFC 5280 (https://datatracker.ietf.org/doc/html/rfc5280), the length of the domain name (technically, the Common Name) that you enter in this step cannot exceed 64 octets (characters), including periods. Each subsequent Subject Alternative Name (SAN) that you provide, as in the next step, can be up to 253 octets in length.

    You are encountering this error as your host name (a2e858295f1204618aab29d960a10bfa-41041143.us-east-1.elb.amazonaws.com) is more than 64 characters.

    To overcome this problem, we can configure a custom domain name for your load balancer. Each Classic Load Balancer receives a default Domain Name System (DNS) name. This DNS name includes the name of the AWS Region in which the load balancer is created. For example, if you create a load balancer named my-loadbalancer in the US West (Oregon) Region, your load balancer receives a DNS name such as my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com.

    To access the website on your instances, you paste this DNS name into the address field of a web browser. In our case, this DNS name has more characters than the 64-character limit. If you'd prefer to use a friendly DNS name for your load balancer, such as www.example.com, instead of the default DNS name, you can create a custom domain name and associate it with the DNS name for your load balancer. When a client makes a request using this custom domain name, the DNS server resolves it to the DNS name for your load balancer. Then we can use this custom domain in place of our host name when configuring the ingress file.

    This workaround can be applied to achieve your use case. To associate your custom domain name with your load balancer name you have to register your domain name. The Internet Corporation for Assigned Names and Numbers (ICANN) manages domain names on the internet. You register a domain name using a domain name registrar, an ICANN-accredited organization that manages the registry of domain names. The website for your registrar will provide detailed instructions and pricing information for registering your domain name.

    Next, use your DNS service, such as your domain registrar, to create a CNAME record to route queries to your load balancer.

    Alternatively, you can use Route 53 as your DNS service. You create a hosted zone, which contains information about how to route traffic on the internet for your domain, and an alias resource record set, which routes queries for your domain name to your load balancer. Route 53 doesn't charge for DNS queries for alias record sets, and you can use alias record sets to route DNS queries to your load balancer for the zone apex of your domain (for example, example.com). For information about transferring DNS services for existing domains to Route 53, see Configuring Route 53 as your DNS service (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring.html) in the Amazon Route 53 Developer Guide.
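
    A sketch of creating such a CNAME via the Route 53 CLI (the hosted zone ID and domain below are hypothetical placeholders):

    aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "console.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com"}]
          }
        }]
      }'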

    [#] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html


  2. "Length of AWS DNS is greater than 64."

    No, that is not the problem. Each DNS label is restricted to at most 63 characters (bytes, in fact), and your first label is 42 characters long, so it is fine. The other rule is that the full name must be at most 255 characters/bytes, in practice 253, which is also fine for your name.
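
    A quick way to check these limits yourself, in plain shell:

    HOST=a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com
    echo -n "$HOST" | wc -c          # full name: must be at most 253 in practice
    echo -n "${HOST%%.*}" | wc -c    # first label: must be at most 63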

    The problem is elsewhere. The LDAP/X.520/certificate standards say the full CN must be less than 64 characters, but that limit is unrelated to DNS. The CN originally held individual or organization names; it was later hijacked to carry DNS names, until the SAN extension was written, which is now the default. The Subject of a DV certificate is really not relevant anymore, in part because of those limitations. Names (dnsName entries) in the SAN are defined to have a maximum left to the implementation, but are also defined to be valid domain names, so in practice the rule above of 63 per label / 255 total applies.

    This is where your problem comes from.

    Modern host-based certificates only need a proper SAN; the CN is irrelevant to browsers now. So you need to generate your certificate with all the real names in the SAN and some other placeholder in the CN, which seems to be what you did, though that part of your question is not entirely clear. The certificate is valid for all names in the SAN.

    See https://community.letsencrypt.org/t/a-certificate-for-a-63-character-domain/78870/6 for ideas. Basically, add another name as the first name, one that is shorter and that you also control, so that validation succeeds and the certificate can be issued.
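
    With cert-manager, that idea maps to creating the Certificate explicitly instead of relying on the ingress annotation, so the short name becomes the CN while the long ELB name lives only in the SAN. A sketch (short.example.com is a hypothetical name you control, which must also resolve to the load balancer so its HTTP-01 challenge can pass; the API version matches the v0.16.1 install above):

    apiVersion: cert-manager.io/v1alpha2
    kind: Certificate
    metadata:
      name: tls-secret
    spec:
      secretName: tls-secret
      issuerRef:
        name: letsencrypt-issuer
        kind: ClusterIssuer
      commonName: short.example.com   # fits the 64-character CN limit
      dnsNames:
        - short.example.com
        - a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com

    Note that if the cert-manager.io/cluster-issuer annotation stays on the ingress, ingress-shim will try to manage its own Certificate for the same secret, so the annotation may need to be dropped when the Certificate is created by hand.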

    The real solution is for CAs to simply get rid of the CN, as explained at https://github.com/letsencrypt/boulder/issues/2093, but this seems blocked by other standardization efforts elsewhere.

    In the meantime, you should also ask your cloud provider for help on this.
