We have an EKS cluster in AWS. We first pointed kubectl at the cluster with the following command:
aws eks --region us-east-1 update-kubeconfig --name cluster-name
Then we deployed the NGINX ingress controller and cert-manager on that cluster using the following shell script.
file: 1_cert_manager.sh
### Nginx
# This script installs the NGINX ingress controller and cert-manager using helm.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.replicaCount=2 \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
# Wait for the AWS load balancer to be provisioned, then print its DNS name
sleep 60
kubectl get service nginx-ingress-ingress-nginx-controller

##########
# Cert-manager
##########
# Create the cert-manager namespace and label it to disable resource validation
kubectl create namespace cert-manager
kubectl label namespace cert-manager cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.16.1 \
  --set installCRDs=true \
  --set nodeSelector."beta\.kubernetes\.io/os"=linux
We ran the above script using
chmod +x ./1_cert_manager.sh
sh ./1_cert_manager.sh
After installing nginx, we could see the nginx welcome page when we hit the DNS name of the AWS load balancer.
kubectl get services
gave the DNS address of the load balancer.
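For scripting, the load balancer hostname can also be read straight from the service status (a small sketch; the service name assumes the helm release used in the script above):
kubectl get service nginx-ingress-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'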
The page loads over plain HTTP. To enable HTTPS, we installed cert-manager.
Next we created a ClusterIssuer named letsencrypt-issuer.
File: 2_cluster-issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-issuer
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
We applied the ClusterIssuer with the following command:
kubectl apply -f 2_cluster-issuer.yaml
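As a quick sanity check that the issuer registered with the ACME server, its status can be inspected (a sketch using the standard cert-manager resource names; output not reproduced here):
kubectl get clusterissuer letsencrypt-issuer
kubectl describe clusterissuer letsencrypt-issuer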
Then we deployed a sample hello-world service.
file: 3_service.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
spec:
  selector:
    matchLabels:
      app: console
      tier: console
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: console
        tier: console
        track: stable
    spec:
      containers:
      - name: console
        image: "gcr.io/google-samples/hello-go-gke:1.0"
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: console
spec:
  selector:
    app: console
    tier: console
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
kubectl apply -f 3_service.yaml
We have 2-3 services that will run on different ports; for testing purposes we deployed only this one.
We verified that the service came up correctly using
kubectl get pods
and
kubectl get services
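As an optional in-cluster check, the service can also be exercised without going through the ingress (a small sketch; 8080 is an arbitrary local port):
kubectl port-forward service/console 8080:80
# in another shell
curl http://localhost:8080/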
Finally, we deployed the ingress YAML file with the host and routing information.
file: 4_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nandha-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    nginx.ingress.kubernetes.io/cors-allow-headers: "Content-Type"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "16m"
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
spec:
  tls:
  - hosts:
    - a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com
    secretName: tls-secret
  rules:
  - host: a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: console
            port:
              number: 80
kubectl apply -f 4_ingress.yaml
If the previous command executed successfully, the tls-secret certificate should be ready. (On GCP, this worked correctly.)
We debugged using
kubectl get certificates
kubectl describe certificates tls-secret
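When a certificate is stuck, the rest of the cert-manager chain can be walked the same way (a sketch; <order-name> is a placeholder taken from the output of the previous commands):
kubectl get certificaterequests
kubectl get orders --all-namespaces
kubectl describe order <order-name>
kubectl get challenges --all-namespaces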
For the describe command we got the following error,
Failed to create Order: 400 urn:ietf:params:acme:error:rejectedIdentifier:
NewOrder request did not include a SAN short enough to fit in CN
When we researched the error, we found that the issue is caused by the length of the DNS name: the AWS load balancer hostname is longer than 64 characters.
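A quick way to confirm the length (simple sketch using the hostname from the ingress above):
echo -n 'a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com' | wc -c
# -> 69, which is larger than the 64 characters a certificate CN may hold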
Current workaround:
We created a CNAME record mapping a shorter domain to the AWS DNS name and used that short domain in the 4th step instead of the actual ELB hostname (a sketch follows).
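For illustration, assuming a hypothetical short domain console.example.com that is CNAMEd to the ELB hostname, the tls and rules sections of 4_ingress.yaml simply use that name instead:
spec:
  tls:
  - hosts:
    - console.example.com        # hypothetical short CNAME we control
    secretName: tls-secret
  rules:
  - host: console.example.com    # hypothetical short CNAME we control
    http:
      # (the paths section is unchanged from the original ingress)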
This works for now, but we also need to enable SSL for the actual AWS DNS name.
How can we enable SSL for the AWS DNS name itself?
This (a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com) was our host while the EKS cluster was up; the cluster has since been terminated.
2 Answers
When we checked with the cloud support team, we got the following response: they suggested creating a custom domain mapping and using it, which is what we are already doing.
The DNS name is generated by combining
load balancer name + random string + region + 'elb.amazonaws.com'
If we can give the load balancer a custom (shorter) name from the helm installation, that should solve our problem; we are currently trying this step (a sketch of the idea follows).
The cloud support team's response is attached.
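As a sketch of that idea (this assumes the AWS Load Balancer Controller is installed and managing the service; the legacy in-tree provisioner does not accept a custom name, so treat this as untested, and short-lb-name is a hypothetical value):
helm upgrade nginx-ingress ingress-nginx/ingress-nginx --reuse-values \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=external \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-name"=short-lb-name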
No, that is not the problem. Each DNS label is restricted to at most 63 characters (bytes, in fact), and your first label is 42 long, so it is fine. The other rule is that the full name must be at most 255 characters/bytes (253 in practice), which is also fine for your name.
The problem is elsewhere: the LDAP/X.520/certificate standards say the full CN must be less than 64 characters, but that limit is unrelated to DNS. The CN originally held individual or organization names; it was only later hijacked to carry DNS names, until the SAN extension was written, which is now the default. The Subject of a DV certificate is really not relevant anymore, in part because of those limitations. Names (dnsName entries) in the SAN have a maximum length left to the implementation, but they are also defined to be valid domain names, so in practice the 63-per-label / 255-total rule above applies.
This is where your problem comes from.
Modern host-based certificates only need a proper SAN; the CN is irrelevant to browsers now. So you need to generate your certificate with all the real names in the SAN and something else (a shorter name you control) in the CN, which seems to be what you did, if I understand correctly. The certificate is valid for all names in the SAN.
See https://community.letsencrypt.org/t/a-certificate-for-a-63-character-domain/78870/6 for ideas. Basically, add another name as the first name, one that is shorter and that you also control, so that validation succeeds and the certificate can be issued (see the sketch below).
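With cert-manager, that idea can be expressed as an explicit Certificate resource whose first dnsName is the short name, instead of relying only on the ingress annotation (a sketch; short.example.com stands for a hypothetical shorter domain you control, CNAMEd to the ELB hostname):
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: tls-secret
spec:
  secretName: tls-secret
  issuerRef:
    name: letsencrypt-issuer
    kind: ClusterIssuer
  commonName: short.example.com   # hypothetical short name, fits within the 64-character CN limit
  dnsNames:
  - short.example.com             # hypothetical short name, listed first
  - a2e858295f1201234aab29d960a10bfa-41041144.us-east-1.elb.amazonaws.com   # long ELB name appears only in the SAN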
The real solution would be for CAs to get rid of the CN entirely, as explained at https://github.com/letsencrypt/boulder/issues/2093, but this seems blocked by other standardization efforts elsewhere.
In the meantime, you should also ask your cloud provider for help on this.