The ultimate aim here is to put nginx in front of my redis-sentinel pods so I can protect them with our corporate SSL certificates. Redis itself is up and tested. The problem I’m having is with the nginx deployment and its associated service.
Latest update: @Vahid suggested I use an ingress instead. I’m using MicroK8s
and ingress was already enabled.
That being the case, I think I need to update the nginx-load-balancer-microk8s-conf ConfigMap, so this is what I did:
kubectl get configmap --namespace ingress
to get the ConfigMaps, and then
kubectl edit configmap nginx-load-balancer-microk8s-conf --namespace ingress
to edit it – and I added:
data:
  tcp-services: |-
    6379: "default/redis-service:6379:redis-ssl"
    26379: "default/redis-sentinel-service:26379:redis-ssl"
and then re-rolled the DaemonSet.
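If I’ve read the ingress-nginx docs right, the controller DaemonSet also has to expose those ports before any traffic can reach the tcp-services entries. Roughly something like this (a sketch only – it assumes the stock nginx-ingress-microk8s-controller DaemonSet and its hostPort setup; the port names are placeholders):

ports:
- name: tcp-redis
  containerPort: 6379
  hostPort: 6379
  protocol: TCP
- name: tcp-sentinel
  containerPort: 26379
  hostPort: 26379
  protocol: TCP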
Next I added the same secrets to the ingress namespace.
Then the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis-service-lb
  namespace: redis
spec:
  type: LoadBalancer
  loadBalancerIP: 10.250.0.44
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
    name: tcp-redis
  - port: 26379
    targetPort: 26379
    protocol: TCP
    name: tcp-redis-sentinel
  selector:
    app: redis
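One quick sanity check (just a sketch) is whether that selector actually resolves to pod endpoints:

kubectl -n redis get endpoints redis-service-lb

If the ENDPOINTS column is empty, the app: redis selector isn’t matching the sentinel pods and the LoadBalancer has nothing to forward to.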
Looking at kubectl get events I can see all went well:
44s Normal IPAllocated service/redis-sentinel-service-lb Assigned IP ["10.250.0.41"]
And the service is running:
redis-service-lb LoadBalancer 10.152.183.56 10.250.0.44 6379:31301/TCP,26379:32502/TCP 5m49s
That said – I still can’t get a connection using nc, or redis-cli for that matter… I’m sure I got something wrong here: this is my first dance with ingress…
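For reference, the sort of client test I’d expect to work once this is wired up (a sketch – it assumes redis-cli was built with TLS support, and corporate-ca.crt is a placeholder for our CA bundle):

redis-cli -h ki44.MyDomain.com -p 6379 --tls --cacert corporate-ca.crt ping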
Historical post follows
We’ll start off with the nginx.conf file that’s mounted as a configmap:
server {
    listen 6379 ssl;
    server_name ki44.MyDomain.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    sendfile on;
    keepalive_timeout 65;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    location / {
        proxy_pass http://redis-sentinel.redis.svc.cluster.local:6379;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
server {
    listen 26379 ssl;
    server_name ki44.MyDomain.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    location / {
        proxy_pass http://redis-sentinel.redis.svc.cluster.local:26379;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The requisite certificates are mounted as secrets
as well. The pod launches with zero errors.
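For reference, the redis-ssl secret referenced below is a standard TLS secret, which can be created along these lines (the certificate file names here are placeholders):

kubectl -n redis create secret tls redis-ssl --cert=corporate.crt --key=corporate.key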
And next my service.yaml (which also contains the nginx Deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-nginx
  namespace: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-proxy
  template:
    metadata:
      labels:
        app: nginx-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 6379
          name: redis
          hostPort: 6379
        - containerPort: 26379
          name: sentinel
          hostPort: 26379
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
        - name: cert-volume
          mountPath: /etc/nginx/certs
      volumes:
      - name: config-volume
        configMap:
          name: nginx-config
      - name: cert-volume
        secret:
          secretName: redis-ssl
      nodeSelector:
        location: internal
        type: worker
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis
spec:
  type: LoadBalancer
  loadBalancerIP: 10.250.0.44
  ports:
  - port: 6379
    name: redis
    targetPort: 6379
    protocol: TCP
  - port: 26379
    name: sentinel
    targetPort: 26379
    protocol: TCP
  selector:
    app: redis-nginx
I deploy this and the nginx pod fires up just fine, and I’ve confirmed by shelling into it (and installing nc inside it) that I’m able to reach the appropriate ports. Everything seems happy, yet I’m unable to contact either port using nc:
nc -zv ki44.MyDomain.com 6379
nc -zv ki44.MyDomain.com 26379
Both get me nowhere.
Here’s the output of kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-sentinel-headless ClusterIP None <none> 6379/TCP,26379/TCP 3h30m
redis-sentinel ClusterIP 10.152.183.130 <none> 6379/TCP,26379/TCP 3h30m
redis LoadBalancer 10.152.183.218 10.250.0.44 6379:30420/TCP,26379:30154/TCP 55m
NB: ki44.MyDomain.com does indeed resolve to 10.250.0.44.
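To narrow down where the traffic dies, one option (a sketch, reusing the pod IP, node IP and NodePorts shown elsewhere in this post) is to test each hop in turn:

# pod IP, from inside the cluster
nc -zv 10.1.40.48 6379
# NodePort, directly against the node
nc -zv 10.250.0.147 30420
# LoadBalancer IP
nc -zv 10.250.0.44 6379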
Running kubectl describe pod <podName>
gives me:
Name: redis-nginx-7595bdd458-7grll
Namespace: redis
Priority: 0
Service Account: default
Node: kwi01.md.local/10.250.0.147
Start Time: Sun, 12 May 2024 12:40:16 -0400
Labels: app=nginx-proxy
pod-template-hash=7595bdd458
Annotations: cni.projectcalico.org/containerID: cd6b7cf612081b0fb62e98f483d97a3f2d25d40b12e0321b70786158d7e8b7a9
cni.projectcalico.org/podIP: 10.1.40.48/32
cni.projectcalico.org/podIPs: 10.1.40.48/32
Status: Running
IP: 10.1.40.48
IPs:
IP: 10.1.40.48
Controlled By: ReplicaSet/redis-nginx-7595bdd458
Containers:
nginx:
Container ID: containerd://b224262eadc355bfc565a8654f65760f2b2cd5444baab353f15425666e718cad
Image: nginx:latest
Image ID: docker.io/library/nginx@sha256:32e76d4f34f80e479964a0fbd4c5b4f6967b5322c8d004e9cf0cb81c93510766
Ports: 6379/TCP, 26379/TCP
Host Ports: 6379/TCP, 26379/TCP
State: Running
Started: Sun, 12 May 2024 12:40:17 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/nginx/certs from cert-volume (rw)
/etc/nginx/conf.d from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d865w (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: nginx-config
Optional: false
cert-volume:
Type: Secret (a volume populated by a Secret)
SecretName: redis-ssl
Optional: false
kube-api-access-d865w:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: location=internal
type=worker
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
The logs from the pod:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/05/12 16:40:17 [notice] 1#1: using the "epoll" event method
2024/05/12 16:40:17 [notice] 1#1: nginx/1.25.5
2024/05/12 16:40:17 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/05/12 16:40:17 [notice] 1#1: OS: Linux 5.15.0-102-generic
2024/05/12 16:40:17 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
2024/05/12 16:40:17 [notice] 1#1: start worker processes
2024/05/12 16:40:17 [notice] 1#1: start worker process 21
2024/05/12 16:40:17 [notice] 1#1: start worker process 22
2024/05/12 16:40:17 [notice] 1#1: start worker process 23
2024/05/12 16:40:17 [notice] 1#1: start worker process 24
2024/05/12 16:40:17 [notice] 1#1: start worker process 25
2024/05/12 16:40:17 [notice] 1#1: start worker process 26
2024/05/12 16:40:17 [notice] 1#1: start worker process 27
2024/05/12 16:40:17 [notice] 1#1: start worker process 28
Running kubectl get events
also shows nothing odd:
34s Normal Killing pod/redis-nginx-7595bdd458-7grll Stopping container nginx
20s Normal IPAllocated service/redis Assigned IP ["10.250.0.44"]
20s Normal ScalingReplicaSet deployment/redis-nginx Scaled up replica set redis-nginx-7595bdd458 to 1
20s Normal SuccessfulCreate replicaset/redis-nginx-7595bdd458 Created pod: redis-nginx-7595bdd458-lwcsg
19s Normal Scheduled pod/redis-nginx-7595bdd458-lwcsg Successfully assigned redis/redis-nginx-7595bdd458-lwcsg to kwi01.md.local
18s Normal Pulling pod/redis-nginx-7595bdd458-lwcsg Pulling image "nginx:latest"
18s Normal Pulled pod/redis-nginx-7595bdd458-lwcsg Successfully pulled image "nginx:latest" in 244.860879ms (244.883758ms including waiting)
18s Normal Created pod/redis-nginx-7595bdd458-lwcsg Created container nginx
18s Normal Started pod/redis-nginx-7595bdd458-lwcsg Started container nginx
Things I’ve tried
- Changing proxy_pass http://redis-sentinel.redis.svc.cluster.local:* to proxy_pass http://redis-sentinel-headless.redis.svc.cluster.local:*.
- Adding and removing the hostPort assignment in my Deployment.
- Removing the SSL settings and trying without encryption.
- Testing with just one port – 6379.
- Creating a new pod in the same namespace, installing curl in it, and using that to connect to the nginx proxy via its internal IP address (e.g. 10.1.40.48). I then saw this attempt in the logs of the nginx pod itself, so I know the nginx proxy is indeed running.
Things I know
- This isn’t a firewall issue.
- This is not a network connectivity issue.
- There is nothing else listening on ki44.MyDomain.com. From within the nginx pod I can see both 6379 and 26379 listening. I am also able to successfully execute nc -zv redis-sentinel.redis.svc.cluster.local 6379 (and all the other derivatives).
- I have several other LoadBalancer services running just fine on the cluster – although this is the first one I’ve done with nginx in front of it.
- I’m confused. 😕
3 Answers
I ended up redoing the nginx.conf file to use stream as opposed to http, and I was then successfully able to connect.
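For anyone landing here later, a minimal sketch of what that stream-based config might look like (it reuses the service names and certificate paths from the question; adjust to taste):

stream {
    server {
        listen 6379 ssl;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_certificate /etc/nginx/certs/tls.crt;
        ssl_certificate_key /etc/nginx/certs/tls.key;
        # stream blocks proxy raw TCP – no http:// scheme and no HTTP headers here
        proxy_pass redis-sentinel.redis.svc.cluster.local:6379;
    }
    server {
        listen 26379 ssl;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_certificate /etc/nginx/certs/tls.crt;
        ssl_certificate_key /etc/nginx/certs/tls.key;
        proxy_pass redis-sentinel.redis.svc.cluster.local:26379;
    }
}

Note that stream blocks live at the main configuration level, not inside http, so this can’t simply be dropped into /etc/nginx/conf.d (the default image only includes that directory inside its http block); mounting it as the main nginx.conf, or including it from there, is one way around that.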
Change your service spec.type from LoadBalancer to ClusterIP. Now, your Redis is only exposed within your cluster. Next, create your Ingress resource like the sketch below. This will expose your service to the outside.
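A minimal resource of the shape this answer describes might look something like this (a sketch only – foo.bar.com, the service name and the port are placeholders to adapt):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: redis
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: redis
            port:
              number: 6379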
Note the host. If you want to hit your ingress, your request should have this as the host header. Assuming your cluster is on your machine (local), you can test it with something like the example below.
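Something along these lines, assuming the ingress controller is reachable on localhost:

curl -H "Host: foo.bar.com" http://127.0.0.1/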
You can have any DNS name you want; foo.bar.com is just an example DNS record resolving to your cluster.

Totally agree 100% with @Vahid – use ingress.