Edited with new data after a complete Kubernetes wipe-out.

I am trying to do a test deployment of a Blazor Server app on a locally hosted Kubernetes instance running on Docker Desktop.
I managed to spin up the app in a container correctly: migrations were applied, and the logs tell me the app is running and waiting for connections.

Steps taken after resetting Kubernetes using "Reset Kubernetes Cluster" in Docker Desktop:
- Modified the `hosts` file to include `127.0.0.1 scp.com`
- Added a secret named `mssql` containing the SA password key for MSSQL (creation command sketched just below)
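For reference, a secret of that shape can be created with a single `kubectl` command (the password value below is a placeholder, not the real one):

```sh
# Creates the secret referenced by the deployment's secretKeyRef (name: mssql, key: SA_PASSWORD)
kubectl create secret generic mssql --from-literal=SA_PASSWORD='<YourStrong@Passw0rd>'
```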
- Installed the NGINX ingress controller using:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
```
- Applied the local persistent volume claim – `local-pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 250Mi
```
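A quick way to confirm the claim binds (Docker Desktop ships a default hostpath provisioner, visible as `storage-provisioner` in the pod list further down) is:

```sh
# STATUS should read "Bound" once the default provisioner creates a volume
kubectl get pvc mssql-claim
```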
- Applied the MSSQL instance, its ClusterIP and a LoadBalancer – `mssql-scanapp-depl.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Express"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: /var/opt/mssql/data
              name: mssqldb
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: mssql
  ports:
    - name: mssql
      protocol: TCP
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
```
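As a database sanity check, a trivial query can be run from inside the MSSQL pod (a sketch; it assumes the `sqlcmd` tools shipped in the 2019 image, and `<sa-password>` is a placeholder):

```sh
# Run a trivial query inside the MSSQL pod to confirm the server is up
kubectl exec -it deploy/mssql-depl -- /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P '<sa-password>' -Q 'SELECT 1'
```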
- Applied the Blazor application and its ClusterIP – `scanapp-depl.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scanapp-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scanapp
  template:
    metadata:
      labels:
        app: scanapp
    spec:
      containers:
        - name: scanapp
          image: scanapp:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: scanapp-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: scanapp
  ports:
    - name: ui
      protocol: TCP
      port: 8080
      targetPort: 80
    - name: ui2
      protocol: TCP
      port: 8081
      targetPort: 443
    - name: scanapp0
      protocol: TCP
      port: 5000
      targetPort: 5000
    - name: scanapp1
      protocol: TCP
      port: 5001
      targetPort: 5001
    - name: scanapp5
      protocol: TCP
      port: 5005
      targetPort: 5005
```
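To take the ingress out of the picture while testing, the ClusterIP service can be reached directly with a port-forward (a sketch; 8080 is the service port above that targets the container's port 80):

```sh
# Forward local port 8080 to the service's port 8080, which targets the pod's port 80
kubectl port-forward service/scanapp-clusterip-srv 8080:8080
# Then browse to http://localhost:8080
```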
- Applied the Ingress – `ingress-srv.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "affinity"
    nginx.ingress.kubernetes.io/session-cookie-expires: "14400"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "14400"
spec:
  ingressClassName: nginx
  rules:
    - host: scp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: scanapp-clusterip-srv
                port:
                  number: 8080
```
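Since the controller's EXTERNAL-IP stays pending (details further down), one way to exercise the ingress path itself is to port-forward the controller and send a request with an explicit Host header (a sketch; the local port is arbitrary):

```sh
# Bypass the missing LoadBalancer IP by port-forwarding the controller's port 80
kubectl port-forward -n ingress-nginx service/ingress-nginx-controller 8080:80
# In another shell: send a request with the Host header the ingress rule expects
curl -v -H "Host: scp.com" http://localhost:8080/
```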
After all of this, the Blazor app starts fine, connects to the MSSQL instance, seeds the database and waits for clients. Logs are as follows:
```
[15:18:53 INF] Starting up…
[15:18:53 WRN] Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
[15:18:55 INF] AuthorizationPolicy Configuration started …
[15:18:55 INF] Policy 'LocationMustBeSady' was configured successfully.
[15:18:55 INF] AuthorizationPolicy Configuration completed.
[15:18:55 INF] Now listening on: http://[::]:80
[15:18:55 INF] Application started. Press Ctrl+C to shut down.
[15:18:55 INF] Hosting environment: docker
[15:18:55 INF] Content root path: /app
```
As stated in the beginning – I cannot, for the love of all, get into my Blazor app from a browser. I tried:
- scp.com
- scp.com:8080
- scp.com:5000
- scp.com:5001
- scp.com:5005
Also, `kubectl get ingress` no longer displays an ADDRESS value like before, and `kubectl get services` shows `<pending>` for the EXTERNAL-IP of both `mssql-loadbalancer` and `ingress-nginx-controller` – detailed outputs are at the end of this post.
Nothing seems to work, so there must be something wrong with my config files, and I have no idea what it could be.
Also, note that there is no `NodePort` configured this time.
In addition, here is the Dockerfile for the Blazor app:
```dockerfile
# https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /source
EXPOSE 5000
EXPOSE 5001
EXPOSE 5005
EXPOSE 80
EXPOSE 443
LABEL name="ScanApp"

# copy csproj and restore as distinct layers
COPY ScanApp/*.csproj ScanApp/
COPY ScanApp.Application/*.csproj ScanApp.Application/
COPY ScanApp.Common/*.csproj ScanApp.Common/
COPY ScanApp.Domain/*.csproj ScanApp.Domain/
COPY ScanApp.Infrastructure/*.csproj ScanApp.Infrastructure/
COPY ScanApp.Tests/*.csproj ScanApp.Tests/
RUN ln -sf /usr/share/zoneinfo/posix/Europe/Warsaw /etc/localtime
RUN dotnet restore ScanApp/ScanApp.csproj

# copy and build app and libraries
COPY ScanApp/ ScanApp/
COPY ScanApp.Application/ ScanApp.Application/
COPY ScanApp.Common/ ScanApp.Common/
COPY ScanApp.Domain/ ScanApp.Domain/
COPY ScanApp.Infrastructure/ ScanApp.Infrastructure/
COPY ScanApp.Tests/ ScanApp.Tests/
WORKDIR /source/ScanApp
RUN dotnet build -c release --no-restore

# test stage -- exposes optional entrypoint
# target entrypoint with: docker build --target test
FROM build AS test
WORKDIR /source/ScanApp.Tests
# test sources are already present in this stage (copied above in the build stage)
ENTRYPOINT ["dotnet", "test", "--logger:trx"]

FROM build AS publish
RUN dotnet publish -c release --no-build -o /app

# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=publish /app .
ENV ASPNETCORE_ENVIRONMENT="docker"
ENTRYPOINT ["dotnet", "ScanApp.dll"]
```
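For completeness, the `scanapp:1.0` image referenced in `scanapp-depl.yaml` is built locally; a build command of this shape produces it (the tag is assumed from the deployment):

```sh
# Build the final stage and tag it as the deployment expects
docker build -t scanapp:1.0 .

# Optionally build the test stage mentioned in the Dockerfile comment
docker build --target test -t scanapp:test .
```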
kubectl outputs

`kubectl get ingress` output:

| NAME        | CLASS | HOSTS   | ADDRESS | PORTS | AGE |
|-------------|-------|---------|---------|-------|-----|
| ingress-srv | nginx | scp.com |         | 80    | 35m |
`kubectl get pods --all-namespaces` output:

| NAMESPACE     | NAME                                    | READY | STATUS    | RESTARTS |
|---------------|-----------------------------------------|-------|-----------|----------|
| default       | mssql-depl-7f46b5c696-7hhbr             | 1/1   | Running   | 0        |
| default       | scanapp-depl-76f56bc6df-4jcq4           | 1/1   | Running   | 0        |
| ingress-nginx | ingress-nginx-admission-create-qdnck    | 0/1   | Completed | 0        |
| ingress-nginx | ingress-nginx-admission-patch-chxqn     | 0/1   | Completed | 1        |
| ingress-nginx | ingress-nginx-controller-54bfb9bb-f6gsf | 1/1   | Running   | 0        |
| kube-system   | coredns-558bd4d5db-mr8p7                | 1/1   | Running   | 0        |
| kube-system   | coredns-558bd4d5db-rdw2d                | 1/1   | Running   | 0        |
| kube-system   | etcd-docker-desktop                     | 1/1   | Running   | 0        |
| kube-system   | kube-apiserver-docker-desktop           | 1/1   | Running   | 0        |
| kube-system   | kube-controller-manager-docker-desktop  | 1/1   | Running   | 0        |
| kube-system   | kube-proxy-pws8f                        | 1/1   | Running   | 0        |
| kube-system   | kube-scheduler-docker-desktop           | 1/1   | Running   | 0        |
| kube-system   | storage-provisioner                     | 1/1   | Running   | 0        |
| kube-system   | vpnkit-controller                       | 1/1   | Running   | 6        |
`kubectl get deployments --all-namespaces` output:

| NAMESPACE     | NAME                     | READY | UP-TO-DATE | AVAILABLE |
|---------------|--------------------------|-------|------------|-----------|
| default       | mssql-depl               | 1/1   | 1          | 1         |
| default       | scanapp-depl             | 1/1   | 1          | 1         |
| ingress-nginx | ingress-nginx-controller | 1/1   | 1          | 1         |
| kube-system   | coredns                  | 2/2   | 2          | 2         |
`kubectl get services --all-namespaces` output:

| NAMESPACE     | NAME                               | TYPE         | CLUSTER-IP     | EXTERNAL-IP | PORT(S)                                      |
|---------------|------------------------------------|--------------|----------------|-------------|----------------------------------------------|
| default       | kubernetes                         | ClusterIP    | 10.96.0.1      | none        | 443/TCP                                      |
| default       | mssql-clusterip-srv                | ClusterIP    | 10.97.96.94    | none        | 1433/TCP                                     |
| default       | mssql-loadbalancer                 | LoadBalancer | 10.107.235.49  | pending     | 1433:30149/TCP                               |
| default       | scanapp-clusterip-srv              | ClusterIP    | 10.109.116.183 | none        | 8080/TCP,8081/TCP,5000/TCP,5001/TCP,5005/TCP |
| ingress-nginx | ingress-nginx-controller           | LoadBalancer | 10.103.89.226  | pending     | 80:30562/TCP,443:31733/TCP                   |
| ingress-nginx | ingress-nginx-controller-admission | ClusterIP    | 10.111.235.243 | none        | 443/TCP                                      |
| kube-system   | kube-dns                           | ClusterIP    | 10.96.0.10     | none        | 53/UDP,53/TCP,9153/TCP                       |
Ingress logs:

```
NGINX Ingress controller
  Release:       v1.1.0
  Build:         cacbee86b6ccc45bde8ffc184521bed3022e7dee
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9

W1129 15:00:51.705331 8 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1129 15:00:51.705452 8 main.go:223] "Creating API client" host="https://10.96.0.1:443"
I1129 15:00:51.721575 8 main.go:267] "Running in Kubernetes cluster" major="1" minor="21" git="v1.21.5" state="clean" commit="aea7bbadd2fc0cd689de94a54e5b7b758869d691" platform="linux/amd64"
I1129 15:00:51.872964 8 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1129 15:00:51.890273 8 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1129 15:00:51.910104 8 nginx.go:255] "Starting NGINX Ingress controller"
I1129 15:00:51.920821 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"51060a85-d3a0-40de-b549-cf59e8fa7b08", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1129 15:00:53.112043 8 nginx.go:297] "Starting NGINX process"
I1129 15:00:53.112213 8 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader…
I1129 15:00:53.112275 8 nginx.go:317] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1129 15:00:53.112468 8 controller.go:155] "Configuration changes detected, backend reload required"
I1129 15:00:53.118295 8 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I1129 15:00:53.119467 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-54bfb9bb-f6gsf"
I1129 15:00:53.141609 8 controller.go:172] "Backend successfully reloaded"
I1129 15:00:53.141804 8 controller.go:183] "Initial sync, sleeping for 1 second"
I1129 15:00:53.141908 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54bfb9bb-f6gsf", UID:"54e0c0c6-40ea-439e-b1a2-7787f1b37e7a", APIVersion:"v1", ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1129 15:04:25.107359 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.022s renderingIngressLength:1 renderingIngressTime:0s admissionTime:17.9kBs testedConfigurationSize:0.022}
I1129 15:04:25.107395 8 main.go:101] "successfully validated configuration, accepting" ingress="ingress-srv/default"
I1129 15:04:25.110109 8 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I1129 15:04:25.110698 8 controller.go:155] "Configuration changes detected, backend reload required"
I1129 15:04:25.111057 8 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"6c15d014-ac14-404e-8b5e-d8526736c52a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1198", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1129 15:04:25.143417 8 controller.go:172] "Backend successfully reloaded"
I1129 15:04:25.143767 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54bfb9bb-f6gsf", UID:"54e0c0c6-40ea-439e-b1a2-7787f1b37e7a", APIVersion:"v1", ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1129 15:06:11.447313 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.02s renderingIngressLength:1 renderingIngressTime:0s admissionTime:17.9kBs testedConfigurationSize:0.02}
I1129 15:06:11.447349 8 main.go:101] "successfully validated configuration, accepting" ingress="ingress-srv/default"
I1129 15:06:11.449266 8 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"6c15d014-ac14-404e-8b5e-d8526736c52a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1347", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1129 15:06:11.449669 8 controller.go:155] "Configuration changes detected, backend reload required"
I1129 15:06:11.499772 8 controller.go:172] "Backend successfully reloaded"
I1129 15:06:11.500210 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54bfb9bb-f6gsf", UID:"54e0c0c6-40ea-439e-b1a2-7787f1b37e7a", APIVersion:"v1", ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
```
2 Answers
AFTER COMPLETE RESET OF KUBERNETES THIS SOLUTION DOES NOT WORK!
Will re-edit the main question. Leaving this post for future use.

I solved the problem, or at least I think so.

In addition to @moonkotte's suggestion to add `ingressClassName: nginx` to `ingress-srv.yaml`, I also changed the ingress port configuration so that it points to port `80` now.

Thanks to those changes, using `scp.com` now correctly opens my app. Also, using NodePort access I can visit my app using `localhost:30080`, where the 30080 port was set automatically (I removed the `nodePort` configuration line from `scanapp-np-srv.yaml`).

Why does the port in `ingress-srv.yaml` have to be set to 80 if the ClusterIP configuration maps port 8080 to target port 80? I don't know; I do not fully understand the inner workings of Kubernetes configuration, so all explanations are more than welcome.

Current state of the main configuration files:

`ingress-srv.yaml`:

`scanapp-np-srv.yaml`:

`scanapp-depl.yaml`:

Rest of the files remained untouched.
First, you don't need a `NodePort` service, just a `ClusterIP`. NodePorts are used if you want to access the service directly without going through the ingress controller.

In case you want to test this with a `NodePort`, you will need to define one, e.g. the first sketch below.

But looking at your ingress rule, it seems a bit off. I would try something along the lines of the second sketch below.
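A minimal `NodePort` sketch (assumptions: the service name is illustrative, the Blazor container listens on port 80 as its logs show, and the `nodePort` value just has to fall in the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: scanapp-nodeport-srv   # illustrative name, not from the original post
spec:
  type: NodePort
  selector:
    app: scanapp
  ports:
    - name: ui
      protocol: TCP
      port: 80          # service port
      targetPort: 80    # container port the Blazor app listens on
      nodePort: 30080   # on Docker Desktop this becomes reachable as localhost:30080
```

And a sketch of the ingress direction, assuming `scanapp-clusterip-srv` is simplified to expose port 80 and forward to container port 80 (consistent with the port-80 change the asker describes above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx   # required so the nginx controller picks the ingress up
  rules:
    - host: scp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: scanapp-clusterip-srv
                port:
                  number: 80   # must be a port the Service actually exposes
```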