I’m following this tutorial on Microservices
https://www.youtube.com/watch?v=DgVjEo3OGBI
At some point, I deploy a SQL Server image in Kubernetes using this YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Express"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql5
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: /var/opt/mssql/data
              name: mssqldb
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: mssql
  ports:
    - name: mssql
      protocol: TCP
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
And the PVC.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
I then create a secret in Kubernetes holding the password for the SQL Server sa account; this secret is then used in the YAML file above.
kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
It should then be possible to connect directly to the SQL Server container on localhost port 1433, using the sa account and the password.
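For reference, a connection attempt from the host looks something like this (sqlcmd is just an example client here; SSMS pointed at localhost,1433 behaves the same way):

    sqlcmd -S localhost,1433 -U sa -P "MyC0m9l&xPassw0rd" -Q "SELECT @@VERSION"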
However, I get the "Login failed for user SA" error when trying to connect. I've tried everything: changing SA_PASSWORD to MSSQL_SA_PASSWORD, changing the complexity of the password, enabling login for the sa user in SQL Server (it was disabled before), and googling like I've never googled before. The TCP/IP protocol is enabled in SQL Server Configuration Manager. I still cannot get access. Can anyone shed some light on this issue?
Answers
TL;DR: The solution to this particular problem was that the asker (https://stackoverflow.com/users/52045/soeren) had an instance of SQL Server running on their computer that intercepted port 1433 requests before they hit K8s. Disabling the local SQL Server instance solved the problem, and port 1433 traffic was routed to the K8s load balancer.
Original solution follows…
I have copy/pasted your YAML verbatim above into the files deployment.yaml and pvc.yaml respectively and then run these commands.
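The exact commands were shown as screenshots in the original answer; presumably they were along these lines (the secret command is the one from your question, the file names are the ones above):

    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
    kubectl apply -f pvc.yaml
    kubectl apply -f deployment.yaml
    kubectl get pods    # wait for the mssql pod to reach Running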
I can connect via localhost and create a database, so the conclusion is that your YAML looks fine.
If I then delete/create the deployment again…
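(The commands were again a screenshot in the original; roughly:)

    kubectl delete -f deployment.yaml   # the PVC from pvc.yaml is left in place
    kubectl apply -f deployment.yaml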
…the SSMS connection still works and the Test database persists.
The only change I would personally make in deployment.yaml is from

    - mountPath: /var/opt/mssql/data

to

    - mountPath: /var/opt/mssql/

so that your entire mssql path is persisted. As you are including the /data folder, the logs and secrets folders are still within the pod and not on the persisted volume. This might explain the ldf log file permissions issue you mentioned above?

So, next I investigated the persisted volume with

    kubectl describe pv

and on my system it shows a hostPath volume. I'm guessing you are using hostPath as well, which will store the files on the containing node/host. I'm using Docker Kubernetes (a single-node cluster) as opposed to minikube.
If you shell into the node (as opposed to the POD) as root thus:
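(The original answer showed the command as a screenshot; on Docker Desktop's single-node cluster, one commonly used option is an nsenter container such as justincormack/nsenter1, which is a suggestion of mine rather than something from the original answer.)

    # drops you into a root shell on the node itself, not a pod
    docker run -it --rm --privileged --pid=host justincormack/nsenter1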
You can

    cd /var/lib/k8s-pvs/mssql-claim/pvc-42f91115-efdb-4cea-90d8-204fd592f6b4

to see the mounted volume for your persisted volume claim. You should see something like the contents of /var/opt/mssql/data.
If you had mapped /var/opt/mssql only, as suggested above, you would see data, logs and secrets subdirectories. Obviously, replace the claim name and volume id with your own.
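If you're not sure what your volume directory is called, the bound volume name can be read from the claim (claim name taken from your PVC above), e.g.:

    kubectl get pvc mssql-claim -o jsonpath='{.spec.volumeName}'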
So again, in conclusion your YAML looks fine. I therefore suspect that you have got something lingering around from previous attempts.
The only other issue that has cropped up with me is that sometimes it’s better to use single quotes with PowerShell and double quotes with cmd prompt.
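For example, with the & in the password from your question, that means something like:

    # PowerShell: single quotes keep & and other special characters literal
    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD='MyC0m9l&xPassw0rd'

    # cmd prompt: use double quotes
    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"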
Have you checked that your pods are running OK and that the config is being read OK? (Replace the pod id with yours.)
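The checks were screenshots in the original answer; roughly, something like:

    kubectl get pods
    kubectl describe pod mssql-depl-<pod-id>
    kubectl exec mssql-depl-<pod-id> -- printenv SA_PASSWORD ACCEPT_EULA MSSQL_PID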
Beyond that…I’m sure it’s something to do with what’s been left behind from previous testing that needs to be cleaned up.
Original answer below for completeness…
Have you tried clearing everything down? (Using a namespace makes this easier.)
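A sketch of what I mean (the namespace name mssql-test is just an example):

    kubectl create namespace mssql-test
    kubectl create secret generic mssql5 -n mssql-test --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
    kubectl apply -n mssql-test -f pvc.yaml -f deployment.yaml
    # when you want a clean slate, a single delete removes everything in the namespace
    kubectl delete namespace mssql-test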
I have noticed that if you're using a persisted volume claim, master.mdf may hold an old password (SELECT * FROM master.dbo.syslogins), as master.mdf isn't recreated fresh the way it would be with a non-persisted volume.
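You can check this from inside the pod; assuming the 2019 image still ships sqlcmd under /opt/mssql-tools, something like:

    kubectl exec -it mssql-depl-<pod-id> -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "MyC0m9l&xPassw0rd" -Q "SELECT name FROM master.dbo.syslogins"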
As an experiment, when I had everything working, I deleted the deployment but left the persisted volume claim.
I deleted the SA_PASSWORD secret and recreated it with a different password and then applied the deployment again.
I was able to connect to SQL from SSMS on 1433 using the old password even though the new POD contained the new password in the environment variable SA_PASSWORD.
In conclusion, I would try to clear your persisted volume claim and start again so that the master/msdb databases are recreated.
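Concretely, that would be something like this (names taken from your YAML):

    kubectl delete -f deployment.yaml
    kubectl delete pvc mssql-claim    # with the default Delete reclaim policy this removes the old data files too
    kubectl apply -f pvc.yaml
    kubectl apply -f deployment.yaml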
It's worth a try.
Incidentally, a NodePort service is all you need, rather than both ClusterIP and LoadBalancer (this obviously depends on your cluster environment; I presumed you're using minikube or local Docker?).
If you turn off that local SQL Server service, then it will work.
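For a default local instance that is something like the following from an elevated prompt on the host (a named instance is MSSQL$<InstanceName> instead):

    # stop the local SQL Server instance so port 1433 is free for the K8s load balancer
    net stop MSSQLSERVER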