
I’m following this tutorial on Microservices
https://www.youtube.com/watch?v=DgVjEo3OGBI

At some point, I deploy a SQL Server image in Kubernetes, using this YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Express"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql5
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: /var/opt/mssql/data
              name: mssqldb
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: mssql
  ports:
  - name: mssql
    protocol: TCP
    port: 1433
    targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: mssql
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433

And the pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi

I then create a secret in Kubernetes holding the password for the SQL Server sa account; this secret is referenced in the YAML file above.

kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
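
To rule out shell-quoting problems, the stored value can be decoded back out of the secret and compared against the intended password:

kubectl get secret mssql5 -o jsonpath='{.data.SA_PASSWORD}' | base64 --decode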

It should then be possible to connect directly to the SQL Server container on localhost port 1433, using the sa account and the password.
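
A quick way to test the connection from the command line (assuming sqlcmd is installed on the host) is:

sqlcmd -S localhost,1433 -U sa -P "MyC0m9l&xPassw0rd" -Q "SELECT @@VERSION"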

However, I get the "Login failed for user SA" error when trying to connect. I’ve tried everything: changing SA_PASSWORD to MSSQL_SA_PASSWORD, changing the complexity of the password, enabling login for the sa user in SQL Server (it was disabled before), and googling like I’ve never googled before. TCP/IP is enabled in SQL Server Configuration Manager. I still cannot get access. Can anyone shed some light on this issue?

2 Answers


  1. TL;DR: The solution to this particular problem was that the asker (https://stackoverflow.com/users/52045/soeren) had an instance of SQL Server running on their computer that intercepted port 1433 requests before they ever reached Kubernetes. Disabling the local SQL Server instance solved the problem, and port 1433 traffic was then routed to the Kubernetes load balancer.

    Original solution follows…

    I copy/pasted your YAML from above verbatim into the files deployment.yaml and pvc.yaml respectively, and then ran these commands:

    kubectl create -f pvc.yaml
    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
    kubectl create -f deployment.yaml
    

    I can connect via localhost and create a database.


    The conclusion is that your YAML looks fine.

    If I then delete/create the deployment again…

    kubectl delete -f deployment.yaml
    kubectl create -f deployment.yaml
    

    …the SSMS connection still works and the Test database persists.

    The only change I would personally make is from - mountPath: /var/opt/mssql/data to - mountPath: /var/opt/mssql/ in deployment.yaml so that your entire mssql path is persisted.
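
    That is, in deployment.yaml:

    volumeMounts:
      - mountPath: /var/opt/mssql
        name: mssqldb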

    Because you are mounting only the /data folder, the logs and secrets folders remain inside the pod rather than on the persisted volume. This might explain the .ldf log file permissions issue you mentioned above.
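
    You can confirm this from inside the pod by listing the mssql directory (replace the pod name with yours, from kubectl get pods):

    kubectl exec -it mssql-depl-7777487cf7-w9gj5 -- ls -l /var/opt/mssql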

    Next, I investigated the persisted volume with kubectl describe pv; on my system it shows the following:

    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/k8s-pvs/mssql-claim/pvc-42f91115-efdb-4cea-90d8-204fd592f6b4
    

    I’m guessing you are using hostPath, which stores the files on the containing node/host. I’m using Docker Desktop’s Kubernetes (a single-node cluster) as opposed to minikube.

    If you shell into the node (as opposed to the POD) as root thus:

    docker run -it --rm --privileged --pid=host justincormack/nsenter1
    

    You can cd /var/lib/k8s-pvs/mssql-claim/pvc-42f91115-efdb-4cea-90d8-204fd592f6b4 to see the mounted volume for your persisted volume claim.

    You should see something like:

    / # cd /var/lib/k8s-pvs/mssql-claim/pvc-42f91115-efdb-4cea-90d8-204fd592f6b4
    /var/lib/k8s-pvs/mssql-claim/pvc-42f91115-efdb-4cea-90d8-204fd592f6b4 # ls -l
    total 89348
    -rw-r-----    1 10001    root           256 Feb 14 17:33 Entropy.bin
    -rw-r-----    1 10001    root       8388608 Feb 14 17:36 Test.mdf
    -rw-r-----    1 10001    root       8388608 Feb 14 17:55 Test_log.ldf
    -rw-r-----    1 10001    root       4653056 Feb 14 17:54 master.mdf
    -rw-r-----    1 10001    root       2097152 Feb 14 18:00 mastlog.ldf
    -rw-r-----    1 10001    root       8388608 Feb 14 17:49 model.mdf
    -rw-r-----    1 10001    root      14090240 Feb 14 17:49 model_msdbdata.mdf
    -rw-r-----    1 10001    root        524288 Feb 14 17:49 model_msdblog.ldf
    -rw-r-----    1 10001    root        524288 Feb 14 17:49 model_replicatedmaster.ldf
    -rw-r-----    1 10001    root       4653056 Feb 14 17:49 model_replicatedmaster.mdf
    -rw-r-----    1 10001    root       8388608 Feb 14 17:49 modellog.ldf
    -rw-r-----    1 10001    root      14090240 Feb 14 17:36 msdbdata.mdf
    -rw-r-----    1 10001    root        524288 Feb 14 17:49 msdblog.ldf
    -rw-r-----    1 10001    root       8388608 Feb 14 17:49 tempdb.mdf
    -rw-r-----    1 10001    root       8388608 Feb 14 17:55 templog.ldf
    

    If you had mounted /var/opt/mssql only, as suggested above, you would see data, logs and secrets subdirectories.

    Obviously, replace the PVC directory name with your own.

    So again, in conclusion, your YAML looks fine. I therefore suspect that something is lingering around from previous attempts.

    The only other issue that has cropped up for me is that sometimes it’s better to use single quotes with PowerShell and double quotes with the cmd prompt. For example:
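
    PowerShell (single quotes keep special characters literal):
    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD='MyC0m9l&xPassw0rd'

    cmd prompt (double quotes stop the ampersand acting as a command separator):
    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"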

    Have you checked that your pods are running OK and that the config is being read OK? (Replace the pod id with yours.)

    kubectl get pods
    kubectl describe pod mssql-depl-7777487cf7-w9gj5
    
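    It’s also worth checking the SQL Server error log via kubectl logs, as failed sa logins are recorded there along with the reason (wrong password, disabled login, and so on):

    kubectl logs mssql-depl-7777487cf7-w9gj5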

    Beyond that… I’m sure it’s something to do with what’s been left behind from previous testing that needs to be cleaned up.

    Original answer below for completeness…

    Have you tried clearing everything down? (Using a namespace makes this easier.)

    I have noticed that if you’re using a persisted volume claim, the master.mdf may hold an old password (check with SELECT * FROM master.dbo.syslogins), as master.mdf isn’t recreated fresh the way it would be with a non-persisted volume.
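
    You can run that check from inside the pod with the bundled sqlcmd tool (the tools path below is where I believe the 2019 image ships it; replace the pod name and password with yours):

    kubectl exec -it mssql-depl-7777487cf7-w9gj5 -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "MyC0m9l&xPassw0rd" -Q "SELECT name, createdate, updatedate FROM master.dbo.syslogins"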

    As an experiment, when I had everything working, I deleted the deployment but left the persisted volume claim.

    I deleted the SA_PASSWORD secret and recreated it with a different password and then applied the deployment again.

    I was able to connect to SQL from SSMS on 1433 using the old password even though the new POD contained the new password in the environment variable SA_PASSWORD.

    In conclusion, I would try to clear your persisted volume claim and start again so that the master/msdb databases are recreated.

    It’s worth a try.
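
    Something like this would clear everything down and start fresh (note: this deletes the databases stored on the volume):

    kubectl delete -f deployment.yaml
    kubectl delete secret mssql5
    kubectl delete -f pvc.yaml
    kubectl create -f pvc.yaml
    kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
    kubectl create -f deployment.yaml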

    Incidentally, a single NodePort service is all you need, rather than both ClusterIP and LoadBalancer (this obviously depends on your cluster environment; I presumed you’re using minikube or local Docker).
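
    A minimal sketch of such a NodePort service (the name mssql-nodeport-srv and the nodePort value are just examples; nodePort must fall within the cluster’s default 30000-32767 range):

    apiVersion: v1
    kind: Service
    metadata:
      name: mssql-nodeport-srv
    spec:
      type: NodePort
      selector:
        app: mssql
      ports:
      - protocol: TCP
        port: 1433
        targetPort: 1433
        nodePort: 31433

    A NodePort service also gets a cluster IP of its own, so in-cluster clients can still reach it by service name on port 1433.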

  2. [Screenshot: the SQL Server service running on the local machine.]

    If you turn off that service, it will work.
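
    If the culprit is the default local instance, it can be stopped from an elevated prompt (the service name MSSQLSERVER assumes a default instance; a named instance’s service is MSSQL$<instance name>):

    net stop MSSQLSERVER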
