
I am trying to install MariaDB Galera on my cluster. I only have one node, but plan to scale in the future. It seems to install fine.

When deployed it says:

** Please be patient while the chart is being deployed **
Tip:
  Watch the deployment status using the command:
    kubectl get sts -w --namespace databases -l app.kubernetes.io/instance=galera
and then other things here

It tells me to check the status, but the status is always:

galera-mariadb-galera   0/1     8m

kubectl describe pod/galera-mariadb-galera-0 --namespace databases

Name:           galera-mariadb-galera-0
Namespace:      databases
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=galera
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=mariadb-galera
                controller-revision-hash=galera-mariadb-galera-8d5cc8855
                helm.sh/chart=mariadb-galera-5.3.2
                statefulset.kubernetes.io/pod-name=galera-mariadb-galera-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/galera-mariadb-galera
Containers:
  mariadb-galera:
    Image:       docker.io/bitnami/mariadb-galera:10.5.8-debian-10-r26
    Ports:       3306/TCP, 4567/TCP, 4568/TCP, 4444/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      bash
      -ec
      exec /opt/bitnami/scripts/mariadb-galera/entrypoint.sh /opt/bitnami/scripts/mariadb-galera/run.sh
      
    Liveness:  exec [bash -ec exec mysqladmin status -u$MARIADB_ROOT_USER -p$MARIADB_ROOT_PASSWORD
] delay=120s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [bash -ec exec mysqladmin status -u$MARIADB_ROOT_USER -p$MARIADB_ROOT_PASSWORD
] delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MY_POD_NAME:                          galera-mariadb-galera-0 (v1:metadata.name)
      BITNAMI_DEBUG:                        false
      MARIADB_GALERA_CLUSTER_NAME:          galera
      MARIADB_GALERA_CLUSTER_ADDRESS:       address-is-here-removed-unsure-if-private
      MARIADB_ROOT_USER:                    root
      MARIADB_ROOT_PASSWORD:                <set to the key 'mariadb-root-password' in secret 'galera-mariadb-galera'>  Optional: false
      MARIADB_USER:                         default-db-user
      MARIADB_PASSWORD:                     <set to the key 'mariadb-password' in secret 'galera-mariadb-galera'>  Optional: false
      MARIADB_DATABASE:                     default-db-name
      MARIADB_GALERA_MARIABACKUP_USER:      mariabackup
      MARIADB_GALERA_MARIABACKUP_PASSWORD:  <set to the key 'mariadb-galera-mariabackup-password' in secret 'galera-mariadb-galera'>  Optional: false
      MARIADB_ENABLE_LDAP:                  no
      MARIADB_ENABLE_TLS:                   no
    Mounts:
      /bitnami/mariadb from data (rw)
      /opt/bitnami/mariadb/.bootstrap from previous-boot (rw)
      /opt/bitnami/mariadb/conf/my.cnf from mariadb-galera-config (rw,path="my.cnf")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8qzpx (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-galera-mariadb-galera-0
    ReadOnly:   false
  previous-boot:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  mariadb-galera-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      galera-mariadb-galera-configuration
    Optional:  false
  default-token-8qzpx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8qzpx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason             Age    From                Message
  ----     ------             ----   ----                -------
  Warning  FailedScheduling   2m23s                      0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling   2m23s                      0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   NotTriggerScaleUp  97s    cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added):

I am unsure what might be causing the issue.

If I do:

kubectl logs pod/galera-mariadb-galera-0  --namespace databases

Nothing shows.

If I do:

 kubectl get pvc data-galera-mariadb-galera-0 --namespace databases


NAME                           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
data-galera-mariadb-galera-0   Pending                                      do-block-storage   70m

kubectl describe pvc data-galera-mariadb-galera-0 --namespace databases

Name:          data-galera-mariadb-galera-0
Namespace:     databases
StorageClass:  do-block-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=galera
               app.kubernetes.io/managed-by=Helm
               app.kubernetes.io/name=mariadb-galera
Annotations:   volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    galera-mariadb-galera-0
Events:
  Type     Reason                Age                   From                                                                                           Message
  ----     ------                ----                  ----                                                                                           -------
  Normal   ExternalProvisioning  3m59s (x26 over 10m)  persistentvolume-controller                                                                    waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
  Normal   Provisioning          85s (x10 over 10m)    dobs.csi.digitalocean.com_master-cluster-id-here  External provisioner is provisioning volume for claim "databases/data-galera-mariadb-galera-0"
  Warning  ProvisioningFail

The issue above was that I had too many existing volumes. I have now deleted them and redeployed.

2 Answers


    • The pod is in the "Pending" state because the PVC "data-galera-mariadb-galera-0" is unbound. Once the PVC reaches the Bound state, the pod will move to Running.
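    For example, a quick way to confirm this with the namespace and claim name from the question (the event filter is optional and only narrows the output to that claim):

        kubectl get pvc data-galera-mariadb-galera-0 --namespace databases --watch
        kubectl get events --namespace databases --field-selector involvedObject.name=data-galera-mariadb-galera-0

    Once the claim reports STATUS Bound, the pod should be scheduled without any further action.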
  1. Posting this as a Community Wiki answer to expand on the comment and the answer above.

    This MariaDB Galera setup requires a PersistentVolume, which should be created through dynamic provisioning because the OP is running in a cloud environment (DigitalOcean). Since the PersistentVolume was never created, it could not be bound to the PersistentVolumeClaim, so the pod was stuck in the Pending state.
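    As a rough sanity check that dynamic provisioning is wired up (the grep pattern for the DigitalOcean CSI pods is an assumption and may vary between driver versions):

        kubectl get storageclass
        kubectl describe storageclass do-block-storage
        # The DigitalOcean CSI driver usually runs in kube-system; the pod name pattern below is assumed
        kubectl get pods --namespace kube-system | grep csi-do

    If the StorageClass reports the same provisioner seen in the PVC events (dobs.csi.digitalocean.com) and its pods are healthy, the remaining suspects are quotas or permissions on the cloud side.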

    As the OP stated at the bottom of the question:

    The issue above was that I had too many existing volumes. I have now deleted them and redeployed.

    Kubernetes was not able to create the volume because of DigitalOcean limits. The OP stated that they are using one node and already had 10 volumes (see the check sketched after the quoted limits below).

    • Unverified users can have up to 10 volumes per region and up to a total of 500 GB of disk space per region.
    • By default, users can create up to 100 volumes and up to a total of 16 TiB of disk space per region. You can contact our support team to request an increase. You can attach a maximum of 7 volumes to any one node or Droplet, and this limit cannot be changed.
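    A minimal sketch of how to check the current usage from the DigitalOcean side, assuming doctl is installed and authenticated:

        # Block storage volumes counted against the per-region limit
        doctl compute volume list
        # Released or orphaned PersistentVolumes can also keep cloud volumes alive
        kubectl get pv

    Deleting unused volumes (or requesting a higher limit from support) lets the provisioner create the new volume, which matches what the OP observed after cleaning up.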