I am trying to deploy this Docker Compose app on GCP Kubernetes (GKE):
version: "3.5"

x-environment:
  &default-back-environment
  # Database settings
  POSTGRES_DB: taiga
  POSTGRES_USER: taiga
  POSTGRES_PASSWORD: taiga
  POSTGRES_HOST: taiga-db
  # Taiga settings
  TAIGA_SECRET_KEY: "taiga-back-secret-key"
  TAIGA_SITES_SCHEME: "http"
  TAIGA_SITES_DOMAIN: "localhost:9000"
  TAIGA_SUBPATH: "" # "" or "/subpath"
  # Email settings. Uncomment following lines and configure your SMTP server
  # EMAIL_BACKEND: "django.core.mail.backends.smtp.EmailBackend"
  # DEFAULT_FROM_EMAIL: "[email protected]"
  # EMAIL_USE_TLS: "False"
  # EMAIL_USE_SSL: "False"
  # EMAIL_HOST: "smtp.host.example.com"
  # EMAIL_PORT: 587
  # EMAIL_HOST_USER: "user"
  # EMAIL_HOST_PASSWORD: "password"
  # Rabbitmq settings
  # Should be the same as in taiga-async-rabbitmq and taiga-events-rabbitmq
  RABBITMQ_USER: taiga
  RABBITMQ_PASS: taiga
  # Telemetry settings
  ENABLE_TELEMETRY: "True"

x-volumes:
  &default-back-volumes
  - taiga-static-data:/taiga-back/static
  - taiga-media-data:/taiga-back/media
  # - ./config.py:/taiga-back/settings/config.py

services:
  taiga-db:
    image: postgres:12.3
    environment:
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_PASSWORD: taiga
    volumes:
      - taiga-db-data:/var/lib/postgresql/data
    networks:
      - taiga

  taiga-back:
    image: taigaio/taiga-back:latest
    environment: *default-back-environment
    volumes: *default-back-volumes
    networks:
      - taiga
    depends_on:
      - taiga-db
      - taiga-events-rabbitmq
      - taiga-async-rabbitmq

  taiga-async:
    image: taigaio/taiga-back:latest
    entrypoint: ["/taiga-back/docker/async_entrypoint.sh"]
    environment: *default-back-environment
    volumes: *default-back-volumes
    networks:
      - taiga
    depends_on:
      - taiga-db
      - taiga-back
      - taiga-async-rabbitmq

  taiga-async-rabbitmq:
    image: rabbitmq:3.8-management-alpine
    environment:
      RABBITMQ_ERLANG_COOKIE: secret-erlang-cookie
      RABBITMQ_DEFAULT_USER: taiga
      RABBITMQ_DEFAULT_PASS: taiga
      RABBITMQ_DEFAULT_VHOST: taiga
    volumes:
      - taiga-async-rabbitmq-data:/var/lib/rabbitmq
    networks:
      - taiga

  taiga-front:
    image: taigaio/taiga-front:latest
    environment:
      TAIGA_URL: "http://localhost:9000"
      TAIGA_WEBSOCKETS_URL: "ws://localhost:9000"
      TAIGA_SUBPATH: "" # "" or "/subpath"
    networks:
      - taiga
    # volumes:
    #   - ./conf.json:/usr/share/nginx/html/conf.json

  taiga-events:
    image: taigaio/taiga-events:latest
    environment:
      RABBITMQ_USER: taiga
      RABBITMQ_PASS: taiga
      TAIGA_SECRET_KEY: "taiga-back-secret-key"
    networks:
      - taiga
    depends_on:
      - taiga-events-rabbitmq

  taiga-events-rabbitmq:
    image: rabbitmq:3.8-management-alpine
    environment:
      RABBITMQ_ERLANG_COOKIE: secret-erlang-cookie
      RABBITMQ_DEFAULT_USER: taiga
      RABBITMQ_DEFAULT_PASS: taiga
      RABBITMQ_DEFAULT_VHOST: taiga
    volumes:
      - taiga-events-rabbitmq-data:/var/lib/rabbitmq
    networks:
      - taiga

  taiga-protected:
    image: taigaio/taiga-protected:latest
    environment:
      MAX_AGE: 360
      SECRET_KEY: "taiga-back-secret-key"
    networks:
      - taiga

  taiga-gateway:
    image: nginx:1.19-alpine
    ports:
      - "9000:80"
    volumes:
      - ./taiga-gateway/taiga.conf:/etc/nginx/conf.d/default.conf
      - taiga-static-data:/taiga/static
      - taiga-media-data:/taiga/media
    networks:
      - taiga
    depends_on:
      - taiga-front
      - taiga-back
      - taiga-events

volumes:
  taiga-static-data:
  taiga-media-data:
  taiga-db-data:
  taiga-async-rabbitmq-data:
  taiga-events-rabbitmq-data:

networks:
  taiga:
I have used Kompose to generate my Kubernetes deployment files. All the pods are running except two; however, those two show no error other than this:
Unable to attach or mount volumes: unmounted
volumes=[taiga-static-data taiga-media-data], unattached
volumes=[kube-api-access-9c74v taiga-gateway-claim0 taiga-static-data
taiga-media-data]: timed out waiting for the condition
Pod Status
NAME                                     READY   STATUS              RESTARTS   AGE
taiga-async-6c7d9dbd7b-btv79             1/1     Running             19         16h
taiga-async-rabbitmq-86979cf759-lvj2m    1/1     Running             0          16h
taiga-back-7bc574768d-hst2v              0/1     ContainerCreating   0          6m34s
taiga-db-59b554854-qdb65                 1/1     Running             0          16h
taiga-events-74f494df97-8rpjd            1/1     Running             0          16h
taiga-events-rabbitmq-7f558ddf88-wc2js   1/1     Running             0          16h
taiga-front-6f66c475df-8cmf6             1/1     Running             0          16h
taiga-gateway-77976dc77-w5hp4            0/1     ContainerCreating   0          3m6s
taiga-protected-7794949d49-crgbt         1/1     Running             0          16h
I am certain it is a problem with mounting the volumes, as an earlier error suggested that taiga-back and taiga-db share a volume. This is the Kompose-generated file I have for taiga-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.26.1 (a9d05d509)
  creationTimestamp: null
  labels:
    io.kompose.service: taiga-gateway
  name: taiga-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: taiga-gateway
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml
        kompose.version: 1.26.1 (a9d05d509)
      creationTimestamp: null
      labels:
        io.kompose.network/taiga: "true"
        io.kompose.service: taiga-gateway
    spec:
      containers:
        - image: nginx:1.19-alpine
          name: taiga-gateway
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/default.conf
              name: taiga-gateway-claim0
            - mountPath: /taiga/static
              name: taiga-static-data
            - mountPath: /taiga/media
              name: taiga-media-data
      restartPolicy: Always
      volumes:
        - name: taiga-gateway-claim0
          persistentVolumeClaim:
            claimName: taiga-gateway-claim0
        - name: taiga-static-data
          persistentVolumeClaim:
            claimName: taiga-static-data
        - name: taiga-media-data
          persistentVolumeClaim:
            claimName: taiga-media-data
status: {}
Perhaps if I can fix one, I can figure out the other pod as well. This is the application: https://github.com/kaleidos-ventures/taiga-docker. Any pointers are welcome. Here is the kubectl describe pod output:
Name:           taiga-gateway-77976dc77-w5hp4
Namespace:      default
Priority:       0
Node:           gke-taiga-cluster-default-pool-9e5ed1f4-0hln/10.128.0.18
Start Time:     Wed, 13 Apr 2022 05:32:10 +0000
Labels:         io.kompose.network/taiga=true
                io.kompose.service=taiga-gateway
                pod-template-hash=77976dc77
Annotations:    kompose.cmd: kompose convert -f docker-compose.yml
                kompose.version: 1.26.1 (a9d05d509)
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/taiga-gateway-77976dc77
Containers:
  taiga-gateway:
    Container ID:
    Image:          nginx:1.19-alpine
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/conf.d/default.conf from taiga-gateway-claim0 (rw)
      /taiga/media from taiga-media-data (rw)
      /taiga/static from taiga-static-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c74v (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  taiga-gateway-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  taiga-gateway-claim0
    ReadOnly:   false
  taiga-static-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  taiga-static-data
    ReadOnly:   false
  taiga-media-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  taiga-media-data
    ReadOnly:   false
  kube-api-access-9c74v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned default/taiga-gateway-77976dc77-w5hp4 to gke-taiga-cluster-default-pool-9e5ed1f4-0hln
Warning FailedMount 5m49s (x4 over 14m) kubelet Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[taiga-gateway-claim0 taiga-static-data taiga-media-data kube-api-access-9c74v]: timed out waiting for the condition
Warning FailedMount 81s (x3 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[kube-api-access-9c74v taiga-gateway-claim0 taiga-static-data taiga-media-data]: timed out waiting for the condition
4 Answers
You most likely have not configured your PVCs correctly: the container is trying to mount the claim, but the claim is not bound to a PV.
Please make sure that the taiga-static-data and taiga-media-data PVCs are bound to their respective PVs.
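For example, a quick way to check (the claim names are the ones the Deployment above references); a STATUS of Pending means no PersistentVolume has satisfied the claim:

# STATUS should be Bound, not Pending
kubectl get pvc taiga-static-data taiga-media-data taiga-gateway-claim0

# the events at the bottom usually explain why a claim is stuck Pending
kubectl describe pvc taiga-static-data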
The issue is that the persistent volumes are not bound to the pod, hence the pod fails to start. Ensure the storage is configured and the persistent volumes are created.
Based on your original Docker spec, you can replace persistentVolumeClaim with emptyDir.
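For instance, a sketch of the volumes section of the generated taiga-gateway Deployment with the two unmounted volumes switched to emptyDir (data in an emptyDir is lost whenever the pod is deleted, so this only suits files you can regenerate):

      volumes:
        - name: taiga-gateway-claim0      # nginx config mount, left as it was
          persistentVolumeClaim:
            claimName: taiga-gateway-claim0
        - name: taiga-static-data
          emptyDir: {}                    # was persistentVolumeClaim: taiga-static-data
        - name: taiga-media-data
          emptyDir: {}                    # was persistentVolumeClaim: taiga-media-data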
Or, if you want to persist your data (continue using persistentVolumeClaim), you should create the PVCs first.
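For example, a minimal sketch of three such claims (the names match the claims the Deployment references; the sizes and the ReadWriteOnce access mode are placeholder assumptions, and omitting storageClassName lets GKE fall back to its default StorageClass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: taiga-static-data
spec:
  accessModes:
    - ReadWriteOnce      # note: a ReadWriteOnce volume can only be mounted by pods on a single node
  resources:
    requests:
      storage: 1Gi       # placeholder size
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: taiga-media-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi       # placeholder size
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: taiga-gateway-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi     # placeholder size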
The above spec will dynamically provision 3 persistent volumes for your pod using the default StorageClass on your GKE cluster.