I have an AWS EC2 micro instance (Ubuntu Server) on which I host a couple of web apps and an API server. I use Docker and GitLab CI/CD to deploy the API server, which is written in Node. Whenever I run the build job, the instance crashes and all the hosted applications become unreachable.
The Dockerfile is
FROM node:12.3.1
LABEL maintainer="Venkatesh A <[email protected]>"
WORKDIR /www/techdoc-api
ARG db_username
ARG db_password
ARG port
ARG jwt_secret
ARG jwt_expiry
ARG link_text
ARG app_link_text
ARG NODE_ENV
ARG redis_host
ARG redis_port
ARG razorpay_id
ARG razorpay_key
RUN npm install pm2 -g
RUN npm install babel-cli -g
RUN apt-get update && apt-get install -y vim
ADD package.json /www/techdoc-api
RUN npm install --production
ADD . /www/techdoc-api
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN cd /www/techdoc-api
RUN rm -f .env
RUN touch .env
RUN echo "port=$port n
redis_port=$redis_port n
redis_host=$redis_host n
razorpay_id=$razorpay_id n
razorpay_key=$razorpay_key n
db_username=$db_username n
db_password=$db_password n
link_text=$link_text n
app_link_text=$app_link_text n
jwt_secret=$jwt_secret n
jwt_expiry=$jwt_expiry n
NODE_ENV=$NODE_ENV" >> ./.env
EXPOSE 3000
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
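As a side note, `echo`'s handling of `\n` differs between shells (dash, the `/bin/sh` in Debian-based images, interprets it; bash's builtin `echo` does not without `-e`). A sketch of the same `.env` generation using `printf`, whose escape handling is consistent — the variable values here are placeholders standing in for the build ARGs, not your real configuration:

```shell
# Placeholder values standing in for the Dockerfile's build ARGs.
port=3000
redis_host=redis
redis_port=6379

# printf '%s\n' prints each argument on its own line, regardless of
# which shell runs it -- no reliance on echo's escape behavior.
printf '%s\n' \
  "port=$port" \
  "redis_host=$redis_host" \
  "redis_port=$redis_port" \
  > .env

cat .env
```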
The docker compose file is as follows:
version: '2.2'
services:
  mysql:
    build: ./config/docker_db_config
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    healthcheck:
      test: "exit 0"
    restart: always
  redis:
    image: 'redis'
    ports:
      - "6379:6379"
  api:
    build: .
    depends_on:
      mysql:
        condition: service_healthy
    entrypoint:
      - /usr/local/bin/docker-entrypoint.sh
    restart: always
    ports:
      - "3000:3000"
and the gitlab-ci.yml is as follows…
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - deploy
cache:
  paths:
    - node_modules/
build_app:
  stage: build
  script:
    - docker-compose build mysql
    - docker-compose build redis
    - docker-compose build --build-arg db_username="${db_username}" --build-arg db_password="${db_password}" --build-arg jwt_secret="${jwt_secret}" --build-arg NODE_ENV="${NODE_ENV}" --build-arg port="${port}" --build-arg redis_port="${redis_port}" --build-arg redis_host="${redis_host}" --build-arg jwt_expiry="${jwt_expiry}" --build-arg razorpay_key="${razorpay_key}" --build-arg razorpay_id="${razorpay_id}" --build-arg link_text="${link_text}" --build-arg app_link_text="${app_link_text}" api
    - echo "Build successful."
    - docker-compose up -d
    - echo "Deployed!!"
  only:
    - master
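On the question of deleting old containers: a common pattern is to tear down the previous stack and prune before bringing the new one up. A sketch of extra `script` lines for the same job (assumptions, not tested against this pipeline):

```yaml
# Sketch: tear down the old stack and reclaim disk before building.
script:
  - docker-compose down --remove-orphans || true
  - docker image prune -f
  # ... existing build and `docker-compose up -d` commands ...
```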
Should I delete the old containers before running a new job? Should I cache the node modules somewhere? Should I make sure that I have enough space before I run jobs?
Open to suggestions and changes in the above design.
(Note: by 'crashing' I mean that I'm not able to SSH into the server unless I reboot it, and the hosted web apps are unreachable.)
2 Answers
Looking at your description, I think you have a CPU or memory problem. One would need to troubleshoot the server in real time to be sure, but your instance is most likely too small to handle your jobs: running these three services (MySQL, Redis, and the API) is enough to crash a micro instance. So you either need a bigger instance or, better, build the application outside of the instance.
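"Building outside the instance" can mean building on GitLab's shared runners and pushing to the GitLab container registry, so the micro instance only pulls and runs the finished image. A sketch under that assumption (the `CI_REGISTRY*` variables are predefined by GitLab CI; the deploy job is assumed to run on the EC2 host, e.g. via a shell runner there):

```yaml
# Sketch: build and push on a shared runner; the instance only pulls.
build_app:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"

deploy_app:
  stage: deploy
  script:
    # runs on the EC2 host, which never has to compile anything
    - docker pull "$CI_REGISTRY_IMAGE:latest"
    - docker-compose up -d
```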
You can check how much memory the three services consume during deployment with docker stats. Even when all of them are idle and doing nothing, the memory consumption comes to around 500 MB, so it may well be that one of the containers is eating the whole memory; docker stats during deployment will show you which. It is better to stop all the containers and clean up old processes first, so that some memory and disk space are free for the deployment. Installing the node modules is not that heavy a process compared to the build.
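The cleanup described above can be sketched as follows. It is guarded so it is a no-op on hosts without Docker, and note that `docker system prune` is destructive — it removes stopped containers, dangling images and build cache, so use it with care on a shared host:

```shell
# Stop every running container and reclaim disk before a deployment.
if command -v docker >/dev/null 2>&1; then
  # xargs -r skips the stop command when no containers are running
  docker ps -q | xargs -r docker stop
  # destructive: removes stopped containers, dangling images, build cache
  docker system prune -f
  # show how much space Docker uses now
  docker system df
else
  echo "docker not installed; nothing to clean"
fi
```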