
I have an AWS EC2 micro instance (Ubuntu Server) on which I host a couple of web apps and an API server. I use Docker and GitLab CI/CD to deploy the API server, which is written in Node. Whenever I run the build job, the instance crashes and all the hosted applications become unreachable.

The Dockerfile is:

FROM node:12.3.1
LABEL maintainer="Venkatesh A <[email protected]>"
WORKDIR /www/techdoc-api
ARG db_username
ARG db_password
ARG port
ARG jwt_secret
ARG jwt_expiry
ARG link_text
ARG app_link_text
ARG NODE_ENV
ARG redis_host
ARG redis_port
ARG razorpay_id
ARG razorpay_key
RUN npm install pm2 -g
RUN npm install babel-cli -g
RUN apt-get update && apt-get install -y \
  vim
ADD package.json /www/techdoc-api
RUN npm install --production
ADD . /www/techdoc-api
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN rm -f .env
RUN touch .env
RUN echo "port=$port\n\
redis_port=$redis_port\n\
redis_host=$redis_host\n\
razorpay_id=$razorpay_id\n\
razorpay_key=$razorpay_key\n\
db_username=$db_username\n\
db_password=$db_password\n\
link_text=$link_text\n\
app_link_text=$app_link_text\n\
jwt_secret=$jwt_secret\n\
jwt_expiry=$jwt_expiry\n\
NODE_ENV=$NODE_ENV" >> ./.env
EXPOSE 3000
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]

The docker compose file is as follows:

version: '2.2'
services:
    mysql:
        build: ./config/docker_db_config
        environment:
            - MYSQL_ALLOW_EMPTY_PASSWORD=yes
        healthcheck:
            test: "exit 0"
        restart: always
    redis:
        image: 'redis'
        ports: 
            - "6379:6379"
    api:
        build: .
        depends_on:
            mysql:
                condition: service_healthy
        entrypoint:
            - /usr/local/bin/docker-entrypoint.sh
        restart: always
        ports:
            - "3000:3000"

and the gitlab-ci.yml is as follows:

image: docker:stable

services:
  - docker:dind

stages:
  - build
  - deploy

cache:
  paths:
    - node_modules/

build_app:
  stage: build
  script:
    - docker-compose build mysql
    - docker-compose build redis
    - docker-compose build --build-arg db_username="${db_username}" --build-arg db_password="${db_password}" --build-arg jwt_secret="${jwt_secret}" --build-arg NODE_ENV="${NODE_ENV}" --build-arg port="${port}" --build-arg redis_port="${redis_port}" --build-arg redis_host="${redis_host}" --build-arg jwt_expiry="${jwt_expiry}" --build-arg razorpay_key="${razorpay_key}" --build-arg razorpay_id="${razorpay_id}" --build-arg link_text="${link_text}" --build-arg app_link_text="${app_link_text}" api
    - echo "Build successful."
    - docker-compose up -d
    - echo "Deployed!!"  
  only: 
    - master
    

Should I delete the old containers before running a new job? Should I cache the node modules somewhere? Should I make sure I have enough disk space before running jobs?

Open to suggestions and changes in the above design.

(Note: By ‘crashing’ I mean that I’m not able to SSH into the server unless I reboot it, and the hosted web apps are unreachable.)

2 Answers


  1. (Note: By 'crashing' I mean that I'm not able to SSH into the server unless I reboot it and the hosted web apps are unreachable)
    

    Looking at that sentence, I think you have a CPU or memory problem.

    We need to troubleshoot the server in real time:

    1. Stay logged into the server.
    2. Run the pipeline.
    3. Monitor the EC2 resources with top/htop, and watch the disk too.

    I suspect your instance is too small to handle your build jobs.
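    The three monitoring steps above can be sketched with standard Linux tools; nothing here is project-specific, so you can run it as-is in a second SSH session while the pipeline executes:

```shell
# Snapshot the box's resources; re-run it (or wrap in `watch -n5`)
# while the pipeline is building, and log to a file so the numbers
# survive even if the instance locks up.
free -m                                       # RAM and swap usage in MiB
df -h /                                       # root filesystem usage
ps -eo pid,comm,%mem --sort=-%mem | head -6   # biggest memory consumers
```

    If `free` shows swap at zero and available memory near zero right before the freeze, the build job is exhausting the 1 GB of RAM.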

  2. (Note: By ‘crashing’ I mean that I’m not able to ssh into the server
    unless I reboot it and the hosted web apps are unreachable)

    Running these three services is enough to crash your micro instance:

    • Redis
    • MySQL
    • Node.js

    A t2.micro has 1 GB of RAM and 1 vCPU.
    

    So you either need a bigger instance or, better, build the application outside of the instance.
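    Building outside the instance could look roughly like this in .gitlab-ci.yml, assuming a shared GitLab runner and the project's GitLab container registry (the `CI_REGISTRY_*` variables are the ones GitLab predefines; the two-job split is illustrative, not taken from the original pipeline):

```yaml
build_app:
  stage: build
  # Runs on a shared GitLab runner, not on the EC2 instance, so the
  # build's CPU/memory load never touches the t2.micro.
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
  only:
    - master

deploy_app:
  stage: deploy
  # Assumes a runner registered on the EC2 instance (or an SSH deploy
  # step); the instance only pulls the ready-made image and restarts —
  # no npm install or image build happens on the 1 GB box.
  script:
    - docker pull "$CI_REGISTRY_IMAGE:latest"
    - docker-compose up -d
  only:
    - master
```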

    You can check the memory consumed by these three services during deployment using the command below:

    docker stats

    CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
    f2ebb9370858        node                0.00%               9.078MiB / 15.54GiB   0.06%               6.6kB / 0B          29.1MB / 0B         11
    7f5b2daf3a22        redis               0.19%               18.62MiB / 15.54GiB   0.12%               9.94kB / 0B         13MB / 0B           5
    378dcc2af8a9        mysql               0.87%               364MiB / 15.54GiB     2.29%               23.8kB / 0B         471kB / 328MB       37
    

    That comes to around 500 MB, and that is with all three services sitting idle; even doing nothing, they consume about 500 MB.

    So it may be that one of the containers eats up the whole memory; you can debug this with docker stats during the deployment.
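    One way to keep a single container from taking the whole 1 GB is a per-service memory cap, which the version '2.2' compose format you are using supports via mem_limit. A sketch showing only the added keys (the limits are guesses you would tune against your docker stats numbers):

```yaml
version: '2.2'
services:
    mysql:
        mem_limit: 400m   # hard cap: the kernel kills this container, not the host
    redis:
        mem_limit: 100m
        # make Redis evict keys before it hits the container cap
        command: redis-server --maxmemory 64mb
    api:
        mem_limit: 300m
```

    With caps in place, an out-of-control container gets OOM-killed and restarted (you have restart: always) instead of freezing the whole instance and taking SSH down with it.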

    Should I make sure that I have enough space before I run jobs?

    Better to stop all the containers and clean up the old ones first; that frees some space for the deployment.
    Installing the node modules is not that heavy a process compared to the build itself.
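    That cleanup before a deployment can be sketched as below; note that the prune commands are destructive to anything not currently running, so check what they will remove before putting them in the pipeline:

```shell
# Stop the running stack and reclaim disk/memory before the next build.
docker-compose down      # stop and remove this stack's containers
docker system prune -f   # remove stopped containers, dangling images, unused networks
df -h /                  # confirm there is enough free disk left for the build
```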
