I have created a Docker image and pushed it to an AWS ECR repository.
I'm creating a task with 3 containers: one for Redis, one for PostgreSQL, and one for the image above, which is my Node project.
In the Dockerfile, I have added a CMD to run the app with the node command. Here is the Dockerfile content:
# Build stage: install all dependencies and compile the project
FROM node:16-alpine as build
WORKDIR /usr/token-manager/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production stage: install only production dependencies and copy the build output
FROM node:16-alpine as production
ARG ENV_ARG=production
ENV NODE_ENV=${ENV_ARG}
WORKDIR /usr/token-manager/app
COPY package*.json ./
RUN npm install --production
COPY --from=build /usr/token-manager/app/dist ./dist
CMD ["node", "./dist/index.js"]
This image works locally with docker-compose without any issue.
The problem is that when I run the task in the ECS cluster, the Node project doesn't start; it looks like the CMD command is never executed.
I tried to override that CMD by adding a command to the container in the task definition, for example like the sketch below.
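For reference, a command override in the container definition JSON looks roughly like this (a sketch; the image URI is a placeholder, not from the original setup):

```json
{
  "name": "api-container",
  "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/token-manager:latest",
  "command": ["node", "./dist/index.js"]
}
```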
When I run the task with this command, there is nothing in the CloudWatch log, and obviously the Node app is not running; there is no log stream at all for api-container.
When I change the command to something else, for example "ls", it gets executed and I can see the result in the CloudWatch log. When I change it to an invalid command, I get an error in the log. But when I change it back to the correct command that should run the app, nothing happens; there isn't even an error in the log.
I have added inbound rules to allow the port needed for connecting to the app, but it seems the app isn't running at all!
What should I do? How can I find out what the issue is?
UPDATE: I changed the app container's configuration to make it Essential, which means the whole task fails and stops if this container exits with an error. Then I started the task again and it stopped, so now I'm sure the app container is crashing and exiting somehow, but there is still nothing in the log!
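For reference, this corresponds to the essential flag on the container definition; a minimal sketch:

```json
{
  "name": "api-container",
  "essential": true
}
```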
2 Answers
I found the issue; I'll post it here, as it may help someone else.
If you go to the Cluster details screen > Tasks tab > Stopped > Task ID, you can see a brief status message for each container in the Containers list.
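The same information is available from the AWS CLI; a quick sketch, where the cluster name and task ID are placeholders:

```sh
# Inspect why a stopped task's containers exited
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks <stopped-task-id> \
  --query 'tasks[0].{stoppedReason: stoppedReason, containers: containers[*].[name, reason]}'
```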
It says the container was killed due to a memory issue. We can fix it by increasing the memory we specify for the containers in the task definition.
There is a task-level memory setting: this is the total amount of memory you give to the whole task, which is shared between all of its containers.
When you are adding a new container, there is also a place for specifying a per-container memory limit:
Hard limit: if you specify a hard limit, your container will be killed when it attempts to exceed that amount of memory.
Soft limit: if you specify a soft limit, ECS reserves that memory for your container, but the container can use more, up to the hard limit.
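In the task definition JSON, these map to the memory (hard limit) and memoryReservation (soft limit) fields on the container, alongside the task-level memory; an abridged sketch with illustrative values:

```json
{
  "family": "token-manager-task",
  "memory": "2048",
  "containerDefinitions": [
    {
      "name": "api-container",
      "memory": 1024,
      "memoryReservation": 512
    }
  ]
}
```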
So the main point here is: when a container fails at startup like this, there may be no log at all in CloudWatch. If there is an issue but you find nothing in the log, check possibilities like memory limits, or anything else that could prevent the container from starting.
First: Make sure your Docker image is deployed to ECR (you can use CodePipeline for this), because that is where ECS will look for the image.
Second: Specify your launch type; in the case of EC2, make sure you are using an up-to-date Node image when adding the container.
Here you can find the latest Docker image for Node: https://hub.docker.com/_/node
Third: Create the task definition and run the task; then navigate to the cluster and check that the task is running and inspect its status, for example as below.
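A quick way to check this from the CLI (the cluster name is a placeholder):

```sh
# List the tasks that are currently running in the cluster
aws ecs list-tasks --cluster my-cluster --desired-status RUNNING
```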
Fourth: Make sure your security group allows the required inbound traffic, e.g. open HTTP (port 80) to 0.0.0.0/0.
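If you manage the security group from the CLI, opening HTTP looks roughly like this (the group ID is a placeholder):

```sh
# Allow inbound HTTP from anywhere on the task's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```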
You can then test using curl, e.g.: curl http://ec2-52-38-113-251.us-west-2.compute.amazonaws.com
If that doesn't work, I would recommend deploying a simple Node app first, getting it running, and then deploying your project. Thank you.