I have two Docker ECR images: flask-server (on port 5000) and react-client (Nginx, hosted on port 80). I put them both inside the same task in AWS ECS.
I need to make REST requests from my react-client (frontend) container to the backend flask container. When I run the two Dockerfiles locally, everything works fine.
I tried putting http://localhost:5000 in the react-client code, and I also tried http://flask-container:5000, but nothing seems to work. I can see that the react task is up, since I can view it via the public IP, and I know that the flask app is also working, since I can hit its endpoint from the external IP with Postman. But the client is not able to find the server container. I have also enabled service discovery.
Edit: I’m using awsvpc network mode in AWS ECS.
Below is my task definition:
{
  "family": "ABC",
  "containerDefinitions": [
    {
      "name": "ABC-client",
      "image": "###",
      "cpu": 0,
      "portMappings": [
        {
          "name": "ABC-client-80-tcp",
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [],
      "environmentFiles": [],
      "mountPoints": [],
      "volumesFrom": [],
      "ulimits": [],
      "systemControls": []
    },
    {
      "name": "ABC-server",
      "image": "###",
      "cpu": 0,
      "portMappings": [
        {
          "name": "5000",
          "containerPort": 5000,
          "hostPort": 5000,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": false,
      "environment": [
        {
          "name": "EMAIL_PASSWORD",
          "value": "###"
        }
      ],
      "environmentFiles": [],
      "mountPoints": [],
      "volumesFrom": [],
      "systemControls": []
    }
  ],
  "executionRoleArn": "arn:aws:iam::aws-account-id:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "volumes": [],
  "placementConstraints": [],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "1024",
  "memory": "3072",
  "runtimePlatform": {
    "cpuArchitecture": "X86_64",
    "operatingSystemFamily": "LINUX"
  },
  "enableFaultInjection": false
}
I don’t know where the issue lies! Any help is appreciated!
2 Answers
The React application actually runs in the user’s web browser. All your React container does on ECS is serve the static JavaScript files to the user’s browser. The address localhost inside the React app running in the browser resolves to the laptop/desktop computer that is running the browser. That works for you locally because you are also running the backend API on your local computer.
To deploy your code to the Internet and get the React app connecting to the backend, you will have to expose the backend API server to the Internet. This is typically done on ECS with a Load Balancer. Then you would need to configure your React app to connect to the backend server at its public DNS address or public IP.
This is a pretty common use case, and there are a couple of options to approach this.
Before the solution, I would strongly advise you to read about the idea behind Docker images and, in this case, the purpose of task definitions. It would be unusual to run both the frontend app and the backend service from the same Dockerfile. As you mentioned, you have separate ones, so that’s good, but the benefits of isolating them cease to exist once you put them in the same task definition. Also, make sure to read about the different network modes and what awsvpc is, and if needed, look up Auto Scaling Groups and capacity providers.
Ideally, you would have two task definitions → two services. Why? Because each service can then be scaled, deployed, and restarted independently of the other.
There are many more reasons, but in your case I would do the following:
TLDR; Host the React build on S3 + CloudFront and leave the backend on ECS. If the backend needs to be reachable from outside your VPC, you need an ALB or similar (public subnet, or private subnet + NAT, etc.). I don’t know your use case, but you probably don’t need to reach it from the public internet just to debug it, if that’s what you are doing.
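If you do keep both containers in one task as in the question, note that in awsvpc mode the containers of a task share a network namespace, so nginx in the react-client container can reach the Flask container on localhost:5000. The browser then only ever talks to nginx, which proxies API calls to the backend. A minimal sketch of such an nginx server block (the /api/ prefix and the html root path are assumptions):

```nginx
server {
    listen 80;

    # Serve the built React bundle (assumed build output location).
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # The browser calls the same origin under /api/; nginx forwards those
    # requests to the Flask container, which shares the task's loopback
    # interface in awsvpc mode.
    location /api/ {
        proxy_pass http://127.0.0.1:5000/;
        proxy_set_header Host $host;
    }
}
```

With this, the React code can call relative URLs like /api/users, so no public backend address needs to be baked into the frontend at all.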