
I have two Docker images in ECR: flask-server (on port 5000) and react-client (Nginx, serving on port 80). I put them both inside the same task in AWS ECS.
I need to make REST requests from my react-client (frontend) container to the backend Flask container. When I run the two Dockerfiles locally, everything works fine.

I tried putting http://localhost:5000 in the react-client code, and I also tried http://flask-container:5000, but neither works. I can see that the React task is up, since I can view it via the public IP, and I know the Flask app is also working, because I can hit its endpoint from the external IP with Postman. But the client cannot find the server container. I have also set up service discovery.
Edit: I'm using awsvpc mode in AWS ECS networking.

Below is my task definition:

{
    "family": "ABC",
    "containerDefinitions": [
        {
            "name": "ABC-client",
            "image": "###",
            "cpu": 0,
            "portMappings": [
                {
                    "name": "ABC-client-80-tcp",
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "essential": true,
            "environment": [],
            "environmentFiles": [],
            "mountPoints": [],
            "volumesFrom": [],
            "ulimits": [],
            "systemControls": []
        },
        {
            "name": "ABC-server",
            "image": "###",
            "cpu": 0,
            "portMappings": [
                {
                    "name": "5000",
                    "containerPort": 5000,
                    "hostPort": 5000,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "essential": false,
            "environment": [
                {
                    "name": "EMAIL_PASSWORD",
                    "value": "###"
                }
            ],
            "environmentFiles": [],
            "mountPoints": [],
            "volumesFrom": [],
            "systemControls": []
        }
    ],
    "executionRoleArn": "arn:aws:iam::aws-account-id:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "volumes": [],
    "placementConstraints": [],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "1024",
    "memory": "3072",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    },
    "enableFaultInjection": false
}

I don’t know where the issue lies! Any help is appreciated!

2 Answers


  1. The React application actually runs in the user's web browser. All your React container does on ECS is serve static JavaScript files to that browser. The address localhost inside the React app running in the browser resolves to the laptop/desktop that is running the browser. It works for you locally because you are also running the backend API on that same machine.

    To deploy your code to the Internet and get the React app connecting to the backend, you will have to expose the backend API server to the Internet. This is typically done on ECS with a Load Balancer. Then you would need to configure your React app to connect to the backend server at its public DNS address or public IP.
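
    For example, here is a minimal sketch of reading the backend address from a build-time environment variable instead of hard-coding localhost. It assumes a Create React App-style build; REACT_APP_API_URL and /health are hypothetical names:

    // api.js -- a sketch; REACT_APP_API_URL and /health are hypothetical names.
    // Create React App inlines REACT_APP_* variables at build time, so set this
    // to the load balancer's public DNS name when building the client image.
    const API_BASE = process.env.REACT_APP_API_URL || "http://localhost:5000";

    export async function fetchHealth() {
      const res = await fetch(`${API_BASE}/health`);
      if (!res.ok) throw new Error(`API request failed: ${res.status}`);
      return res.json();
    }

    Because the variable is baked in at build time, you would rebuild the client image (or pass a build argument) whenever the backend address changes.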

  2. This is a pretty common use case, and there are a couple of ways to approach it.

    Before the solution, I would strongly advise you to read about the idea behind Docker images and, in this case, the goals of a task definition. It is somewhat unusual to run both the frontend app and the backend service from the same Dockerfile. As you mentioned, you have separate ones, so that's good, but the benefits of isolating them disappear once you put them in the same task definition. Also, make sure to read about the different network modes and what awsvpc is, and, if needed, look up Auto Scaling Groups and capacity providers.

    Ideally, you would have two task definitions → two services (see the sketch after this list). Why?

    1. Better isolation and a clear understanding of which task does what.
    2. Better scaling options: if your backend receives a lot of requests, you can scale it without affecting the frontend.
    3. If the frontend fails for some reason, the task fails, which means the service fails, and now the backend is down as well.
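
    For instance, here is a minimal sketch of the backend split into its own task definition, based on your posted one (the cpu/memory values are illustrative placeholders, not recommendations):

    {
        "family": "ABC-server",
        "containerDefinitions": [
            {
                "name": "ABC-server",
                "image": "###",
                "essential": true,
                "portMappings": [
                    {
                        "containerPort": 5000,
                        "protocol": "tcp",
                        "appProtocol": "http"
                    }
                ]
            }
        ],
        "executionRoleArn": "arn:aws:iam::aws-account-id:role/ecsTaskExecutionRole",
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": "512",
        "memory": "1024"
    }

    Note that once the backend is the only container in the task, it should be "essential": true (your posted definition has it as false), so that ECS stops the task when the Flask process dies and the service launches a replacement.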

    There are many more reasons, but in your case I would do the following:

    1. Since you use React, you can probably serve the static content via S3 + CloudFront. This is great because you don't occupy space on your instances/Fargate tasks with the frontend; instead you rely on cheap S3 storage and get CloudFront as a CDN on top. More info about that here. A deploy sketch follows this list.
    2. ECS is a simple container/service orchestrator. It may seem odd to use it only to host the backend service; you might wonder why not just host it on a single EC2 instance. The simple answer is that you can simulate different environments inside your ECS cluster by having two backend services, one prod and one dev; you get great integration with CI/CD (especially GitHub Actions with the AWS actions), abstracted task definitions that make up your backend, integrated CPU/memory metrics, logs, the option to connect to an ALB, and so on.
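
    For example, a minimal deploy sketch, assuming Create React App's default build/ output directory; the bucket name and distribution ID are placeholders:

    # Build the React app, upload the static files to S3,
    # and invalidate the CloudFront cache so users get the new build.
    npm run build
    aws s3 sync build/ s3://your-bucket-name --delete
    aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"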

    TL;DR: Host React on S3 + CloudFront and leave the backend on ECS. If the backend needs to be accessible from outside your VPC, you need an ALB or similar (a public subnet, or a private subnet + NAT, etc.). I don't know your use case, but you probably don't need to be able to ping it from the public internet just to debug it, if that's what you are doing.
