
I want to deploy my app with ECS, using one task definition.
When I run it with Fargate it works, but with EC2 I get this error:

Server is running on port 8080
node:internal/process/promises:289
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "#<Object>".] {
  code: 'ERR_UNHANDLED_REJECTION'
}

Node.js v20.10.0

I then tried to SSH into the EC2 instance and run the image with a docker command, and it worked there too.
I'm using the x86 architecture and a t2.medium instance.

2 Answers


  1. As commented, make sure all promises in your Node.js application have a proper .catch() handler or are awaited inside a try...catch block in an async function. That is good practice regardless of the current issue, as illustrated in "Using .then(), .catch(), .finally() to Handle Errors in JavaScript Promises" by Lucy Mitchell.
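    A minimal sketch of both patterns (the endpoint URL and the fetchData name are placeholders for your own async calls):

    // Handling rejection with .catch() on the promise chain
    fetch('https://example.com/api')
      .then((res) => res.json())
      .then((data) => console.log(data))
      .catch((err) => console.error('Request failed:', err));

    // Handling rejection with try...catch in an async function
    async function fetchData() {
      try {
        const res = await fetch('https://example.com/api');
        const data = await res.json();
        console.log(data);
      } catch (err) {
        console.error('Request failed:', err);
      }
    }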

    But for your case, compare your environments between EC2 and Fargate (environment variables, configuration, Node.js version, memory/CPU, …). Sometimes subtle differences in environment setup can lead to issues like this.

    Also, add more logging to your application to capture detailed information about the environment and execution flow. Again, compare those logs after execution on EC2 and on Fargate.

    process.on('unhandledRejection', (reason, promise) => {
      console.error('Unhandled Rejection at:', promise, 'reason:', reason);
      // Application specific logging, throwing an error, or other logic here
    });
    

    For instance, this question suggests the code was working with Node 14, not with Node 18.

    And since Node.js 15 changed the default handling of unhandled rejections (the process now exits with an error instead of just logging a warning), different Node versions would explain why it works in one environment and not in another.
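    If you want to confirm that this default change is what terminates the container, Node's --unhandled-rejections flag can temporarily restore the warning-only behavior (a debugging aid, not a fix; server.js is a placeholder for your entry point):

    node --unhandled-rejections=warn server.js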

  2. This is not a Node.js issue or an unhandled promise error, but rather an issue with the Task's networking mode on AWS.

    I was experiencing the same behavior: containers spawned by a Task would exit with status code 1. However, when spawning containers manually, everything would work fine.

    This is due to how awsvpc, the network mode selected by default when you create a Task in a cluster, works:

    When hosting tasks that use the awsvpc network mode on Amazon EC2 Linux instances, your task ENIs aren’t given public IP addresses. To access the internet, tasks must be launched in a private subnet that’s configured to use a NAT gateway. For more information, see NAT gateways in the Amazon VPC User Guide. Inbound network access must be from within a VPC that uses the private IP address or routed through a load balancer from within the VPC. Tasks that are launched within public subnets do not have access to the internet.

    By default, AWS assigns awsvpc as the network mode. So if you are using the EC2 launch type and have left the networking mode as awsvpc, the container spawned from the Task needs a NAT gateway in your VPC in order to reach the internet.

    Source: Amazon ECS Documentation – awsvpc network mode
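    As a rough sketch with the AWS CLI (all resource IDs below are placeholders for your own public subnet, Elastic IP allocation, and private route table):

    # Create a NAT gateway in a public subnet, backed by an existing Elastic IP
    aws ec2 create-nat-gateway --subnet-id subnet-0aaa --allocation-id eipalloc-0bbb

    # Route the private subnet's outbound traffic through the NAT gateway
    aws ec2 create-route --route-table-id rtb-0ccc --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0ddd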

    Alternatively, you can use bridge mode:

    The task uses Docker’s built-in virtual network on Linux, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Linux uses the bridge Docker network driver. This is the default network mode on Linux if a network mode isn’t specified in the task definition.

    Source: Amazon ECS Documentation – bridge network mode
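    Switching only requires changing the networkMode field in the task definition; a minimal fragment (the family, container name, and image are placeholders):

    {
      "family": "my-app",
      "networkMode": "bridge",
      "containerDefinitions": [
        {
          "name": "app",
          "image": "my-app:latest",
          "memory": 512,
          "portMappings": [
            { "containerPort": 8080, "hostPort": 8080 }
          ]
        }
      ]
    }

    With bridge mode, the container shares the instance's own network path through Docker's bridge driver, which matches the behavior you saw when running the image manually on the instance.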
