Recently I stumbled upon an interesting bug. In essence, the problem boils down to this example:

const waitResolve = (ms) => new Promise((resolve) => {
  setTimeout(() => {
    console.log(`Waited to resolve for ${ms}ms`);
    resolve(ms);
  }, ms);
});

const waitReject = (ms) => new Promise((resolve, reject) => {
  setTimeout(() => {
    console.log(`Waited to reject for ${ms}ms`);
    reject(ms);
  }, ms);
});

const run = async() => {
  const promises = {
    a: [],
    b: [],
    c: [],
  };

  for (let i = 0; i < 5; i += 1) {
    promises.a.push(waitResolve(1e4));
    promises.b.push(waitReject(1e3));
    promises.c.push(waitResolve(1e2));
  }

  try {
    for (const [key, value] of Object.entries(promises)) {
      console.log(`Starting ${key}`);

      try {
        await Promise.all(value);
      } catch (err) {
        console.log(`Caught error in ${key}!`, err);
      }

      console.log(`Finished ${key}`);
    }
  } catch (err) {
    console.log('Caught error in run!', err);
  }
};

run();

Here, the common understanding is that the promises reside in a pending state during and after the for loop and only "really" start to execute after the Promise.all call. That implies the try/catch blocks should catch the rejection produced by waitReject(1e3), but this does not happen (tested in Node.js v18.14.2 and a couple of earlier versions).

If the sequence of promise array pushes is changed to:

promises.a.push(waitResolve(1e2));
promises.b.push(waitReject(1e3));
promises.c.push(waitResolve(1e4));

The rejection would be caught. I vaguely understand that this is related to the order in which micro and macro tasks are resolved inside the event loop, and to the fact that promises that get a chance to settle between ticks will do so.

However, I would really like to hear a proper explanation from someone who has more understanding than I do.

2 Answers


  1. How a host deals with an unhandled promise rejection is implementation-dependent. The ECMAScript specification says this about it:

    HostPromiseRejectionTracker is called in two scenarios:

    1. When a promise is rejected without any handlers, it is called with its operation argument set to "reject".
    2. When a handler is added to a rejected promise for the first time, it is called with its operation argument set to "handle".

    A typical implementation of HostPromiseRejectionTracker might try to notify developers of unhandled rejections, while also being careful to notify them if such previous notifications are later invalidated by new handlers being attached.

    I note that in Node.js this handler stops the script and does not allow the program to continue, so the second iteration of the loop never occurs. In Chrome/Edge, however, the console first shows an uncaught promise rejection error, but the asynchronous code can continue, and as soon as the promise rejections get handled, these error messages are removed from the console. These are clearly two different approaches to something the ECMAScript specification has no say in: it is up to the host to decide what to do.
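    In Node.js you can observe or override that default yourself. A minimal sketch, assuming Node's default --unhandled-rejections=throw mode, where registering listeners merely reports the events instead of crashing the process:

    process.on('unhandledRejection', (reason) => {
      // Registering this listener replaces Node's default behaviour of
      // crashing the process on an unhandled rejection.
      console.log('Unhandled rejection observed:', reason);
    });

    process.on('rejectionHandled', () => {
      // Fired when a rejection that was reported as unhandled gets a handler later.
      console.log('A late handler was attached after all');
    });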

    As to your analysis:

    Here, despite common understanding that the promises […] would "really" start to execute only after Promise.all call.

    It is not the promises that start executing. Promises are just objects, not functions. However, the setTimeout timers have started at the moment the promises were created, i.e. before the second loop. The effect of Promise.all is not that some promise gets "executed", but that a new promise is created which is guaranteed to resolve when all the given promises resolve, or will reject as soon as one of those rejects. So it has placed handlers on those promises, and the try ... catch block acts as a rejection handler for the new "all" promise.
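    To make that concrete, here is a tiny sketch (not from the question, just an illustration) showing that the executor runs synchronously at construction, so the timer is already ticking long before any await or Promise.all:

    const p = new Promise((resolve) => {
      console.log('executor runs now');           // logged immediately, synchronously
      setTimeout(() => resolve('done'), 1000);    // the timer starts here, at construction
    });
    console.log('constructed, still pending');    // logged right after, while p is pending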

    What really opens the door to state changes is the await. This will make the async function return. When the call stack is empty, the task queues will be monitored. When a setTimeout expires, a task will appear there, and that task will run your code to resolve/reject a promise.

    If this is a rejection, and that promise has not yet received a rejection handler, it is expected that an uncaught promise rejection error will be triggered. Since the await in the first iteration of the loop only placed handlers on the "a" promises, the rejection of a "b" promise will lead to an uncaught promise rejection error. This is expected. Only when all "a" promises have resolved and the loop can make its second iteration will there be a rejection handler set up for the "b" promises. Node.js does not allow this to happen (at least not in the version I ran, v20), as it already interrupted the program when a "b" promise rejected.

    You should not delay attaching rejection handlers to promises. The best moment to do that is when the promises are created, and certainly not after an await on another, unrelated promise.
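    One way to follow that advice while keeping the original structure is to attach a no-op rejection handler the moment each promise is created, so the rejection is never left unhandled, while the original promise is still handed to Promise.all later. A sketch of that idea (markHandled is a hypothetical helper, not an existing API):

    // Replacing the pushes inside run()'s first for loop:
    const markHandled = (promise) => {
      // Attach a no-op catch so the rejection counts as handled the moment it
      // happens, but return the original promise so Promise.all / try...catch
      // still see the rejection later.
      promise.catch(() => {});
      return promise;
    };

    promises.a.push(markHandled(waitResolve(1e4)));
    promises.b.push(markHandled(waitReject(1e3)));
    promises.c.push(markHandled(waitResolve(1e2)));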

    Alternative implementation

    If you intended to do what the console.log statements suggest, i.e. create the promises by group, one group after the other, then don't create groups of promises, but groups of tasks. By tasks I mean functions that, when executed, will create the promises. Now it makes sense to tackle the tasks by group: first execute them, await their promises, then continue and execute the next group of tasks, etc.

    Here is how that would look:

    const waitResolve = (ms) => new Promise((resolve) => {
      setTimeout(() => {
        console.log(`Waited to resolve for ${ms}ms`);
        resolve(ms);
      }, ms);
    });
    
    const waitReject = (ms) => new Promise((resolve, reject) => {
      setTimeout(() => {
        console.log(`Waited to reject for ${ms}ms`);
        reject(ms);
      }, ms);
    });
    
    const run = async() => {
      const tasks = {
        a: [],
        b: [],
        c: [],
      };
    
      for (let i = 0; i < 5; i += 1) {
        tasks.a.push(() => waitResolve(1e4)); // Note the callback!
        tasks.b.push(() => waitReject(1e3));
        tasks.c.push(() => waitResolve(1e2));
      }
    
      try {
        for (const [key, value] of Object.entries(tasks)) {
          console.log(`Starting ${key}`);
    
          try {
            // Here the tasks of this group are executed, giving promises
            await Promise.all(value.map(task => task()));
          } catch (err) {
            console.log(`Caught error in ${key}!`, err);
          }
    
          console.log(`Finished ${key}`);
        }
      } catch (err) {
        console.log('Caught error in run!', err);
      }
    };
    
    run();
  2. This is just troublesome code given the current implementations of uncaught rejections.

    You're allowing a promise to reject when there is no reject handler attached at the time it rejects and control gets back to the microtask queue. That will trigger an uncaught rejection: by the time the system is back at the queue and sees the rejection, there is no reject handler in place, because you haven't yet called Promise.all() for that group; the previous await Promise.all() is still awaiting, meaning the for loop is suspended.

    This is just how detection of uncaught rejections works.

    The system doesn't wait forever to see whether you will attach a reject handler at some future point (which this code eventually does). Instead, it sees that it's back at the microtask queue, a promise has rejected, and there is currently no reject handler for it. It isn't smart enough to know that you will eventually attach one.

    If b rejects while you're awaiting a, there is no reject handler for b when the system gets back to the microtask queue and sees the rejection.
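    A stripped-down sketch of that situation, reusing waitReject/waitResolve from the question and assuming Node's default behaviour of aborting on an unhandled rejection:

    const demo = async () => {
      const p = waitReject(100);   // timer starts now; no handler attached yet
      await waitResolve(1000);     // suspends demo, control returns to the event loop
      // p rejected ~900ms ago while nothing was listening, so the unhandled
      // rejection was already reported (and, by default, Node aborted).
      await Promise.all([p]).catch((err) => console.log('handled too late:', err));
    };
    demo();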

    The general design guideline is not to structure your code so that control can get back to the microtask queue before you've attached your reject handler. Best is to attach the reject handler immediately, as soon as you get the promise. And certainly never start the asynchronous operation and then await some other, unrelated promise before attaching the handler, because that allows the system to go back to the microtask queue first and opens up this problem.

    It's unclear what real solution to offer for this particular code, because the whole thing seems a bit contrived: you're calling Promise.all() on one group of promises at a time, but expecting parallel execution of all the operations. The proper implementation for parallel execution would be a single Promise.all() that awaits all the promises, not just some of them. Then you'd have a reject handler in place for all of them at the same time, before the code had any chance to get back to the queue.
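    A sketch of that, reusing the promises object from the question: every promise is built first, then a single Promise.all attaches handlers to all of them in the same turn, before anything can reject:

    // Inside run(), after the first for loop has filled promises.a/b/c:
    try {
      const results = await Promise.all(Object.values(promises).flat());
      console.log('All settled:', results);
    } catch (err) {
      // Fires as soon as the first promise in any group rejects; every other
      // promise already has a handler, so nothing is reported as unhandled.
      console.log('Caught first rejection:', err);
    }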
