There are many ways I can think of asking this question. Let's say we have a single-core CPU on which the OS is running two processes. Process 1 is a Node application; what process 2 is doesn't matter. Given that there's only a single core, the two processes have to share it, which means that for roughly half the time each process won't be executing.
Now, if we call `setTimeout(callback, 1000)` in process 1, will the callback execute after exactly 1000 (real-world) ms, or will there be some unaccounted-for delay for the time process 1 has not been executing?
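For concreteness, here is a minimal sketch of how the actual delay could be measured (process.hrtime.bigint() is only used here as a high-resolution timestamp around the timer):

```js
// Measure how long a 1000 ms timer actually takes to fire.
const requested = 1000;
const start = process.hrtime.bigint();

setTimeout(() => {
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`requested ${requested} ms, got ${elapsedMs.toFixed(3)} ms`);
}, requested);
```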
If there's no extra delay, how can the event loop know precisely when to invoke the callback? Does this mean the event loop schedules an event using the CPU clock directly, rather than a process-level abstraction such as the number of loop iterations? And even then, how can the event loop ensure it has CPU access by the time the callback needs to run; are events scheduled on a CPU-level queue that can prioritise certain processes at certain times? Or would the callback simply be executed as soon as process 1 regains access, even if late?

If there's an extra delay, however, what impact can this have on time-sensitive applications that require precise synchronisation, and how could we ensure a Node process has continuous access to a CPU core?
2 Answers
Every form of sleep ultimately reaches the OS in one way or another, because only the OS can put a process/thread to sleep and provide time. So the sleep time is never exact. In fact, what does "exact" even mean? It can only be exact up to some precision; that's physics. But there's more: most OSes will only guarantee that the sleep time is at least what you requested; it can be more. And in fact, depending on the CPU load and the OS scheduler's priorities, it can be arbitrarily long. A 1 s sleep request can indeed take 10 s. This is one of the reasons why it actually does matter what other processes are doing on your machine.
But most of the time, under average conditions, the sleep time will be around what is requested.
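On top of the OS scheduler, Node itself adds lateness: the callback can only run once the event loop gets back around to the timers, so anything keeping the JavaScript thread busy delays it further. A minimal sketch of that effect (no other processes needed):

```js
// The 100 ms timer cannot fire while the JS thread is busy,
// so the callback runs after roughly 1000 ms instead of 100 ms.
const start = Date.now();

setTimeout(() => {
  console.log(`fired after ${Date.now() - start} ms (requested 100 ms)`);
}, 100);

// Block the event loop for about one second with synchronous work.
while (Date.now() - start < 1000) { /* spin */ }
```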
That depends on how sensitive. Below a millisecond? It will have a big impact. On the other hand, you wouldn't be using JavaScript if that were the case.
Also, you cannot ensure that your process takes ownership of some CPU core; that is for the OS to decide, unless you have full control over the machine. But even then it is a mistake to design an app like that.
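What you can do from Node is hint the scheduler with a process priority; that is not core ownership, and pinning to a core would have to be done with OS tools (e.g. taskset on Linux), outside Node. A sketch using os.setPriority (whether raising priority needs elevated privileges depends on your platform):

```js
// Ask the OS to schedule this process with higher priority.
// This is a hint, not a guarantee of a dedicated core, and raising
// priority above normal may require elevated privileges on Linux.
const os = require('os');

try {
  os.setPriority(process.pid, os.constants.priority.PRIORITY_HIGH);
  console.log('process priority raised to HIGH');
} catch (err) {
  console.error('could not change priority:', err.message);
}
```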
As a side note: it would be better not to depend on sleep for synchronization. Most of the time it is a mistake. Only very specialized and niche applications actually need to measure ticks precisely.
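If the goal is to stay roughly on schedule rather than to sleep exactly, the usual trick is to re-read the clock on every tick and compensate for drift instead of trusting each individual timeout; a minimal sketch of that idea:

```js
// Self-correcting ticker: each timeout is recomputed from the wall clock,
// so one late callback does not push every later tick off schedule.
const intervalMs = 1000;
let expected = Date.now() + intervalMs;

function tick() {
  const drift = Date.now() - expected; // how late this tick fired
  console.log(`tick (drift ${drift} ms)`);
  expected += intervalMs;
  // Schedule relative to the ideal time, not "now + interval".
  setTimeout(tick, Math.max(0, intervalMs - drift));
}

setTimeout(tick, intervalMs);
```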
Most desktop/server/mobile operating systems are designed around the assumption that processes belonging to multiple, different applications compete with each other for CPU cycles and for other system resources. The design goals of such operating systems emphasize fairness in allocating system resources and the efficient use of those resources.
As an example of an "efficient use" strategy, consider that coarse-grained scheduling always wastes fewer CPU cycles than fine-grained scheduling (all other things being equal), because the scheduler itself uses fewer of the available CPU cycles.
Those often are known as real-time applications. Real-time applications often are run as the only application on the host, and they often are run under a real-time OS (a.k.a., "RTOS") that is designed around the assumption that the application threads and processes all cooperate with each other instead of competing. The highest priority design goals for an RTOS typically are to minimize latencies and improve responsiveness. Security (at least, in the sense of protection of one process from the actions of another) and overall efficiency may be of much less concern in an RTOS.