event-loop

This commit is contained in:
Ilya Kantor 2019-06-29 00:32:35 +03:00
parent d1190aae21
commit f018012168
12 changed files with 378 additions and 305 deletions


@ -236,7 +236,7 @@ For `setInterval` the function stays in memory until `clearInterval` is called.
There's a side-effect. A function references the outer lexical environment, so, while it lives, outer variables live too. They may take much more memory than the function itself. So when we don't need the scheduled function anymore, it's better to cancel it, even if it's very small.
````
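The memory effect described above can be sketched like this (the `schedule` function and `bigData` are illustrative names, not part of the API):

```js
// While the interval is active, the closure keeps `bigData` reachable.
function schedule() {
  const bigData = new Array(1e6).fill('*'); // takes noticeable memory

  return setInterval(() => {
    // the callback references bigData, so it can't be garbage-collected
    void bigData.length;
  }, 1000);
}

const timerId = schedule();

// ...later, when the work is no longer needed:
clearInterval(timerId); // the closure (and bigData with it) becomes collectible
```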
## Zero delay setTimeout
There's a special use case: `setTimeout(func, 0)`, or just `setTimeout(func)`.
@ -254,114 +254,12 @@ alert("Hello");
The first line "puts the call into calendar after 0ms". But the scheduler will only "check the calendar" after the current code is complete, so `"Hello"` is first, and `"World"` -- after it.
There are also advanced browser-related use cases of zero-delay timeout, that we'll discuss in the chapter <info:event-loop>.
````smart header="Zero delay is in fact not zero (in a browser)"
In the browser, there's a limitation of how often nested timers can run. The [HTML5 standard](https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#timers) says: "after five nested timers, the interval is forced to be at least 4 milliseconds.".
Let's demonstrate what it means with the example below. The `setTimeout` call in it re-schedules itself with zero delay. Each call remembers the real time from the previous one in the `times` array. What do the real delays look like? Let's see:
```js run
let start = Date.now();
let times = [];

setTimeout(function run() {
  times.push(Date.now() - start); // remember delay from the previous call

  if (start + 100 < Date.now()) alert(times); // show the delays after 100ms
  else setTimeout(run); // else re-schedule
});

// an example of the output:
// 1,1,1,1,9,15,20,24,30,35,40,45,50,55,59,64,70,75,80,85,90,95,100
```
First timers run immediately (just as written in the spec), and then we see `9, 15, 20, 24...`. The 4+ ms obligatory delay between invocations comes into play.
A similar thing happens if we use `setInterval` instead of `setTimeout`: `setInterval(f)` runs `f` a few times with zero delay, and afterwards with a 4+ ms delay.
That limitation comes from ancient times and many scripts rely on it, so it exists for historical reasons.
For server-side JavaScript, that limitation does not exist, and there exist other ways to schedule an immediate asynchronous job, like [setImmediate](https://nodejs.org/api/timers.html) for Node.js. So this note is browser-specific.
````
## Summary
- Methods `setInterval(func, delay, ...args)` and `setTimeout(func, delay, ...args)` allow running `func` regularly / once after `delay` milliseconds.
- To cancel the execution, we should call `clearInterval/clearTimeout` with the value returned by `setInterval/setTimeout`.
- Nested `setTimeout` calls are a more flexible alternative to `setInterval`. Also, they can guarantee a minimal time *between* the executions.
- Zero delay scheduling with `setTimeout(func, 0)` (the same as `setTimeout(func)`) is used to schedule the call "as soon as possible, but after the current code is complete".
- The browser ensures that for five or more nested calls of `setTimeout`, or for zero-delay `setInterval`, the real delay between calls is at least 4ms. That's for historical reasons.
Please note that all scheduling methods do not *guarantee* the exact delay. We should not rely on that in the scheduled code.
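The cancellation mentioned in the summary, as a minimal sketch:

```js run
let fired = false;

const timerId = setTimeout(() => {
  fired = true; // never reached: the timer is cancelled below
}, 20);

clearTimeout(timerId); // cancels the scheduled call before it runs
```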
@ -459,4 +300,4 @@ For example, the in-browser timer may slow down for a lot of reasons:
- The browser tab is in the background mode.
- The laptop is on battery.
All that may increase the minimal timer resolution (the minimal delay) to 300ms or even 1000ms depending on the browser and OS-level performance settings.


@ -1,5 +1,5 @@
# Microtasks
Promise handlers `.then`/`.catch`/`.finally` are always asynchronous.
@ -52,99 +52,15 @@ Promise.resolve()
Now the order is as intended.
It may happen that while handling a macrotask, new promises are created.
Or, vice-versa, a microtask schedules a macrotask (e.g. `setTimeout`).
For instance, here `.then` schedules a `setTimeout`:
```js run
Promise.resolve()
.then(() => {
setTimeout(() => alert("timeout"), 0);
})
.then(() => {
alert("promise");
});
```
Naturally, `promise` shows up first, because the `setTimeout` macrotask awaits in the lower-priority macrotask queue.
As a logical consequence, macrotasks are handled only when promises give the engine "free time". So if we have a chain of promise handlers that don't wait for anything and execute right one after another, then a `setTimeout` (or a user action handler) can never run in-between them.
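To see that effect directly, here's a small sketch (using `console.log` instead of `alert`, so it also runs server-side):

```js run
const order = [];

setTimeout(() => order.push("timeout")); // macrotask: waits for free time

Promise.resolve()
  .then(() => order.push("promise 1"))
  .then(() => order.push("promise 2")) // the whole chain runs first
  .then(() => console.log(order.join(", "))); // promise 1, promise 2

// "timeout" is appended only after the microtask queue is fully drained
```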
## Unhandled rejection
Remember "unhandled rejection" event from the chapter <info:promise-error-handling>?
Now we can see exactly how JavaScript finds out that there was an unhandled rejection.
**"Unhandled rejection" occurs when a promise error is not handled at the end of the microtask queue.**
Normally, if we expect an error, we add `.catch` to the promise chain to handle it:
```js run
let promise = Promise.reject(new Error("Promise Failed!"));
promise.catch(err => alert('caught'));

// doesn't trigger: the error is handled
window.addEventListener('unhandledrejection', event => alert(event.reason));
```
...But if we forget to add `.catch`, then, after the microtask queue is empty, the engine triggers the event:
```js run
let promise = Promise.reject(new Error("Promise Failed!"));
// Promise Failed!
window.addEventListener('unhandledrejection', event => alert(event.reason));
```
What if we handle the error later? Like this:
```js run
let promise = Promise.reject(new Error("Promise Failed!"));
*!*
setTimeout(() => promise.catch(err => alert('caught')), 1000);
*/!*
// Error: Promise Failed!
window.addEventListener('unhandledrejection', event => alert(event.reason));
```
Now, if you run it, we'll see `Promise Failed!` message first, and then `caught`.
If we didn't know about microtasks, we could wonder: "Why did `unhandledrejection` happen? We did catch the error!".
But now we do know that `unhandledrejection` is generated when the microtask queue is complete: the engine examines promises and, if any of them is in "rejected" state, then the event triggers.
...By the way, the `.catch` added by `setTimeout` also triggers, of course it does, but later, after `unhandledrejection` has already occurred.
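Outside the browser there's no `window`, but Node.js exposes an analogous `process.on('unhandledRejection')` event, so the same timing can be sketched like this (a sketch, with `console.log` instead of `alert`):

```js
const log = [];

process.on('unhandledRejection', (reason) => {
  log.push('unhandled: ' + reason.message); // fires once the microtask queue is empty
});

const promise = Promise.reject(new Error('Promise Failed!'));

// the .catch comes too late: the event has already been generated
setTimeout(() => promise.catch(err => log.push('caught: ' + err.message)), 100);

setTimeout(() => console.log(log), 200);
// [ 'unhandled: Promise Failed!', 'caught: Promise Failed!' ]
```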
## Summary
Promise handling is always asynchronous, as all promise actions pass through the internal "promise jobs" queue, also called "microtask queue" (v8 term).
So, `.then/catch/finally` handlers are always called after the current code is finished.
If we need to guarantee that a piece of code is executed after `.then/catch/finally`, we can add it into a chained `.then` call.
In most JavaScript engines, including browsers and Node.js, the concept of microtasks is closely tied to the "event loop" and "macrotasks". As these have no direct relation to promises, they are covered in another part of the tutorial, in the chapter <info:event-loop>.


@ -290,34 +290,6 @@ In case of an error, it propagates as usual: from the failed promise to `Promise
````
## Microtask queue [#microtask-queue]
As we've seen in the chapter <info:microtask-queue>, promise handlers are executed asynchronously. Every `.then/catch/finally` handler first gets into the "microtask queue" and is executed after the current code is complete.
`Async/await` is based on promises, so it uses the same microtask queue internally, and has the same priority over macrotasks.
For instance, we have:
- `setTimeout(handler, 0)`, that should run `handler` with zero delay.
- `let x = await f()`, function `f()` is async, but returns immediately.
Which one runs first if `await` is *below* `setTimeout` in the code?
```js run
async function f() {
return 1;
}
(async () => {
setTimeout(() => alert('timeout'), 0);
await f();
alert('await');
})();
```
There's no ambiguity here: `await` always finishes first, because (as a microtask) it has a higher priority than `setTimeout` handling.
## Summary
The `async` keyword before a function has two effects:


@ -233,7 +233,7 @@ For instance, here the nested `menu-open` event is processed synchronously, duri
alert(2);
};
document.addEventListener('menu-open', () => alert('nested'));
</script>
```
@ -259,7 +259,7 @@ If we don't like it, we can either put the `dispatchEvent` (or other event-trigg
alert(2);
};
document.addEventListener('menu-open', () => alert('nested'));
</script>
```


@ -0,0 +1,339 @@
# Event loop: microtasks and macrotasks
The browser JavaScript execution flow, as well as that of Node.js, is based on an *event loop*.
Understanding how the event loop works is important for optimizations, and sometimes for the right architecture.
In this chapter we first cover theoretical details about how things work, and then see practical applications of that knowledge.
## Event Loop
The concept of the *event loop* is very simple. There's an endless loop in which the JavaScript engine waits for tasks, executes them and then sleeps, waiting for more tasks.
1. While there are tasks:
- execute the oldest task.
2. Sleep until a task appears, then go to 1.
That's a formalized algorithm for what we see when browsing a page. The JavaScript engine does nothing most of the time; it only runs when a script/handler/event activates.
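The loop above can be modeled with a toy queue. This is illustrative only: the real queue lives inside the engine, which also wakes up when external events arrive.

```js run
const macrotasks = []; // the "calendar" of pending tasks

function enqueue(task) {
  macrotasks.push(task);
}

function eventLoopTick() {
  while (macrotasks.length) {
    const task = macrotasks.shift(); // 1. take the oldest task...
    task();                          // ...and execute it to completion
  }
  // 2. the real engine would now sleep until a new task appears
}

const seen = [];
enqueue(() => seen.push("first"));
enqueue(() => seen.push("second"));
eventLoopTick();

console.log(seen); // [ 'first', 'second' ]
```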
A task can be JS-code triggered by events, but can also be something else, e.g.:
- When an external script `<script src="...">` loads, the task is to execute it.
- When a user moves their mouse, the task is to dispatch `mousemove` event and execute handlers.
- When the time is due for a scheduled `setTimeout`, the task is to run its callback.
- ...and so on.
Tasks are set -- the engine handles them -- then waits for more tasks (while sleeping and consuming close to zero CPU).
It may happen that a task comes while the engine is busy; then it's enqueued.
The tasks form a queue, the so-called "macrotask queue" (v8 term):
![](eventLoop.png)
For instance, while the engine is busy executing a `script`, a user may move their mouse causing `mousemove`, and `setTimeout` may be due and so on, these tasks form a queue, as illustrated on the picture above.
Tasks from the queue are processed on a "first come, first served" basis. When the engine finishes with the `script`, it handles the `mousemove` event, then the `setTimeout` handler, and so on.
So far, quite simple, right?
Two more details:
1. Rendering never happens while the engine executes a task.
Doesn't matter if the task takes a long time. Changes to DOM are painted only after the task is complete.
2. If a task takes too long, the browser can't do other tasks or process user events, so after a while it suggests "killing" it.
Usually, the whole page dies with the task.
Now let's see how we can apply that knowledge.
## Use-case: splitting CPU-hungry tasks
Let's say we have a CPU-hungry task.
For example, syntax-highlighting (used to colorize code examples on this page) is quite CPU-heavy. To highlight the code, it performs the analysis, creates many colored elements, adds them to the document -- for a big text that takes a lot of time.
While the engine is busy with syntax highlighting, it can't do other DOM-related stuff, process user events, etc. It may even cause the browser to "hang", which is unacceptable.
So we can split the long text into pieces. Highlight the first 100 lines, then schedule another 100 lines using zero-delay `setTimeout`, and so on.
To demonstrate the approach, for the sake of simplicity, instead of syntax-highlighting let's take a function that counts from `1` to `1000000000`.
If you run the code below, the engine will "hang" for some time. For server-side JS that's clearly noticeable, and if you are running it in-browser, then try to click other buttons on the page -- you'll see that no other events get handled until the counting finishes.
```js run
let i = 0;
let start = Date.now();
function count() {
// do a heavy job
for (let j = 0; j < 1e9; j++) {
i++;
}
alert("Done in " + (Date.now() - start) + 'ms');
}
count();
```
The browser may even show "the script takes too long" warning (but hopefully it won't, because the number is not very big).
Let's split the job using nested `setTimeout`:
```js run
let i = 0;
let start = Date.now();
function count() {
// do a piece of the heavy job (*)
do {
i++;
} while (i % 1e6 != 0);
if (i == 1e9) {
alert("Done in " + (Date.now() - start) + 'ms');
} else {
setTimeout(count); // schedule the new call (**)
}
}
count();
```
Now the browser interface is fully functional during the "counting" process.
A single run of `count` does a part of the job `(*)`, and then re-schedules itself `(**)` if needed:
1. First run counts: `i=1...1000000`.
2. Second run counts: `i=1000001..2000000`.
3. ...and so on.
Pauses between `count` executions provide just enough "air" for the JavaScript engine to do something else, to react to other user actions.
The notable thing is that both variants -- with and without splitting the job by `setTimeout` -- are comparable in speed. There's not much difference in the overall counting time.
To make them closer, let's make an improvement.
We'll move the scheduling to the beginning of `count()`:
```js run
let i = 0;
let start = Date.now();
function count() {
// move the scheduling to the beginning
if (i < 1e9 - 1e6) {
setTimeout(count); // schedule the new call
}
do {
i++;
} while (i % 1e6 != 0);
if (i == 1e9) {
alert("Done in " + (Date.now() - start) + 'ms');
}
}
count();
```
Now when we start to `count()` and see that we'll need to `count()` more, we schedule that immediately, before doing the job.
If you run it, it's easy to notice that it takes significantly less time.
Why?
That's simple: remember, there's the in-browser minimal delay of 4ms after five nested `setTimeout` calls. Even if we set `0`, it's `4ms` (or a bit more). So the earlier we schedule it, the faster it runs.
## Use case: progress bar
Another benefit of splitting heavy tasks for browser scripts is that we can show a progress bar.
Usually the browser renders after the currently running code is complete. Doesn't matter if the task takes a long time. Changes to DOM are painted only after the task is finished.
On the one hand, that's great, because our function may create many elements, add them one-by-one to the document and change their styles -- the visitor won't see any "intermediate", unfinished state. An important thing, right?
Here's the demo, the changes to `i` won't show up until the function finishes, so we'll see only the last value:
```html run
<div id="progress"></div>
<script>
function count() {
  for (let i = 0; i < 1e6; i++) {
    // put the current i into the <div>
    progress.innerHTML = i;
  }
}
count();
</script>
```
...But we also may want to show something during the task, e.g. a progress bar.
If we use `setTimeout` to split the heavy task into pieces, then changes are painted in-between them.
This looks better:
```html run
<div id="progress"></div>
<script>
let i = 0;
function count() {
// do a piece of the heavy job (*)
do {
i++;
progress.innerHTML = i;
} while (i % 1e3 != 0);
if (i < 1e7) {
setTimeout(count);
}
}
count();
</script>
```
Now the `<div>` shows increasing values of `i`, a kind of a progress bar.
## Use case: doing something after the event
In an event handler we may decide to postpone some actions until the event bubbled up and was handled on all levels. We can do that by wrapping the code in zero delay `setTimeout`.
In the chapter <info:dispatch-events> we saw an example: a custom event `menu-open` is dispatched after the "click" event is fully handled.
```js
menu.onclick = function() {
// ...
// create a custom event with the clicked menu item data
let customEvent = new CustomEvent("menu-open", {
bubbles: true
/* detail: can add more data here, e.g. the clicked item */
});
// dispatch the custom event asynchronously
setTimeout(() => menu.dispatchEvent(customEvent));
};
```
The custom event is totally independent here. It's dispatched asynchronously, after the `click` event bubbled up and was fully handled. That helps to work around potential bugs that may happen when different events are nested in each other.
## Microtasks
Along with *macrotasks*, described in this chapter, there exist *microtasks*, mentioned in the chapter <info:microtask-queue>.
There are two main ways to create a microtask:
1. When a promise is ready, the execution of its `.then/catch/finally` handler becomes a microtask. Microtasks are used "under the hood" of `await` as well, as it's a form of promise handling, similar to `.then`, but syntactically different.
2. There's a special function `queueMicrotask(func)` that queues `func` for execution in the microtask queue.
After every *macrotask*, the engine executes all tasks from the *microtask* queue, prior to running any other macrotasks.
**Microtask queue has a higher priority than the macrotask queue.**
For instance, take a look:
```js run
setTimeout(() => alert("timeout"));
Promise.resolve()
.then(() => alert("promise"));
alert("code");
```
What's the order?
1. `code` shows first, because it's a regular synchronous call.
2. `promise` shows second, because `.then` passes through the microtask queue, and runs after the current code.
3. `timeout` shows last, because it's a macrotask.
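Adding `queueMicrotask` to the mix, both kinds of microtasks land in the same queue and run before any macrotask. A quick check (`console.log` is used, so the snippet also runs in Node.js):

```js run
const order = [];

setTimeout(() => order.push("setTimeout"));          // macrotask
queueMicrotask(() => order.push("queueMicrotask"));  // microtask
Promise.resolve().then(() => order.push("promise")); // microtask too

order.push("sync code"); // regular synchronous code runs first

setTimeout(() => console.log(order));
// [ 'sync code', 'queueMicrotask', 'promise', 'setTimeout' ]
```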
**There may be no UI event between microtasks.**
Most of the browser's processing is macrotasks, including processing network request results, handling UI events and so on.
So if we'd like our code to execute asynchronously, but want the application state to be basically the same (no mouse coordinate changes, no new network data, etc), then we can achieve that by creating a microtask with `queueMicrotask`.
Rendering also waits until the microtask queue is emptied.
Here's an example with a "counting progress bar", similar to the one shown previously, but `queueMicrotask` is used instead of `setTimeout`. You can see that it renders at the very end, just like the regular code:
```html run
<div id="progress"></div>
<script>
let i = 0;
function count() {
// do a piece of the heavy job (*)
do {
i++;
progress.innerHTML = i;
} while (i % 1e3 != 0);
if (i < 1e6) {
*!*
queueMicrotask(count);
*/!*
}
}
count();
</script>
```
So, microtasks are asynchronous from the point of view of code execution, but they don't allow any browser processes or events to squeeze in between them.
## Summary
The richer event loop picture may look like this:
![](eventLoop-full.png)
The more detailed algorithm of the event loop (though still simplified compared to the [specification](https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model)):
1. Dequeue and run the oldest task from the *macrotask* queue (e.g. "script").
2. Execute all *microtasks*:
- While the microtask queue is not empty:
- Dequeue and run the oldest microtask.
3. Render changes if any.
4. Wait until the macrotask queue is not empty (if needed).
5. Go to step 1.
To schedule a new macrotask:
- Use zero-delay `setTimeout(f)`.
That may be used to split a big calculation-heavy task into pieces, so that the browser is able to react to user events and show progress between them.
It's also used in event handlers to schedule an action after the event is fully handled (bubbling is done).
To schedule a new microtask:
- Use `queueMicrotask(f)`.
- Also promise handlers go through the microtask queue.
There's no UI or network event handling between microtasks: they run immediately one after another.
So one may use `queueMicrotask` to execute a function asynchronously, but within the same application state.
