The JavaScript Event Loop, Explained By Working Through It
Walking through async JavaScript to show how the Event Loop decides what runs when.

I failed a JavaScript interview question about the event loop in 2023. The question was simple ("what order do these console.log statements execute in?") and I got it wrong. Not because I didn't know what the event loop was. I'd read about it. Watched conference talks. Could probably give a decent whiteboard explanation of "there's a call stack and a task queue and callbacks get processed." But when it came to actually predicting the behavior of a specific code snippet, my understanding fell apart.
The abstract explanations weren't sticking because they skipped the part that actually matters: the priority system. JavaScript doesn't just have "async stuff that runs later." It has multiple queues with a strict ordering between them, and getting the ordering wrong means getting the behavior wrong.
Here's what I've put together after finally sitting down and working through this properly.
JavaScript Is Single-Threaded (Sort Of)
JavaScript runs your code on one thread. One call stack. One piece of code executing at a time. If a function is running, nothing else runs until it finishes.
But, and this is the part that confuses people, JavaScript applications do many things concurrently. You can fire off an HTTP request, set a timer, listen for user clicks, and read a file, all at the same time. How? Because the async operations aren't happening in JavaScript. They're handled by the runtime environment.
When you call setTimeout, JavaScript doesn't sit there counting milliseconds. It hands the timer off to the browser (or to Node.js's internal C++ layer). The browser manages the timer in its own thread. When the timer expires, the browser puts your callback function into a queue. The event loop's job is to check that queue and move callbacks onto the call stack when the stack is empty.
This delegation is the whole trick. JavaScript itself is single-threaded. The runtime environment around it is not.
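You can see the hand-off in a few lines. This is just a sketch, with a busy-wait loop standing in for any long synchronous work:

setTimeout(() => console.log("timer fired"), 0);

// Busy-wait for ~500ms to simulate long synchronous work hogging the one thread.
const start = Date.now();
while (Date.now() - start < 500) {}

console.log("sync work done");
// "sync work done" prints first. The 0ms timer expired long ago in the runtime,
// but its callback can't run until the call stack is empty.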
The Two Queues That Matter
There are actually multiple queues, but two of them cover 95% of what you need to understand.
The Microtask Queue holds callbacks from resolved Promises, queueMicrotask() calls, and MutationObserver callbacks. This queue is high priority.
The Macrotask Queue (sometimes called the Task Queue) holds callbacks from setTimeout, setInterval, setImmediate (Node.js), and I/O events. This queue is lower priority.
The priority rule is simple but it has big implications: after each macrotask completes, the event loop drains the entire microtask queue before processing the next macrotask. And after the initial script finishes (which is itself a macrotask), the microtask queue runs first too.
Let me show this with actual code because I think the abstract description isn't enough.
Walking Through a Snippet
console.log("A");
setTimeout(() => {
console.log("B");
Promise.resolve().then(() => console.log("C"));
}, 0);
Promise.resolve().then(() => {
console.log("D");
setTimeout(() => console.log("E"), 0);
});
console.log("F");
When I first looked at this kind of thing, I thought "setTimeout with 0ms delay should fire immediately, right?" Nope. The delay is a minimum, not a guarantee. And even with 0ms, the callback still goes through the queue system. It doesn't cut in line.
Let's trace through this step by step. The engine starts executing the script top to bottom, and the script itself runs as one synchronous block.
First: console.log("A") runs. Output: A. Straightforward.
Second: The engine hits setTimeout. It registers the callback with the browser/runtime and moves on. The callback goes into the macrotask queue. It's not going to run now regardless of the delay being 0.
Third: Promise.resolve().then(...). The promise resolves immediately, but the .then() callback doesn't execute yet. It goes into the microtask queue.
Fourth: console.log("F") runs. Output: F.
The synchronous script is done. Call stack is empty. Now the event loop kicks in.
The event loop checks the microtask queue first. It finds the Promise callback. That runs. console.log("D") outputs D. Inside that callback, a setTimeout is registered, and its callback goes into the macrotask queue. The microtask queue is now empty.
Now the event loop checks the macrotask queue. The first callback in line is from the original setTimeout. It runs. console.log("B") outputs B. During that callback, Promise.resolve().then(...) puts a new microtask on the queue.
Before the next macrotask, the event loop drains microtasks again. It finds the Promise callback from inside the setTimeout. console.log("C") outputs C. Microtask queue empty.
Back to the macrotask queue. The only remaining item is the setTimeout that was registered inside the first Promise callback. console.log("E") outputs E.
Final output: A, F, D, B, C, E.
If you got that right on the first read, you already understand the event loop better than I did after watching three YouTube videos about it. The key insight is that D comes before B even though both are "async." Promises are microtasks. setTimeout is a macrotask. Microtasks always go first.
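If you want a two-line version of that rule to keep in your head:

setTimeout(() => console.log("macrotask"), 0);
Promise.resolve().then(() => console.log("microtask"));
// "microtask" prints first, then "macrotask"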
Why This Matters Beyond Interview Questions
You might be thinking this is all academic. Interview trivia that doesn't affect real code. I thought that too until I hit a bug that took me a full day to track down.
We had a React app with a data fetching pattern that looked roughly like this:
async function loadDashboard() {
  const user = await fetchUser();
  updateUI(user);

  // Schedule a background sync
  setTimeout(() => syncAnalytics(user.id), 0);

  const preferences = await fetchPreferences(user.id);
  applyTheme(preferences.theme);
}
The bug was that applyTheme sometimes ran with stale preferences. Not always, just intermittently. It turned out that on slow connections, the second await would yield, and during that yield, the syncAnalytics timer would fire and, through a chain of side effects I won't get into, mutate some shared state that fetchPreferences depended on.
The fix was understanding the execution order. When you await something, the rest of the function becomes a microtask that runs when the awaited promise resolves. Meanwhile, macrotasks that were queued earlier might slip in between your awaits. The function looks sequential when you read it, but it's not: there are gaps between the awaits where other code can execute.
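Here's a stripped-down sketch of that race. The names and timings are made up for illustration; the real code was fetching data, but a couple of timers and a shared object show the same gap:

const shared = { theme: "light" };

async function load() {
  // Queue a "background" task, like the syncAnalytics timer in the real code.
  setTimeout(() => { shared.theme = "dark"; }, 0);

  // Simulate a slow fetch. While we're parked on this await,
  // the timer above gets its turn and mutates the shared state.
  await new Promise((resolve) => setTimeout(resolve, 50));

  console.log(shared.theme); // "dark", not the value from when we started
}

load();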
I would not have diagnosed this without understanding microtask vs macrotask ordering. So no, it's not just interview trivia.
The Starvation Problem
Here's something I didn't know until embarrassingly recently. Because the event loop drains ALL microtasks before moving to the next macrotask, you can accidentally starve the macrotask queue.
function recursiveMicrotask() {
  Promise.resolve().then(() => {
    // This keeps adding to the microtask queue forever
    recursiveMicrotask();
  });
}

recursiveMicrotask();

// This setTimeout callback will NEVER run
setTimeout(() => console.log("I'll never print"), 0);
Each microtask schedules another microtask. The microtask queue never empties. The event loop never gets a chance to process the macrotask queue. The setTimeout callback sits there forever.
In practice, this would freeze the browser because rendering is also blocked by microtask processing. The page becomes unresponsive. I've never written code this obvious, but I have seen production code where a recursive async function accidentally created a similar starvation pattern. It manifested as the UI freezing under certain conditions and was hard to diagnose because the CPU wasn't pegged; the event loop was just stuck processing an ever-growing microtask queue.
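When you genuinely need to loop like this, the usual escape hatch is to reschedule each step as a macrotask so the event loop gets to breathe in between. A rough sketch:

function recursiveMacrotask() {
  setTimeout(() => {
    // Do one chunk of work here, then reschedule.
    recursiveMacrotask();
  }, 0);
}

recursiveMacrotask();

// This now gets its turn, because each step yields back to the event loop.
setTimeout(() => console.log("I will print"), 0);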
How Async/Await Fits In
I want to address this because async/await is how most people write async JavaScript now, and it can obscure the underlying queue mechanics.
An async function returns a Promise. When you await something inside it, the function pauses and the rest of the function body becomes a microtask attached to the awaited promise. When the promise resolves, that microtask gets queued and eventually executed.
async function example() {
  console.log("1 - before await");
  await Promise.resolve();
  console.log("2 - after await");
}

console.log("3 - before calling example");
example();
console.log("4 - after calling example");
Output: 3 - before calling example, 1 - before await, 4 - after calling example, 2 - after await.
The async function runs synchronously up to the first await. Then it yields. The calling code continues. The continuation after await runs as a microtask once the call stack is clear.
This trips people up because the code reads like it's sequential ("do this, then that, then the other thing") but it's not. There are yield points at every await where other code gets a chance to run.
I think async/await is a huge improvement over callback chains and raw .then() calls for readability. But it can hide the fact that your function doesn't actually execute top-to-bottom without interruption. Keeping that in mind has helped me avoid a few concurrency-related bugs.
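One more illustration of those yield points: two async functions run their first halves back to back, then their continuations interleave as microtasks in the order they were queued:

async function first() {
  console.log("first: start");
  await Promise.resolve();
  console.log("first: after await");
}

async function second() {
  console.log("second: start");
  await Promise.resolve();
  console.log("second: after await");
}

first();
second();
// Output: first: start, second: start, first: after await, second: after await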
Node.js Has Extra Phases
Everything I've described so far applies to both the browser and Node.js. But Node.js adds some additional queue phases that are worth mentioning if you work in backend JavaScript.
Node.js uses libuv for its event loop, and it processes events in phases: timers, pending callbacks, idle/prepare, poll, check, and close callbacks. The two that come up most often are the timers phase (where setTimeout and setInterval callbacks fire) and the check phase (where setImmediate callbacks fire).
setImmediate is Node-specific; its callbacks run in the check phase, right after I/O callbacks and before the next pass through the timers phase. The ordering between setTimeout(fn, 0) and setImmediate(fn) is actually non-deterministic when called from the main module, because it depends on how fast the event loop setup completes. But inside an I/O callback, setImmediate always fires before setTimeout. Confusing? Yeah. I don't think about this often, but it has come up when writing performance-sensitive Node.js code.
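You can see the I/O-callback case for yourself with a quick script (it reads its own file just to get inside an I/O callback):

const fs = require("fs");

fs.readFile(__filename, () => {
  setTimeout(() => console.log("setTimeout"), 0);
  setImmediate(() => console.log("setImmediate"));
  // Inside an I/O callback, "setImmediate" logs first: the check phase runs
  // before the loop wraps back around to the timers phase.
});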
There's also process.nextTick(), which runs at even higher priority than promise microtasks: its queue drains immediately after the current operation finishes, before the event loop continues with anything else. I try to avoid it because it makes the execution order harder to reason about, but some Node.js core APIs use it internally.
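If you want to see that priority for yourself, this pair of lines in Node.js shows it:

Promise.resolve().then(() => console.log("promise microtask"));
process.nextTick(() => console.log("nextTick"));
// "nextTick" logs first: the nextTick queue drains before the promise microtask queue.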
The Rendering Connection
In browsers, there's one more piece to the puzzle. The browser needs to repaint the screen: flush DOM changes, recalculate styles, redraw pixels. This happens between macrotasks, roughly every 16ms for a 60fps display.
The sequence goes: run a macrotask → drain all microtasks → browser may render → next macrotask.
If a macrotask (or a long chain of microtasks) takes longer than 16ms, rendering gets delayed. The page feels janky. This is why you sometimes see advice to break up long-running synchronous code with setTimeout(fn, 0): it moves the remaining work to a separate macrotask and gives the browser a chance to render in between.
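A sketch of what that chunking looks like; processItem here is a stand-in for whatever per-item work you actually need to do:

function processInChunks(items, chunkSize, processItem) {
  let index = 0;

  function doChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      // Yield to the event loop so the browser can render between chunks.
      setTimeout(doChunk, 0);
    }
  }

  doChunk();
}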
requestAnimationFrame is the API designed for this. It schedules a callback to run right before the next render, which is the ideal time for visual updates. I don't think of rAF as part of the event loop exactly (it's more like a hook into the rendering pipeline), but it interacts with the event loop in ways that matter for animation code.
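For completeness, the usual rAF shape looks like this (assuming there's a #box element on the page to animate):

const box = document.querySelector("#box");
let x = 0;

function step() {
  x += 2;
  box.style.transform = `translateX(${x}px)`;
  if (x < 300) {
    requestAnimationFrame(step); // schedule again, right before the next paint
  }
}

requestAnimationFrame(step);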
I've been writing JavaScript for years and I still occasionally get surprised by timing-related behavior. The event loop model is simple in principle (one stack, two queues, microtasks before macrotasks), but the interactions between async patterns, framework code, and browser rendering create enough complexity that I don't think anyone memorizes all of it. You just build up enough intuition to know where to look when something doesn't behave the way you expected.
Written by
Anurag Sinha
Developer who writes about the stuff I actually use day-to-day. If I got something wrong, let me know.