The JavaScript Event Loop, Explained By Working Through It
Walking through async JavaScript to show how the Event Loop decides what runs when.

What order do these console.log statements execute in?
console.log("A");
setTimeout(() => {
  console.log("B");
  Promise.resolve().then(() => console.log("C"));
}, 0);
Promise.resolve().then(() => {
  console.log("D");
  setTimeout(() => console.log("E"), 0);
});
console.log("F");
Got an answer? If you said A, F, D, B, C, E – you already understand the event loop better than I did after watching three YouTube explainer videos about it. If you got it wrong, or you're not sure why that's the order, that's exactly where I was when I failed this kind of question in a job interview back in 2023.
The abstract explanations weren't sticking. "There's a call stack and a task queue and callbacks get processed." Fine. But when it came to predicting specific behavior, my understanding collapsed. The missing piece was the priority system. JavaScript doesn't have a single queue for "async stuff that runs later." It has multiple queues with strict ordering between them.
Single-Threaded (Mostly)
JavaScript runs your code on one thread. One call stack. One piece of code executing at a time. If a function is running, nothing else runs until it finishes.
But JavaScript applications do many things concurrently. HTTP requests, timers, user click listeners, file reads – all happening at the same time. How? Because async operations don't happen inside the JavaScript engine itself. They're handled by the runtime environment.
When you call setTimeout, JavaScript doesn't count milliseconds. It hands the timer off to the browser (or Node.js's internal C++ layer). The browser manages the timer on its own thread. When the timer expires, the browser puts your callback into a queue. The event loop checks that queue and moves callbacks onto the call stack when the stack is empty.
That delegation is the entire trick. JavaScript itself runs on one thread. The runtime environment around it does not.
Two Queues
Multiple queues exist, but two cover 95% of what matters.
The Microtask Queue – callbacks from resolved Promises, queueMicrotask(), MutationObserver callbacks. High priority.
The Macrotask Queue (sometimes called the Task Queue) – callbacks from setTimeout, setInterval, setImmediate (Node.js), I/O events. Lower priority.
The priority rule: after each macrotask completes, the event loop drains the entire microtask queue before touching the next macrotask. After the initial script finishes (which is itself a macrotask), microtasks go first too.
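The rule fits in a few lines. This minimal sketch uses queueMicrotask, which targets the microtask queue directly without involving a Promise:

```javascript
// Record which queue runs first. All three callbacks are registered
// while the synchronous code still holds the call stack.
const order = [];

setTimeout(() => order.push("macrotask"), 0);   // macrotask queue
queueMicrotask(() => order.push("microtask"));  // microtask queue
order.push("sync");                             // runs immediately

// Once the stack clears, the microtask queue drains before the timer
// fires, so `order` ends up as ["sync", "microtask", "macrotask"].
```

Even with a 0ms delay, the timer callback loses to the microtask every time.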
Tracing the Snippet Step by Step
Back to the code at the top. The engine executes the script top-to-bottom as one synchronous block.
Step 1: console.log("A") runs. Output: A.
Step 2: setTimeout is encountered. The callback gets registered with the runtime. Goes into the macrotask queue. The delay being 0ms doesn't mean "fire now" – it means "fire after a minimum of 0ms, through the queue system." Still has to wait its turn.
Step 3: Promise.resolve().then(...) – the promise resolves immediately, but the .then() callback doesn't execute yet. Goes into the microtask queue.
Step 4: console.log("F") runs. Output: F.
Synchronous script is done. Call stack is empty. Event loop kicks in.
Event loop checks the microtask queue first. Finds the Promise callback. Runs it. console.log("D") outputs D. Inside that callback, a setTimeout registers – its callback goes to the macrotask queue. Microtask queue is now empty.
Event loop moves to macrotask queue. First item: the original setTimeout callback. Runs. console.log("B") outputs B. During that callback, Promise.resolve().then(...) adds a new microtask.
Before the next macrotask, microtasks drain again. The Promise callback from inside the setTimeout runs. console.log("C") outputs C. Queue empty.
Back to the macrotask queue. One item left – the setTimeout registered inside the first Promise. console.log("E") outputs E.
Final output: A, F, D, B, C, E.
The critical insight: D comes before B even though both are "async." Promises are microtasks. setTimeout is a macrotask. Microtasks always get priority.
Why This Matters Beyond Interviews
Had a bug that took a full day to find. A React app with data fetching that looked like this:
async function loadDashboard() {
  const user = await fetchUser();
  updateUI(user);
  // Schedule a background sync
  setTimeout(() => syncAnalytics(user.id), 0);
  const preferences = await fetchPreferences(user.id);
  applyTheme(preferences.theme);
}
The bug: applyTheme sometimes ran with stale preferences. Not always – intermittently. What was happening, as far as I can tell: on slow connections, the second await would yield, and during that yield, the syncAnalytics timer would fire. Through a chain of side effects, it mutated shared state that fetchPreferences depended on.
The fix required understanding execution order. When you await something, the rest of the function becomes a microtask that runs when the awaited promise resolves. Meanwhile, macrotasks queued earlier might slip in between your awaits. The function reads like it's sequential. It's not โ there are gaps at every await where other code executes.
Wouldn't have diagnosed this without understanding microtask vs macrotask ordering, I think. Not interview trivia.
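The "macrotask slips in during an await" behavior can be reproduced in isolation. A sketch, using a delayed promise as a stand-in for a slow fetch:

```javascript
const events = [];

async function run() {
  events.push("before await");
  // Queued as a macrotask before the function yields.
  setTimeout(() => events.push("timer fired"), 0);
  // Stand-in for a slow network request: the function yields for ~10ms.
  await new Promise(resolve => setTimeout(resolve, 10));
  // Resumes as a microtask after the awaited promise resolves.
  // By then, the 0ms timer has already had its turn.
  events.push("after await");
}

run();
```

The function reads top-to-bottom, but the timer callback executes in the gap the await opens up: the recorded order is "before await", "timer fired", "after await".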
Starvation
Something I didn't know until embarrassingly recently. Because the event loop drains ALL microtasks before moving to the next macrotask, you can accidentally starve the macrotask queue.
function recursiveMicrotask() {
  Promise.resolve().then(() => {
    // This keeps adding to the microtask queue forever
    recursiveMicrotask();
  });
}
recursiveMicrotask();
// This setTimeout callback will NEVER run
setTimeout(() => console.log("I'll never print"), 0);
Each microtask schedules another microtask. Queue never empties. Event loop never reaches the macrotask queue. The setTimeout callback sits there forever.
In a browser, this freezes the page – rendering is blocked by microtask processing too. Page becomes unresponsive. Never written code this blatant, but I've seen production code where a recursive async function accidentally created a similar pattern. Manifested as the UI freezing under certain conditions. Hard to diagnose because CPU wasn't pegged – the event loop was just stuck processing an ever-growing microtask queue.
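If you genuinely need recursive rescheduling, the usual fix is to hop through the macrotask queue instead. A sketch (the hop counter and stop condition are there so the demo terminates):

```javascript
// Reschedule via setTimeout instead of a microtask. Each hop returns
// control to the event loop, so timers, I/O, and (in a browser)
// rendering all get a turn between iterations.
let hops = 0;

function recursiveMacrotask() {
  if (hops >= 3) return;             // stop condition so the demo ends
  hops++;
  setTimeout(recursiveMacrotask, 0); // macrotask: yields between hops
}

recursiveMacrotask();

// Unlike the starved example above, this timer does get to run,
// interleaved with the hops.
setTimeout(() => console.log("I do print"), 0);
```

Same recursive shape, but because each iteration goes through the macrotask queue, nothing starves.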
How Async/Await Fits In
async/await is how most people write async JavaScript now, and it can obscure the queue mechanics.
An async function returns a Promise. await pauses the function and the rest of the body becomes a microtask attached to the awaited promise.
async function example() {
  console.log("1 - before await");
  await Promise.resolve();
  console.log("2 - after await");
}
console.log("3 - before calling example");
example();
console.log("4 - after calling example");
Output: 3, 1 - before await, 4 - after calling example, 2 - after await.
The async function runs synchronously up to the first await. Then yields. Calling code continues. The continuation after await runs as a microtask once the stack clears.
This trips people up because the code reads as sequential. It's not. Yield points exist at every await where other code can execute. async/await is a readability improvement over callback chains and raw .then(). But it hides the fact that your function doesn't execute top-to-bottom without interruption.
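The interleaving is easiest to see with two async functions whose continuations queue up behind the remaining synchronous code:

```javascript
const log = [];

async function first() {
  log.push("first: start");   // runs synchronously when called
  await Promise.resolve();    // yield point
  log.push("first: resumed"); // queued as a microtask
}

async function second() {
  log.push("second: start");
  await Promise.resolve();
  log.push("second: resumed");
}

first();
second();
log.push("sync code after both calls");

// Both functions run synchronously up to their first await, the rest
// of the synchronous code finishes, then the two continuations drain
// from the microtask queue in the order they were registered.
```

Final order: "first: start", "second: start", "sync code after both calls", "first: resumed", "second: resumed". Neither function ran uninterrupted.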
Node.js Has Extra Phases
Everything above applies to both browser and Node.js. Node adds extra queue phases worth mentioning for backend work.
Node uses libuv for its event loop, processing events in phases: timers, pending callbacks, idle/prepare, poll, check, close callbacks. Two come up most: the timers phase (setTimeout/setInterval callbacks) and the check phase (setImmediate callbacks).
setImmediate is Node-specific and runs in the check phase, immediately after the poll phase where I/O callbacks execute. The ordering between setTimeout(fn, 0) and setImmediate(fn) is non-deterministic when called from the main module – it depends on how fast event loop setup completes. Inside an I/O callback, setImmediate always fires first. Confusing? Yes. Comes up rarely but has appeared when writing performance-sensitive Node code.
process.nextTick() – even higher priority than microtasks. Runs immediately after the current operation, before the event loop continues. Like a microtask but cuts the line. I try to avoid it because it makes execution order harder to reason about, but some Node core APIs use it internally.
The Browser Rendering Connection
In browsers, one more piece. The browser needs to repaint – update the DOM, recalculate styles, redraw pixels. Happens between macrotasks, roughly every 16ms for 60fps.
Sequence: run a macrotask → drain all microtasks → browser may render → next macrotask.
Long macrotask or a long microtask chain exceeding 16ms? Rendering gets delayed. Page feels janky. That's why you sometimes see advice to break up long synchronous work with setTimeout(fn, 0) – it moves remaining work to a separate macrotask and gives the browser a render opportunity.
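A sketch of that chunking pattern. The array and the per-item work are hypothetical stand-ins for whatever heavy loop you're breaking up:

```javascript
// Process a large array in slices, yielding to the event loop between
// slices so rendering (or timers/I/O in Node) can run in the gaps.
const items = Array.from({ length: 10000 }, (_, i) => i);
let processed = 0;

function handleItem(item) {
  processed++; // placeholder for real per-item work
}

function processInChunks(index = 0, chunkSize = 1000) {
  const end = Math.min(index + chunkSize, items.length);
  for (let i = index; i < end; i++) handleItem(items[i]);
  if (end < items.length) {
    // Yield: schedule the next slice as a fresh macrotask.
    setTimeout(() => processInChunks(end, chunkSize), 0);
  }
}

processInChunks();
```

Each slice is one short macrotask instead of one long one, so the browser gets a render opportunity every chunkSize items.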
requestAnimationFrame schedules a callback right before the next render – ideal timing for visual updates. Not exactly part of the event loop, more a hook into the rendering pipeline. Interacts with the event loop in ways that matter for animation code.
If you're working with TypeScript, the event loop becomes relevant when using discriminated unions and async patterns to model loading states – getting timing wrong means rendering the wrong variant.
Common Mistakes That Come From Misunderstanding the Loop
A few patterns I've seen in production code that made more sense once I understood the queue mechanics.
Using setTimeout to "wait" for a state update. In React class components (or any framework with async state), people write setTimeout(() => readState(), 0) thinking it guarantees the state has updated. Sometimes it works. Sometimes it doesn't. The setTimeout callback runs after the current macrotask, but the state update might involve microtasks (React's batched state uses its own scheduling) that haven't all resolved yet. The timing isn't guaranteed. If you need to read state after an update, use the callback form (setState(newValue, () => readState())) or an effect hook. Don't lean on queue timing for framework-specific behavior.
Wrapping everything in Promises to "make it async." Seen code like await new Promise(resolve => resolve(synchronousFunction())). This doesn't make the function asynchronous. The function still runs synchronously on the current call stack. What it does is push the continuation (whatever comes after the await) to the microtask queue. The practical effect: you've added latency without adding concurrency. Useful in rare cases where you deliberately want to yield to the microtask queue (to avoid call stack depth issues), but most of the time it's just overhead.
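You can verify that the wrapped call still runs synchronously. synchronousFunction here is a hypothetical stand-in:

```javascript
const trace = [];

function synchronousFunction() {
  trace.push("function ran"); // executes on the current call stack
  return 42;
}

async function wrapped() {
  // The Promise executor runs synchronously, so synchronousFunction()
  // has already finished before this await yields.
  const value = await new Promise(resolve => resolve(synchronousFunction()));
  trace.push("continuation"); // only this part is deferred (microtask)
  return value;
}

wrapped();
trace.push("caller continues");
```

The recorded order is "function ran", "caller continues", "continuation": the wrapped function did all its work before the caller even moved on. Only the continuation was deferred.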
Assuming Promise.all runs things in parallel. Promise.all([fetchA(), fetchB(), fetchC()]) – the fetch calls are already running by the time Promise.all receives them. Promise.all doesn't start them. It waits for them. The parallelism comes from calling all three functions before awaiting any of them. The common mistake is writing:
const results = await Promise.all([
  await fetchA(),
  await fetchB(),
  await fetchC()
]);
Those await keywords inside the array defeat the purpose. fetchA() completes before fetchB() starts. Sequential, not parallel. Remove the inner awaits and the calls actually overlap.
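The corrected shape, with small delay-based stand-ins for the fetches (fetchA/B/C here are hypothetical, not real network calls) so the overlap is visible:

```javascript
// Hypothetical stand-ins: each "fetch" resolves after a fixed delay.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

const fetchA = () => delay(30, "A");
const fetchB = () => delay(20, "B");
const fetchC = () => delay(10, "C");

async function loadAll() {
  // All three promises are created (and already running) before the
  // await, so total time is roughly 30ms, not 30 + 20 + 10.
  const results = await Promise.all([fetchA(), fetchB(), fetchC()]);
  return results; // resolves in input order: ["A", "B", "C"]
}
```

Note that Promise.all preserves input order in its result regardless of which promise settles first.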
Not understanding that for...of with await is sequential. This:
for (const url of urls) {
  const data = await fetch(url);
  results.push(data);
}
Fetches one URL at a time. Each request waits for the previous one to complete. For 10 URLs at 200ms each, that's 2 seconds. Compare with:
const results = await Promise.all(urls.map(url => fetch(url)));
All 10 requests fire simultaneously. Total time is roughly the slowest single request – maybe 300ms. The sequential version is occasionally what you want (when each request depends on the previous result), but more often it's an accident.
Real-World Performance Implications
One thing that doesn't come up in most event loop tutorials: the queue mechanics have direct performance consequences for web applications that go beyond "page feels janky."
Consider a search-as-you-type feature. User types a character, you fire an API request, process the response, and update the UI. If the user types fast, you might have five pending requests and five pending UI updates queued up. Each response handler is a macrotask. Each React state update triggered inside those handlers generates microtasks. Pile these up and the event loop gets congested – older results arriving after newer ones, UI flickering between stale and fresh data, scroll performance degrading because rendering can't keep up.
Debouncing the input helps, but it's treating the symptom. The root fix involves canceling stale requests (AbortController), dropping responses that arrive out of order, and batching UI updates so the render pipeline doesn't choke. All of this requires understanding where your code sits in the queue hierarchy and what happens when multiple async operations compete for the same render frame.
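A sketch of the "drop responses that arrive out of order" part, using a sequence number. fakeSearch is a hypothetical stand-in for the real API call; a real implementation would also pass an AbortController signal so stale requests get cancelled rather than just ignored:

```javascript
let latestRequestId = 0;
let currentResults = null;

// Hypothetical stand-in for the real API call: resolves after `ms`.
const fakeSearch = (query, ms) =>
  new Promise(resolve => setTimeout(() => resolve(`results for ${query}`), ms));

async function onInput(query, simulatedLatency) {
  const requestId = ++latestRequestId;       // tag this request
  const results = await fakeSearch(query, simulatedLatency);
  if (requestId !== latestRequestId) return; // stale: a newer request exists
  currentResults = results;                  // only the newest response wins
}

// The older request is slower: its response arrives after the newer
// one and gets dropped instead of overwriting fresher results.
onInput("java", 50);
onInput("javascript", 10);
```

The check after the await is the interesting part: the function resumed as a microtask in a world that may have moved on, so it re-validates before touching shared state.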
Debugging Timing Issues
When something behaves differently than expected and you suspect event loop ordering is involved, a few approaches help.
Sprinkle console.log with labels at suspected yield points. Not elegant. Works. When you see the output order doesn't match your mental model, the logs tell you exactly where your assumption was wrong.
For more complex situations, the Performance tab in Chrome DevTools shows the actual task execution timeline. You can see macrotasks, microtasks, rendering frames, and the gaps between them. It's a lot of data, but zoom into the relevant time window and you can watch the event loop process your code in real time. Helped me diagnose a rendering jank issue that turned out to be a Promise chain that was too long – microtasks were blocking the render frame.
Node.js has --trace-warnings and --trace-uncaught flags that can surface promise-related issues. Also process.on('unhandledRejection', handler) catches promises that fail without a .catch() or try/catch – which in the event loop model means an error that disappears silently because nothing is listening for it.
The Mental Model, Compressed
Been writing JavaScript for years and timing-related behavior still occasionally surprises me. The compressed model that fits in working memory:
- Synchronous code runs first, completely, before anything async
- Microtasks (Promises) run before macrotasks (setTimeout/setInterval)
- After each macrotask, ALL microtasks drain before the next macrotask
- await creates a yield point – code after it becomes a microtask
- Browser rendering happens between macrotasks, not during them
That covers maybe 90% of the timing behavior you'll encounter. The remaining 10% is edge cases with process.nextTick, requestAnimationFrame, setImmediate, and the specific priority ordering within Node's libuv phases. You encounter those rarely enough that looking them up when needed is fine.
One more practical note: when diagnosing timing issues in production, console.log with timestamps is underrated. Something like:
function timedLog(label) {
  console.log(`[${performance.now().toFixed(1)}ms] ${label}`);
}
Sprinkle these around the suspected code path. The timestamps tell you not just the order of execution but the gaps between events. "Oh, there's a 200ms gap between the fetch completing and the callback running" – that gap is where other code is executing, and the timestamp makes it visible.
For async code specifically, unhandled promise rejections are a common source of silent failures. A promise that rejects without a .catch() or a try/catch around the await might not produce any visible error in some environments. Older Node.js versions only emitted a warning; since Node 15, an unhandled rejection crashes the process by default. Adding a global handler catches the ones you miss:
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled Rejection:', reason);
  // In production: send to error tracking service
});
Not a substitute for proper error handling. A safety net for the errors that slip through.
The interactions between async patterns, framework code, and browser rendering create enough complexity that nobody memorizes all of it. You build intuition. Know where to look when something doesn't behave as expected. That's about the best you can do.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.