Web Performance That Actually Matters: Beyond Lighthouse Scores
What actually moves the needle on real user experience: fixing LCP, CLS, and INP with changes that users notice, not just scores that improve.

Spent a week getting a Lighthouse score from 67 to 98. Felt great. Shipped it. Users still complained the site felt slow. Checked real user monitoring data: field performance was barely different from before. The Lighthouse score improved because I optimized for Lighthouse's synthetic test conditions: a simulated mobile device on a throttled connection, loading the page with an empty cache, once. Real users on real devices with real network conditions and partially cached assets had an entirely different experience.
That was the week I stopped chasing scores and started looking at what actually makes a website feel fast. Different question. Different answers.
The Metrics That Correlate with User Experience
Google's Core Web Vitals aren't perfect, but they're probably the closest thing we have to numbers that map to "does this feel fast." Three metrics:
Largest Contentful Paint (LCP): how long until the biggest visible content element renders. Usually a hero image, a heading, or a large text block. Target: under 2.5 seconds. This is the "how long until I see something useful" metric.
Cumulative Layout Shift (CLS): how much the page content moves around while loading. Buttons jumping down because an ad loaded above them. Text reflowing because a font swapped. Target: under 0.1. This is the "why did I click the wrong thing" metric.
Interaction to Next Paint (INP): how long the browser takes to visually respond after the user clicks, taps, or types. Target: under 200ms. This replaced First Input Delay in March 2024 and is harder to optimize because it measures every interaction, not just the first one.
These matter because they measure what users actually experience. Time to First Byte matters too, but indirectly: a slow TTFB delays everything else.
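The thresholds above map cleanly onto the three-bucket rating that PageSpeed Insights reports. A minimal sketch of that classification (the function name is mine; the threshold numbers are Google's published ones):

```javascript
// Classify a metric sample into Google's rating buckets.
// "good" is at or below the first number, "poor" is above the second;
// everything in between is "needs-improvement".
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  CLS: [0.1, 0.25],  // unitless layout-shift score
  INP: [200, 500],   // milliseconds
};

function rateVital(name, value) {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```

So rateVital('LCP', 1800) is 'good' and rateVital('INP', 350) is 'needs-improvement': the same verdicts the field-data tooling would give those samples.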
LCP: The One That Affects Perceived Speed Most
A page can load 4MB of JavaScript and the user probably won't care if the content they came for appears in under 2 seconds. Conversely, a tiny page that shows a loading spinner for 3 seconds feels slow regardless of total page weight.
Fixing Image LCP
Most LCP elements are images. The fastest image optimization I've done that made the biggest difference:
<!-- Before: browser discovers image only after parsing HTML and CSS -->
<div class="hero">
  <img src="/images/hero.webp" alt="Hero image" />
</div>
<!-- After: preload tells the browser to start downloading immediately -->
<link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high" />
<div class="hero">
  <img src="/images/hero.webp" alt="Hero image"
       fetchpriority="high"
       width="1280" height="720"
       decoding="async" />
</div>
The preload link in the <head> tells the browser to start downloading the image before it even parses the body HTML. fetchpriority="high" prioritizes it over other resources. On a site I worked on, this alone cut LCP from 3.1 seconds to 1.8 seconds on 4G connections. The image was the same size; we just told the browser to start fetching it sooner.
For Next.js specifically, the Image component handles this with the priority prop:
import Image from 'next/image';
// This automatically adds preload + fetchpriority="high"
<Image
  src="/images/hero.webp"
  alt="Hero image"
  width={1280}
  height={720}
  priority
/>
Other image optimizations that compound:
Serve WebP or AVIF. A 300KB JPEG becomes a 90KB WebP with equivalent visual quality. AVIF is even smaller but encoding is slower. I run both through my build pipeline and serve based on browser support. Modern CDNs handle this with content negotiation.
Responsive images with srcset. Don't serve a 2400px-wide image to a 375px-wide mobile screen. The browser downloads all those extra pixels and discards them.
<img src="/images/hero-800.webp"
     srcset="/images/hero-400.webp 400w,
             /images/hero-800.webp 800w,
             /images/hero-1200.webp 1200w,
             /images/hero-1600.webp 1600w"
     sizes="(max-width: 768px) 100vw, 50vw"
     alt="Hero image"
     width="1600" height="900" />
Fixing Font LCP
Web fonts are the other common LCP blocker. The browser downloads the font file, then re-renders text with the custom font. Until the font loads, text is either invisible (FOIT, Flash of Invisible Text) or shown in a fallback font that then swaps (FOUT, Flash of Unstyled Text).
/* Define the critical font */
@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter-var.woff2') format('woff2');
  font-display: swap; /* show fallback immediately, swap when loaded */
  font-weight: 100 900;
}
<!-- Preload in the <head> so the font download starts early -->
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/inter-var.woff2" crossorigin />
font-display: swap is the minimum: it shows text immediately in a fallback font and swaps to the custom font when it loads. The swap causes a layout shift (CLS), but the text is visible from the start (better LCP).
For even better results, I create a fallback font with matching metrics using tools like @next/font or manually calculating the size-adjust, ascent-override, and descent-override properties:
@font-face {
  font-family: 'Inter Fallback';
  src: local('Arial');
  ascent-override: 90.49%;
  descent-override: 22.48%;
  line-gap-override: 0%;
  size-adjust: 107.64%;
}

body {
  font-family: 'Inter', 'Inter Fallback', sans-serif;
}
The fallback font is adjusted to occupy the same space as Inter. When Inter loads and swaps in, the text doesn't reflow. Zero CLS from font loading. Took about 30 minutes to set up and eliminated font-related layout shift entirely.
CLS: Death by a Thousand Layout Shifts
CLS is the metric that frustrates users the most. You're about to tap a button and it jumps because an element loaded above it. You start reading and the text moves because an image rendered with unknown dimensions. Infuriating.
Images Without Dimensions
The most common CLS cause. An image loads, the browser doesn't know how tall it'll be, so the content below shifts down when the image renders.
<!-- Bad: CLS when image loads -->
<img src="/photo.webp" alt="Photo" />
<!-- Good: browser reserves space before image loads -->
<img src="/photo.webp" alt="Photo" width="800" height="600" />
<!-- Also good: CSS aspect ratio -->
<img src="/photo.webp" alt="Photo"
     style="aspect-ratio: 4/3; width: 100%; height: auto;" />
Always set width and height attributes on images. The browser calculates the aspect ratio and reserves the correct space before the image downloads. This one change fixed about 60% of CLS issues on a site I audited.
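The arithmetic the browser does here is simple: it derives an aspect ratio from the attributes and scales it to the rendered width, reserving that much vertical space before a single image byte arrives. A sketch of that calculation (the function name is mine):

```javascript
// Given the width/height attributes and the CSS-rendered width,
// compute the height the browser reserves before the image loads.
function reservedHeight(attrWidth, attrHeight, renderedWidth) {
  const aspectRatio = attrWidth / attrHeight; // e.g. 800/600 = 4:3
  return renderedWidth / aspectRatio;
}
```

An 800×600 image rendered at 400px wide reserves 300px of height, so nothing below it moves when the image finally paints.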
Dynamic Content Insertion
Ads, cookie banners, notification bars: anything that inserts content into the layout after initial render causes shift.
/* Reserve space for the ad slot before it loads */
.ad-container {
  min-height: 250px; /* standard ad height */
  background: #f5f5f5;
}

/* Cookie banner: overlay instead of pushing content */
.cookie-banner {
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  /* Doesn't push page content because it's positioned out of flow */
}
The cookie banner is the one I see done wrong constantly. A banner that inserts at the top of the page and pushes all content down causes massive CLS. Fixed positioning, which overlays the banner on top of the content, causes zero layout shift. Put it at the bottom of the viewport so users can interact with the page while the banner is visible. This is what we did on this site with the cookie consent component.
Font Swap Shift
Covered above in the LCP section. The fallback font with matching metrics is the fix. If you can't match metrics perfectly, at least ensure the fallback font has a similar x-height and character width. Arial is usually a reasonable match for popular sans-serif web fonts.
INP: The New Hard Problem
Interaction to Next Paint replaced First Input Delay as a Core Web Vital. FID only measured the input delay of the first interaction. INP measures every interaction throughout the page's lifetime and reports roughly the worst one (technically a high percentile: one outlier is discarded for every 50 interactions).
This means a page that responds quickly to the first click but lags on the 50th click scores badly. Long-running JavaScript that blocks the main thread during any interaction hurts INP.
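Per web.dev's description of the metric, the reported INP is the worst observed interaction latency, except that one outlier is excused for every 50 interactions. A sketch of that selection rule (the function name is mine):

```javascript
// Pick the INP value from a page's recorded interaction latencies (ms):
// sort worst-first, then skip one outlier per 50 interactions.
function selectINP(latencies) {
  const worstFirst = [...latencies].sort((a, b) => b - a);
  const outliersToSkip = Math.floor(latencies.length / 50);
  return worstFirst[outliersToSkip];
}
```

With fewer than 50 interactions this is simply the worst one. A single 900ms hiccup among 50 fast interactions gets forgiven; a pattern of slow clicks does not.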
Breaking Up Long Tasks
The browser's main thread handles JavaScript execution, layout, painting, and user input. If a JavaScript task runs for 300ms, user input during that time waits 300ms before the browser can respond. INP measures that delay.
// Bad: blocks main thread for 200ms+
function processLargeList(items) {
  items.forEach(item => {
    // Heavy computation per item
    const result = expensiveTransform(item);
    updateDOM(result);
  });
}

// Better: yield to the browser between chunks
async function processLargeList(items) {
  const CHUNK_SIZE = 50;
  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    const chunk = items.slice(i, i + CHUNK_SIZE);
    chunk.forEach(item => {
      const result = expensiveTransform(item);
      updateDOM(result);
    });
    // Yield to browser - allows input handling between chunks
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}
The setTimeout(resolve, 0) yields control back to the browser's event loop. If a user clicks something during the yield, the browser handles it immediately instead of waiting for the entire list to process. The total processing time increases slightly (each yield has overhead), but user interactions feel responsive throughout.
For modern browsers, scheduler.yield() is the proper API:
async function processLargeList(items) {
  for (const item of items) {
    expensiveTransform(item);
    // Yield to browser if there's pending user input
    if (navigator.scheduling?.isInputPending?.()) {
      await scheduler.yield();
    }
  }
}
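Neither scheduler.yield() nor isInputPending() is available in every browser yet, so in practice I feature-detect and fall back to the setTimeout trick. A hedged sketch (the helper names are mine):

```javascript
// Yield to the event loop: prefer scheduler.yield() where it exists,
// otherwise fall back to a zero-delay macrotask.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process in chunks, yielding between them so input stays responsive.
async function processInChunks(items, transform, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(transform(item));
    }
    await yieldToMain();
  }
  return results;
}
```

The fallback path behaves like the chunked example earlier; where scheduler.yield() exists, it additionally keeps the continuation at the front of the queue instead of the back.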
Event Handler Optimization
Common INP killer: click handlers that do too much synchronously.
// Bad: user clicks, sees nothing for 500ms while all this runs
button.addEventListener('click', () => {
  validateForm();        // 50ms
  transformData();       // 100ms
  sendAnalyticsEvent();  // 200ms (sync network call?!)
  updateUI();            // 50ms
  animateTransition();   // 100ms
});

// Better: respond visually first, defer non-critical work
button.addEventListener('click', () => {
  // Immediate visual feedback (< 50ms)
  button.classList.add('loading');
  updateUI();

  // Defer non-critical work until the browser is idle
  requestIdleCallback(() => {
    sendAnalyticsEvent();
  });

  // Push heavy work to a macrotask so the browser can paint first.
  // (A microtask via queueMicrotask would still run before the next
  // paint, so it wouldn't actually unblock rendering.)
  setTimeout(() => {
    validateForm();
    transformData();
  }, 0);
});
The principle: show the user something changed within 100ms. Everything else can happen after. Analytics events especially should never block UI updates. I've seen sites where a synchronous analytics call added 200ms to every click. Moving it to requestIdleCallback or a web worker made every interaction feel instant.
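For keystroke-driven handlers (search boxes, live filters) the same principle applies: paint the keystroke immediately, run the expensive work only after the user pauses. A plain debounce is usually enough; a minimal sketch:

```javascript
// Delay fn until `delayMs` has passed without another call.
// Each keystroke repaints instantly; the heavy work runs once, later.
function debounce(fn, delayMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

Wiring it up as input.addEventListener('input', debounce(runSearch, 150)) turns fifteen expensive searches while typing into one, 150ms after the last keystroke (runSearch here is a placeholder for whatever the handler does).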
Third-Party Scripts
The elephant in the room. Your code might be optimized, but the three analytics scripts, the chat widget, the A/B testing tool, and the social media embeds each add their own JavaScript that competes for the main thread.
<!-- Load third-party scripts without blocking -->
<script async src="https://analytics.example.com/script.js"></script>

<!-- Or better: load after the page is interactive -->
<script>
  // Delay non-essential third-party scripts
  window.addEventListener('load', () => {
    setTimeout(() => {
      const script = document.createElement('script');
      script.src = 'https://chat-widget.example.com/widget.js';
      document.body.appendChild(script);
    }, 3000); // 3 seconds after load
  });
</script>
Delaying the chat widget by 3 seconds after page load improved INP on one project by 80ms. The widget was evaluating 400KB of JavaScript on load, blocking the main thread for 300ms. Users who wanted to interact with the page in those first 3 seconds got a snappier experience. Users who wanted the chat widget got it 3 seconds later, which is rarely a problem since most chat interactions don't happen in the first 3 seconds.
What Actually Moves the Needle
After optimizing dozens of sites, the changes that consistently produce the biggest improvements in real user metrics:
1. Fix the Critical Rendering Path
The browser can't render anything until it has the HTML, critical CSS, and has evaluated render-blocking JavaScript. Every millisecond on this path delays everything.
<!-- Inline critical CSS so it doesn't require a separate request -->
<style>
  /* Only the CSS needed for above-the-fold content */
  body { margin: 0; font-family: system-ui, sans-serif; }
  .header { display: flex; align-items: center; padding: 1rem; }
  .hero { min-height: 60vh; display: grid; place-items: center; }
  /* ... minimal critical styles */
</style>

<!-- Load full stylesheet without blocking render -->
<link rel="preload" href="/styles/main.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'" />
<noscript><link rel="stylesheet" href="/styles/main.css" /></noscript>
Inlining critical CSS eliminates one round trip. The browser renders above-the-fold content immediately with the inlined styles, then loads the full stylesheet asynchronously. For a site served over HTTP/2, the benefit is smaller (multiplexed requests), but still measurable on slow connections.
2. Reduce JavaScript Bundle Size
Every kilobyte of JavaScript costs more than a kilobyte of images. Images can render progressively and don't block the main thread. JavaScript must be downloaded, parsed, compiled, and executed, all on the main thread.
# Check what's actually in your bundle
npx source-map-explorer dist/main.js
# Common findings:
# - moment.js: 300KB (replace with date-fns or dayjs: 7KB)
# - lodash: 70KB (import specific functions: lodash/debounce is 1KB)
# - Unused code from tree-shaking failures
On a Next.js project, running @next/bundle-analyzer revealed that date-fns was being included in its entirety (75KB gzipped) even though we only used format and parseISO. Switching to individual imports cut the bundle by 60KB. Multiplied by thousands of daily visitors on mobile connections, that's meaningful.
3. Server-Side Rendering for Content Sites
Client-side rendering means the browser downloads HTML with an empty <div id="root">, downloads JavaScript, executes it, fetches data, then renders content. The user sees nothing useful until all of that completes.
Server-side rendering sends complete HTML. The browser renders content immediately. JavaScript hydrates the page for interactivity afterward. LCP improves dramatically because content is visible before JavaScript loads.
For Next.js, I covered Server Components in detail in my React Server Components post. The short version: components that don't need interactivity should render on the server. Zero client-side JavaScript for those components. This blog uses that approach โ articles render server-side, only interactive elements (navigation, theme toggle) ship JavaScript to the client.
4. Lazy Load Below-the-Fold
Anything the user can't see on initial load shouldn't compete for bandwidth with what they can see.
<!-- Native lazy loading for images -->
<img src="/images/section-3.webp" loading="lazy" alt="..." />

// Dynamic imports for JavaScript components (Next.js dynamic())
const HeavyChart = dynamic(() => import('./HeavyChart'), {
  loading: () => <ChartSkeleton />,
  ssr: false // don't include in server-rendered HTML
});
loading="lazy" on images is the simplest optimization with the biggest impact for content-heavy pages. The browser only downloads images as they approach the viewport. A blog post with 10 images initially downloads only the hero image. The rest load as the user scrolls. Reduced initial page weight by 70% on a photography site with this one attribute.
5. CDN and Caching
No code optimization beats serving content from a server 20ms away instead of 200ms away.
# Cache static assets aggressively
location /images/ {
  expires 1y;
  add_header Cache-Control "public, immutable";
}

location /_next/static/ {
  expires 1y;
  add_header Cache-Control "public, immutable";
}

# Cache HTML pages with revalidation
location / {
  add_header Cache-Control "public, max-age=60, stale-while-revalidate=300";
}
immutable tells the browser the file will never change at this URL (use hashed filenames). The browser doesn't even send a conditional request on return visits. stale-while-revalidate serves the cached version immediately while fetching an update in the background. Users get instant page loads on repeat visits, and the content updates within 5 minutes.
Measuring What's Real
Lighthouse runs in a controlled environment. Real User Monitoring (RUM) captures what actual users experience on actual devices with actual network conditions.
// Report Core Web Vitals from real users
import { onCLS, onINP, onLCP } from 'web-vitals';
function sendToAnalytics(metric) {
  // Send to your analytics endpoint
  navigator.sendBeacon('/api/vitals', JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", "poor"
    page: window.location.pathname,
    connection: navigator.connection?.effectiveType
  }));
}
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
The web-vitals library captures the same metrics Google uses. Sending them to your own analytics lets you see performance by page, by device type, by connection speed. "LCP is 3.2 seconds on mobile 4G but 1.1 seconds on desktop WiFi" is actionable information. A single Lighthouse score is not.
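When aggregating your collected samples, it's worth matching Google's methodology: field data is judged at the 75th percentile of page loads. A minimal nearest-rank sketch (the function name is mine):

```javascript
// Nearest-rank percentile: the value that p% of samples are at or below.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Judge a page the way field data does: p75 of LCP samples (ms).
const lcpSamples = [1200, 1800, 2100, 2400, 3900, 4200, 1500, 2200];
const p75 = percentile(lcpSamples, 75);
```

Here p75 is 2400ms, under the 2.5s LCP threshold even though the two worst loads were over 4 seconds: the metric tolerates a tail but not a pattern.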
Google's Chrome User Experience Report (CrUX) also provides field data for sites with enough traffic. Check the PageSpeed Insights "field data" section: if it has numbers, those are from real Chrome users over the past 28 days. Those numbers matter more than the lab score below them.
Performance is a feature, not a checklist item. Users don't care about your Lighthouse score. They care that the page loaded fast, the content didn't jump around, and the buttons responded when clicked. Optimize for the experience, measure with real users, and stop when the numbers say the experience is good, not when the score hits 100.
Further Resources
- web.dev Performance: Google's guide to Core Web Vitals, performance auditing, and optimization techniques with hands-on examples.
- MDN Web Performance: Mozilla's comprehensive documentation on browser rendering, resource loading, and performance measurement APIs.
- Web Vitals (GitHub): The official JavaScript library for measuring Core Web Vitals in the field, maintained by the Chrome team.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.