What We Learned Migrating to React Server Components
Notes from migrating our SPA to the Next.js App Router and React Server Components. What improved, what broke, and what surprised us.

"We spent years getting business logic out of the view layer and now you want me to put Prisma queries in JSX?"
That was Ravi, one of our backend engineers, during the planning meeting for the migration. He wasn't wrong to be skeptical. But the performance numbers from our existing setup were also hard to ignore: a white screen for 4+ seconds on mid-range phones over spotty connections, a Lighthouse score nobody wanted to discuss out loud, and a JavaScript bundle that shipped markdown-it, date-fns, and prismjs to every visitor just to render what was mostly static text.
If you want the broader architectural picture of the Next.js App Router and how server/client components work at a conceptual level, I covered that in my post on building production-ready apps with Next.js. What follows here is more specific: the messy middle of the actual migration, the fights we had, and the things we'd do differently.
Starting Position
Client-side React SPA. Vite, React Router, Redux, standard REST API. Built over two years by a team that fluctuated between three and six people. Not a disaster. Not great either.
The bundle was probably the main problem. Blog pages, which were just text and code snippets, were pulling in 380KB of JavaScript: syntax highlighting that could have happened on the server, date formatting that didn't need the browser. The waste was clear and we knew it, but every attempt at optimization was fighting the architecture. Client-side rendering means everything downloads, parses, and executes before a user sees content. Four sequential steps. Slow by design.
Three weeks estimated. Team of four. We committed.
Migration Strategy
We chose incremental over rewrite. Keep existing pages working. Migrate route by route. Test as we go. The right call in retrospect, but it created an awkward period where half the app ran on the old Pages Router and half on the new App Router, and error messages from Next.js weren't always clear about which system was complaining.
Started with pages that had the most to gain: blog renderer, sidebar, documentation pages. Almost entirely static content. The theory: move these to Server Components, and the libraries they depend on stop shipping to the browser.
// Before: client-side, ships markdown-it to every visitor
import { useState, useEffect } from 'react';
import MarkdownIt from 'markdown-it';
import Prism from 'prismjs'; // syntax highlighting, also in the bundle

export function PostRenderer({ rawMarkdown }) {
  const [html, setHtml] = useState('');

  useEffect(() => {
    const md = new MarkdownIt();
    setHtml(md.render(rawMarkdown));
  }, [rawMarkdown]);

  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
// After: Server Component, markdown-it stays on the server
import MarkdownIt from 'markdown-it';
import { db } from '@/lib/db'; // shared Prisma client

export async function PostRenderer({ slug }) {
  const post = await db.post.findUnique({ where: { slug } });
  const md = new MarkdownIt();
  const html = md.render(post.content);
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
No "use client". No useState, no useEffect. Component runs on the server, renders HTML, sends it down. Browser never sees markdown-it. For these simple cases, the promise delivered exactly as advertised.
The second decision: what to do about Redux. A global store handling auth state, shopping cart, UI preferences, and also a pile of data that was really just server state. Product listings, blog metadata, user profiles. All piped through Redux.
Server Components don't have access to React Context. No Provider tree. No useSelector. We had to split things. Interactive stuff (cart, theme toggle, notification preferences) went to a client-side Zustand store; we dropped Redux in the process, partly for this reason and partly because nobody enjoyed writing reducers anymore. Everything else became direct database queries inside Server Components.
Architecturally correct move. Took a full week longer than the estimate.
Week Two Problems
Third-party libraries were the first real surprise. Roughly half our react-* npm packages broke inside Server Components. They used hooks or browser APIs internally but didn't have "use client" directives in their exports. Error messages were sometimes helpful, sometimes a cryptic "window is not defined" webpack error buried three dependencies deep.
Our fix, used over and over:
// components/client-wrappers/chart-wrapper.tsx
"use client";
export { BarChart } from 'react-charts-lib';
Re-export the component from a file with the client directive. Felt hacky. Worked fine. Ended up with a client-wrappers directory containing about twelve of these files. The library ecosystem hasn't caught up yet.
The second surprise was more insidious. We shipped a build that looked fine locally and passed tests, then noticed the client bundle had ballooned by 2MB. A Server Component imported from a shared utility file. That utility file also exported a map component pulling in Mapbox GL. The bundler saw an import from a file containing both server and client exports and pulled everything into the client bundle. The fix: split the utility file in two, server-only exports and client exports. We didn't catch it for two days. It could easily have reached production.
This is the kind of thing that makes me nervous about RSC long-term. The boundary between server and client code is invisible in the file system. No visual distinction. You have to be disciplined about imports. Discipline doesn't scale to twenty developers.
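One guard we adopted later, and should have had from day one, is the `server-only` package: importing a file marked with it from any client-bundled code fails the build instead of silently bloating the bundle. A sketch, assuming the split-utility layout described above (the file path and `db` import are illustrative):

```typescript
// lib/server/map-data.ts
// The poison-pill import: if a "use client" module ever pulls this
// file in, the build errors out instead of shipping Mapbox GL.
import 'server-only';

import { db } from '@/lib/db'; // hypothetical shared Prisma client

export async function getMapMarkers() {
  return db.marker.findMany();
}
```

It doesn't make the boundary visible in the file system, but it does make crossing it loud.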
The "use client" boundary created structural puzzles too. A Server Component can render a Client Component, fine. But a Client Component can't import a Server Component. We had a server-rendered sidebar that needed to sit inside a client-side interactive panel, and had to restructure so the panel received the sidebar as children:
// layout.tsx (Server Component)
import { InteractivePanel } from './interactive-panel'; // Client Component
import { Sidebar } from './sidebar'; // Server Component

export default function Layout() {
  return (
    <InteractivePanel>
      <Sidebar />
    </InteractivePanel>
  );
}
Logical once the mental model clicks. But we had three or four spots where the component hierarchy needed rearranging, each one a small puzzle.
The Team Conversation
This section isn't about code.
Ravi's objection (Prisma queries in JSX) was legitimate. The argument for it: Server Components run on the server. Building a REST endpoint to serialize data to JSON, send it over HTTP, deserialize it, and put it into state adds latency and complexity for an abstraction boundary that no longer exists. The component is already on the machine with the database.
The argument against: where do queries go when a mobile app needs the same data? Where does business logic live? If validation and data access scatter across fifty React components, how do you audit it?
We didn't fully resolve it. We landed on a compromise: data access goes through a lib/queries layer of plain async functions that call Prisma and return typed objects. Server Components call these functions, not raw Prisma. Queries are colocated with UI, but the actual database interaction sits in a shared layer that a future API could also consume.
// lib/queries/posts.ts
import { db } from '@/lib/db'; // shared Prisma client

export async function getPostBySlug(slug: string) {
  return db.post.findUnique({
    where: { slug },
    include: { author: true, tags: true },
  });
}
// app/blog/[slug]/page.tsx (Server Component)
import { notFound } from 'next/navigation';
import { getPostBySlug } from '@/lib/queries/posts';
import { PostRenderer } from '@/components/post-renderer'; // the Server Component from earlier

export default async function BlogPost({ params }) {
  const post = await getPostBySlug(params.slug);
  if (!post) notFound();
  return <PostRenderer post={post} />;
}
Ravi was okay with this. Meera thought we should have kept a full API layer. I'm not sure who's right. The lib/queries approach works now, but I can see it getting messy as the app grows. There's a version of this codebase six months from now where someone puts an auth check in a component instead of the query layer and we have a security hole. The technology made that mistake easier to make; process has to catch it.
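That failure mode is worth designing against explicitly: the guard belongs in the query function, so every caller goes through it. A self-contained sketch of the idea, where the Session type and the in-memory posts array stand in for our real auth helper and the Prisma client:

```typescript
// Illustrative sketch: Session and the posts array stand in for the
// real auth helper and Prisma client.
interface Session {
  userId: string;
  role: 'editor' | 'viewer';
}

const posts = [
  { slug: 'rsc-notes', draft: true, authorId: 'u1' },
  { slug: 'hello-world', draft: false, authorId: 'u2' },
];

// The auth check lives in the query layer, so a page, a layout, or a
// future API route all hit the same guard.
export async function getDraftPosts(session: Session | null) {
  if (!session || session.role !== 'editor') {
    throw new Error('Unauthorized: drafts require the editor role');
  }
  return posts.filter((p) => p.draft);
}
```

A component that forgets to check permissions can still only reach data through functions that check for it.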
Cultural adjustment period was real too. Frontend devs were used to components as browser things โ effects, event handlers, DOM manipulation. Now some components were basically controller functions that happened to return JSX. Two people said it clicked after about a week. One person said it still felt wrong at the end of the migration. That person isn't wrong.
Caching Cost More Time Than Expected
Next.js caches aggressively in the App Router. fetch calls in Server Components are cached by default. Route segments get statically rendered where possible. Great for performance; confusing for development, where you change data in the database and the page still shows old content.
Hit this on day three. Someone updated a blog post in the CMS. Page showed the old version. Spent an hour thinking our queries were broken before realizing the page was cached.
// Force dynamic rendering for pages that need fresh data
export const dynamic = 'force-dynamic';
// Or revalidate on a timer
export const revalidate = 60; // seconds
Content pages: 60-second revalidation was fine. Dashboard and user-specific pages: force-dynamic. Blog: on-demand revalidation triggered by a CMS webhook.
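The CMS side of the on-demand revalidation is a small route handler. `revalidatePath` is the real Next.js API from next/cache; the route path, header name, and environment variable here are our choices, not a framework convention:

```typescript
// app/api/revalidate/route.ts
import { NextResponse } from 'next/server';
import { revalidatePath } from 'next/cache';

export async function POST(request: Request) {
  // Shared secret so only the CMS webhook can purge the cache
  const secret = request.headers.get('x-webhook-secret');
  if (secret !== process.env.CMS_WEBHOOK_SECRET) {
    return NextResponse.json({ ok: false }, { status: 401 });
  }

  const { slug } = await request.json();
  revalidatePath(`/blog/${slug}`); // purge just the updated post
  return NextResponse.json({ ok: true });
}
```

The CMS calls this after every publish, so blog pages stay statically cached until the moment the content actually changes.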
The caching system is powerful but has a lot of knobs. More time went into configuring caching behavior than into the actual component migration. Edge cases (like what happens when a cached page renders a component that calls a non-cached fetch) weren't obvious from the docs and required empirical testing.
Streaming and Suspense
One thing that worked better than expected. Wrapped slow data fetches in Suspense boundaries and the page shell renders immediately with the fast parts, then fills in slower sections as they resolve.
import { Suspense } from 'react';

export default function DashboardPage() {
  return (
    <div>
      <h1>Dashboard</h1>
      <Suspense fallback={<StatsSkeleton />}>
        <StatsPanel />
      </Suspense>
      <Suspense fallback={<ActivitySkeleton />}>
        <RecentActivity />
      </Suspense>
    </div>
  );
}
Both StatsPanel and RecentActivity are async Server Components fetching their own data. Page renders the heading and skeleton loaders immediately. Each section pops in as its data arrives. No client-side JavaScript orchestrating this. No loading state management code.
Probably the most satisfying part of the migration. The old app had a useEffect waterfall: fetch stats, wait, fetch activity, wait, render. Sequential by accident. Now both fetches run in parallel on the server and stream independently. The dashboard feels snappier even though the data doesn't arrive faster; it's just not serialized behind a queue anymore.
Results
Time-To-Interactive on a mid-range phone over 3G: 4.2 seconds → 1.1 seconds.
Client JavaScript for blog pages: 380KB → 97KB.
Lighthouse performance (mobile): 61 → 94.
First Contentful Paint: 2.8s → 0.6s.
Timeline: three weeks estimated, four and a half weeks actual. Team of four. Most of the overrun came from third-party library issues and the Redux-to-Zustand migration, not the RSC conversion itself.
These numbers justified the effort. A nearly 4x TTI improvement on mobile translates to measurable behavior changes. Bounce rate on mobile dropped in the two weeks after launch. I won't attribute it entirely to the migration (we fixed some content issues around the same time), but the correlation is there.
What I'd Do Differently
Map the third-party library situation before starting. Go through package.json, check each react-* dependency for "use client" support, build the wrapper files in advance. We did this reactively and it ate three days of scattered debugging.
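A rough version of that audit can be scripted. This is a crude heuristic, not a reliable check: it only catches a "use client" directive at the very top of a published file (a directive preceded by a comment banner will be missed), so treat the output as triage. Requires Node 20+ for the recursive readdir:

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Heuristic: a published file opts into the client bundle if it
// starts with a "use client" directive.
export function hasUseClientDirective(source: string): boolean {
  return /^\s*['"]use client['"]\s*;?/.test(source);
}

// Walk node_modules for react-* packages and report which ones ship
// at least one file carrying the directive.
export function auditReactPackages(nodeModules = 'node_modules'): Record<string, boolean> {
  const report: Record<string, boolean> = {};
  for (const name of fs.readdirSync(nodeModules)) {
    if (!name.startsWith('react-')) continue;
    const dir = path.join(nodeModules, name);
    const files = fs.readdirSync(dir, { recursive: true }) as string[];
    report[name] = files
      .filter((f) => /\.(js|mjs|cjs)$/.test(f))
      .some((f) => hasUseClientDirective(fs.readFileSync(path.join(dir, f), 'utf8')));
  }
  return report;
}
```

Packages that come back `false` but use hooks internally are the ones that need a client-wrapper file.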
Enforce the server/client file boundary from day one. Separate directories. Lint rules. Something. The shared utility file incident that bloated the bundle by 2MB was preventable with better project structure.
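For the lint-rule route, eslint-plugin-import's `no-restricted-paths` rule can encode the boundary. A sketch, assuming that plugin is installed; the directory names match the split we wished we'd had, not any standard layout:

```javascript
// .eslintrc.cjs -- sketch, assuming eslint-plugin-import is installed
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': ['error', {
      zones: [
        // Client component code may not reach into server-only modules
        { target: './components/client', from: './lib/server' },
      ],
    }],
  },
};
```

Combined with a directory convention, this turns the invisible boundary into a CI failure instead of a 2MB bundle surprise.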
Spend more time upfront aligning the team on where data access lives. The tension around database queries in components was real friction. Would have moved faster with the lib/queries convention agreed on before writing the first Server Component.
And if you're thinking about web accessibility, Server Components actually help in some ways: semantic HTML rendered on the server means screen readers get content immediately instead of waiting for JavaScript to hydrate.
The styling story is also worth getting right early; we landed on the hybrid approach described in my Tailwind vs CSS Modules comparison, which works well with the server/client split.
Was the migration worth it? The performance numbers say yes. The architecture is better in most places. But there's a new category of mistakes to make (invisible bundle boundary violations, caching surprises, the mental model split between server and client code), and I don't think we've built the habits to catch them reliably yet. Young pattern. Still learning its shape.
Error Handling Across the Boundary
One thing that hasn't been mentioned in most RSC write-ups: error handling gets split between two runtimes, and the error experience differs depending on where the failure occurs.
A Server Component that throws during rendering? Next.js catches it and shows the nearest error.tsx boundary. The user sees your error UI. The server logs the full stack trace. Debugging is straightforward: the error happened on your server, and you have the full context.
A Client Component that throws? Same error.tsx boundary catches it (if one exists), but the error happened in the user's browser. Your server logs don't have the stack trace. You need client-side error reporting (Sentry, LogRocket, whatever) to capture it. Different tooling for the same symptom.
A Server Action that fails? The error propagates to the calling Client Component. If you're using useFormState or useTransition, you can handle it in the client. If you're using a raw form action, an unhandled error from the Server Action shows the generic error boundary.
We ended up needing different error handling strategies for each case. Server Component errors: caught by error boundaries, logged by our server monitoring. Client Component errors: caught by error boundaries, reported to Sentry. Server Action errors: returned as typed error objects from the action, handled in the calling component's UI. Three patterns for what used to be one pattern in the SPA world.
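The typed-error-object pattern is just a discriminated union returned from the action instead of a thrown exception. A sketch of the shape; the 'use server' directive and the in-memory comments array are illustrative stand-ins for our real action and Prisma call:

```typescript
'use server';

// Discriminated union: the caller must check `ok` before touching data,
// so failure handling is enforced by the type system.
export type ActionResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string };

// Stand-in for the database
const comments: { postId: string; body: string }[] = [];

export async function addComment(
  postId: string,
  body: string
): Promise<ActionResult<{ count: number }>> {
  if (body.trim().length === 0) {
    // Return the failure as data instead of throwing, so the calling
    // Client Component renders it inline without an error boundary.
    return { ok: false, error: 'Comment cannot be empty' };
  }
  comments.push({ postId, body });
  return { ok: true, data: { count: comments.length } };
}
```

The calling Client Component branches on `result.ok` and shows `result.error` in the form UI; only genuinely unexpected failures reach the error boundary.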
Not unmanageable. But it's another dimension of complexity that the migration introduced. When something goes wrong, the first diagnostic question used to be "what endpoint failed?" Now it's "which execution environment did this happen in?" followed by looking at different tools depending on the answer.
Testing Server Components
Testing strategy changed more than we anticipated. In the SPA world, most component tests rendered components in jsdom, mocked API calls, and checked the output. Server Components don't render in jsdom; they run on the server and produce a serialized payload. Our existing test suite couldn't test them without significant rework.
We ended up splitting the testing approach. For Server Components, integration tests that hit actual routes through Next.js's test utilities and check the rendered HTML. For Client Components, the existing React Testing Library setup still worked fine. The split isn't ideal (two testing patterns mean two mental models), but it reflects the actual architecture. Pretending Server Components are browser components in your test suite leads to false confidence.
Would I Do It Again?
Yes. The performance numbers are too strong to ignore. But I'd go in with different expectations about the timeline and the types of challenges. The RSC conversion itself is not the hard part. The hard part is the ecosystem (third-party library compatibility), the team alignment (where does data access live), and the new failure modes (invisible bundle violations, caching surprises, split error handling).
If someone asked me "should my team adopt Server Components?" the honest answer depends on two things: what percentage of your pages are content-heavy versus interaction-heavy, and how much patience your team has for new mental models. High content, patient team? Do it. The gains are real. Low content, team already fatigued from recent tooling changes? Maybe wait another year for the ecosystem to mature.
Four months in and I still check the bundle analyzer more often than I used to.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.