GraphQL vs REST: A Dialogue Between Frontend and Backend
A pragmatic debate on data fetching, over-fetching, N+1 queries, and caching complexities.

REST is boring. That's a compliment.
Boring technology has twenty years of caching infrastructure. Boring technology has monitoring tools that understand HTTP verbs without custom configuration. Boring technology is the thing your new junior developer already knows how to debug at 2 AM when something goes wrong in production. There's a reason boring sticks around.
GraphQL is not boring. GraphQL is interesting, which, in infrastructure, can go either way. Three years and four production projects later, my opinion on the GraphQL-versus-REST question has settled into something that satisfies neither camp: both are fine, one is usually better for a given problem, and the industry keeps arguing because the answer depends on context that most comparison articles skip over.
What follows is the comparison as I've experienced it. Not the theoretical version from documentation sites. The version where you're shipping features under deadline pressure and the technical choice affects how fast you move and what breaks when traffic spikes.
The Data Shape Problem Is Real
On one project, a dashboard for a logistics company (an internal tool), we started with REST. The main dashboard screen needed the user's profile, their ten most recent shipments, the status of each shipment, and the warehouse associated with each one. Four resources, three levels of nesting.
Frontend code to assemble that page:
// Frontend code to assemble the dashboard
const user = await fetch('/api/users/me').then(r => r.json());
const shipments = await fetch(`/api/users/${user.id}/shipments?limit=10`).then(r => r.json());
const statuses = await Promise.all(
  shipments.map(s => fetch(`/api/shipments/${s.id}/status`).then(r => r.json()))
);
const warehouses = await Promise.all(
  shipments.map(s => fetch(`/api/warehouses/${s.warehouseId}`).then(r => r.json()))
);
// Now stitch it all together on the client
Twenty-two HTTP requests for one page load. Good connection: about 1.8 seconds. The mobile connection our warehouse managers actually used in the field? Five to six seconds. Not usable.
Obvious response: build a custom endpoint. GET /api/dashboard that pre-assembles everything. Did that. Then the design changed: add the driver's name to each shipment. Changed again: show the last three status updates instead of just the current one. Every design revision meant a backend ticket, a code review, a deploy. The custom endpoint kept accumulating fields that some views consumed and others ignored.
GraphQL version:
query Dashboard {
  me {
    name
    email
    role
    shipments(limit: 10) {
      trackingNumber
      createdAt
      status {
        current
        history(limit: 3) {
          status
          timestamp
        }
      }
      warehouse {
        name
        city
      }
      driver {
        name
        phone
      }
    }
  }
}
One request. Response shape matches the query. When the design team wanted driver.phone, the frontend developer added one line to the query. No backend ticket. The schema already exposed that field โ just nobody was asking for it yet.
That's the GraphQL pitch in practice. For read-heavy applications with deeply nested data and a frontend team that iterates quickly on UI design, the developer experience improvement is substantial. Not exaggerating when I say it changed how fast our frontend people could move.
But.
The Complexity Didn't Disappear: It Relocated
All that flexibility moved problems from visible places to hidden ones.
The naive GraphQL implementation did exactly what you'd predict: N+1 queries against the database. Fetch 10 shipments, then 10 individual status lookups, then 10 warehouse lookups, then 10 driver lookups. The network waterfall from the frontend had been replaced by a database waterfall on the backend. Network round trips were faster (one HTTP request instead of twenty-two), but the database was doing roughly the same total work.
DataLoader fixed the batching problem:
import DataLoader from 'dataloader';

const warehouseLoader = new DataLoader(async (ids) => {
  const warehouses = await db.query(
    'SELECT * FROM warehouses WHERE id = ANY($1)',
    [ids]
  );
  // Return results in the same order as the input ids
  return ids.map(id => warehouses.find(w => w.id === id));
});

const resolvers = {
  Shipment: {
    warehouse: (shipment) => warehouseLoader.load(shipment.warehouseId),
  },
};
Those 10 individual warehouse fetches collapse into one WHERE id IN (...) query. Works well. But every relationship needs its own DataLoader. Cache invalidation within request scope needs thought. Loader instances can't leak across requests โ each request gets a fresh set. None of it is hard, but it's boilerplate that REST doesn't require, because REST endpoints map to one or two hand-written queries you can optimize and index for directly.
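The batching trick at DataLoader's core is small enough to sketch. This toy loader (an illustration, not the real library) collects every key requested during one tick of the event loop, then issues a single batched fetch:

```typescript
// Toy illustration of DataLoader's core trick: keys requested during one
// event-loop tick are queued, then fetched together in a single batch call.
class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // First load this tick: schedule exactly one flush as a microtask.
        queueMicrotask(() => void this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    // One batched call instead of batch.length individual ones.
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}
```

Ten `load` calls made while resolving a list of shipments land in a single `batchFn` invocation, which is the same collapse into one WHERE id IN (...) query described above. The real library adds per-key caching, error handling, and ordering guarantees on top.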
The deeper issue was query complexity. With REST, each endpoint does a known, predictable thing. You trace GET /api/shipments/:id/status to a single SQL query. You index for it. Put a cache header on it. Done. With GraphQL, any client can compose any query the schema permits. Most compositions are fine. Some are accidentally expensive. A few (if your schema has circular references and you're unlucky) can be deliberately expensive. We ended up adding query depth limiting, query cost analysis, and request timeout middleware. Three pieces of defensive infrastructure that a REST API just doesn't need.
That infrastructure wasn't in the project plan. Nobody budgeted for it. It showed up as necessary work after we discovered a query that joined six levels deep and brought the database to its knees during load testing.
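Depth limiting, at least, is cheap to approximate. A real implementation (a validation rule walking the parsed query AST, as libraries like graphql-depth-limit do) operates on the AST, but the principle fits in a few lines. Here a hypothetical helper just tracks brace nesting in the raw query string:

```typescript
// Rough sketch of query depth limiting. Production code should walk the
// parsed AST (a string literal containing braces would fool this version),
// but the principle is the same: measure nesting, reject before executing.
const MAX_DEPTH = 6; // hypothetical limit for this schema

function estimateDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') max = Math.max(max, ++depth);
    if (ch === '}') depth--;
  }
  return max;
}

function assertQueryAllowed(query: string): void {
  const d = estimateDepth(query);
  if (d > MAX_DEPTH) {
    throw new Error(`query depth ${d} exceeds limit ${MAX_DEPTH}`);
  }
}
```

The important property is that the check runs before any resolver fires: a pathological query costs a string scan, not a database meltdown.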
Caching Is Where REST Wins and It's Not Close
Keeping this section short because the point doesn't require elaboration.
REST uses GET requests. GET requests have URLs. URLs are the atomic unit of HTTP caching. Browsers cache them. CDNs cache them. Reverse proxies cache them. Slap Cache-Control: public, max-age=300 on your shipment endpoint and every CDN node on the planet knows the protocol. This infrastructure has been refined for decades. For read-heavy public APIs, HTTP caching can drop server load by 90% or more without you writing a single line of caching logic.
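The contract behind Cache-Control: public, max-age=300 is simple enough to state in code. A simplified sketch of the freshness rule every HTTP cache applies (the real rules in RFC 9111 also account for Age headers, revalidation, and more):

```typescript
// Simplified max-age freshness check: a cached response may be served
// without contacting the origin while its age is under max-age.
function isFresh(storedAtMs: number, maxAgeSeconds: number, nowMs: number): boolean {
  return (nowMs - storedAtMs) / 1000 < maxAgeSeconds;
}
```

Every browser, CDN node, and reverse proxy between the user and your server runs some version of this check for free, keyed by URL. That is the machinery GraphQL's single POST endpoint opts out of.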
GraphQL sends POST requests to a single endpoint. CDNs don't cache POST requests by default. Workarounds exist (persisted queries, where each query gets a hash and can be sent as a GET with a query ID), but now you're maintaining a query registry, coordinating frontend and backend deploys, and adding a tooling layer that REST gives you for free by virtue of mapping to how HTTP already works.
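The persisted-query mechanics are simple in outline; it's the operational coordination that costs. A minimal sketch of the registry idea, with hypothetical helper names, using Node's built-in crypto:

```typescript
import { createHash } from 'node:crypto';

// Sketch of persisted queries: each query is registered once (typically at
// build time); at runtime the client sends only the hash, e.g. as
// GET /graphql?id=<hash>, which a CDN can cache like any other GET URL.
const registry = new Map<string, string>();

function queryId(query: string): string {
  return createHash('sha256').update(query).digest('hex');
}

function register(query: string): string {
  const id = queryId(query);
  registry.set(id, query);
  return id;
}

function lookup(id: string): string | undefined {
  return registry.get(id); // server refuses ids it has never seen
}
```

The catch is visible in the shape of the code: the registry has to agree between the client build that produced the hash and the server deployment that resolves it, which is the deploy-coordination problem mentioned above.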
Apollo and Relay have impressive client-side normalized caches. They're solving a problem that REST mostly avoids by leaning on the browser's built-in caching model. Not saying GraphQL caching doesn't work. It does. But it's opt-in effort versus built-in behavior. That gap matters more than most GraphQL advocates acknowledge.
The Schema Contract: Where GraphQL Wins Back Points
A GraphQL schema is typed, self-documenting, and the actual source of truth, not a separate spec that might drift from reality. You generate TypeScript types from it. You validate queries at build time. Your editor autocompletes fields as you write.
We used graphql-codegen on the logistics project:
// Auto-generated from schema
interface DashboardQuery {
  me: {
    name: string;
    email: string;
    role: UserRole;
    shipments: Array<{
      trackingNumber: string;
      createdAt: string;
      status: {
        current: ShipmentStatus;
        history: Array<{
          status: ShipmentStatus;
          timestamp: string;
        }>;
      };
      warehouse: {
        name: string;
        city: string;
      };
      driver: {
        name: string;
        phone: string;
      };
    }>;
  };
}
Backend developer renames a field? Frontend build breaks immediately with a clear error. At compile time. Not at runtime. Not as a mysterious undefined showing up in production three days later when a user reports a blank screen. Compare that to REST, where the "contract" is typically a Swagger spec that may or may not reflect what the server actually returns today.
For any team where frontend and backend developers are different people (which is most teams), this matters a lot.
Fair to mention: REST can get partway here too. OpenAPI with code generation from tools like openapi-typescript has gotten solid. It's more setup work, and in my experience the specs drift from reality more easily than a GraphQL schema does (because the schema IS the implementation, not a document sitting next to it). But the gap isn't as wide as the GraphQL community sometimes suggests.
Mutations Feel Awkward in GraphQL
In REST, the HTTP verb communicates intent. POST creates. PUT replaces. PATCH updates. DELETE removes. Proxies, load balancers, and monitoring dashboards understand these verbs natively. Your logging infrastructure shows reads versus writes at a glance without parsing request bodies.
GraphQL: everything is POST. A mutation is written like a query, with the mutation keyword at the top:
mutation UpdateShipmentStatus($id: ID!, $status: ShipmentStatus!) {
  updateShipmentStatus(id: $id, status: $status) {
    id
    status {
      current
      updatedAt
    }
  }
}
Functional. Type-safe. But you lose the HTTP-level semantics that two decades of web infrastructure has been built around. Your API gateway can't separate reads from writes without parsing the GraphQL body. Rate limiting needs to be smarter. Access logging needs more context.
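To make that concrete: a hypothetical gateway-side classifier for REST needs nothing but the request line, while the GraphQL equivalent would have to parse the POST body and inspect the operation type:

```typescript
// Hypothetical sketch: classifying a REST call's intent from the method
// alone, with no body parsing -- which is what generic HTTP middleware
// (rate limiters, access logs, read replicas) quietly relies on.
type Intent = 'create' | 'read' | 'update' | 'delete' | 'unknown';

function classify(method: string): Intent {
  switch (method.toUpperCase()) {
    case 'POST':   return 'create';
    case 'GET':    return 'read';
    case 'PUT':
    case 'PATCH':  return 'update';
    case 'DELETE': return 'delete';
    default:       return 'unknown';
  }
}

// Every GraphQL call, read or write, arrives as POST /graphql, so this
// classifier would label all of them 'create'.
```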
For basic CRUD (create a thing, read the thing, update the thing, delete the thing), REST maps more naturally. The URL structure mirrors your resource hierarchy. POST /api/shipments creates one. GET /api/shipments/abc123 fetches it. There's a simplicity to that which GraphQL doesn't improve on for straightforward operations.
Mutations get interesting when the operation needs a complex return value. Update a shipment status and you also want the recalculated delivery estimate, the notification that was dispatched, the updated driver assignment, all in one response. REST would either return everything from the PUT endpoint (mixing concerns) or require additional GET requests after the mutation. GraphQL lets the client specify exactly what comes back. For complex workflows, that's a real win.
The Stuff That Just Doesn't Fit
Some things are REST-shaped regardless of what else you're building.
File uploads. There's a multipart request spec for GraphQL file uploads. Implementing it is not pleasant. Most teams I've talked to end up with a REST endpoint for uploads sitting alongside their GraphQL API. Which is fine, but it means maintaining two API styles.
Webhooks. Payment processor sends you a POST when a charge completes. CI system hits an endpoint when a build finishes. These are REST calls. They don't route through your GraphQL layer.
Simple public APIs. If outside developers are integrating with your service and the interaction model is "get a list of things" and "get one thing by ID," a few clean REST endpoints with good documentation will serve them better than asking them to learn your GraphQL schema.
Health checks, auth endpoints, static assets: all REST. Probably always will be.
N+1 Exists on Both Sides
This gets muddled in most comparisons so I want to be precise.
REST has an N+1 problem at the client layer. Fetch a list, then fetch related data for each item. N+1 network requests. Visible, painful on slow connections, easy to diagnose.
GraphQL has an N+1 problem at the server layer. Resolver fetches a list, then each item's resolver fires individual database queries for related fields. N+1 database calls. Hidden inside your server, less visible in monitoring, needs DataLoader or similar batching to fix.
Neither approach eliminates N+1. They shift it to different parts of the stack. GraphQL's version (database-side) is arguably easier to optimize: batching, query planning, and caching all happen in one place. But the optimization isn't free and it isn't automatic. I've seen teams adopt GraphQL expecting the N+1 problem to vanish, then discover their database getting hammered because nobody wrote DataLoaders for half the resolvers. The problem didn't leave. It just became someone else's problem to notice.
What I Picked for My Last Project
My most recent project was a mid-size SaaS app: a project management tool with a React frontend and a Node.js backend. Some deeply nested views (project boards with tasks, assignees, labels, comments) and plenty of simple CRUD (organizations, billing, user settings).
GraphQL for the core product data: projects, tasks, comments, board views. The frontend team (two people, myself included) was iterating on UI designs constantly. Being able to change what data a component fetched without creating a backend ticket was productive in a way that's hard to overstate. Apollo Client on the frontend. Apollo Server on the backend. Codegen for types. DataLoaders for relationships.
REST for everything around the edges. Authentication: a JWT flow with /auth/login, /auth/refresh, and /auth/logout (and if you take the REST route for auth, JWT verification is worth getting right from the start). File uploads via presigned S3 URLs through /api/uploads. Webhook receivers for Stripe and GitHub. The public API that external services consumed.
Hybrid setup. Two sets of middleware. Two error handling patterns. Two things to monitor. More infrastructure than picking one side. But each piece plays to its strengths, and the boundary between the two layers stayed clean enough that it didn't create daily confusion.
Mostly.
One thing nags at me. The GraphQL portion has grown to about 40 types and 120 fields. The schema file is long. The resolver map is more complicated than it feels like it should be. And new developers who join the project take longer to understand the GraphQL parts than the REST parts. REST endpoints are just functions: request comes in, response goes out, readable top to bottom. GraphQL resolvers are scattered across files, connected by a schema, with DataLoaders adding indirection on top of indirection. Manageable. But there's an onboarding cost that REST doesn't carry.
Sometimes I wonder whether I should have built custom REST endpoints for the complex views instead. A GET /api/boards/:id?include=tasks,assignees,labels with sparse fieldsets. More backend work upfront. Simpler mental model for the team. The include-parameter approach that JSON:API made popular isn't as flexible as GraphQL queries, but was it flexible enough? I can't know; I could be wrong. We went the other direction, and rewriting a working API to test a hypothesis is a luxury that doesn't exist in most schedules.
Versioning: A Subtle Difference
REST APIs typically version via URL (/api/v2/shipments) or headers. When you need to make a breaking change, you bump the version and maintain the old one until clients migrate. Clear, well-understood, occasionally painful when you're running three API versions simultaneously.
GraphQL takes a different approach: deprecate fields, add new ones, never break existing queries. The @deprecated directive marks fields as obsolete, and tooling can warn clients still using them. In theory, you never need a version bump because the schema only grows. In practice, this means old fields accumulate like sediment. Our schema still has fields deprecated eight months ago that nobody has removed, because removing them might break a client we've forgotten about. It's a tradeoff: no breaking changes, but the schema gets noisier over time.
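In schema terms, deprecation looks like this. Illustrative SDL: the @deprecated directive with a reason is standard GraphQL, but these field names are made up for the example:

```graphql
type Shipment {
  trackingNumber: String!
  status: ShipmentStatus!
  # Old single-value field kept alive for clients we may have forgotten about
  currentStatus: String @deprecated(reason: "Use status.current instead")
}
```

Nothing forces the deprecated field to ever go away, which is exactly how the sediment accumulates.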
Where This Leaves Me
GraphQL for complex, deeply nested, read-heavy views where the frontend team needs the freedom to reshape data requests without backend coordination. REST for auth, file handling, webhooks, simple CRUD, public-facing APIs, and anything where HTTP caching matters. The architecture choices extend beyond the API layer; if you're thinking about how to structure the backend more broadly, I wrote about the monolith vs microservices decision and how it interacts with API design.
That's the framework as of right now. Ask me in a year and the details might shift. The general shape (use the right protocol for the shape of the data, and don't be afraid to mix them) is the part I expect to hold.
Whether the hybrid approach on my current project was the right call? Four months in and I'm still not entirely sure. The GraphQL half is productive. It's also the half that takes the longest to explain to new people and the half where the most surprising bugs have surfaced. The REST half just works, quietly, in the way that boring infrastructure tends to.
Make of that what you will.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.