GraphQL vs REST: A Dialogue Between Frontend and Backend
A pragmatic debate on data fetching, over-fetching, N+1 queries, and caching complexities.

The Comparison That Won't Die
Every few months someone publishes a new "GraphQL vs REST" article, and every few months the comments fill up with people who've already made up their minds. I've read dozens of these. I've been the person in the comments section too, early on, when I thought GraphQL was obviously the future and REST was the thing we'd all look back on with mild embarrassment. Like how we look at SOAP now.
I was wrong about that. Not entirely wrong (GraphQL really is better for certain problems) but wrong about it being a one-directional upgrade. Three years and four production projects later, I think the comparison keeps coming up not because the community can't make up its mind, but because the answer genuinely depends on what you're building, who's building it, and how much operational overhead you're willing to absorb.
This isn't a "which one wins" article. I don't think either one wins. I'm going to walk through the differences I've actually felt in practice, the ones that mattered when I was shipping real features under real deadlines, and then tell you what I ended up choosing for my most recent project. Including the part where I'm still not sure I chose right.
How They Actually Differ (In Practice, Not In Theory)
The theoretical differences are well-documented. REST uses multiple endpoints, one per resource. GraphQL uses a single endpoint with a query language. REST maps cleanly to HTTP verbs. GraphQL sends everything as POST. You know all this already.
What the theory doesn't capture is how these differences ripple through your codebase over months of development. Let me be specific.
The Data Fetching Shape Problem
On one project, a dashboard-heavy internal tool for a logistics company, we started with REST. The main dashboard needed to display a user's profile, their recent shipments, the status of each shipment, and the warehouse associated with each one. Four different resources, three levels of nesting.
The REST version looked like this in practice:
// Frontend code to assemble the dashboard
const user = await fetch('/api/users/me').then(r => r.json());
const shipments = await fetch(`/api/users/${user.id}/shipments?limit=10`).then(r => r.json());
const statuses = await Promise.all(
  shipments.map(s => fetch(`/api/shipments/${s.id}/status`).then(r => r.json()))
);
const warehouses = await Promise.all(
  shipments.map(s => fetch(`/api/warehouses/${s.warehouseId}`).then(r => r.json()))
);
// Now stitch it all together on the client
That's 22 HTTP requests for one page load. On a good connection, this took around 1.8 seconds. On the mobile connection our warehouse managers were actually using? More like 5-6 seconds. Unusable.
The obvious fix is to create a custom endpoint, GET /api/dashboard, that returns everything pre-assembled. And we did that. But then the design changed. They wanted to add the driver's name to each shipment. Then they wanted the last three status changes instead of just the current one. Each design change meant a backend ticket, a PR review, a deployment. The custom endpoint kept growing, returning fields that some views used and others didn't.
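For a sense of what that endpoint involved, here's roughly the shape it took. This is a sketch rather than our actual code; the query helpers (getUser, getShipmentsForUser, and so on) are hypothetical stand-ins for hand-written, hand-indexed SQL, and auth is elided.

// Everything the dashboard needs, assembled server-side in one response
// (assumes an Express app and the hypothetical query helpers named below)
app.get('/api/dashboard', async (req, res) => {
  const user = await getUser(req.userId);
  const shipments = await getShipmentsForUser(user.id, { limit: 10 });

  // One batched query per relationship, joined together by hand
  const [statuses, warehouses] = await Promise.all([
    getStatusesForShipments(shipments.map(s => s.id)),
    getWarehousesByIds(shipments.map(s => s.warehouseId)),
  ]);

  res.json(assembleDashboard(user, shipments, statuses, warehouses));
});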
When we eventually rewrote it with GraphQL, the frontend could just ask for what it needed:
query Dashboard {
  me {
    name
    email
    role
    shipments(limit: 10) {
      trackingNumber
      createdAt
      status {
        current
        history(limit: 3) {
          status
          timestamp
        }
      }
      warehouse {
        name
        city
      }
      driver {
        name
        phone
      }
    }
  }
}
One request. The response shape matches the query shape. When the design team wanted to add driver.phone, the frontend developer added it to the query and it just worked, because the schema already exposed it. No backend ticket required.
This is the GraphQL sales pitch and it's real. I'm not going to pretend it isn't. For read-heavy applications with deeply nested data and a frontend team that iterates quickly on UI design, the developer experience improvement is significant.
But.
What Moved to the Backend
All that flexibility came with a cost I didn't fully appreciate until I was debugging production issues at 11 PM.
The GraphQL server now had to resolve every field in that nested query. The naive implementation did exactly what you'd expect: N+1 queries against the database. Fetching 10 shipments, then 10 individual status lookups, then 10 warehouse lookups, then 10 driver lookups. We'd moved the waterfall from the frontend to the backend. The network was faster (one round trip instead of 22), but the database was doing roughly the same amount of work.
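Concretely, the naive field resolvers looked something like this. A sketch, not the production code: `db` is whatever query layer you already have, and driverId is an assumed column name mirroring the query above.

// Naive field resolvers: each one runs once per parent object.
// Ten shipments in the list means ten warehouse queries, ten driver queries.
const resolvers = {
  Shipment: {
    warehouse: (shipment) =>
      db.query('SELECT * FROM warehouses WHERE id = $1', [shipment.warehouseId]),
    driver: (shipment) =>
      db.query('SELECT * FROM drivers WHERE id = $1', [shipment.driverId]),
  },
};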
DataLoader helped. A lot, actually. For those unfamiliar: DataLoader batches individual lookups within a single tick of the event loop, so those 10 warehouse lookups become a single WHERE id IN (...) query. Here's roughly what the resolver looked like:
import DataLoader from 'dataloader';

const warehouseLoader = new DataLoader(async (ids) => {
  const warehouses = await db.query(
    'SELECT * FROM warehouses WHERE id = ANY($1)',
    [ids]
  );
  // Return rows in the same order as the input ids (DataLoader requires this)
  return ids.map(id => warehouses.find(w => w.id === id));
});

const resolvers = {
  Shipment: {
    warehouse: (shipment) => warehouseLoader.load(shipment.warehouseId),
  },
};
This works well. But you have to write a DataLoader for every relationship. And you have to think about cache invalidation within request scope. And you have to be careful about DataLoader instances leaking across requests (each request needs its own set of loaders). It's not hard, but it's boilerplate that REST doesn't require because REST endpoints typically map to one or two well-defined database queries that you write by hand and can optimize with indexes you understand.
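The per-request part is worth spelling out. With Apollo Server, the usual pattern is to build a fresh set of loaders in the context function, so batching and request-scoped caching never leak between users. A sketch, assuming Apollo Server 4, a typeDefs defined elsewhere, and a hypothetical createLoaders() factory that builds a new warehouseLoader (and friends) on every call:

import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';

// Resolvers read loaders off the per-request context instead of module scope
const resolvers = {
  Shipment: {
    warehouse: (shipment, _args, { loaders }) =>
      loaders.warehouse.load(shipment.warehouseId),
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

await startStandaloneServer(server, {
  // Called once per incoming request, so every request gets its own loaders
  context: async () => ({ loaders: createLoaders() }),
});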
The deeper problem was query complexity. With REST, you know exactly what each endpoint does. You can look at GET /api/shipments/:id/status and trace it to one SQL query. You can index for it. You can put a cache header on it and be done.
With GraphQL, any client can compose any query the schema allows. Most of those compositions are fine. Some of them are accidentally expensive. And a few, if you're unlucky and your schema has circular references, can be deliberately expensive. We had to add query depth limiting, query cost analysis, and request timeout middleware. Three pieces of infrastructure that a REST API just doesn't need.
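None of it is exotic, to be fair. The depth limit, for example, is a few lines with the graphql-depth-limit package; a sketch with Apollo Server, where the limit of 8 is an arbitrary choice and typeDefs/resolvers are assumed to exist:

import { ApolloServer } from '@apollo/server';
import depthLimit from 'graphql-depth-limit';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Reject queries nested more than 8 levels deep before executing anything
  validationRules: [depthLimit(8)],
});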
Caching: Where REST Has a Genuine, Undeniable Advantage
This section is short because the point is straightforward.
REST uses GET requests. GET requests have URLs. URLs are the fundamental unit of HTTP caching: your browser caches them, CDNs cache them, reverse proxies cache them. You add Cache-Control: public, max-age=300 to your shipment endpoint and every CDN node on the planet knows what to do with it. This isn't a small thing. For read-heavy public APIs, HTTP caching can reduce your server load by 90% or more, and the infrastructure for it has been refined over decades.
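In Express terms, the entire caching story for that status endpoint can be one header. A sketch; getShipmentStatus is a hypothetical helper.

app.get('/api/shipments/:id/status', async (req, res) => {
  const status = await getShipmentStatus(req.params.id);
  // Browsers, CDNs, and reverse proxies can all serve this for 5 minutes
  res.set('Cache-Control', 'public, max-age=300');
  res.json(status);
});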
GraphQL uses POST requests to a single endpoint. CDNs don't cache POST requests by default. You can work around this with persisted queries (where each query gets a hash and can be sent as a GET with a query ID), but now you're maintaining a query registry, coordinating between frontend and backend deployments, and adding a layer of tooling that REST gives you for free. Apollo and Relay have client-side normalized caches that are impressive engineering, but they're solving a problem that REST largely avoids by leaning on HTTP's built-in caching model.
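For completeness, here's roughly what the persisted-query workaround looks like on the client with Apollo. A sketch: the crypto-hash package supplies the hashing function, and the /graphql URI is a placeholder.

import { ApolloClient, InMemoryCache, HttpLink } from '@apollo/client';
import { createPersistedQueryLink } from '@apollo/client/link/persisted-queries';
import { sha256 } from 'crypto-hash';

// Send a query hash instead of the full query text, and use GET for hashed
// queries so CDNs finally have a cacheable URL to work with
const link = createPersistedQueryLink({ sha256, useGETForHashedQueries: true })
  .concat(new HttpLink({ uri: '/graphql' }));

const client = new ApolloClient({ link, cache: new InMemoryCache() });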
I'm not saying GraphQL caching doesn't work. It does. But it's opt-in complexity versus the built-in simplicity of REST plus HTTP headers. That distinction matters more than most GraphQL advocates admit.
The Schema as a Contract
Here's where things flip back in GraphQL's favor in a way that I think is underappreciated.
A GraphQL schema is a typed, self-documenting contract between the frontend and backend. It's not just documentation โ it's the actual source of truth for what data exists and how it's shaped. You can generate TypeScript types from it automatically. You can validate queries at build time. You can set up your editor to autocomplete fields as you write queries.
On the logistics project, we used graphql-codegen to generate TypeScript types from our schema. The frontend code looked like this:
// Auto-generated from schema
interface DashboardQuery {
  me: {
    name: string;
    email: string;
    role: UserRole;
    shipments: Array<{
      trackingNumber: string;
      createdAt: string;
      status: {
        current: ShipmentStatus;
        history: Array<{
          status: ShipmentStatus;
          timestamp: string;
        }>;
      };
      warehouse: {
        name: string;
        city: string;
      };
      driver: {
        name: string;
        phone: string;
      };
    }>;
  };
}
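The setup behind types like that is small. A sketch of a codegen.ts for graphql-codegen, with the schema URL and file paths as placeholders rather than our real ones:

import type { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
  schema: 'http://localhost:4000/graphql', // or a checked-in schema.graphql
  documents: ['src/**/*.tsx'],             // wherever your queries live
  generates: {
    'src/generated/graphql.ts': {
      plugins: ['typescript', 'typescript-operations'],
    },
  },
};

export default config;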
If a backend developer changed the schema (renamed a field, altered a type), the frontend build would break immediately with a clear error message. Not at runtime. At build time. Compare this with REST, where the "contract" is usually a Swagger/OpenAPI spec that may or may not be up to date, and type mismatches surface as mysterious undefined values in production.
This alone might justify GraphQL for any team where frontend and backend developers are different people. Which is... most teams.
But I want to be honest: you can get some of this with REST too. OpenAPI with code generation tooling like openapi-typescript has gotten quite good. It's more work to set up, and in my experience the specs drift from reality more easily than a GraphQL schema does (because the schema IS the implementation, not a separate document describing it). But it's possible. The gap here is real but not as wide as the GraphQL community sometimes implies.
Mutations and Side Effects
Mutations in GraphQL have always felt a little awkward to me. In REST, the HTTP verb tells you what kind of operation is happening. POST creates, PUT replaces, PATCH updates, DELETE deletes. Proxies, load balancers, and monitoring tools understand these verbs. Your logging infrastructure can tell you at a glance how many writes vs reads you're handling.
In GraphQL, everything is a POST. A mutation is a query with the word mutation at the top:
mutation UpdateShipmentStatus($id: ID!, $status: ShipmentStatus!) {
  updateShipmentStatus(id: $id, status: $status) {
    id
    status {
      current
      updatedAt
    }
  }
}
It works fine. The type system ensures you send the right arguments. But you lose the HTTP-level semantics that two decades of web infrastructure has been built around. Your API gateway can't distinguish reads from writes at the transport layer without parsing the GraphQL query body. Your rate limiting has to be smarter. Your logging needs more context.
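To make the "parsing the query body" point concrete, here's the kind of shim you end up writing just to tell reads from writes for logging or rate limiting. A sketch: it's deliberately naive about fragments and whitespace, it assumes express.json() has already parsed the body, and the metrics client is hypothetical.

app.use('/graphql', (req, res, next) => {
  // Every operation arrives as POST /graphql, so peek at the operation text
  const operation = String(req.body?.query ?? '').trimStart();
  const isWrite = operation.startsWith('mutation');
  metrics.increment(isWrite ? 'graphql.writes' : 'graphql.reads');
  next();
});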
For simple CRUD operations, REST is more natural. The URL structure maps to your resource hierarchy. POST /api/shipments creates a shipment. GET /api/shipments/abc123 retrieves it. There's an elegance to this that I don't think GraphQL improves on for basic operations.
Where mutations shine is when the operation returns complex data. After updating a shipment status, you might want the updated status, the new estimated delivery time that was recalculated, and the notification that was sent, all in one response. With REST, you'd either return all of this from the PUT endpoint (mixing concerns) or make the client send additional GET requests afterward. With GraphQL, the mutation response is just another query result: you ask for what you need.
File Uploads, Webhooks, and the Stuff Nobody Talks About
Some things just don't fit GraphQL well.
File uploads. Yes, there's a multipart request spec for GraphQL file uploads. No, it's not pleasant to implement. Most teams I've talked to end up using a REST endpoint for file uploads alongside their GraphQL API. Which is fine, but it means you're maintaining two API styles anyway.
Webhooks. If you're building an API that external systems consume via webhooks, those are REST. Your payment processor calls a URL with a POST body. Your CI system hits an endpoint. These don't go through GraphQL.
Simple public APIs. If your API is mostly "get a list of things" and "get one thing by ID," the overhead of a GraphQL schema, resolvers, and query execution engine is hard to justify. A few well-designed REST endpoints with good documentation will serve you better and be easier for third-party developers to consume.
Health checks, authentication endpoints, static asset serving: all REST. Always will be.
The N+1 Problem: Both Sides Have It
This point gets muddled in most comparisons, so I want to be clear.
REST has an N+1 problem on the client side. You fetch a list of items, then for each item you fetch related data. N+1 network requests. This is visible, painful on slow connections, and easy to understand.
GraphQL has an N+1 problem on the server side. Your resolver fetches a list of items, then for each item it resolves related fields with individual database queries. N+1 database queries. This is hidden inside your server, less visible in monitoring, and requires DataLoader or similar batching tools to fix.
Neither approach eliminates the N+1 problem. They move it to different layers of the stack. GraphQL's layer (the database) is arguably easier to optimize, with DataLoader, query planning, and caching, but the optimization isn't free and it isn't automatic.
I've seen teams adopt GraphQL expecting the N+1 problem to vanish and then discover their database is getting hammered because nobody set up DataLoaders for every resolver. The N+1 didn't go away. It just became someone else's problem.
What I Actually Chose for My Last Project
My most recent project was a mid-size SaaS application: a project management tool with a React frontend and a Node.js backend. Moderate complexity. A handful of deeply nested views (project boards with tasks, assignees, labels, comments) but also plenty of simple CRUD (creating organizations, managing billing, user settings).
I chose GraphQL for the core product data: projects, tasks, comments, the board views. The frontend team (two developers including me) iterated on UI designs constantly, and being able to change what data a component fetched without touching the backend was genuinely productive. We used Apollo Client on the frontend and Apollo Server on the backend. Code generation for TypeScript types. DataLoaders for all the relationships.
I kept REST for authentication (JWT token flow with /auth/login, /auth/refresh, /auth/logout), file uploads (presigned S3 URLs via /api/uploads), webhook receivers (Stripe, GitHub), and the public API (other services integrating with us expected REST).
It's a hybrid. More infrastructure to maintain than picking one, sure. Two sets of middleware, two sets of error handling patterns, two things to monitor. But each piece plays to its strengths, and in practice the boundary between them is clean enough that it doesn't cause confusion.
Mostly.
There's one thing that still nags at me. The GraphQL portion of our API has about 40 types and 120 fields now. The schema file is getting long. The resolver map is getting complicated. And I've started noticing that new developers on the project take longer to understand the GraphQL parts than the REST parts. The REST endpoints are just functions: input comes in, output goes out, you can read them top to bottom. The GraphQL resolvers are scattered across files, connected by a schema, with DataLoaders adding another layer of indirection. It's not unmanageable, but there's a learning curve that REST doesn't have.
I sometimes wonder if I should have just built custom REST endpoints for the complex views. A GET /api/boards/:id?include=tasks,assignees,labels with sparse fieldsets. It would've been more backend work upfront but the mental model would be simpler for the team. The include parameter approach that JSON:API popularized isn't as flexible as GraphQL queries, but it might have been flexible enough. I'll never know, because we went the other way, and rewriting an API you've already built to test a hypothesis is a luxury most teams don't have.
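For what it's worth, the version I keep imagining looks something like this. A sketch: the query helpers are made up, and it obviously only supports the includes you explicitly code for, which is exactly the trade-off.

// GET /api/boards/:id?include=tasks,assignees,labels
app.get('/api/boards/:id', async (req, res) => {
  const include = new Set(String(req.query.include ?? '').split(',').filter(Boolean));
  const board = await getBoard(req.params.id);

  if (include.has('tasks')) {
    board.tasks = await getTasksForBoard(board.id, {
      withAssignees: include.has('assignees'),
      withLabels: include.has('labels'),
    });
  }
  res.json(board);
});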
So. GraphQL for complex, deeply nested, rapidly evolving read-heavy views where the frontend team needs autonomy. REST for everything else: auth, files, webhooks, simple CRUD, public APIs, anything where HTTP caching matters.
That's my current thinking. Ask me again in a year and the specifics might shift, but the general shape of the answer probably won't: use the right tool for the shape of the data, and don't be afraid to mix them in the same project.
Written by
Anurag Sinha
Developer who writes about the stuff I actually use day-to-day. If I got something wrong, let me know.