API Security Best Practices: A Direct Technical Breakdown
Authentication, authorization, rate limiting, and input validation security mechanics.

So I keep getting asked to review APIs for security. And every time, the same problems show up. Not obscure zero-days or anything exciting. Just... the basics, done wrong. Repeatedly. Across teams that should know better.
I figured I'd write this down once so I can link to it instead of repeating myself in Slack threads. This is the stuff that actually breaks in production. Not theoretical attacks from academic papers. Real things that have bitten me or people I work with.
JWT Verification Gone Wrong
I want to start here because it's the one that scares me the most when I find it.
JWTs get used everywhere for API authentication now. The idea is simple: the server signs a token, hands it to the client, and the client sends it back on every request. The server verifies the signature and trusts the claims inside. Works great in theory.
The problem shows up in how verification is implemented. A JWT has a header that specifies which algorithm was used to sign it. Something like "alg": "HS256". And here's where things go sideways: some JWT libraries, especially older versions, will read that header and use whatever algorithm it says. The client tells the server how to verify the token. You can probably see where this is going.
If an attacker sends a token with "alg": "none", a misconfigured parser might skip signature verification entirely. The token just... passes. No check. The attacker can put whatever claims they want inside.
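To make that concrete, here's what a forged unsigned token looks like. This is a sketch using nothing but Node's built-in Buffer; the claims are made up for illustration:

```javascript
// Forging an unsigned token: the header says alg "none" and the signature
// segment is left empty. Any verifier that trusts the header will accept
// whatever claims we put in the payload.
const b64url = (obj) =>
  Buffer.from(JSON.stringify(obj)).toString('base64url');

const header = b64url({ alg: 'none', typ: 'JWT' });
const payload = b64url({ sub: 'admin', role: 'admin' }); // attacker-chosen claims

// Note the trailing dot: the third (signature) segment is just empty.
const forgedToken = `${header}.${payload}.`;
console.log(forgedToken);
```

No key material involved anywhere. That's the whole attack.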
I ran into this on a project about two years ago. An Express API using an older version of jsonwebtoken with default settings. The library accepted "none" as a valid algorithm. We caught it in a security review, not in production, thank god. But it was sitting there for months before anyone noticed.
The fix is to pin the algorithm on the server side and never let the token dictate how it gets verified.
import jwt from 'jsonwebtoken';

const JWT_SECRET = process.env.JWT_SECRET;
const EXPECTED_ISSUER = 'api.technoknowledge24.com';

function verifyClientToken(token) {
  try {
    return jwt.verify(token, JWT_SECRET, {
      algorithms: ['HS256'],
      issuer: EXPECTED_ISSUER,
    });
  } catch (error) {
    return null;
  }
}
Two lines. algorithms: ['HS256'] and issuer: EXPECTED_ISSUER. That's all it takes to close the hole. And yet I keep finding APIs in the wild where this isn't set.
There's a second variant of this attack that's worth knowing about. If a server uses RS256 (asymmetric: signs with a private key, verifies with a public key), an attacker can set the algorithm to HS256 (symmetric) and sign the token with the public key, which is usually... public. The server then uses the public key as the HMAC secret and the signature checks out. Same fix: always pin the algorithm server-side.
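The confusion is easy to demonstrate with Node's built-in crypto module. The sketch below generates a throwaway RSA pair standing in for the server's real keys, then shows that an attacker HMAC-ing the signing input with the public PEM produces exactly the bytes a confused server would compute (the signing input string is an arbitrary example, not a real token):

```javascript
import { generateKeyPairSync, createHmac } from 'node:crypto';

// Throwaway RSA pair standing in for the server's real RS256 keys.
const { publicKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });
const publicPem = publicKey.export({ type: 'spki', format: 'pem' });

const signingInput = 'eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiJ9'; // header.payload

// The attacker signs with the *public* key as an HMAC secret...
const attackerSig = createHmac('sha256', publicPem)
  .update(signingInput)
  .digest('base64url');

// ...and a server that feeds its public key into HMAC verification
// computes the exact same value, so the forged signature "checks out".
const serverSig = createHmac('sha256', publicPem)
  .update(signingInput)
  .digest('base64url');

console.log(attackerSig === serverSig); // true
```

The only secret the attack needs is one that isn't secret at all.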
I should mention: if you're starting a new project today, consider using a managed auth service or at minimum a well-maintained library like jose instead of rolling your own JWT logic. But if you're maintaining existing code, check how your tokens are being verified. Right now.
IDOR: The Bug That's Too Simple to Notice
IDOR stands for Insecure Direct Object Reference and it's the kind of vulnerability that makes you feel dumb when you find it in your own code. Because it's not complicated. It's barely even a "hack."
Here's what happens. A user hits /api/invoices/1043 and gets their invoice. They change the URL to /api/invoices/1044. They get someone else's invoice. The API just looked up the record by ID without checking who was asking for it.
I've found this in code written by experienced developers. Not because they don't know about authorization, but because it's easy to forget when you're moving fast. You write the happy path, it works, you ship it, and the access check just... never gets added.
app.get('/api/documents/:id', requireAuth, async (req, res) => {
  const document = await prisma.document.findFirst({
    where: {
      id: req.params.id,
      ownerId: req.user.id
    }
  });
  if (!document) {
    return res.status(404).json({ error: 'Resource not found' });
  }
  res.json(document);
});
The line that matters is ownerId: req.user.id. Without it, you've just built a document browser for the entire database. With it, users can only access their own stuff. Simple.
One thing I want to flag about the error response here: I'm returning 404, not 403. This is intentional. A 403 tells the attacker "this resource exists but you can't access it." That's useful information for someone probing your API. It tells them they've found a valid ID and they just need to find a way around the permission check. A 404 gives them nothing. The resource might not exist at all, or it might belong to someone else. They can't tell.
There's a subtlety here too. IDOR doesn't just happen with URL parameters. It shows up in request bodies, query strings, even in websocket messages. Anywhere you take an ID from the client and use it to look up a resource, you need to scope the query to the authenticated user. Every time. There's no shortcut.
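One way to make that habit mechanical is a tiny helper that forces the scope onto every lookup, wherever the ID came from. This is a hypothetical sketch; the names (scopedWhere, ownerId) are illustrative, not from any library:

```javascript
// Hypothetical helper: wrap any client-supplied lookup criteria so the
// query is always scoped to the authenticated user.
function scopedWhere(clientWhere, userId) {
  // The spread runs first, so even a malicious ownerId smuggled into the
  // client input gets overwritten by the trusted one.
  return { ...clientWhere, ownerId: userId };
}

// A body-supplied ID gets the same treatment as a URL parameter:
const where = scopedWhere({ id: 'doc_123', ownerId: 'someone-else' }, 'user_42');
console.log(where); // { id: 'doc_123', ownerId: 'user_42' }
```

The point isn't this exact helper, it's that the scoping should be impossible to forget, not something you remember per-endpoint.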
I've started adding IDOR checks to my code review checklist. Like, literally a checklist item: "Does this endpoint fetch a resource by ID? Is the query scoped to the requesting user?" It's that common.
Payload Size and Shape Validation
This one is boring but it matters. If your API accepts arbitrary JSON with no size limit and no shape validation, you're inviting problems.
The obvious issue is memory exhaustion. Someone sends a 200MB JSON body and your server tries to parse the whole thing into memory. Node.js or Python or whatever you're using allocates a giant buffer, the garbage collector panics, and your server either crashes or becomes unresponsive.
The less obvious issue is when the payload is a reasonable size but has unexpected structure. Like, you expect {email: "...", password: "..."} but someone sends {email: "...", password: "...", isAdmin: true} and if your ORM is doing mass assignment, that extra field gets written to the database.
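The mass assignment half can be blocked with a plain allowlist even before any schema validation; pick here is a hypothetical helper, not a library function:

```javascript
// Minimal allowlist sketch: copy only the fields you expect, drop the rest.
function pick(body, allowedFields) {
  const out = {};
  for (const field of allowedFields) {
    if (Object.hasOwn(body, field)) out[field] = body[field];
  }
  return out;
}

const input = { email: 'a@b.com', password: 'hunter2hunter2', isAdmin: true };
const safe = pick(input, ['email', 'password']);
console.log('isAdmin' in safe); // false: the injected field never reaches the ORM
```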
Fix both problems at the same time. Cap body size at the middleware level, then validate the shape of what gets through.
import { z } from 'zod';

// Cap raw body size before any parsing happens
app.use(express.json({ limit: '10kb' }));

const userRegistrationSchema = z.object({
  email: z.string().email().max(100),
  password: z.string().min(12).max(64),
  bio: z.string().max(500).optional()
});

app.post('/api/users', async (req, res) => {
  const parsed = userRegistrationSchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ error: 'Invalid payload structure' });
  }
  // proceed with parsed.data: anything not in the schema is stripped out
});
The password max length of 64 isn't arbitrary, by the way. bcrypt has a maximum input length of 72 bytes. If someone sends a 10,000 character password, some bcrypt implementations will silently truncate it, which means the first 72 bytes become the actual password and the rest is ignored. Other implementations might hang trying to hash the whole thing. Either way, cap it.
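If you want to enforce that limit explicitly rather than trusting whatever your bcrypt implementation does, measure bytes, not characters. A minimal sketch:

```javascript
// Check byte length, not character length: multi-byte characters mean a
// short-looking password can still exceed bcrypt's 72-byte input limit.
function fitsBcryptLimit(password) {
  return Buffer.byteLength(password, 'utf8') <= 72;
}

console.log(fitsBcryptLimit('a'.repeat(72))); // true: 72 ASCII chars = 72 bytes
console.log(fitsBcryptLimit('€'.repeat(30))); // false: 30 chars, but 90 bytes
```

A schema's .max(64) on character count stays comfortably under the byte limit for ASCII, but the byte check is the one that actually matches what bcrypt sees.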
I use Zod here because it's what I'm most familiar with in the JS/TS world, but the concept applies everywhere. Joi, Yup, Pydantic in Python, whatever. The point is to define what you expect and reject everything else. Don't be permissive.
The 10kb limit on the express.json middleware is a starting point. Adjust it for your use case. If you have an endpoint that accepts file uploads, obviously you need a higher limit there, but you should still cap it at something sane and apply it per-route rather than globally.
Rate Limiting That Actually Makes Sense
A flat rate limit across your entire API is better than nothing, but it's not much better. Your login endpoint and your public search endpoint have completely different threat profiles. A brute force attack against login needs maybe 5-10 requests per minute to be dangerous. A search endpoint getting 200 requests a minute from a power user is normal behavior.
If you apply the same limit everywhere, you either make it too tight for normal usage or too loose for sensitive endpoints. You have to scope your limits by route.
import rateLimit from 'express-rate-limit';

const standardLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 200,
});

const authLimiter = rateLimit({
  windowMs: 60 * 60 * 1000,
  max: 5,
  skipSuccessfulRequests: true,
});

app.use('/api/', standardLimiter);
app.use('/api/auth/login', authLimiter);
skipSuccessfulRequests: true on the auth limiter is worth calling out. It means successful logins don't count toward the limit. Only failed attempts do. So a legitimate user who logs in on the first try doesn't get penalized, but an attacker trying to brute force credentials gets locked out after 5 failures.
This is the simple version. In practice, for a production API handling real traffic, you probably want to rate limit by a combination of IP address and user identity. IP-based limiting alone breaks down when multiple legitimate users share a corporate proxy or NAT. User-based limiting alone doesn't help with unauthenticated endpoints.
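Here's a sketch of what a combined key might look like. express-rate-limit lets you supply a keyGenerator function shaped roughly like this, but the req.user field is an assumption about how your auth middleware populates the request:

```javascript
// Composite rate-limit key: prefer the authenticated user's identity,
// fall back to the client IP for anonymous traffic.
function rateLimitKey(req) {
  if (req.user && req.user.id) {
    return `user:${req.user.id}`; // authenticated: limit per account
  }
  return `ip:${req.ip}`; // anonymous: per-IP is the best we can do
}

console.log(rateLimitKey({ user: { id: 42 }, ip: '10.0.0.5' })); // "user:42"
console.log(rateLimitKey({ ip: '10.0.0.5' }));                   // "ip:10.0.0.5"
```

This way ten people behind one corporate NAT each get their own bucket once they log in, while the login endpoint itself still gets per-IP protection.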
There's also the question of what happens when someone hits the limit. Most implementations return a 429 Too Many Requests with a Retry-After header. That's fine. But make sure your rate limit responses don't leak information about other users or about the system's internal state. I've seen rate limit responses that include the exact number of requests remaining and the window reset time, which gives an attacker a precise tool for calibrating their attack speed.
CORS Misconfigurations
I almost didn't include this because it feels like it should be common knowledge by now. But I audited three different APIs last month and two of them had Access-Control-Allow-Origin: * in production. With credentials.
Strictly speaking, browsers refuse to honor a literal * when Access-Control-Allow-Credentials: true is set, so that exact combination fails closed. But the "fix" people reach for is worse: reflecting whatever Origin the request sent back into Access-Control-Allow-Origin, which browsers do accept. At that point any website on the internet can make authenticated requests to your API using your users' cookies. That's a complete bypass of the same-origin policy.
The fix depends on your setup. If you have a known set of frontends that need to talk to your API, whitelist them directly.
import cors from 'cors';

const allowedOrigins = [
  'https://app.technoknowledge24.com',
  'https://admin.technoknowledge24.com',
];

app.use(cors({
  origin: (origin, callback) => {
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed'));
    }
  },
  credentials: true,
}));
The !origin check handles requests with no Origin header (server-to-server calls, curl, and the like). Whether you want to allow those depends on your use case.
I also want to mention: even if your CORS is configured correctly, it's not a security boundary by itself. CORS is enforced by the browser. A server-side attacker or a script running outside a browser context ignores CORS entirely. It only protects against malicious websites making cross-origin requests through a user's browser. You still need proper authentication and authorization on every endpoint regardless of your CORS config.
Error Messages That Tell Too Much
Last thing. Your API's error messages are a source of information leakage, and most teams don't think about this at all.
A login endpoint that returns "Incorrect password" when the password is wrong and "User not found" when the email doesn't exist has just told an attacker which email addresses are registered. They can enumerate your user base one request at a time.
Same thing with stack traces. I've seen production APIs return full Node.js stack traces with file paths, package versions, and database connection strings in the error response. That's handing someone a roadmap for attacking your infrastructure.
Return generic error messages to the client. Log the details internally.
// Assumes a request-id middleware has already set req.id upstream
app.use((err, req, res, next) => {
  // Log the real error for your own debugging
  logger.error({ err, requestId: req.id }, 'Unhandled error');
  // Send a generic message to the client
  res.status(500).json({
    error: 'Something went wrong',
    requestId: req.id
  });
});
Include a request ID so users can reference it in support tickets and you can look up the actual error in your logs. But the client never sees the details.
For authentication specifically, always return the same message regardless of whether the email exists or the password was wrong: "Invalid credentials." Same response, same status code, same response time. That last part matters too โ if your API takes 300ms to hash a password for a valid user but returns in 2ms for an invalid email, the timing difference itself leaks information about which emails are registered. It's called a timing attack and it sounds paranoid until someone actually exploits it.
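Here's a sketch of the constant-work idea, using Node's built-in scrypt as a stand-in for bcrypt, with a fixed salt purely for illustration (real code would use per-user salts and a properly tuned password hash):

```javascript
import { scryptSync, timingSafeEqual } from 'node:crypto';

const SALT = 'static-demo-salt'; // assumption: fixed salt, for illustration only
const DUMMY_HASH = scryptSync('placeholder-password', SALT, 64);

function passwordMatches(storedHash, candidatePassword) {
  // Hash the candidate unconditionally: the expensive step runs even when
  // the user doesn't exist, so response time doesn't leak registration status.
  const candidate = scryptSync(candidatePassword, SALT, 64);
  const target = storedHash ?? DUMMY_HASH;
  const equal = timingSafeEqual(candidate, target);
  // A missing user always fails, even if the dummy comparison matched.
  return storedHash != null && equal;
}

const stored = scryptSync('correct horse battery', SALT, 64);
console.log(passwordMatches(stored, 'correct horse battery')); // true
console.log(passwordMatches(null, 'anything'));                // false
```

timingSafeEqual handles the comparison itself; the unconditional hash handles the much bigger gap between "hashed a password" and "bailed out early".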
I wish I had some neat way to wrap this up but honestly, this is just the stuff I keep seeing over and over. None of it is novel. Most of it has been documented for years. And yet I keep finding the same problems in new codebases. I guess that's just how it goes.
Written by
Anurag Sinha
Developer who writes about the stuff I actually use day-to-day. If I got something wrong, let me know.