API Security Best Practices: A Direct Technical Breakdown
Authentication, authorization, rate limiting, and input validation security mechanics.

Same problems. Every audit. Not obscure zero-days or anything exciting, just the basics done wrong. Repeatedly. Across teams that should know better.
I keep getting asked to review APIs for security, and the findings are, from what I've seen, monotonously consistent. So I wrote this down once to have something to link instead of repeating myself in Slack threads. What follows is the stuff that actually breaks in production. Not theoretical attacks from academic papers. Real vulnerabilities that have bitten me or people I work with.
JWT Verification Gone Wrong
Starting here because it's the scariest when found.
JWTs get used everywhere for API authentication. The idea: server signs a token, hands it to the client, client sends it back on every request, server verifies the signature and trusts the claims inside. Works well in theory.
The problem is implementation. A JWT header specifies which algorithm was used to sign it, something like "alg": "HS256". Some libraries, especially older versions, read that header and use whatever algorithm it says. The client tells the server how to verify the token. See the problem.
An attacker sends a token with "alg": "none". A misconfigured parser skips signature verification entirely. Token passes. No check. Attacker puts whatever claims they want inside.
Encountered this on a project about two years ago. An Express API using an older version of jsonwebtoken with default settings. Library accepted "none" as valid. Caught it during a security review, not in production, but it had been sitting there for months before anyone noticed.
Fix: pin the algorithm server-side and never let the token dictate how it gets verified.
import jwt from 'jsonwebtoken';

const JWT_SECRET = process.env.JWT_SECRET;
const EXPECTED_ISSUER = 'api.technoknowledge24.com';

function verifyClientToken(token) {
  try {
    // Pin the accepted algorithm and issuer; never trust the token's own header
    return jwt.verify(token, JWT_SECRET, {
      algorithms: ['HS256'],
      issuer: EXPECTED_ISSUER,
    });
  } catch (error) {
    return null;
  }
}
Two config lines. algorithms: ['HS256'] and issuer: EXPECTED_ISSUER. That closes the hole. And yet APIs in the wild keep shipping without these set.
There's a second variant worth knowing. If a server uses RS256 (asymmetric: signs with the private key, verifies with the public key), an attacker can set the algorithm to HS256 (symmetric) and sign the token with the public key, which is, well, public. The server then uses the public key as the HMAC secret and the signature validates. Same fix: always pin the algorithm server-side.
Worth mentioning: for new projects, consider a managed auth service or at minimum a well-maintained library like jose rather than rolling your own JWT logic. For existing code, check how tokens are being verified. Now. For a broader look at the skills needed to think about security as a whole discipline, I wrote a separate piece on getting into cybersecurity.
IDOR: Too Simple to Notice
Insecure Direct Object Reference. The kind of vulnerability that makes you feel dumb when you find it in your own code. Because it's barely even a "hack."
User hits /api/invoices/1043 and gets their invoice. Changes the URL to /api/invoices/1044. Gets someone else's invoice. The API looked up the record by ID without checking who was asking.
Found this in code written by experienced developers. Probably not because they don't know about authorization, but because it's easy to skip when moving fast. You write the happy path, test it, ship it, and the access check just never gets added.
app.get('/api/documents/:id', requireAuth, async (req, res) => {
  // Scope the lookup to the requesting user; this is the IDOR fix
  const document = await prisma.document.findFirst({
    where: {
      id: req.params.id,
      ownerId: req.user.id
    }
  });
  if (!document) {
    // 404, not 403: don't confirm the resource exists
    return res.status(404).json({ error: 'Resource not found' });
  }
  res.json(document);
});
The line that matters: ownerId: req.user.id. Without it, you've built a document browser for the entire database. With it, users access only their own resources.
Note the 404 response, not 403. A 403 tells an attacker "this resource exists, you just can't access it." Useful information for someone probing the API โ they've found a valid ID and just need to find a way around permissions. A 404 gives nothing. Resource might not exist, might belong to someone else. Attacker can't tell.
IDOR shows up everywhere. URL parameters, request bodies, query strings, WebSocket messages. Anywhere you take an ID from the client and use it to look up a resource, scope the query to the authenticated user. Every time.
Started adding IDOR checks to my review checklist. Literally a line item: "Does this endpoint fetch by ID? Is the query scoped to the requesting user?"
Payload Size and Shape Validation
Boring but important. An API that accepts arbitrary JSON with no size limit and no shape validation is an invitation.
Memory exhaustion: someone sends a 200MB JSON body and the server tries to parse the whole thing into memory. Node.js or Python allocates a giant buffer, garbage collector panics, server crashes or becomes unresponsive.
Mass assignment: payload is a reasonable size but has unexpected structure. You expect {email: "...", password: "..."} but receive {email: "...", password: "...", isAdmin: true} and if the ORM does mass assignment, that extra field lands in the database.
Fix both simultaneously. Cap body size at the middleware level, then validate shape:
app.use(express.json({ limit: '10kb' }));

import { z } from 'zod';

const userRegistrationSchema = z.object({
  email: z.string().email().max(100),
  password: z.string().min(12).max(64),
  bio: z.string().max(500).optional()
});

app.post('/api/users', async (req, res) => {
  const parsed = userRegistrationSchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ error: 'Invalid payload structure' });
  }
  // proceed with parsed.data; anything not in the schema is stripped out
});
Password max length of 64 isn't arbitrary. bcrypt has a maximum input length of 72 bytes. Send a 10,000 character password and some bcrypt implementations silently truncate it: the first 72 bytes become the actual password, the rest is ignored. Others might hang trying to hash the full input. Cap it.
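One subtlety: the cap should be measured in bytes, not characters, because multi-byte UTF-8 input hits bcrypt's 72-byte limit faster than the character count suggests. A small sketch, where validatePasswordLength is a hypothetical helper mirroring the limits above but counting bytes:

```javascript
// bcrypt only hashes the first 72 bytes of input, so enforce the cap
// in bytes, not characters. Multi-byte UTF-8 characters count extra.
const BCRYPT_MAX_BYTES = 72;
const MIN_BYTES = 12;

function validatePasswordLength(password) {
  const bytes = Buffer.byteLength(password, 'utf8');
  return bytes >= MIN_BYTES && bytes <= BCRYPT_MAX_BYTES;
}
```

A 30-character password made of 3-byte characters is 90 bytes and would be silently truncated by bcrypt, so the helper rejects it even though it looks short.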
Zod here because it's what I know in the JS/TS world. Joi, Yup, Pydantic: the concept applies everywhere. Define what you expect, reject everything else. If you're designing your API and still choosing between REST and GraphQL, security considerations differ; I compared the two approaches in my GraphQL vs REST writeup.
Rate Limiting That Isn't One-Size-Fits-All
A flat rate limit across your entire API is better than nothing. Not much better, though, I think. Login endpoint and public search endpoint have completely different threat profiles. Brute force against login needs maybe 5-10 requests per minute to be dangerous. Search endpoint getting 200 requests per minute from a power user is normal.
Same limit everywhere: too tight for normal usage or too loose for sensitive endpoints. Scope limits by route.
import rateLimit from 'express-rate-limit';

const standardLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 200,
});

const authLimiter = rateLimit({
  windowMs: 60 * 60 * 1000,
  max: 5,
  skipSuccessfulRequests: true,
});

app.use('/api/', standardLimiter);
app.use('/api/auth/login', authLimiter);
skipSuccessfulRequests: true on the auth limiter means successful logins don't count toward the limit. Only failures do. A legitimate user logging in on the first try isn't penalized. An attacker trying to brute force credentials gets locked out after 5 failures.
Simple version. Production likely needs rate limiting by a combination of IP and user identity. IP alone breaks down behind corporate proxies and NAT. User-based alone doesn't cover unauthenticated endpoints.
Also consider what the limit response reveals. Some implementations return exact request counts remaining and window reset times, giving an attacker a precision tool for calibrating their attack speed. Return 429 Too Many Requests with a Retry-After header. Don't include a roadmap.
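To make that response shape concrete, here's a toy fixed-window limiter that returns only a 429 status and a Retry-After value. Everything here is illustrative: in-memory state like this doesn't survive multiple processes, which is why production setups back it with Redis or similar:

```javascript
// Minimal fixed-window rate limiter, keyed by whatever string you pass in
// (IP, user ID, or a combination). In-memory only; a sketch, not production code.
function makeLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function check(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return { allowed: true };
    }
    entry.count += 1;
    if (entry.count > max) {
      // Tell the client when to retry, in whole seconds, and nothing more:
      // no remaining-request counts, no precise reset timestamps.
      const retryAfter = Math.ceil((entry.windowStart + windowMs - now) / 1000);
      return { allowed: false, status: 429, retryAfter };
    }
    return { allowed: true };
  };
}
```

Keying by `req.user?.id ?? req.ip` is one way to get the IP-plus-identity combination mentioned above.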
CORS Misconfigurations
Almost didn't include this. Feels like it should be common knowledge by now. But I audited three APIs last month and two had Access-Control-Allow-Origin: * in production. With credentials.
Access-Control-Allow-Origin: * combined with Access-Control-Allow-Credentials: true is supposed to be rejected by browsers, but the common equivalent, middleware that reflects whatever Origin the request sends, isn't. Either way the effect is the same: any website on the internet can make authenticated requests to your API using your users' cookies. Complete bypass of the same-origin policy.
Fix: whitelist known frontends.
const allowedOrigins = [
  'https://app.technoknowledge24.com',
  'https://admin.technoknowledge24.com',
];

app.use(cors({
  origin: (origin, callback) => {
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed'));
    }
  },
  credentials: true,
}));
The !origin check handles requests with no Origin header: server-to-server calls, curl. Whether you allow those depends on your use case.
Even with a correct CORS config, it's not a security boundary by itself. CORS is enforced by the browser. Server-side attackers or scripts running outside a browser context ignore it entirely. It protects against malicious websites making cross-origin requests through a user's browser. You still need proper auth on every endpoint regardless of CORS config.
Error Messages That Leak Information
Login endpoint returns "Incorrect password" for wrong passwords and "User not found" for unknown emails. An attacker just learned which email addresses are registered. They can enumerate your user base one request at a time.
Stack traces in production responses: file paths, package versions, database connection strings. A roadmap for attacking your infrastructure, delivered via error response.
Return generic messages to clients. Log details internally.
app.use((err, req, res, next) => {
  // Log the real error for your own debugging
  logger.error({ err, requestId: req.id }, 'Unhandled error');
  // Send a generic message to the client
  res.status(500).json({
    error: 'Something went wrong',
    requestId: req.id
  });
});
Include a request ID so users can reference it in support tickets and you can find the actual error in your logs. Client never sees the details.
For authentication specifically: same message regardless of whether the email exists or the password was wrong. "Invalid credentials." Same response, same status code, same response time. That last part matters โ if the API takes 300ms to hash a password for a valid user but returns in 2ms for an invalid email, the timing difference itself leaks information about which emails are registered. Timing attacks might sound paranoid until someone exploits one. If you're building these APIs on a modern stack, my post on building production apps with Next.js covers server actions and API route handlers where many of these security practices apply directly.
HTTPS and Transport Security
Felt weird including this because it's 2026 and HTTPS should be a given. Then I audited a staging environment last month that was sending JWT tokens over plain HTTP because someone had configured the internal service mesh without TLS between services. "It's internal traffic" was the justification. Internal traffic traverses a network that your cloud provider manages, that other tenants share, that could be intercepted at various points. TLS everywhere. Even internal.
Beyond HTTPS itself, check your TLS configuration. Older cipher suites have known weaknesses. Qualys SSL Labs runs a free test against your domain and grades your setup. Takes two minutes. If you're below an A rating, the fix is usually updating your nginx or load balancer TLS config to disable older protocols (TLS 1.0, 1.1) and weak cipher suites.
HSTS headers tell browsers to always use HTTPS for your domain, even if someone types http://. Without HSTS, a user's first request might go over HTTP before getting redirected to HTTPS, creating a brief window for interception. Add the header:
Strict-Transport-Security: max-age=31536000; includeSubDomains
One year max-age. Include subdomains. Set it and forget it.
Logging and Monitoring for Security Events
Before getting into headers, there's a category of defense that sits between prevention and response: knowing that something suspicious is happening while it's happening. Most teams log requests, but few, from what I've seen, log with security awareness.
Track failed authentication attempts per IP and per account. A sudden spike in failures against a single account looks different from a credential-stuffing attack that hits thousands of accounts with one password each. Both are bad; they require different responses. The first might warrant a temporary account lock. The second calls for IP-based throttling or a CAPTCHA challenge.
Log every permission check that returns a denial. If an authenticated user is repeatedly hitting 404s on resources that belong to other users, that's not normal browsing behavior; that's someone probing for IDOR vulnerabilities. A simple counter on denied access attempts per user session, with an alert threshold, catches this early.
The key: logging these events is useless if nobody looks at the logs. Pipe them into an alerting system. Set thresholds that trigger notifications. A Slack alert saying "user 4821 received 47 access denials in the last 5 minutes" takes five minutes to set up and catches active attacks.
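The counter-plus-threshold idea fits in a few lines. A sketch, with sendAlert standing in for whatever notification channel you actually use:

```javascript
// Tracks access denials per user within a sliding window and fires the
// alert callback once the threshold is crossed. All names are illustrative.
function makeDenialMonitor({ windowMs, threshold, sendAlert }) {
  const denials = new Map(); // userId -> array of denial timestamps
  return function recordDenial(userId, now = Date.now()) {
    const recent = (denials.get(userId) ?? []).filter(t => now - t < windowMs);
    recent.push(now);
    denials.set(userId, recent);
    if (recent.length === threshold) {
      sendAlert(`user ${userId} received ${threshold} access denials in the last ${windowMs / 60000} minutes`);
    }
  };
}
```

Call recordDenial from the same code path that returns the 404 on a scoped lookup miss, and the alert fires while the probing is still in progress.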
Security Headers Beyond CORS
While we're on headers, a few more that take a minute to add and close common attack vectors:
Content-Security-Policy restricts which sources can load scripts, styles, and images. Prevents XSS attacks that inject malicious script tags. Complex to configure for existing apps (because you need to whitelist every legitimate source), but even a basic policy that blocks inline scripts catches a lot of attacks.
X-Content-Type-Options: nosniff prevents browsers from MIME-sniffing a response away from the declared content type. Without it, an attacker might get a browser to interpret a text file as JavaScript.
X-Frame-Options: DENY prevents your site from being embedded in an iframe on another site. Blocks clickjacking attacks where an attacker overlays invisible iframes to trick users into clicking things they didn't intend to.
Three headers. Each one line of nginx config or one middleware call. The combined protection is probably significant enough to be worth it.
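As Express-style middleware, those three headers (plus the HSTS header from earlier) might look like this; the CSP value shown is a deliberately strict starting point, not a universal policy:

```javascript
// Sets a baseline of security headers on every response. The CSP here
// allows only same-origin resources; most real apps will need to loosen
// it for their own asset and API origins.
function securityHeaders(req, res, next) {
  res.setHeader('Content-Security-Policy', "default-src 'self'");
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.setHeader('X-Frame-Options', 'DENY');
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
}
```

Register it early with `app.use(securityHeaders)` so every route, including error responses, gets the headers.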
The Pattern
Nothing novel here. Most of it has been documented for years. And yet the same problems keep showing up in new codebases. Missing JWT algorithm pinning. Unscoped queries. No payload validation. Wildcard CORS with credentials. Stack traces in production error responses. No HSTS headers. No CSP. The basics, done wrong, repeatedly.
There's a pattern to why these things get missed. Security isn't part of most feature development workflows. The ticket says "add invoice endpoint." It doesn't say "add invoice endpoint with scoped queries, rate limiting, payload validation, and proper error handling." Those requirements are implicit, and implicit requirements get dropped under deadline pressure. Making security checks an explicit part of code review โ a checklist, a linting rule, an automated scan โ catches more of these than relying on developers to remember everything under time pressure.
Dependency Auditing
One more thing that takes five minutes and catches real vulnerabilities: auditing your dependencies. npm audit or yarn audit checks your node_modules tree against known vulnerability databases. Running it weekly (or in CI on every PR) catches outdated packages with disclosed security issues before they become your problem.
The output can be noisy: lots of low-severity findings in transitive dependencies you don't control. Focus on high and critical severity. For the rest, npm audit fix handles what it can. For vulnerabilities in transitive dependencies where no fix is available, you either wait for the maintainer to patch, find an alternative package, or accept the risk with documentation about why.
Automated dependency update tools like Dependabot or Renovate open PRs when new versions are available. Saves you from manually checking. The volume of PRs can be overwhelming for projects with many dependencies โ configure update grouping or scheduling so you're not drowning in bot-generated PRs.
And check your lock file into version control. package-lock.json or yarn.lock. Without it, npm install on different machines might resolve to different dependency versions, introducing inconsistencies that are hard to debug and potentially introducing vulnerable versions that your local machine didn't have.
I wish I had a cleaner ending for this. But "the basics keep being wrong" seems to be how it goes, and the most practical response is building systems that catch the basics automatically rather than depending on human memory to never lapse.
Written by Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.