SQLite: The Most Underrated Database in Your Toolbox
Why I stopped reaching for Postgres by default and started shipping production apps with SQLite. WAL mode, embedded analytics, and when it genuinely beats the big databases.

Every side project I started for three years followed the same script. Spin up a Postgres container. Configure connection strings. Set up migrations. Write a database access layer. Manage connection pooling. For a personal blog. For a to-do app. For a weekend project that three people would ever use.
Then I read the SQLite documentation (actually read it, not just skimmed the "limitations" section everyone points to) and realized I'd been over-engineering my data layer for years. SQLite isn't just "that database for mobile apps." It's a production-grade, ACID-compliant, absurdly fast embedded database that handles more workloads than most developers give it credit for.
The moment that changed my mind: I moved a side project from Postgres to SQLite and the response times dropped from 15-20ms to under 1ms for most queries. No network hop. No connection negotiation. No TCP overhead. Just a function call to read from a file on the same machine. That's not a marginal improvement. That's an order of magnitude.
What SQLite Actually Is
SQLite is a C library that implements a complete SQL database engine in a single file. No server process. No configuration. No daemon running in the background consuming memory. Your application links against the SQLite library (or uses a binding in whatever language you're working in), and database operations become function calls.
The database itself is one file on disk. A 500MB database? One 500MB file. Back it up by copying the file. Move it to another server by copying the file. Inspect it with the sqlite3 command-line tool. The simplicity is the feature.
Most people know this much and immediately file SQLite under "toy databases for prototypes." The assumption is that a database without a server can't possibly handle serious workloads. That assumption is wrong, and it stems from not understanding what modern SQLite actually offers.
WAL Mode Changes Everything
The single biggest misconception about SQLite is that it can't handle concurrent access. In its default journal mode, this is partly true: writes lock the entire database, and readers are blocked during writes. For anything beyond a single-user application, that's a dealbreaker.
WAL mode (Write-Ahead Logging) fixes this. Enable it once and the concurrency model changes completely.
PRAGMA journal_mode=WAL;
That one line transforms SQLite's behavior. Writers no longer block readers. Multiple readers can operate simultaneously while a write is in progress. Reads come from a consistent snapshot of the database at the point the read transaction started. Writes are serialized (only one writer at a time), but readers are completely unaffected.
For read-heavy workloads (which most web applications are: think 95% reads, 5% writes), WAL mode makes SQLite competitive with client-server databases. The single-writer limitation is real but rarely the bottleneck. Most web apps aren't doing thousands of concurrent writes per second. They're doing a handful of writes and a lot of reads. SQLite in WAL mode handles that without breaking a sweat.
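The readers-keep-going behavior is easy to see with two connections to the same file. A minimal sketch using Python's stdlib sqlite3 module; the file and table names are mine, not from the article:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")

# isolation_level=None puts the driver in autocommit mode so we can
# manage transactions explicitly with BEGIN/COMMIT.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, n INTEGER)")
writer.execute("INSERT INTO counters (n) VALUES (1)")

reader = sqlite3.connect(path, isolation_level=None)

# Open a write transaction and change data without committing.
writer.execute("BEGIN IMMEDIATE")
writer.execute("UPDATE counters SET n = 2")

# The reader is not blocked, and it sees the snapshot from before
# the in-progress write.
before = reader.execute("SELECT n FROM counters").fetchone()[0]
print(before)  # 1

writer.execute("COMMIT")
after = reader.execute("SELECT n FROM counters").fetchone()[0]
print(after)  # 2
```

Under the default rollback journal, the reader and writer would instead contend for locks; in WAL mode the read proceeds immediately against a stable snapshot.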
I configure every SQLite database I create with the same pragmas:
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
PRAGMA busy_timeout=5000;
PRAGMA cache_size=-20000;
PRAGMA foreign_keys=ON;
PRAGMA temp_store=MEMORY;
synchronous=NORMAL relaxes the durability guarantee slightly in WAL mode: data is still safe against application crashes but could theoretically lose the last transaction during an OS crash or power failure. The performance improvement is significant, and for most applications, the tradeoff is acceptable.
busy_timeout=5000 tells SQLite to wait up to 5 seconds for a lock instead of immediately returning SQLITE_BUSY. Without this, concurrent write attempts fail instantly. With it, the second writer waits for the first to finish. Five seconds is plenty for typical write operations.
cache_size=-20000 sets the page cache to roughly 20MB. Default is 2MB. More cache means more data stays in memory between queries. If your database is smaller than the cache, the entire thing lives in RAM after the first few queries.
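Applied from Python's stdlib sqlite3 module, the same setup fits in a small helper. A sketch; the `connect` wrapper name is my own, and the read-backs at the end just confirm the pragmas took effect:

```python
import os
import sqlite3
import tempfile

def connect(path):
    """Open a SQLite connection with the pragmas described above.
    Helper name is illustrative, not from any library."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        PRAGMA journal_mode=WAL;
        PRAGMA synchronous=NORMAL;
        PRAGMA busy_timeout=5000;
        PRAGMA cache_size=-20000;
        PRAGMA foreign_keys=ON;
        PRAGMA temp_store=MEMORY;
    """)
    return conn

# WAL requires a real file; it does not apply to :memory: databases.
conn = connect(os.path.join(tempfile.mkdtemp(), "app.db"))
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
print(mode, timeout)  # wal 5000
```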
When SQLite Beats Postgres
Not always. Not for every workload. But for specific, common scenarios, SQLite has structural advantages that no client-server database can match.
Single-Server Applications
If your application runs on one server (which is most applications; the multi-server scaling conversation is relevant for far fewer projects than the industry pretends), SQLite eliminates an entire network hop. Your query goes from application process to SQLite library to filesystem, all within the same machine. With Postgres, the query goes from application to TCP connection to Postgres server process to shared buffers to filesystem and back. Each hop adds latency.
For a typical web request that runs 3-5 queries, removing the network hop from each query saves 5-15ms total. Doesn't sound like much until you're handling hundreds of requests per second and those milliseconds compound.
Read-Heavy Workloads
Blogs, documentation sites, content management systems, dashboards, analytics displays. Anything where the data is written occasionally and read constantly. SQLite's read performance in WAL mode is remarkable because there's no query parsing on a remote server, no result serialization over the network, no connection pool management. It's a function call that reads from a memory-mapped file.
import sqlite3
import time
conn = sqlite3.connect('analytics.db')
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA cache_size=-50000")
# Typical analytics query
start = time.perf_counter()
results = conn.execute("""
    SELECT date(created_at) AS day,
           count(*) AS page_views,
           count(DISTINCT visitor_id) AS unique_visitors
    FROM events
    WHERE created_at > datetime('now', '-30 days')
    GROUP BY day
    ORDER BY day
""").fetchall()
elapsed = (time.perf_counter() - start) * 1000
print(f"Query time: {elapsed:.2f}ms")
# On a table with 1M rows: typically 15-40ms
# Same query through Postgres with network hop: 30-80ms
Deployment Simplicity
No database server to provision, monitor, patch, or back up separately. The database is part of your application deployment. Ship a binary and a .db file. Done. No connection string configuration, no firewall rules for the database port, no separate backup strategy.
For personal projects, internal tools, and small-to-medium production applications, this simplicity is probably worth real money. I've maintained Postgres instances that cost more in management time than the application they served was worth.
When Postgres Wins (and It Does Win)
SQLite is not a replacement for Postgres. Some workloads genuinely need a client-server database.
Multiple application servers. If you're running three copies of your application behind a load balancer, they can't all write to the same SQLite file (technically possible with network filesystems, but the performance and corruption risks make it impractical). Postgres handles this naturally because it's a network service.
Write-heavy workloads. If your application does hundreds of concurrent writes per second, SQLite's single-writer serialization becomes the bottleneck. Postgres's MVCC handles concurrent writes much more gracefully.
Complex queries on large datasets. Postgres has a sophisticated query planner, parallel query execution, extensive indexing options (GIN, GiST, BRIN), and decades of optimization work for complex analytical queries. SQLite's query planner is simpler. For multi-table JOINs across millions of rows with complex WHERE clauses, Postgres probably wins on raw query performance.
Replication. Need a read replica in another region? Postgres has streaming replication built in. SQLite has Litestream (which I'll get to) but it's external tooling, not a native feature.
The honest answer for most developers, as far as I can tell: if your application runs on one server and has fewer than 100 concurrent users, SQLite probably works fine and saves you operational complexity. If you're building the next Twitter, use Postgres. Most of us are not building the next Twitter.
Embedded Analytics: SQLite's Secret Strength
Here's a use case most developers haven't considered. SQLite is excellent as an embedded analytics engine. Not for your main application database, necessarily, but as a purpose-built analytics store that lives alongside your application.
I built a request logger that writes to SQLite instead of shipping logs to an external service. Every HTTP request gets a row: timestamp, path, status code, response time, user agent. At the end of each day, a simple SQL query generates the analytics dashboard.
import sqlite3

def init_analytics_db(path='analytics.db'):
    conn = sqlite3.connect(path)
    conn.executescript("""
        PRAGMA journal_mode=WAL;
        PRAGMA synchronous=NORMAL;
        CREATE TABLE IF NOT EXISTS requests (
            id INTEGER PRIMARY KEY,
            timestamp TEXT DEFAULT (datetime('now')),
            method TEXT,
            path TEXT,
            status_code INTEGER,
            response_time_ms REAL,
            user_agent TEXT,
            ip_hash TEXT
        );
        CREATE INDEX IF NOT EXISTS idx_requests_timestamp
            ON requests(timestamp);
        CREATE INDEX IF NOT EXISTS idx_requests_path
            ON requests(path);
    """)
    return conn

def log_request(conn, method, path, status, response_time, ua, ip_hash):
    conn.execute(
        "INSERT INTO requests (method, path, status_code, response_time_ms, user_agent, ip_hash) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (method, path, status, response_time, ua, ip_hash)
    )
    conn.commit()
No external analytics service. No monthly bill. No data leaving your server. Query it however you want with standard SQL. Want the slowest endpoints this week? The most common error paths? Requests per hour broken down by country? Write a SQL query. It runs in milliseconds against a local file.
For small-to-medium traffic sites (say, under 1 million requests per day), this approach handles the write load comfortably. Each insert takes microseconds. Even at 100 requests per second, SQLite barely notices.
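That claim is easy to sanity-check. A rough timing sketch with an illustrative path and schema; absolute numbers depend entirely on your machine and disk, so the point is the order of magnitude, not the exact figure:

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.executescript("""
    PRAGMA journal_mode=WAL;
    PRAGMA synchronous=NORMAL;
    CREATE TABLE requests (id INTEGER PRIMARY KEY, path TEXT, status INTEGER);
""")

n = 10_000
start = time.perf_counter()
with conn:  # one transaction for the whole batch
    conn.executemany(
        "INSERT INTO requests (path, status) VALUES (?, ?)",
        (("/page/%d" % i, 200) for i in range(n)),
    )
elapsed = time.perf_counter() - start
inserted = conn.execute("SELECT count(*) FROM requests").fetchone()[0]
print(f"{inserted} rows in {elapsed * 1000:.1f}ms "
      f"({elapsed / n * 1_000_000:.1f}us per insert)")
```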
The Litestream Backup Story
The biggest legitimate concern with SQLite in production: backup and disaster recovery. With Postgres, you get WAL archiving, point-in-time recovery, streaming replication. With SQLite, you have... a file. Copying a SQLite file while the database is active can produce a corrupt backup if a write happens during the copy.
Litestream solves this elegantly. It's a standalone tool that continuously replicates your SQLite database to S3 (or any S3-compatible storage). It reads the WAL file as writes happen and streams changes to the replica. Sub-second replication lag. No application code changes needed.
# litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - url: s3://my-bucket/app.db
        retention: 72h

# Run Litestream alongside your application
litestream replicate -config litestream.yml
Restore is equally simple:
litestream restore -config litestream.yml /data/app.db
I've had to restore from backup twice. Both times, the process took under a minute. Compared to the Postgres backup restoration procedures I've fumbled through (pg_dump, pg_restore, WAL replay, configuration), the simplicity was refreshing.
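For one-off snapshots without any extra tooling, SQLite also has an online backup API that is safe against the copy-during-write corruption described above; Python's stdlib exposes it as Connection.backup. A sketch with illustrative paths; note this is a point-in-time copy, not the continuous replication Litestream provides:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "app.db")
dst_path = os.path.join(tmp, "app-backup.db")

src = sqlite3.connect(src_path)
src.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
src.execute("INSERT INTO notes (body) VALUES ('hello')")
src.commit()

# Copies pages in a transactionally consistent way, even if other
# connections are writing during the backup.
dst = sqlite3.connect(dst_path)
src.backup(dst)
dst.close()

check = sqlite3.connect(dst_path)
count = check.execute("SELECT count(*) FROM notes").fetchone()[0]
print(count)  # 1
```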
SQLite in Production: Real Examples
This isn't theoretical. Major software runs on SQLite in production.
Every iPhone and Android phone runs SQLite for dozens of system databases. Every Firefox and Chrome browser stores bookmarks, history, and cookies in SQLite. Airbus uses SQLite in flight software. The Library of Congress uses it as a recommended storage format for datasets.
But for web applications specifically, the recent wave of SQLite-in-production has been driven by frameworks and hosting platforms that deploy to single servers. Rails 8 made SQLite the default database for new applications. Fly.io and other platform providers run SQLite-backed applications with Litestream replication as a first-class deployment option.
The pattern works: application and database on the same machine, Litestream replicating to object storage for backup, and the application scaled vertically (bigger machine) rather than horizontally (more machines). Vertical scaling works for a surprisingly long time. A modern VPS with 8 cores and 32GB RAM running SQLite can handle thousands of concurrent users for most web applications.
Working with SQLite in Node.js
Since this blog runs on Next.js, here's what SQLite looks like in a Node.js context. The better-sqlite3 library is the go-to choice: it's synchronous (no callback hell for database calls), fast, and well-maintained.
import Database from 'better-sqlite3';
const db = new Database('app.db');
// Apply pragmas once at startup
db.pragma('journal_mode = WAL');
db.pragma('synchronous = NORMAL');
db.pragma('busy_timeout = 5000');
db.pragma('cache_size = -20000');
db.pragma('foreign_keys = ON');
// Prepared statements for performance
const getPostBySlug = db.prepare(`
  SELECT id, title, content, published_at, view_count
  FROM posts
  WHERE slug = ?
    AND published = 1
`);

const incrementViews = db.prepare(`
  UPDATE posts SET view_count = view_count + 1 WHERE id = ?
`);

// Use in your route handler
export function getPost(slug: string) {
  const post = getPostBySlug.get(slug);
  if (post) {
    incrementViews.run(post.id);
  }
  return post;
}
The synchronous API might look odd if you're used to Postgres drivers where everything is async. With SQLite, there's no network call: the database read is a local file operation that completes in microseconds. Making it async would add overhead for no benefit. better-sqlite3 intentionally avoids the async API because the operations are already fast enough that the event loop blocking is negligible.
Transactions in better-sqlite3
Transactions are where SQLite's performance really shines for batch operations.
const insertPost = db.prepare(`
  INSERT INTO posts (title, slug, content, published_at)
  VALUES (@title, @slug, @content, @publishedAt)
`);

const insertMany = db.transaction((posts: Post[]) => {
  for (const post of posts) {
    insertPost.run(post);
  }
});

// Insert 1000 posts in ~10ms instead of ~2000ms
insertMany(thousandPosts);
Without the explicit transaction, each insert is its own transaction โ each one syncs to disk separately. Wrapping them in a transaction batches the disk sync. The difference can be 100x or more for bulk inserts. Learned this the hard way when an import script that should have taken seconds was taking minutes.
SQLite's Limitations, Honestly
No ALTER TABLE for complex schema changes. You can add columns, rename columns, and drop columns (as of version 3.35.0), but you can't change a column's type or add constraints to existing columns. The workaround is the classic recreate-the-table dance: create a new table with the desired schema, copy data, drop the old table, rename the new one. ORMs handle this transparently in migrations, but it's annoying if you're writing raw SQL.
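The dance itself is short. A hedged sketch that changes a column's type from TEXT to INTEGER; the table and column names are mine, and disabling foreign keys around the swap follows the procedure the SQLite documentation describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age TEXT);
    INSERT INTO users (name, age) VALUES ('Ada', '36');

    PRAGMA foreign_keys=OFF;
    BEGIN;

    -- 1. Create the new table with the desired schema.
    CREATE TABLE users_new (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        age INTEGER
    );

    -- 2. Copy the data, casting where needed.
    INSERT INTO users_new (id, name, age)
        SELECT id, name, CAST(age AS INTEGER) FROM users;

    -- 3. Drop the old table and rename the new one into place.
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;

    COMMIT;
    PRAGMA foreign_keys=ON;
""")
row = conn.execute("SELECT name, age FROM users").fetchone()
print(row)  # ('Ada', 36)
```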
No built-in full-text search that matches Postgres's sophistication. SQLite has FTS5, which handles basic full-text search well enough for most applications, but Postgres's tsvector/tsquery with language-aware stemming, ranking, and phrase matching is more powerful.
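For a sense of what the SQLite side offers, here is a minimal FTS5 sketch. It assumes the linked SQLite build includes the FTS5 extension (most modern builds do); the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE VIRTUAL TABLE docs USING fts5(title, body);
    INSERT INTO docs (title, body) VALUES
        ('WAL mode', 'Write-ahead logging lets readers proceed during writes'),
        ('Backups', 'Litestream streams the WAL to object storage');
""")

# MATCH does tokenized full-text search; rank orders by relevance.
hits = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("readers",),
).fetchall()
print(hits)  # [('WAL mode',)]
```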
No LISTEN/NOTIFY or built-in pub/sub. Postgres can push notifications to connected clients when data changes. SQLite has no equivalent. If you need real-time reactivity, you'll need to poll or add a message queue.
No user management or access control. SQLite has no concept of database users or permissions. Anyone who can read the file can read all the data. Fine when only your application accesses the database. Not fine if you need multiple services with different access levels.
These are real limitations. For many projects, none of them matter. For some projects, any one of them is a dealbreaker. The important thing is knowing which category your project falls into before choosing.
Migration Path: Starting with SQLite, Growing to Postgres
One approach I've had success with: start every project with SQLite and migrate to Postgres only when you hit a genuine limitation. Not a theoretical limitation, but an actual problem you're experiencing.
The migration path is smoother than you'd expect if you use an ORM or query builder that abstracts the database engine. Drizzle, Prisma, and Knex all support both SQLite and Postgres. Write your queries through the abstraction layer, and switching databases is a configuration change plus a data migration.
// drizzle.config.ts -- swap this and your schema stays the same
export default {
  schema: './src/db/schema.ts',
  driver: 'better-sqlite3', // Change to 'pg' when the time comes
  dbCredentials: {
    url: './data/app.db', // Change to a Postgres connection string
  },
};
The queries that need changing are the ones that use database-specific features: Postgres's JSONB operators, SQLite's json_extract(), date functions that differ between engines. If you stick to standard SQL and avoid engine-specific features, the migration is straightforward.
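As a concrete instance, SQLite spells JSON access as json_extract() (part of the JSON functions built into modern SQLite builds), where Postgres would use its ->> operator. A query like the one below, with an illustrative table, is exactly the kind a migration has to rewrite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    ('{"path": "/about", "status": 200}',),
)

# SQLite-specific: json_extract() with a JSONPath-style selector.
# Postgres would write: SELECT payload->>'status' FROM events
status = conn.execute(
    "SELECT json_extract(payload, '$.status') FROM events"
).fetchone()[0]
print(status)  # 200
```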
Most projects never need to migrate. That's the point. Start simple. Add complexity when the workload demands it. Most workloads never demand it, and you've saved yourself years of unnecessary database administration.
I used to think using SQLite in production was amateurish. Now I think reaching for Postgres when SQLite would do the job might be the more amateurish move: it's choosing complexity without justification, which is the opposite of good engineering. Use the simplest tool that solves the problem. For a surprising number of problems, that tool is a single file on disk.
Keep Reading
- Debugging Slow PostgreSQL Queries in Production – When you do outgrow SQLite, knowing how to diagnose and fix Postgres performance issues is essential.
- MongoDB vs PostgreSQL: An Honest Comparison After Using Both in Production – The database decision goes beyond SQLite; this covers when documents beat relations and vice versa.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.