I used to reach for Redis when I needed caching. Elasticsearch when search got complex. Mongo when the schema felt too rigid. SQLite for local dev. PostgreSQL for the "real" data.
That's five databases. Five connection pools. Five sets of failure modes. Five ways things can go out of sync.
Then I read Ethan McCue's "Just Use Postgres" and something clicked. And then I found HAMY's "Postgres Over Everything" and it clicked harder.
Here's the thing: Postgres can do almost all of it. And with Rust's async ecosystem, the performance costs are negligible.
The Five-Database Problem
Every project I've worked on eventually hits this wall. You start with Postgres. Then someone says "we need to cache session data" and you add Redis. Then "full-text search" and you add Elasticsearch. Then "fast local storage for the CLI tool" and you add SQLite. Then "flexible document storage" and someone sneaks in Mongo.
Now you have five databases. Five things that can fail. Five connection strings to manage. Five sets of credentials. Five ways your data can become inconsistent.
The funny thing? Most of these databases solve problems that Postgres solved years ago.
What Postgres Actually Has
Postgres isn't just a relational database anymore. It's become a platform:
- JSONB — Full document storage with indexing support. Not as flexible as Mongo, but close enough for 90% of use cases.
- Full-text search — Don't need Elasticsearch? Postgres has you covered with tsvector and tsquery.
- Pub/Sub — LISTEN/NOTIFY for real-time updates. Redis can do this, but Postgres does it too.
- Arrays — Native array types. No need to serialize to JSON just to store a list.
- Range types — Date ranges, integer ranges, custom ranges. Great for scheduling.
- Extensions — PostGIS for geospatial, pgvector for embeddings, pg_cron for scheduling. Postgres has an extension for almost anything.
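To make the full-text search point concrete, here's a sketch of the tsvector/tsquery workflow — the articles table and its columns are illustrative, not from any real schema:

```sql
-- Illustrative schema: a generated tsvector column kept in sync automatically
CREATE TABLE articles (
    id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title  text NOT NULL,
    body   text NOT NULL,
    search tsvector GENERATED ALWAYS AS (
        to_tsvector('english', title || ' ' || body)
    ) STORED
);

-- A GIN index makes tsquery matches fast
CREATE INDEX articles_search_idx ON articles USING GIN (search);

-- Query with websearch_to_tsquery; ts_rank orders by relevance
SELECT id, title
FROM articles
WHERE search @@ websearch_to_tsquery('english', 'rust async databases')
ORDER BY ts_rank(search, websearch_to_tsquery('english', 'rust async databases')) DESC
LIMIT 10;
```

It's not Elasticsearch-grade relevance tuning, but for "search the articles table" it's one index away, in the database you already run.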
The HAMY blog post makes a point that stuck with me: use columns for things you constrain, index, or scan on. Use JSONB for everything else. Migrate from JSONB to columns when a field proves important enough.
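That guideline translates into DDL pretty directly. A hedged sketch, with a hypothetical events table (names and the promoted source field are made up for illustration):

```sql
-- Constrained, indexed, scanned-on fields get real columns;
-- everything else rides along in a JSONB payload
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id    bigint NOT NULL REFERENCES users (id),
    kind       text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    jsonb NOT NULL DEFAULT '{}'
);

-- When a payload field proves important, promote it to a column
ALTER TABLE events ADD COLUMN source text;
UPDATE events SET source = payload->>'source';
```

The migration is boring on purpose: the data was already there in JSONB, so promoting it is a backfill, not a redesign.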
How This Plays with Rust
Rust's async database ecosystem pairs perfectly with this approach. Here's the pattern I've settled on:
1. SQLx for compile-time checked queries
SQLx's query! macro checks your SQL against a live database at compile time. No runtime SQL errors slipping through. No stringly-typed queries. You write:
let row = sqlx::query!(
    "SELECT id, name FROM users WHERE email = $1",
    email
)
.fetch_one(&pool)
.await?;
If the table doesn't have those columns, your code doesn't compile. If you typo the column name, your code doesn't compile. (The macro needs a DATABASE_URL at build time, or cached metadata from cargo sqlx prepare.) This is the same safety you'd get from an ORM, but you own the SQL.
2. One connection pool
Instead of managing Redis + Postgres + maybe Mongo, you manage one PgPool. One set of connections. One place to configure timeouts and retries.
let pool = PgPoolOptions::new()
.max_connections(5)
.acquire_timeout(Duration::from_secs(3))
.connect(&std::env::var("DATABASE_URL")?)
.await?;
3. JSONB for flexibility
For documents that don't need rigid schemas:
use std::collections::HashMap;
use serde::{Deserialize, Serialize};
use sqlx::types::Json;
use sqlx::Row;

#[derive(Serialize, Deserialize)]
struct UserPreferences {
    theme: String,
    notifications: HashMap<String, bool>,
    // Flexible - add fields without a migration
}

let Json(prefs): Json<UserPreferences> = row.get("preferences");
// sqlx's Json wrapper handles the serde round-trip for you
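Flexible doesn't mean unqueryable — JSONB columns still support indexing and filtering on the SQL side. A sketch, assuming a users table with a preferences JSONB column:

```sql
-- GIN index over the whole preferences document
CREATE INDEX users_prefs_idx ON users USING GIN (preferences);

-- Containment query: users whose preferences include a dark theme
SELECT id FROM users
WHERE preferences @> '{"theme": "dark"}';

-- Extract a single field as text
SELECT preferences->>'theme' AS theme FROM users WHERE id = $1;
```

The @> containment operator is what the GIN index accelerates, so "find everyone with this setting" stays fast even as the documents vary.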
4. Use extensions when you need them
Need scheduled jobs? pg_cron runs in-database:
SELECT cron.schedule('cleanup', '0 0 * * *', 'DELETE FROM sessions WHERE expires_at < now()');
Need vector similarity search? pgvector:
SELECT * FROM embeddings ORDER BY embedding <-> $1 LIMIT 10;
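In practice you'd also give pgvector an approximate index so that <-> search doesn't scan every row. A sketch, assuming an embeddings table with a vector column named embedding:

```sql
-- One-time setup for the extension
CREATE EXTENSION IF NOT EXISTS vector;

-- HNSW index for approximate nearest-neighbor search (L2 distance)
CREATE INDEX embeddings_hnsw_idx ON embeddings
    USING hnsw (embedding vector_l2_ops);
```

With the index in place, the ORDER BY embedding <-> $1 query above becomes an approximate-nearest-neighbor lookup instead of a sequential scan.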
When This Doesn't Work
I'm not saying Postgres solves everything. A few cases where you'd still want specialized tools:
- Truly massive scale — If you're doing millions of operations per second, dedicated solutions may outshine Postgres. But most of us aren't at that scale.
- Graph data — Neo4j is purpose-built for graph queries. Postgres can do traversals with recursive CTEs, but it's not a natural fit.
- Time-series at scale — TimescaleDB (built on Postgres) handles this well, but dedicated TSDBs have edge cases covered.
- Local/embedded — SQLite still wins for CLI tools where you want zero-setup.
But for most web applications? The "just use Postgres" philosophy holds.
The Real Benefit
It's not about performance. It's about operational simplicity.
One database to back up. One to monitor. One to tune. One to explain to new team members.
When ZeroClaw (my agent daemon) needed persistence, I used SQLite. It's the right tool for that — embedded, single-file, no setup. But for any server-side application I'm building now, Postgres is my default.
The Rust ecosystem makes this practical. SQLx's compile-time checking means you're not trading safety for flexibility. The async drivers mean you're not sacrificing performance.
The Pattern
Here's the mental model I use now:
- Default to Postgres — It's your core, your source of truth, your everything.
- Use columns for important stuff — Things you query, filter, constrain. IDs, timestamps, foreign keys.
- Use JSONB for flexible stuff — Configuration, preferences, payloads that vary.
- Add specialized tools only when you hit Postgres's ceiling — Not before.
Five databases isn't a badge of sophistication. It's technical debt with good PR.
Postgres over everything. Except when SQLite makes sense. Which is rare.