Redis
The in-memory key-value store. Fast cache, fast queue, fast everything that doesn't need durability.
Mindmap
The plain-English version
Redis is an in-memory data structure store. Strings, hashes, lists, sets, sorted sets, streams. Sub-millisecond latency. Optionally persistent. The default choice for caching, session storage, rate limiting, queues, and pub/sub messaging in modern stacks.
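The core contract — keys, values, and automatic expiry — can be sketched as a tiny in-memory stand-in. This is not Redis itself; `MiniKV` and its injected clock are illustrative, and real code would go through a client such as redis-py:

```python
import time

class MiniKV:
    """Toy in-memory stand-in illustrating Redis's key/value + TTL contract.
    Not Redis -- just the semantics of SET, GET, DEL, and expiry."""

    def __init__(self, clock=time.monotonic):
        self._data = {}      # key -> (value, expires_at or None)
        self._clock = clock  # injectable for testing

    def set(self, key, value, ex=None):
        # ex: optional TTL in seconds, like SET key value EX n
        expires_at = self._clock() + ex if ex is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and self._clock() >= expires_at:
            del self._data[key]  # lazy expiry, akin to Redis's passive expiration
            return None
        return value

    def delete(self, key):
        # returns True if the key existed, mirroring DEL's 1/0 reply
        return self._data.pop(key, None) is not None
```

With the real redis-py client the equivalent calls are `r.set("session:42", "alice", ex=60)`, `r.get("session:42")`, and `r.delete("session:42")`.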
The problem it solves
Application servers should be stateless (so they can be horizontally scaled). But you still need shared state — sessions, caches, rate-limit counters. Redis is the answer. It's fast enough that calling it on every request is fine, and rich enough to handle queues and pub/sub without a separate system.
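As a concrete case, the rate-limit counters mentioned above are commonly a fixed-window INCR + EXPIRE pattern. A sketch under assumptions: `allow_request` and `FakeCounter` are our names (the stand-in lets it run without a server), but `incr` and `expire` match redis-py's command methods:

```python
class FakeCounter:
    """Minimal in-memory stand-in for the two commands the limiter uses
    (INCR and EXPIRE), so the sketch runs without a Redis server."""

    def __init__(self):
        self._counts = {}

    def incr(self, key):
        self._counts[key] = self._counts.get(key, 0) + 1
        return self._counts[key]

    def expire(self, key, seconds):
        pass  # real Redis schedules deletion; irrelevant for this demo

def allow_request(client, user_id, window_s=60, limit=100):
    """Fixed-window rate limiter: INCR a per-user counter and give the key
    a TTL so it cleans itself up. With redis-py, `client` would be a
    redis.Redis() instance; the method names match its API."""
    key = f"rate:{user_id}"
    count = client.incr(key)
    if count == 1:
        client.expire(key, window_s)  # first hit in the window starts the clock
    return count <= limit
```

Note the window resets only when the key expires, so bursts can straddle two windows; a sliding-window variant (sorted sets or Lua) tightens that if it matters.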
Alternatives
| Alternative | Type | When it wins |
|---|---|---|
| Postgres | database | Data must survive restarts and fit relational queries; UNLOGGED tables and LISTEN/NOTIFY can cover light caching and queueing without a second system. |
| MongoDB | database | You need persistent, queryable JSON documents rather than sub-millisecond key lookups. |
| Memcached | cache | You only need a plain cache: simpler and multithreaded, but no data structures, persistence, or pub/sub. |
Deep links
The words you'll hear
- Key-value
- Simplest data model: set, get, delete by key.
- TTL
- Time-to-live. Keys can expire automatically.
- Pub/Sub
- Publish messages on channels; subscribers receive them. No persistence.
- Stream
- Append-only log with consumer groups. Closer to Kafka than to a queue.
- Cluster
- Sharded Redis across many nodes.
- Sentinel
- Redis's built-in monitoring and automatic failover for primary/replica setups, without sharding.
- Cache stampede
- Many clients miss the cache simultaneously and overwhelm the database.
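The stampede entry above has a standard mitigation: cache-aside reads plus a SET NX lock so only one caller recomputes on a miss. A sketch under assumptions — `FakeCache` stands in for a real client (redis-py's `set()` takes the same `nx`/`ex` kwargs and returns None when NX fails), and a production version would wait and retry instead of falling through:

```python
class FakeCache:
    """In-memory stand-in for GET and SET NX EX, enough to run the sketch."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value, nx=False, ex=None):
        if nx and key in self._data:
            return None  # NX failed: someone else holds the key
        self._data[key] = value
        return True

def cached_fetch(cache, key, compute, ttl=300, lock_ttl=10):
    """Cache-aside with a stampede guard: on a miss, only the caller that
    wins the SET NX lock recomputes and refills the cache."""
    value = cache.get(key)
    if value is not None:
        return value, "hit"
    if cache.set(f"lock:{key}", "1", nx=True, ex=lock_ttl):
        value = compute()                  # we hold the lock: recompute once
        cache.set(key, value, ex=ttl)
        return value, "recomputed"
    # Lost the race. Real code would sleep-and-retry the GET or serve
    # stale data; computing here keeps the demo simple but forfeits the win.
    return compute(), "stampede-fallback"
```

The lock's own TTL matters: if the winning caller crashes mid-compute, the lock expires instead of wedging every future miss.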
Bad vs. good prompt for Redis
Why it works: Specifies the client library, the cache key shape, the invalidation contract, and asks for stampede protection — the realistic failure mode.
What bites real teams
Redis is RAM-first. RDB snapshots and the append-only file (AOF) can persist data, but durability trades against speed. Don't store data you can't lose unless you've checked the persistence config.
No TTL + steady writes = OOM eventually. Set TTLs by default; monitor memory.
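Both pitfalls above have config-level answers. A hedged redis.conf fragment — the directives are real, the values are illustrative and workload-dependent:

```
# Durability: RDB snapshots plus an append-only file (AOF)
save 900 1            # snapshot if at least 1 write in 900 seconds
appendonly yes        # log every write; replayed on restart
appendfsync everysec  # fsync once per second: bounded loss, good throughput

# Memory: cap usage and evict rather than OOM
maxmemory 2gb
maxmemory-policy allkeys-lru   # evict least-recently-used keys when full
```

For cache-only workloads, `allkeys-lru` (or `allkeys-lfu`) is the usual pick; the default `noeviction` makes writes fail once `maxmemory` is hit.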
One key getting most of the traffic creates a single-instance bottleneck. Mitigate with hash-based key sharding (split the hot key into suffixed sub-keys) or read replicas.
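Counter-style hot keys are the easiest to shard: writes INCR one randomly chosen sub-key, reads sum across all of them. A sketch with illustrative names (`shard`, `all_shards` are ours, not a Redis API):

```python
import random

def shard(base_key, n_shards=8, rng=random):
    """Write path: pick one of n_shards sub-keys at random, so no single
    key absorbs every INCR."""
    return f"{base_key}:{rng.randrange(n_shards)}"

def all_shards(base_key, n_shards=8):
    """Read path: enumerate every sub-key; the reader sums the counters
    (e.g. one MGET, then sum the replies)."""
    return [f"{base_key}:{i}" for i in range(n_shards)]
```

The tradeoff is a fan-out read (N keys instead of 1), so pick `n_shards` just large enough to spread the write load.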