in-memory store · Introduced in L7

Redis

The in-memory key-value store. Fast cache, fast queue, fast everything that doesn't need durability.

Mindmap

What it is

The plain-English version

Redis is an in-memory data structure store. Strings, hashes, lists, sets, sorted sets, streams. Sub-millisecond latency. Optionally persistent. The default choice for caching, session storage, rate limiting, queues, and pub/sub messaging in modern stacks.
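The core model above (keys with values, optional expiry) can be sketched without a server. This is an illustrative in-memory stand-in, not the ioredis API: real clients are asynchronous and talk to a Redis process, but the SET/GET/TTL semantics modeled here are the same.

```typescript
// Minimal stand-in for a Redis string keyspace with TTL.
// Real clients (e.g. ioredis) return Promises and speak to a server;
// this sketch only models the command semantics.
type Entry = { value: string; expiresAt: number | null };

class MiniRedis {
  private store = new Map<string, Entry>();
  // Injectable clock so expiry is testable; defaults to wall time.
  constructor(private now: () => number = Date.now) {}

  // SET key value [EX seconds]
  set(key: string, value: string, ttlSeconds?: number): void {
    const expiresAt = ttlSeconds ? this.now() + ttlSeconds * 1000 : null;
    this.store.set(key, { value, expiresAt });
  }

  // GET key — missing and expired keys both read as null (lazy expiry)
  get(key: string): string | null {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (entry.expiresAt !== null && entry.expiresAt <= this.now()) {
      this.store.delete(key);
      return null;
    }
    return entry.value;
  }

  // DEL key
  del(key: string): void {
    this.store.delete(key);
  }
}
```

With a real client the same three calls are `redis.set(key, value, "EX", 60)`, `redis.get(key)`, and `redis.del(key)`, each awaited.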

Why it exists

The problem it solves

Application servers should be stateless (so they can be horizontally scaled). But you still need shared state — sessions, caches, rate-limit counters. Redis is the answer. It's fast enough that calling it on every request is fine, and rich enough to handle queues and pub/sub without a separate system.
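One of those pieces of shared state, a rate-limit counter, is usually built from just two Redis commands: INCR and EXPIRE. The sketch below models that fixed-window pattern with an in-memory counter standing in for Redis; against a real server the same two commands would run per request, ideally pipelined or in a Lua script so they stay atomic.

```typescript
// Fixed-window rate limiter using the INCR + EXPIRE pattern.
// An in-memory map stands in for Redis; the logic mirrors what the
// two commands do server-side.
class WindowCounter {
  private counts = new Map<string, { n: number; windowEnds: number }>();
  constructor(private now: () => number = Date.now) {}

  // True if this request is allowed under `limit` per `windowSeconds`.
  allow(key: string, limit: number, windowSeconds: number): boolean {
    const t = this.now();
    let c = this.counts.get(key);
    if (!c || c.windowEnds <= t) {
      // First hit in a fresh window: INCR creates the key, EXPIRE arms it.
      c = { n: 0, windowEnds: t + windowSeconds * 1000 };
      this.counts.set(key, c);
    }
    c.n += 1; // INCR
    return c.n <= limit;
  }
}
```

Because every app server talks to the same Redis, the counter is shared: the servers stay stateless while the limit is enforced globally.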

What it competes with

Alternatives

Alternative | Type | When it wins
Postgres | database | The serious open-source relational database. The default choice for most production apps that need structured data.
MongoDB | database | The dominant document database. Schemaless flexibility, JSON-shaped documents, harder consistency tradeoffs.
Prisma | ORM | The TypeScript-first ORM. Schema-driven, type-safe, the default for most modern Node apps.
Where it shows up in Shipyard

Deep links

Vocabulary

The words you'll hear

Key-value
Simplest data model: set, get, delete by key.
TTL
Time-to-live. Keys can expire automatically.
Pub/Sub
Publish messages on channels; subscribers receive them. No persistence.
Stream
Append-only log with consumer groups. Closer to Kafka than to a queue.
Cluster
Sharded Redis across many nodes.
Sentinel
Redis's automatic-failover system for non-sharded deployments: it monitors the primary and promotes a replica if it goes down.
Cache stampede
Many clients miss the cache simultaneously and overwhelm the database.
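The standard defense against a stampede is request coalescing: when many callers miss the cache for the same key at once, only one recompute runs and the rest await its result. A minimal sketch, with a hypothetical `loader` standing in for the database query that would normally repopulate Redis:

```typescript
// Request coalescing: dedupe concurrent cache misses per key.
// The loader is hypothetical — in practice it queries the database
// and writes the result back to Redis with a TTL.
class Coalescer<T> {
  private inFlight = new Map<string, Promise<T>>();

  get(key: string, loader: () => Promise<T>): Promise<T> {
    const existing = this.inFlight.get(key);
    if (existing) return existing; // join the in-flight load
    const p = loader();
    this.inFlight.set(key, p);
    // Clear the slot once settled, whether it resolved or rejected.
    p.then(
      () => this.inFlight.delete(key),
      () => this.inFlight.delete(key),
    );
    return p;
  }
}
```

This runs per app server; a fully global lock needs a Redis-side primitive (e.g. SET NX), but per-process coalescing already removes most of the duplicate load.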
Prompting

Bad vs. good prompt for Redis

✕ Bad prompt
add redis
✓ Good prompt
Add Redis caching to Tasklane's GET /api/tasks endpoint. Cache by user_id with a 60-second TTL. Use ioredis with Node. Add cache invalidation on POST/PATCH/DELETE that deletes the user's tasks key. Use a stampede-prevention pattern (request coalescing) for hot keys.

Why it works: Specifies the client library, the cache key shape, the invalidation contract, and asks for stampede protection — the realistic failure mode.

Pitfalls

What bites real teams

⚠ Treating it as durable

Redis is RAM-first. AOF and RDB give persistence, but each trades durability against speed, and the defaults leave a loss window. Don't store data you can't lose unless you've checked the persistence config.
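The relevant knobs live in redis.conf. An illustrative fragment; check your deployment's actual config rather than assuming these values:

```conf
# redis.conf — illustrative persistence settings
appendonly yes          # AOF: log every write to an append-only file
appendfsync everysec    # fsync once per second (up to ~1s of loss on crash)
save 900 1              # RDB snapshot if at least 1 change in 15 minutes
```

Even `appendfsync everysec` can lose about a second of writes on a crash; only `appendfsync always` approaches database-style durability, at a real throughput cost.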

⚠ Memory blowup

No TTL + steady writes = OOM eventually. Set TTLs by default; monitor memory.

⚠ Hot keys

One key receiving most of the traffic creates a single-instance bottleneck that clustering can't spread, since a key lives on one shard. Mitigate by splitting the hot key into several sharded copies or serving reads from replicas.

References

Official docs only