Redis vs Memcached

Compare Redis and Memcached in-memory data stores - data structures, persistence, clustering, and caching strategies.

Overview

Redis and Memcached are both high-performance, in-memory data stores commonly used for caching. Redis is a versatile data structure server supporting strings, hashes, lists, sets, sorted sets, streams, and more. Memcached is a simpler, focused key-value cache designed purely for speed and simplicity.

Feature Comparison

| Feature | Redis | Memcached |
|---|---|---|
| Data structures | Strings, hashes, lists, sets, sorted sets, streams, bitmaps, HyperLogLog | Strings only |
| Persistence | RDB snapshots, AOF log | None (pure cache) |
| Replication | Master-replica | None (client-side) |
| Clustering | Redis Cluster (automatic sharding) | Client-side sharding |
| Pub/Sub | Built-in | None |
| Lua scripting | Built-in | None |
| Transactions | MULTI/EXEC | CAS (check-and-set) |
| TTL/expiration | Per-key | Per-key |
| Max key size | 512 MB | 250 bytes |
| Max value size | 512 MB | 1 MB (default) |
| Threading | Single-threaded (I/O threads in 6.0+) | Multi-threaded |
| Memory efficiency | Higher overhead per key | Lower overhead per key |
| Protocol | RESP | ASCII/binary |
| Eviction policies | 8 policies (LRU, LFU, random, TTL) | LRU only |
| Streams/queues | Redis Streams | None |
| Geospatial | GEOADD, GEODIST, etc. | None |
| Default port | 6379 | 11211 |
| License | BSD (source-available for some modules) | BSD |
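The eviction-policy row deserves a closer look. A minimal sketch of LRU eviction (Memcached's only policy, and one of Redis's eight) is shown below in pure Python; it assumes a fixed item count as capacity rather than byte-level memory accounting, and note that Redis actually *approximates* LRU by sampling keys rather than tracking exact access order:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU eviction sketch: evicts the least recently used key
    when capacity is exceeded. Illustrative only; real caches track
    memory in bytes, not item counts."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def set(self, key: str, value: str):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", "1")
cache.set("b", "2")
cache.get("a")       # touching "a" makes "b" the eviction candidate
cache.set("c", "3")  # capacity exceeded: "b" is evicted
```

Redis's LFU policies add a frequency counter on top of recency, which is why it offers eight policies to Memcached's one.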

Redis Pros & Cons

Pros:

  • Rich data structures beyond simple key-value
  • Persistence options for data durability
  • Built-in replication and clustering
  • Pub/Sub messaging capabilities
  • Lua scripting for atomic operations
  • Sorted sets enable leaderboards and ranking
  • Streams for event sourcing and message queues
  • Active development and large community
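The leaderboard point can be made concrete without a server. The pure-Python model below mimics the semantics of Redis's ZADD, ZINCRBY, and ZREVRANGE (in real code these would be client calls such as `r.zadd("board", {"alice": 300})`); the member names and scores are illustrative:

```python
class Leaderboard:
    """Toy model of a Redis sorted set: members with float scores,
    queried by rank. Redis uses a skiplist plus hash for O(log n)
    updates; a dict plus sort is enough to show the semantics."""

    def __init__(self):
        self.scores: dict[str, float] = {}

    def zadd(self, member: str, score: float):
        self.scores[member] = score

    def zincrby(self, member: str, delta: float):
        self.scores[member] = self.scores.get(member, 0.0) + delta

    def zrevrange(self, start: int, stop: int):
        """Members by descending score (stop is inclusive, as in Redis)."""
        ranked = sorted(self.scores.items(), key=lambda kv: -kv[1])
        return [member for member, _ in ranked[start:stop + 1]]

board = Leaderboard()
board.zadd("alice", 300)
board.zadd("bob", 150)
board.zincrby("bob", 200)        # bob's score is now 350
top_two = board.zrevrange(0, 1)  # ["bob", "alice"]
```

Replicating this on Memcached means serializing and rewriting the whole ranking on every update, which is the practical cost of "strings only."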

Cons:

  • Single-threaded core (I/O threads help but not full multi-threading)
  • Higher memory overhead per key
  • More complex to operate than Memcached
  • Persistence can cause latency spikes (fork for RDB)
  • Module licensing concerns (some modules source-available)
  • Can be over-engineered for simple caching

Memcached Pros & Cons

Pros:

  • Simple and focused (does one thing well)
  • Multi-threaded for better multi-core utilization
  • Lower memory overhead per key
  • Predictable performance
  • Easy to set up and operate
  • Mature and battle-tested
  • No persistence overhead

Cons:

  • Strings only (no complex data structures)
  • No persistence (data lost on restart)
  • No built-in replication or clustering
  • No Pub/Sub or messaging features
  • Limited eviction policies (LRU only)
  • 1 MB default value size limit
  • Fewer features for modern use cases

When to Use Redis

  • Caching with need for data structure flexibility
  • Session stores requiring persistence
  • Real-time leaderboards and ranking (sorted sets)
  • Message queues and Pub/Sub messaging
  • Rate limiting and counters
  • Geospatial queries
  • Full-page and fragment caching
  • Job queues (with Sidekiq, Bull, etc.)
  • Feature flags and configuration caching

When to Use Memcached

  • Simple key-value caching at massive scale
  • Multi-threaded workloads on multi-core machines
  • Caching large HTML fragments or serialized objects
  • Environments where memory efficiency per key matters
  • Simple session caching (without persistence needs)
  • Horizontally scaled cache pools
  • Workloads needing predictable, low-latency reads
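Because Memcached has no built-in clustering, pools are sharded client-side, usually with consistent hashing (the ketama scheme in many clients) so that adding or removing a node remaps only a fraction of keys. A minimal sketch, with illustrative server names and virtual-node count:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring for client-side sharding. Each server
    owns several virtual nodes on the ring; a key routes to the first
    node at or after its hash, wrapping around at the end."""

    def __init__(self, servers, vnodes: int = 100):
        self.ring: list[tuple[int, str]] = []
        for server in servers:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def server_for(self, key: str) -> str:
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]
ring = HashRing(servers)
target = ring.server_for("user:42")  # deterministic for a given ring
```

Redis Cluster moves this responsibility server-side with fixed hash slots; with Memcached, the client library carries it.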

Verdict

Choose Redis for most use cases. Its rich data structures, persistence, clustering, and pub/sub capabilities make it versatile beyond simple caching. It serves as a cache, message broker, queue, and session store.

Choose Memcached when you need a simple, high-throughput cache with multi-threaded performance and your workload is purely key-value string caching, with no need for persistence or advanced features.

Redis has become the default choice for most teams due to its versatility. Memcached remains relevant for high-throughput, simple caching workloads where its multi-threaded architecture provides an advantage.