Cachimbo is a composable caching library that lets you layer different caching strategies to maximize performance.
- Supports external cache stores
  - Redis
  - Valkey
  - Memcached
  - Cloudflare Workers KV
  - Keyv
- Supports in-memory cache stores
  - Least Recently Used (LRU) eviction
  - Time-based (TTL) eviction
  - FIFO eviction
  - Weak References (garbage-collectable cached items)
- Supports composable cache strategies
  - Request coalescing (deduplication)
  - Multi-layer caching (tiered cache)
  - Stale-While-Revalidate
  - TTL jittering
  - Metrics collection
- Easily extendable
First, install the library:

```sh
npm install cachimbo
```

Then, initialize the cache stores and layers you want to use. For example:
```ts
import { RedisCache, SWRCache } from 'cachimbo';

// A Redis cache with a Stale-While-Revalidate layer on top
const cache = new SWRCache({
  cache: new RedisCache({
    client: redisClient, // your Redis client instance
  }),
  defaultTTL: 60 * 15, // 15 minutes
  staleTTL: 60, // 1 minute
});

const data = await cache.getOrLoad<MyData>(
  "mykey", // the cache key
  () => loadData(), // function to load data if not in cache
  { ttl: 60 * 3 }, // cache for 3 minutes
);

// Other useful methods:
// cache.get("key");
// cache.set("key", data, { ttl: 120 });
// cache.delete("key");
// cache.getMany(["key1", "key2"]);
// cache.setMany({ key1: value1, key2: value2 }, { ttl: 300 });
// cache.deleteMany(["key1", "key2"]);
```

In-memory caches offer extremely low latency since data is stored directly in the application's process. They reduce external round-trips, improve performance under load, and are ideal for fast, frequently accessed data.
External caches (such as Redis or Memcached) provide fast, scalable, shared storage that can be accessed across multiple application instances. They offer high throughput, larger memory capacity, and centralized cache management beyond what in-memory caches can reliably provide.
- In-memory
- Redis (and Valkey)
- Memcached
- Cloudflare Workers KV
- Keyv
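As a rough illustration, the sketch below instantiates one store of each kind. The `RedisCache` usage mirrors the quick-start example above; the in-memory `LRUCache` name and its `maxSize`/`defaultTTL` options are assumptions made for this sketch and may differ from the actual API.

```ts
import { LRUCache, RedisCache } from 'cachimbo';

// In-memory store: data lives in the application's process, so reads avoid
// any network round-trip. `LRUCache` and its options are assumed names here.
const localCache = new LRUCache({
  maxSize: 1_000,  // assumed option: evict least-recently-used entries beyond this count
  defaultTTL: 60,  // assumed option: default time-to-live in seconds
});

// External store: shared across application instances via a Redis server.
const sharedCache = new RedisCache({
  client: redisClient, // your Redis client instance
});

// Both kinds of store expose the same methods shown earlier (get, set, getOrLoad, ...).
await localCache.set("greeting", "hello", { ttl: 30 });
const value = await sharedCache.get("greeting");
```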
Cache layers are composable components that sit between your code and the cache store. While cache stores define where data is stored, cache layers define how the cache is accessed.
Each layer intercepts cache operations to add behavior. Layers can be stacked to form a pipeline, allowing advanced caching strategies to be reused across different cache backends (see the sketch after the list below).
- Request Coalescing (deduplication)
- Tiered Caching (multi-layer caching)
- Stale-While-Revalidate
- TTL Jittering
- Async/Lazy Initialization
- Key Transformation
- Metrics Collection
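To illustrate stacking, the sketch below wraps a tiered in-memory + Redis store with a Stale-While-Revalidate layer and request coalescing. `SWRCache` and `RedisCache` appear in the quick-start above; `TieredCache`, `CoalescingCache`, and `LRUCache` (and their options) are assumed names used only for illustration, and `fetchUserProfile`/`UserProfile` are placeholders like `loadData` in the quick-start.

```ts
import {
  CoalescingCache, // assumed name: deduplicates concurrent loads for the same key
  LRUCache,        // assumed name: in-memory LRU store
  RedisCache,
  SWRCache,
  TieredCache,     // assumed name: checks faster tiers before slower ones
} from 'cachimbo';

// Store tier: a small in-memory L1 in front of a shared Redis L2.
const store = new TieredCache({
  tiers: [
    new LRUCache({ maxSize: 500 }),
    new RedisCache({ client: redisClient }),
  ],
});

// Layer tier: serve stale values while refreshing in the background,
// and coalesce concurrent cache misses into a single load.
const cache = new CoalescingCache({
  cache: new SWRCache({
    cache: store,
    defaultTTL: 60 * 15, // 15 minutes
    staleTTL: 60,        // 1 minute
  }),
});

// Callers use the same API regardless of how many layers are stacked.
const profile = await cache.getOrLoad<UserProfile>(
  `user:${userId}`,              // the cache key
  () => fetchUserProfile(userId), // loader invoked on a cache miss
  { ttl: 60 * 5 },                // cache for 5 minutes
);
```

Because each layer exposes the same interface as the store beneath it, the same pipeline can be reused over any cache backend.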