My short pitch is this: tobolac is probably one of the fastest single-node caches you can use within Bun and Node.js. It comes with full type safety, Zod schema compatibility, stampede protection, and SWR. It’s open source under the MIT license at filipdanic/tobolac and installable through your JavaScript package manager of choice as tobolac.

Motivation and Use Cases

If you are building a distributed system aka anything between “a bit more traffic than a single host/node can handle” and “we are the next Amazon,” you are going to need a cache. It’s likely going to be Redis, and you’re either going to go through the pain of setting up and maintaining a cluster, or your CTO will insist on something like ElastiCache. Whatever you end up using, you are in good company, and my library, tobolac, has nothing to offer there.

But what about a single-node use case? Think:

  • sporadic event/data pipelines with a lot of volume,
  • local simulation software,
  • local ML inference caches,
  • web servers with a lot of traffic and tight latency budgets,
  • dev/pre-prod environments,
  • build tools,
  • single-player video games,
  • mobile/desktop apps that process a lot of data locally.

All of these tend to need a fast, reliable cache. You could use Redis, of course, but even with a single-node deployment you will still pay the network round trip; that’s part of the protocol Redis uses. That’s where tobolac comes in. It is built for the cases where everything runs on one machine and you want the cache to be as local, boring, and fast as possible. As a bonus, it is also:

  • fully type-safe,
  • Zod-compatible,
  • zero-dependency on Bun, and
  • one runtime dependency on Node.js.

The Stats

Using a multi-tier cache (LRU + SQLite), tobolac comes out ahead on ops/s in my single-node benchmarks. Here’s a quick table from the shifting-hotset scenario:

  library / key space (ops/s)         1k       10k      100k
  no-cache                        13,571    13,609    13,608
  node-lru                        97,498    93,183    51,928
  bento-multitier (redis + lru)  105,107   108,851    74,585
  tobolac (sqlite + lru)         459,531   376,200   246,387

The test above mimicked an application where a small hot window shifts every 50k operations, and 90% of accesses hit the current hot window. In other words, it is meant to look like a real workload with locality, churn, and a loader that is expensive enough for caching to matter. More stats and benchmarks are available here.
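For the curious, that access pattern is easy to sketch. The generator below is illustrative only (my own names and parameters, not the actual benchmark harness): a small hot window moves through the key space on a fixed schedule, and most accesses land inside the current window.

```typescript
// Illustrative sketch of a shifting-hotset workload. Not tobolac's
// benchmark code; all names and parameters here are assumptions.
function* shiftingHotset(opts: {
  keySpace: number;   // total number of distinct keys (1k / 10k / 100k above)
  hotWindow: number;  // size of the hot window
  shiftEvery: number; // shift the window every N operations (50k in the benchmark)
  hotRatio: number;   // fraction of accesses that hit the hot window (0.9)
  ops: number;        // total operations to generate
}): Generator<number> {
  let windowStart = 0;
  for (let i = 0; i < opts.ops; i++) {
    // Every `shiftEvery` operations, slide the hot window forward.
    if (i > 0 && i % opts.shiftEvery === 0) {
      windowStart = (windowStart + opts.hotWindow) % opts.keySpace;
    }
    const key = Math.random() < opts.hotRatio
      ? windowStart + Math.floor(Math.random() * opts.hotWindow)
      : Math.floor(Math.random() * opts.keySpace);
    yield key % opts.keySpace;
  }
}
```

A workload like this rewards caches that evict the old window quickly and keep the new one resident, which is exactly what the LRU tier is for.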

Developer Experience

Let’s look at the code below, which comes from a financial analysis desktop app.

import { createCache, namespace } from 'tobolac';
import type { Report } from '../connectors/services/types';
import { createReport } from '../connectors/services/getters';

const cacheSetup = createCache({
  namespaces: {
    reports: namespace<Report, [id: string, from: number, to: number]>({
      ttl: '7d',
      factoryGetter: async (id, from, to) => createReport(id, from, to),
    }),
  },
});

if (!cacheSetup.ok) {
  // something went wrong with setting up your cache;
  // bail out so we never touch cacheSetup.value below
  console.error(cacheSetup.error.message);
  process.exit(1);
}

const appCache = cacheSetup.value;

const myReport = await appCache.reports.getOrSet("b651113bf96a5e3543d7", timestamp1, timestamp2);
if (!myReport.ok) {
  // error handling
} else {
  // do something with myReport.value, already inferred as a 'Report' type
}

In the code above, a Report object is expensive to compute because it involves a lot of network round-trips and CPU-intensive work. That’s why the application wants to keep it cached. Here’s what is nice about the code:

  1. Entities can have composite keys, and the factory getter’s parameters are inferred from them.
  2. The cache can automatically get the data for you. It is all type-safe.
  3. ttl can be expressed ergonomically, and you can also provide a fixed timestamp.
  4. The database used underneath is persistent on disk.
  5. Every action returns a Result type, making it easy to separate errors. The cache doesn’t throw exceptions.
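Point 5 is worth a closer look. Judging from the examples above, the Result shape is roughly the following sketch; tobolac’s actual type may carry more detail, and `unwrapOr` is my own illustrative helper, not part of the library.

```typescript
// Sketch of the Result shape implied by the examples above (an assumption,
// not tobolac's exact definition).
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Checking `ok` narrows the union, so `value` and `error` are only
// reachable on the branch where they actually exist.
function unwrapOr<T>(r: Result<T>, fallback: T): T {
  return r.ok ? r.value : fallback;
}
```

Because the discriminant is a plain boolean field, TypeScript narrows the type for you in both branches, which is what makes the `myReport.value` access above type-safe without a cast.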

What Makes tobolac so Fast?

Earlier, I mentioned that, on a single-node deployment, tobolac is faster than a Redis-based solution because there’s no network overhead. But that alone is not enough to compete with Redis. There were a few other optimizations I had to make:

  1. WAL mode: This was the most obvious one, and literally a one-liner. SQLite’s Write-Ahead Logging mode ensures concurrent reads do not block writes. (WAL docs.)
  2. Batching: Even with WAL mode, inserting items one by one would not scale. The library now leans on several techniques here: batched flushes, write-back behavior, and transactions.
  3. Stampede protection: A cache stampede occurs when many concurrent calls come in for the same missing or recently expired key. This happened quite a lot in my benchmark runs. The fix was simple in principle: keep an in-flight promise per key, then hand that same promise back to every caller until it resolves.
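That per-key in-flight promise idea fits in a few lines. Here is a minimal standalone sketch of the technique, with my own names; it is not tobolac’s internal code.

```typescript
// Minimal sketch of per-key stampede protection: the first caller for a
// missing key starts the loader; every concurrent caller for the same key
// is handed the same in-flight promise. Illustrative only.
const inFlight = new Map<string, Promise<unknown>>();

async function dedupedLoad<T>(
  key: string,
  loader: (key: string) => Promise<T>,
): Promise<T> {
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>; // join the in-flight load

  // Remove the entry once the load settles (success or failure), so a
  // later miss triggers a fresh load instead of a stale promise.
  const p = loader(key).finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

Note the `.finally` cleanup: without it, a failed load would be cached forever as a rejected promise, and every future caller for that key would see the same error.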

That’s the whole idea behind tobolac: if your app lives on one machine, your cache should act like it. And it should be blazing fast. Give it a try.

Wishing your caches a good hit rate,

–Filip