If you're building a dApp, you've probably written something like this:
```js
const cached = await redis.get(`cdp:position:${user}`);
if (cached) return JSON.parse(cached);

const position = await cdpContract.methods.getPosition(user).call();
await redis.setEx(`cdp:position:${user}`, 60, JSON.stringify(position)); // 60s TTL
return position;
```
This works, but there's a hidden problem: your data is stale for up to 60 seconds by design. And you're making RPC calls every 60 seconds regardless of whether anything actually changed on-chain.
For most protocol contracts — a CDP vault, a staking contract, a price oracle, a liquidity pool — transactions are relatively rare. Most users are just reading. The contract state sits unchanged for minutes, hours, sometimes days.
The insight
Contract state only changes when a transaction is mined. A user's CDP position doesn't change on its own between blocks — it changes because that user sent a transaction.
So instead of a TTL, you can do this:
- Cache forever (TTL = 0)
- Watch the blockchain for transactions to your contracts
- Delete the affected Redis keys the moment a tx is detected
```
tx mined → watcher detects it → redis.del("cdp:position:*") → next request hits RPC
```
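The watcher itself can be small. Here is a minimal sketch, assuming ethers v6 and node-redis v4 — the `INVALIDATION_MAP` and function names are illustrative, not Blockpulse's internals:

```javascript
// Illustrative mapping from watched contract address to Redis key patterns.
// In Blockpulse this comes from config/config.js instead.
const INVALIDATION_MAP = {
  "0xYourCDPContract": ["myapp:cdp:position:*", "myapp:cdp:stats:*"],
  "0xYourOracle": ["myapp:price:*"],
};

function patternsFor(address) {
  return INVALIDATION_MAP[address] ?? [];
}

// Watcher loop (not invoked here): subscribe to logs from the watched
// contracts and delete matching keys the moment a transaction touches one.
async function watch(wsRpcUrl) {
  const { WebSocketProvider } = require("ethers"); // ethers v6
  const { createClient } = require("redis");       // node-redis v4
  const provider = new WebSocketProvider(wsRpcUrl);
  const redis = createClient();
  await redis.connect();

  for (const address of Object.keys(INVALIDATION_MAP)) {
    provider.on({ address }, async () => {
      for (const pattern of patternsFor(address)) {
        // SCAN, not KEYS: KEYS blocks Redis on large keyspaces.
        for await (const key of redis.scanIterator({ MATCH: pattern })) {
          await redis.del(key);
        }
      }
    });
  }
}
```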
| Scenario (over 1 hour) | 60s TTL | Event-driven (TTL = 0) |
|---|---|---|
| No activity | 60 RPC calls | 0 RPC calls |
| 1 tx | 60 RPC calls | 1 RPC call |
| 10 tx | 60 RPC calls | 10 RPC calls |
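The comparison is simple arithmetic: a TTL cache refetches once per TTL window whether or not anything changed, while event-driven invalidation refetches once per transaction. A quick sketch, numbers only:

```javascript
// 60s TTL: one refetch per TTL window per hour, regardless of activity.
function ttlCallsPerHour(ttlSeconds) {
  return Math.floor(3600 / ttlSeconds);
}

// Event-driven: one refetch per invalidating transaction.
function eventDrivenCallsPerHour(txInHour) {
  return txInHour;
}

console.log(ttlCallsPerHour(60));        // 60, even if nothing changed
console.log(eventDrivenCallsPerHour(0)); // 0 during quiet hours
```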
The less active your contract, the bigger the win. For a lending protocol or an oracle, this can mean zero RPC calls during quiet periods while data stays perfectly fresh.
Note: This pattern is designed for your own protocol contracts with moderate activity — not for watching global tokens like USDC or WETH, which receive thousands of transactions per block and would invalidate your cache constantly.
Introducing Blockpulse
I extracted this pattern from a production DeFi app into a standalone service: Blockpulse.
You configure which contracts to watch and which Redis patterns to delete:
```js
// config/config.js
module.exports = {
  contracts: [
    {
      address: "0xYourCDPContract",
      name: "cdp",
      events: ["PositionOpened", "PositionClosed", "Liquidated"],
      cacheKeys: [
        "myapp:cdp:position:*",
        "myapp:cdp:stats:*"
      ]
    },
    {
      address: "0xYourOracle",
      name: "oracle",
      events: ["PriceUpdated"],
      cacheKeys: ["myapp:price:*"]
    }
  ]
};
```
That's it. Start it alongside your app:
```sh
docker compose up
```
Every time a transaction touches your CDP contract, Blockpulse deletes `myapp:cdp:position:*` from Redis. Your backend gets fresh data on the next request — and not a moment sooner.
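The read path from the top of the post barely changes: drop the TTL argument and cache forever. A sketch, where `redis` is a node-redis-style client and `cdpContract` a web3.js-style contract, as in the opening snippet:

```javascript
// Cache-forever read path: the entry stays valid until the watcher
// deletes it, so no TTL is set on write.
async function getPositionCached(redis, cdpContract, user) {
  const key = `myapp:cdp:position:${user}`;

  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // Cache miss: only happens after an invalidation (or cold start).
  const position = await cdpContract.methods.getPosition(user).call();
  await redis.set(key, JSON.stringify(position)); // no EX / no TTL
  return position;
}
```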
What else it does
Beyond cache invalidation, Blockpulse also:

- Indexes events — decoded logs stored in Redis, queryable via `/api/events/:address`
- REST API for contract calls — `/api/call/:contract/:method` with Redis caching
- Batch calls — `/api/batch` for multiple contract reads in one round trip
- Historical sync — backfills past events via the Etherscan API on startup
- Cache dependencies — invalidate contract B's keys when contract A changes
- Multi-chain — set `CHAIN_ID` for Polygon, Arbitrum, Base, etc.
Architecture
```
Ethereum Node (WebSocket / HTTP)
              │
              ▼
       ┌─────────────┐   tx detected
       │  Blockpulse │ ─────────────► Redis DEL (your key patterns)
       └──────┬──────┘
              │
              ▼  REST API :3002
   /api/call · /api/events · /api/batch
```
Getting started
```sh
git clone https://github.com/nagor2/blockpulse
cd blockpulse
cp .env.example .env                          # add your RPC URL and REDIS_URL
cp config/config.example.js config/config.js  # add your contracts
npm install && npm start
```
The service has been running in production on Ethereum mainnet as part of a DeFi frontend. It's MIT licensed and ~1700 lines of Node.js.
GitHub: github.com/nagor2/blockpulse
Curious if anyone's solved this differently — happy to discuss tradeoffs in the comments.
Top comments (1)
The tradeoff table is the whole argument. 60 RPC calls vs 0 during quiet periods. For most protocol contracts that's a clear win.
I took a different approach for a specific reason: my tool answers user questions about lending positions during market crashes, which is exactly when the data matters most and when event-driven invalidation gets overwhelmed. During the KelpDAO exploit in April, $8.4B in deposits fled Aave within hours. Every Chainlink oracle was updating constantly. Event-driven invalidation would have been firing on every block, which at that point is functionally the same as no cache at all.
So I skipped caching prices entirely. Every position query reads directly from the protocol's on-chain oracle: `AaveOracle.getAssetPrice()` for Aave V3, `comet.getPrice(priceFeed)` for Compound V3. More RPC calls per request, but the answer is never stale. When someone asks "will I get liquidated if ETH drops 20%?" the simulation runs against the same price the liquidator would use, not a cached version from N seconds ago.
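For concreteness, the shape of that direct read — a sketch assuming ethers v6; the Aave V3 `getAssetPrice(address)` signature is real, but the position fields and helper function are illustrative:

```javascript
// Pure stress-test helper: health factor after a collateral price drop.
// A health factor below 1 means the position is liquidatable.
function healthFactorAfterDrop(collateralUsd, debtUsd, liqThreshold, dropPct) {
  return (collateralUsd * (1 - dropPct) * liqThreshold) / debtUsd;
}

// Direct-to-oracle read (not invoked here): the simulation uses the same
// price a liquidator would see, never a cached one.
async function willBeLiquidated(rpcUrl, oracleAddr, asset, pos, dropPct) {
  const { JsonRpcProvider, Contract, formatUnits } = require("ethers");
  const provider = new JsonRpcProvider(rpcUrl);
  const oracle = new Contract(
    oracleAddr,
    ["function getAssetPrice(address) view returns (uint256)"],
    provider
  );
  // Aave V3's oracle quotes in USD with 8 decimals on mainnet.
  const price = Number(formatUnits(await oracle.getAssetPrice(asset), 8));
  const collateralUsd = pos.collateralAmount * price;
  return (
    healthFactorAfterDrop(collateralUsd, pos.debtUsd, pos.liqThreshold, dropPct) < 1
  );
}
```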
Your pattern is the right one for the 99% case where contracts are quiet and freshness tolerance is measured in seconds. The 1% case where everything is on fire simultaneously is where I ended up going direct-to-oracle. Different problem, different tradeoff. Nice library.