Daniel Jonathan

Event Debouncing with Logic Apps and Azure Table Storage

Forwarding every webhook event directly to a downstream API is a recipe for throttling and duplicate processing. This post walks through how to fix that with three Logic Apps and one Azure Table Storage table.


Debouncing is a term from frontend development — wait for the noise to stop, then act once.

In integration, this pattern is better described as event buffering with deduplication: absorb bursts, collapse repeated updates per entity, and process only the final state.

In this implementation, Azure Table Storage is not the source of truth — it acts as a deduplication index. We store only the entity ID, and at processing time we re-fetch the authoritative state from the source system before calling downstream APIs.


The Problem

Source systems fire event bursts — hundreds of events at once during bulk imports, and multiple rapid updates for the same entity. You don't need to process every intermediate state — only the final one per entity.

A burst of 200 events may touch 50 entities, each updated multiple times. Every entity should be processed once, with its latest state — and the downstream API called once per entity.


The Pattern

Source System Webhook
        │
        ▼
  rcv-events  (HTTP trigger)
        │  upsert each event → EventBuffer table
        ▼
  Azure Table Storage: EventBuffer
        │  PartitionKey: "relation-events"  RowKey: entityId  Status: "Pending"
        ▼
  prc-events  (Timer: every 5 min)
        │  query Pending rows older than X min → dispatch each
        ▼
  prc-process-single-event
        │  mark Processing → fetch fresh from source → call downstream
        │  delete on success  /  reset to Pending on failure
        ▼
  Downstream API

Step 1 — Receive

rcv-events accepts a batch of events via HTTP and upserts each one into the buffer table. No queue, no broker — the HTTP trigger is the ingress.

rcv-events workflow — HTTP trigger with ForEach upsert to EventBuffer table

Each row looks like this:

{
  "PartitionKey": "relation-events",
  "RowKey": "<entityId>",
  "Event": "updated",
  "EntityType": "Record",
  "Status": "Pending",
  "ReceivedAt": "2026-04-20T14:30:00Z"
}

RowKey = entityId is the key insight. No matter how many events arrive for the same entity, there is always exactly one row. The tenth update overwrites the ninth. Deduplication is a schema decision, not code.
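The overwrite behavior can be sketched with a plain dict standing in for the EventBuffer table — a real implementation would use Table Storage's upsert (e.g. upsert_entity in the azure-data-tables SDK), which has the same key semantics: one (PartitionKey, RowKey) pair, one row. The entity IDs below are made up for illustration.

```python
from datetime import datetime, timezone

# In-memory stand-in for the EventBuffer table, keyed by (PartitionKey, RowKey).
buffer = {}

def upsert_event(entity_id: str, event: str, entity_type: str = "Record"):
    """Upsert one webhook event; repeat events for an entity collapse to one row."""
    key = ("relation-events", entity_id)  # RowKey = entityId does the dedup
    buffer[key] = {
        "PartitionKey": "relation-events",
        "RowKey": entity_id,
        "Event": event,
        "EntityType": entity_type,
        "Status": "Pending",
        "ReceivedAt": datetime.now(timezone.utc).isoformat(),
    }

# A burst of five events for the same entity leaves a single Pending row.
for _ in range(5):
    upsert_event("entity-42", "updated")

print(len(buffer))  # → 1
```

The tenth update overwriting the ninth is not an edge case to handle — it is the whole mechanism.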


Step 2 — Wait

prc-events runs on a timer (every 5 minutes) and queries rows with Status eq 'Pending' whose last write is older than X minutes. Table Storage refreshes the system Timestamp property on every upsert, so it is a natural debounce clock. The time window is your debounce threshold — nothing gets processed until the burst settles.

prc-events workflow — timer trigger querying pending rows and dispatching each to prc-process-single-event
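The timer's query is just an OData filter string. A small helper can build it — a sketch, assuming the system Timestamp (refreshed on every upsert) is the debounce clock; the function name and fixed clock below are illustrative:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def pending_filter(debounce_minutes: int, now: Optional[datetime] = None) -> str:
    """Build the OData filter for rows that are Pending and quiet past the threshold.

    Timestamp le <cutoff> means the entity has not been upserted for
    debounce_minutes — i.e. the burst has settled.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(minutes=debounce_minutes)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"Status eq 'Pending' and Timestamp le datetime'{cutoff}'"

print(pending_filter(5, datetime(2026, 4, 20, 14, 35, tzinfo=timezone.utc)))
# Status eq 'Pending' and Timestamp le datetime'2026-04-20T14:30:00Z'
```

In a Logic App the same expression lives in the Get entities action's filter field, with utcNow() and addMinutes() supplying the cutoff.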


Step 3 — Process

For each pending row, prc-process-single-event:

  1. Marks the row Processing — prevents double-processing if the timer fires again mid-run
  2. Fetches the current state from the source system — never trusts the buffered payload, which may already be stale
  3. Calls the downstream API with fresh data
  4. Deletes the row on success / resets to Pending on failure

prc-process-single-event workflow — mark Processing, fetch from source, call downstream, delete on success or reset to Pending on failure

This gives at-least-once delivery with automatic retry — no custom infrastructure needed.
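The four steps above can be sketched as one function over an in-memory table — a minimal simulation of prc-process-single-event's control flow, not the Logic App itself; the helper names and the flaky downstream are invented for the demo:

```python
# Dict as a stand-in for the EventBuffer table; in production these steps
# run as Logic Apps actions against Table Storage and the two APIs.

def process_pending_row(table, key, fetch_fresh, call_downstream):
    row = table[key]
    row["Status"] = "Processing"            # 1. claim: blocks a concurrent timer run
    try:
        fresh = fetch_fresh(row["RowKey"])  # 2. re-fetch authoritative state
        call_downstream(fresh)              # 3. push fresh data downstream
        del table[key]                      # 4a. success: row is gone
    except Exception:
        row["Status"] = "Pending"           # 4b. failure: picked up on the next run

# One entity whose downstream call fails once, then succeeds on retry.
table = {("relation-events", "e1"): {"RowKey": "e1", "Status": "Pending"}}
attempts = []

def flaky_downstream(payload):
    attempts.append(payload)
    if len(attempts) == 1:
        raise RuntimeError("downstream 503")

process_pending_row(table, ("relation-events", "e1"),
                    lambda eid: {"id": eid}, flaky_downstream)
print(table[("relation-events", "e1")]["Status"])  # → Pending (reset after failure)

process_pending_row(table, ("relation-events", "e1"),
                    lambda eid: {"id": eid}, flaky_downstream)
print(("relation-events", "e1") in table)  # → False (deleted after success)
```

The retry falls out of the state machine: a failed row simply becomes Pending again and is re-dispatched by the next timer run.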


Status Lifecycle

Pending  →  Processing  →  [deleted]
                 │
                 └──(on failure)──→  Pending

Three states, one field. Fully visible in Azure Storage Explorer during an incident.


Why It Works

  • Deduplication for free — one row per entity, always the latest
  • No ordering concerns — you fetch fresh data at processing time, so intermediate states are irrelevant
  • Respects downstream rate limits — 20 updates in 30 minutes still results in one API call to the downstream system
  • Parallel processing — prc-events fans out each pending row as an independent call, so entities are processed concurrently with isolated retry state
  • Operationally transparent — query the table, see exactly what's pending or stuck
  • No broker needed at low-to-moderate scale — if your HTTP trigger can handle the inbound burst and your timer cadence keeps up with the queue depth, you don't need Service Bus

Consider adding Service Bus only if you need strict ordering, dead-lettering, or multiple consumers on the same stream.

When Not to Use This Pattern

Avoid it when you need strict event ordering, every event preserved independently, near-real-time latency, multiple consumers, or very high throughput. In those cases, reach for Service Bus or Event Hubs instead.


No Service Bus. No custom retry logic. No ordering guarantees needed. Just a table, a timer, and one row per entity.
