
Aftab Bashir

Built an event-driven order pipeline in .NET with Kafka and Azure Service Bus

Most order processing systems I have worked with are synchronous. The API receives the request, does the work, and returns the result. That works fine until it does not. The database is slow, a third-party service is down, or you have 500 orders arriving at the same time. Everything backs up and the API starts timing out.

This project is the async version of that. Orders come in through a REST API, get saved to PostgreSQL, and get published to Kafka. A separate consumer picks them up, processes them, and publishes a fulfilment event to Azure Service Bus. The API never waits for any of that.

The architecture

There are three projects in the solution.

OrderPipeline.Api is an ASP.NET Core 10 API. It does two things when an order arrives: saves it to PostgreSQL and publishes an event to Kafka. That is it. The processing happens somewhere else.

OrderPipeline.Consumer is a .NET Worker Service. It runs continuously, reading from the Kafka orders topic. When it picks up an order event, it updates the status to Processing, does the fulfilment work, marks it as Fulfilled in PostgreSQL, and publishes a fulfilment event to Azure Service Bus.

OrderPipeline.Core holds the shared models. Order, OrderItem, OrderEvent, and the interfaces for the repository and publisher.
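The exact shapes live in the repo, but a minimal sketch of what Core might contain looks like this (property names beyond those mentioned in the article are assumptions):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the shared models in OrderPipeline.Core. The enum values and
// most property names here are assumptions based on the article's log output.
public enum OrderStatus { Pending, Processing, Fulfilled, Failed }

public class Order
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string CustomerName { get; set; } = "";
    public string CustomerEmail { get; set; } = "";
    public List<OrderItem> Items { get; set; } = new();
    public OrderStatus Status { get; set; } = OrderStatus.Pending;
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
    public DateTime? ProcessedAt { get; set; }
}

public class OrderItem
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string ProductName { get; set; } = "";
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

public class OrderEvent
{
    public Guid EventId { get; set; } = Guid.NewGuid();
    public Guid OrderId { get; set; }
    public string EventType { get; set; } = "";
    public DateTime OccurredAt { get; set; } = DateTime.UtcNow;
}
```

Both the API and the Consumer reference this project, so the serialized event shape is guaranteed to match on both sides of the Kafka topic.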

The order flow

POST an order to the API:

curl -X POST http://localhost:5125/api/orders \
  -H "Content-Type: application/json" \
  -d "{\"customerName\": \"John Smith\", \"customerEmail\": \"john@example.com\", \"items\": [{\"productName\": \"Laptop\", \"quantity\": 1, \"unitPrice\": 999.99}]}"

The API responds immediately with a 201. In the background:

Order e6e20d66 created in database
Order e6e20d66 published to Kafka topic orders at offset 1

Then the consumer picks it up:

Consumed message from partition [0] at offset 1
Processing order e6e20d66 for customer John Smith
Fulfilment event published to Service Bus for order e6e20d66
Order e6e20d66 fulfilled successfully

The client does not wait for any of that. It can poll GET /api/orders/{id} to check the status whenever it wants.
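The API side of that flow is small. This is a sketch rather than the repo's actual controller; the interface method names (`AddAsync`, `GetAsync`, `PublishOrderCreatedAsync`) are assumptions:

```csharp
// Sketch of the API's only responsibilities: persist, publish, return 201.
// IOrderRepository and IOrderEventPublisher are the Core interfaces; their
// exact method names are assumed here.
[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IOrderRepository _orders;
    private readonly IOrderEventPublisher _publisher;

    public OrdersController(IOrderRepository orders, IOrderEventPublisher publisher)
    {
        _orders = orders;
        _publisher = publisher;
    }

    [HttpPost]
    public async Task<IActionResult> Create(Order order)
    {
        await _orders.AddAsync(order);                    // save as Pending
        await _publisher.PublishOrderCreatedAsync(order); // hand off to Kafka
        return CreatedAtAction(nameof(Get), new { id = order.Id }, order);
    }

    [HttpGet("{id:guid}")]
    public async Task<IActionResult> Get(Guid id) =>
        await _orders.GetAsync(id) is { } order ? Ok(order) : NotFound();
}
```

Note there is no fulfilment logic here at all; the POST handler returns as soon as the event is acknowledged by the broker.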

Publishing to Kafka

The publisher uses the Confluent.Kafka producer. The order ID is the message key, which ensures all events for the same order land on the same partition.

var message = new Message<string, string>
{
    Key = order.Id.ToString(),
    Value = JsonSerializer.Serialize(orderEvent)
};

var result = await _kafkaProducer.ProduceAsync(_kafkaTopic, message);
_logger.LogInformation(
    "Order {OrderId} published to Kafka topic {Topic} at offset {Offset}",
    order.Id, _kafkaTopic, result.Offset);

Consuming from Kafka

The consumer uses AutoOffsetReset.Earliest so it picks up messages from the beginning of the topic on first start. Manual commit means a message only gets marked as processed after the work is done, not when it is received.

var config = new ConsumerConfig
{
    BootstrapServers = _configuration["Kafka:BootstrapServers"],
    GroupId = "order-pipeline-consumer",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnableAutoCommit = false
};

If the consumer crashes mid-processing, it picks up from the last committed offset when it restarts. No orders get lost.
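The consume loop that goes with that config looks roughly like this, building on the `config` above. `ProcessOrderAsync` is a stand-in for the real fulfilment logic, and `stoppingToken` is the Worker Service's cancellation token:

```csharp
// Sketch of the Worker Service consume loop. The key detail is that
// Commit() runs only after processing succeeds, matching the
// EnableAutoCommit = false setting.
using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("orders");

while (!stoppingToken.IsCancellationRequested)
{
    var result = consumer.Consume(stoppingToken);
    var orderEvent = JsonSerializer.Deserialize<OrderEvent>(result.Message.Value);

    // Update status, do the fulfilment work, publish to Service Bus.
    await ProcessOrderAsync(orderEvent);

    // Only now is the offset marked as processed; a crash before this
    // line means the message is redelivered on restart.
    consumer.Commit(result);
}
```

One consequence worth knowing: this gives you at-least-once delivery, so a crash between `ProcessOrderAsync` and `Commit` means the same order event can be processed twice. Making the processing idempotent (for example, skipping orders already marked Fulfilled) covers that case.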

Publishing to Azure Service Bus

Once an order is fulfilled, the consumer publishes a fulfilment event to a Service Bus queue. Downstream systems subscribe to that queue to handle shipping, notifications, invoicing, or whatever comes next.

var message = new ServiceBusMessage(JsonSerializer.Serialize(orderEvent))
{
    MessageId = orderEvent.EventId.ToString(),
    Subject = "OrderFulfilled",
    ContentType = "application/json"
};

await _sender.SendMessageAsync(message);

For local development, this uses the official Microsoft Azure Service Bus emulator running in Docker. Add UseDevelopmentEmulator=true to the connection string and it behaves like the real service for the queue operations used here.
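Wiring the client up against the emulator looks roughly like this. The SAS key below is the emulator's documented default placeholder; the queue name is an assumption, not necessarily what the repo uses:

```csharp
using Azure.Messaging.ServiceBus;

// Default connection string for the local Service Bus emulator, per the
// emulator docs. In the real project this would live in appsettings.json.
const string connectionString =
    "Endpoint=sb://localhost;SharedAccessKeyName=RootManageSharedAccessKey;" +
    "SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

await using var client = new ServiceBusClient(connectionString);

// "order-fulfilment" is an assumed queue name for illustration; the queue
// must also be declared in the emulator's config.json.
await using ServiceBusSender sender = client.CreateSender("order-fulfilment");
```

Because the connection string is the only thing that changes, swapping to a real Azure namespace in production is a configuration change, not a code change.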

The database

The Consumer and API share the same PostgreSQL database but they access it independently through EF Core. The API writes new orders. The Consumer reads and updates them. No shared state, no coupling between the two services beyond the database schema.

order.Status = OrderStatus.Processing;
await db.SaveChangesAsync();

// do the fulfilment work

order.Status = OrderStatus.Fulfilled;
order.ProcessedAt = DateTime.UtcNow;
await db.SaveChangesAsync();

Running it locally

Everything runs with Docker Compose. One command starts PostgreSQL, Kafka, Zookeeper, Kafka UI, and the Service Bus emulator.

git clone https://github.com/aftabkh4n/order-pipeline.git
cd order-pipeline
docker-compose up -d

Then start the API and Consumer in separate terminals and send a test order. The Kafka UI at http://localhost:8080 lets you watch messages flow through the topic in real time.

What I learned

Kafka startup takes time. On the first request after starting the containers, the producer waits while Kafka finishes initialising. Subsequent requests are fast. Worth knowing when you are testing and wondering why the first call takes 60 seconds.

Manual offset commit is important. With auto-commit enabled, Kafka marks a message as processed the moment it is received. If the consumer crashes before finishing the work, that message is gone. Manual commit means you commit only after the work succeeds.

The Service Bus emulator is genuinely useful. I expected it to be a rough approximation but it behaves exactly like the real service for basic queue operations. No need to touch a real Azure subscription during development.

Source code: https://github.com/aftabkh4n/order-pipeline

If you have questions or run into issues getting it running, drop a comment below.

Top comments (6)

Andrew Tan

This is a clean, textbook async pipeline — separating ingestion from processing is exactly how you stop APIs from falling over under load. One thing I'd watch: you now have two sources of truth in flight, PostgreSQL and Kafka. If the API crashes after writing to Postgres but before publishing, you've got an order that never gets processed. Have you considered using an outbox pattern or transactional writes to close that gap?

Aftab Bashir

That is a valid concern and you are right to flag it. The current implementation does have that gap. If the API crashes after the database write but before the Kafka publish, the order sits in PostgreSQL with Pending status and never moves forward.
The outbox pattern is the clean fix for this. Instead of writing to PostgreSQL and then publishing to Kafka as two separate operations, you write the order and the outbox event in the same database transaction. A separate background process then reads unprocessed outbox records and publishes them to Kafka, marking each one as processed after a successful publish. The API never touches Kafka directly.
I have not implemented it in this project yet but it is the natural next step. The tricky part in .NET is deciding whether to use a dedicated outbox library like MassTransit or CAP, or roll a lightweight version with EF Core and a background service. I lean toward the lightweight approach for a project this size since it keeps the dependencies minimal and the behaviour visible.
Good catch. Worth a follow-up post on its own.
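For anyone reading along, a rough sketch of the lightweight EF Core version described above; every name here is illustrative, not an actual type from the repo:

```csharp
// Illustrative outbox sketch. The order and the outbox record are saved in
// one SaveChangesAsync call, which EF Core wraps in a single transaction:
// both rows are written, or neither is.
public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Payload { get; set; } = "";
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
    public DateTime? ProcessedAt { get; set; }
}

// API side: no direct Kafka call any more.
db.Orders.Add(order);
db.OutboxMessages.Add(new OutboxMessage
{
    Payload = JsonSerializer.Serialize(orderEvent)
});
await db.SaveChangesAsync();

// Background service: publish unprocessed records, then mark them done.
var pending = await db.OutboxMessages
    .Where(m => m.ProcessedAt == null)
    .OrderBy(m => m.CreatedAt)
    .ToListAsync();

foreach (var m in pending)
{
    await _kafkaProducer.ProduceAsync("orders",
        new Message<string, string> { Key = m.Id.ToString(), Value = m.Payload });
    m.ProcessedAt = DateTime.UtcNow;
    await db.SaveChangesAsync();
}
```

A crash between the Kafka publish and the ProcessedAt update means the record is published again on the next poll, so this trades the lost-order gap for at-least-once delivery; consumers need to tolerate duplicates.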

Aftab Bashir • Edited

Update: I have now implemented the outbox pattern in this project. The order and outbox record are written in the same PostgreSQL transaction. A background service polls unprocessed outbox records every 5 seconds and publishes them to Kafka, marking each one as processed after a successful publish. The gap is closed. Commit is here if you want to see the implementation: github.com/aftabkh4n/order-pipeline

Andrew Tan

Nice :-)

The lightweight outbox approach is the right call for a project this size — keeps the moving parts visible and avoids pulling in a heavy bus framework just to guarantee delivery.

One small thing to watch as this grows: with a 5-second polling interval, you are trading latency for database load. That is fine at low volume, but if throughput ramps up you might want to switch to an advisory-lock-based approach or a FOR UPDATE SKIP LOCKED query so multiple poller instances do not step on each other. Also worth considering what happens if Kafka is down: does the background service keep retrying with backoff, or do unprocessed records pile up? A dead-letter path, or at least a metric on outbox age, can save you a lot of debugging later.

Either way, nice work!

P.S. I like your hero image. What did you use to make it?

Aftab Bashir

Andrew, thank you, really good points. The FOR UPDATE SKIP LOCKED approach is on my radar for when this needs to scale horizontally. Right now there is only one poller instance so contention is not an issue, but you are right that it becomes one fast if you add more.

On the Kafka down scenario, currently unprocessed records just pile up and the poller keeps retrying every 5 seconds with no backoff. That is a gap worth fixing. A dead letter path and an alert on outbox age would be the next step. Adding it to the backlog.

For the hero image I use Canva. Dark background, grab the relevant logos from their asset library, add the title text. Takes about 10 minutes once you have a template you like.

Aftab Bashir

@andrew_tan_layline I wrote up the full implementation based on your feedback, outbox pattern, retry logic, and the two things you flagged for scale. Link in case it is useful: