- Connect: https://xam-heisenberg-company.vercel.app/
- GitHub: https://github.com/Subham-Maity
- Twitter: https://twitter.com/TheSubhamMaity
- LinkedIn: https://www.linkedin.com/in/subham-xam
- Insta: https://www.instagram.com/subham_xam
Building a Real-Time Delivery Tracking System with Socket.IO, Redis Pub/Sub, the Redis Streams Adapter, and Kafka
I recently worked on an exciting project for a client: a delivery app similar to Zomato, where users can track their driver's location live on a map.
The complete application used Flutter for the frontend with both NestJS and Golang powering different versions of the backend.
While I developed two separate implementations, this article focuses purely on the core tracking logic that's completely language-independent.
If you're curious about the actual code, everything is available on GitHub: https://github.com/Subham-Maity/RTLS-Scale.
But don't worry about the specific programming languages; I've designed this guide to be accessible to anyone interested in understanding the fundamental architecture of real-time location tracking systems.
Let me walk you through how I built this prototype, how it works, and how to scale it for real-world applications.
Important disclaimer: this is not production-ready code, as a full commercial implementation would require additional business logic, security considerations, battery optimization, and many other factors I won't cover here. I'm also not addressing driver matching algorithms or distance calculations; this article focuses exclusively on the real-time tracking system architecture.
Along the way, I'll share insights from my experience, including practical advice on backend-frontend communication and what I learned about building reliable real-time systems. By the end, you'll understand exactly how that little moving dot on your food delivery app actually works behind the scenes!

- Connect: https://www.subham.online
- Repo: https://github.com/Subham-Maity/RTLS-Scale
How the Prototype Works
Imagine this: you open the prototype in a browser, and there are two buttons: Enter as User or Enter as Driver.
Pretty straightforward, right?
If you pick Driver, the app starts sending your location (latitude and longitude) to the server every few seconds.
If you pick User, you see the driver's location updating live on a map.
To test it, I opened the driver page on my phone and the user page on my laptop. I walked around a bit with my phone, and on my laptop, I could see my position moving on the map in real time. It was a satisfying "yes, this is working!" moment. But this was just a prototype. In a real app, you'd need proper authentication, middleware, and all that stuff. Here, my focus was on the core logic: how to send the driver's location to the user continuously, without any hiccups.
Server 1: The Basic WebSocket Setup
Let's start with the simplest way I did this, using WebSockets. The code for this lives in the first server folder in the repo.
Here's how it works, step by step:
- Driver sends location: The driver's app connects to the server over WebSockets and emits a `send-location` event with its latitude and longitude every few seconds. Think of it as the driver saying, "Hey server, here's where I am right now!"
- Server broadcasts it: The server listens for this event and relays the location to every connected client (like the user's app) via a `receive-location` event. It's the server shouting, "Everyone, here's the driver's new position!"
- User updates map: The user's app listens for `receive-location` events and moves the driver's dot on the map. Simple and quick.
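The three steps above can be sketched without any networking library. This dependency-free simulation (the `BroadcastHub` class and its method names are my own invention, not code from the repo) mimics the relay logic and shows why naive broadcasting gets expensive:

```typescript
// Hypothetical in-memory stand-in for the basic WebSocket server:
// every driver update is relayed to every connected user.
type DriverLocation = { id: string; latitude: number; longitude: number };

class BroadcastHub {
  private users = new Map<string, DriverLocation[]>(); // userId -> inbox

  connectUser(userId: string): void {
    this.users.set(userId, []);
  }

  // Mirrors the `send-location` -> `receive-location` flow: the server
  // pushes the update to ALL users, whether they care about it or not.
  sendLocation(update: DriverLocation): void {
    for (const inbox of this.users.values()) inbox.push(update);
  }

  inbox(userId: string): DriverLocation[] {
    return this.users.get(userId) ?? [];
  }
}

const hub = new BroadcastHub();
hub.connectUser('user-1');
hub.connectUser('user-2');

// Two drivers report their positions once each...
hub.sendLocation({ id: 'driver-A', latitude: 22.57, longitude: 88.36 });
hub.sendLocation({ id: 'driver-B', latitude: 22.58, longitude: 88.37 });

// ...and every user receives BOTH updates, even though each user only
// cares about one driver. With 100 drivers, that's 100 messages per
// interval per user.
console.log(hub.inbox('user-1').length); // 2
console.log(hub.inbox('user-2').length); // 2
```

This is exactly the waste the next sections eliminate.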
For a small setup, this works like a charm. But then I started thinking: what if there are hundreds or thousands of drivers? Will this still hold up?
Is This Scalable? Not Quite
Here's where I hit a wall:
- Too Many Connections: Every WebSocket connection uses server resources (CPU, memory, and so on). With thousands of drivers and users, one server can't handle it alone. It'll slow down or crash.
- Wasting Data: The server sends every driver's update to all users. So if there are 100 drivers, each user gets 100 updates every few seconds, even though they only care about their own driver. That's a lot of useless data clogging the system.
- Adding More Servers: If I add more servers to share the load, how do I make sure the right updates reach the right users? Without some clever trick, it's a headache. Assuming you're a clever programmer, feel free to drop any tricky solutions in the comments!
Verdict: This is fine for a prototype or a small app with fewer than 100 drivers. But for a big delivery app? No chance; it'll break.
Server 2: Adding Redis Pub/Sub
So, I needed a better way. That's when I brought in Redis Pub/Sub. Redis is a super-fast in-memory store, and its publish-subscribe system is perfect for scaling real-time workloads. Check the code in `2. server (socket + redis pub-sub)/src/websockets/location.gateway.ts`. Here's how I made it work, step by step:
- Driver Publishes Location: When the driver sends a `send-location` event, the server doesn't broadcast it directly. Instead, it publishes the location to a Redis channel called `location-updates`. Here's the code:
```typescript
@SubscribeMessage('send-location')
handleLocation(client: Socket, data: { latitude: number; longitude: number }) {
  const locationData = {
    id: client.id,
    latitude: data.latitude,
    longitude: data.longitude,
  };
  this.pubSubService.publish('location-updates', JSON.stringify(locationData));
}
```
- Server Subscribes and Targets Updates: The server subscribes to the `location-updates` channel and sends each update only to the users who need it, using WebSocket rooms. Each driver has a room (named after their ID), and users join that room to track them. Here's how it's set up in the constructor:
```typescript
constructor(private pubSubService: PubSubService) {
  this.pubSubService.subscribe('location-updates', (message) => {
    const locationData = JSON.parse(message);
    this.server.to(locationData.id).emit('receive-location', locationData);
  });
}
```
And when a user wants to track a driver:
```typescript
@SubscribeMessage('track-driver')
handleTrackDriver(client: Socket, driverId: string) {
  client.join(driverId);
}
```
- Scaling with Multiple Servers: Redis makes this easy. Multiple NestJS servers can subscribe to the same `location-updates` channel. When a driver's location is published, every server receives it and forwards it to the right room. No mess, no fuss.
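The room-based routing above can be modeled as a simple routing table. This simplified, dependency-free sketch (the `RoomRouter` class and its names are mine, not from the repo) shows how an update reaches only the users tracking that driver:

```typescript
type DriverLocation = { id: string; latitude: number; longitude: number };

// Hypothetical stand-in for Socket.IO rooms: one room per driver ID.
class RoomRouter {
  private rooms = new Map<string, Set<string>>();        // driverId -> userIds
  private inboxes = new Map<string, DriverLocation[]>(); // userId -> messages

  // Equivalent of the `track-driver` handler: client.join(driverId)
  trackDriver(userId: string, driverId: string): void {
    if (!this.rooms.has(driverId)) this.rooms.set(driverId, new Set());
    this.rooms.get(driverId)!.add(userId);
    if (!this.inboxes.has(userId)) this.inboxes.set(userId, []);
  }

  // Equivalent of the subscribe callback:
  // server.to(locationData.id).emit('receive-location', locationData)
  publish(update: DriverLocation): void {
    for (const userId of this.rooms.get(update.id) ?? []) {
      this.inboxes.get(userId)!.push(update);
    }
  }

  inbox(userId: string): DriverLocation[] {
    return this.inboxes.get(userId) ?? [];
  }
}

const router = new RoomRouter();
router.trackDriver('user-1', 'driver-A');
router.trackDriver('user-2', 'driver-B');

router.publish({ id: 'driver-A', latitude: 22.57, longitude: 88.36 });
router.publish({ id: 'driver-B', latitude: 22.58, longitude: 88.37 });

// Each user now receives exactly one update: their own driver's.
console.log(router.inbox('user-1').length); // 1
console.log(router.inbox('user-1')[0].id);  // driver-A
```

Contrast this with the broadcast version: the number of messages per user no longer grows with the total number of drivers.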
Why This Is Better
- Targeted Updates: Only users tracking a specific driver get their updates. No more flooding everyone with data they don't need.
- Horizontal Scaling: Add more servers, and Redis handles the coordination. Each server manages its own clients, and the load gets shared.
This is a big step up from the basic setup. But I found something even better. Keep reading!
Server 3: Redis Streams Adapter for the Win
While Redis Pub/Sub was good, I stumbled upon the Redis Streams Adapter for Socket.IO, and it's like Pub/Sub's big brother: more powerful and reliable. The code for this is in:
- `3. server (socket + redis streams adapter)/src/redis/redis.module.ts`
- `3. server (socket + redis streams adapter)/src/redis/redis-io-adapter.ts`
- `3. server (socket + redis streams adapter)/src/websockets/location.gateway.ts`
Here's how it works, step by step:
- Set Up the Adapter: I created a `RedisIoAdapter` in `3. server (socket + redis streams adapter)/src/redis/redis-io-adapter.ts` to use Redis Streams with Socket.IO:
```typescript
export class RedisIoAdapter extends IoAdapter {
  private redisClient: Redis;

  constructor(app: INestApplication, redisClient: Redis) {
    super(app);
    this.redisClient = redisClient;
  }

  createIOServer(port: number, options?: ServerOptions): any {
    const server = super.createIOServer(port, options);
    server.adapter(createAdapter(this.redisClient));
    return server;
  }
}
```
- Driver Sends Location: Same as before, the driver sends a `send-location` event, and the server emits it to their room:
```typescript
@SubscribeMessage('send-location')
handleLocation(client: Socket, data: { latitude: number; longitude: number }) {
  const locationData = {
    id: client.id,
    latitude: data.latitude,
    longitude: data.longitude,
  };
  this.server.to(client.id).emit('receive-location', locationData);
}
```
- Users Track Drivers: Users join the driver's room with a `track-driver` event:
```typescript
@SubscribeMessage('track-driver')
handleTrackDriver(client: Socket, driverId: string) {
  client.join(driverId);
}
```
- Magic of Streams: The Redis Streams Adapter handles everything else. It distributes updates across all server instances, ensures no messages are lost, and keeps rooms working seamlessly.
Why This Beats Pub/Sub
Here's a quick comparison:
| Feature | Redis Pub/Sub | Redis Streams Adapter |
|---|---|---|
| Reliability | If a server is down, it misses updates. | Stores messages, so servers catch up later. |
| Scalability | Good for medium loads, but struggles with huge volumes. | Uses consumer groups for big scale. |
| Message Order | Order isn't always guaranteed. | Strict order, great for tracking. |
| Ease of Use | You manage pub/sub yourself. | Socket.IO does it all, so less code! |
- Reliability: If a server crashes with Pub/Sub, it misses updates. With Streams, messages are saved, so nothing gets lost.
- Scalability: Streams can handle way more drivers and users with consumer groups splitting the work.
- Simplicity: No need to write pub/sub logic; Socket.IO handles it behind the scenes.
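The reliability difference is easiest to see with a toy model: Pub/Sub is fire-and-forget, while a stream is an append-only log that a consumer can resume from. This sketch (the `ToyStream` class is my own simplification, not how the adapter is actually implemented) illustrates the catch-up behavior:

```typescript
// Toy append-only log: messages are stored, and each consumer keeps an
// offset, so it can catch up after downtime (the Streams model).
class ToyStream {
  private log: string[] = [];
  private offsets = new Map<string, number>();

  append(msg: string): void {
    this.log.push(msg);
  }

  // Return everything since this consumer's last offset, then advance it.
  read(consumer: string): string[] {
    const from = this.offsets.get(consumer) ?? 0;
    this.offsets.set(consumer, this.log.length);
    return this.log.slice(from);
  }
}

const stream = new ToyStream();

stream.append('loc-1');
console.log(stream.read('server-B')); // ['loc-1'] - server B is up to date

// Server B "crashes"; two updates arrive while it is down.
stream.append('loc-2');
stream.append('loc-3');

// With plain Pub/Sub, those two messages would simply be gone.
// With the log, server B catches up on restart:
console.log(stream.read('server-B')); // ['loc-2', 'loc-3']
```

The real adapter adds consumer groups, acknowledgements, and trimming on top of this idea, but the offset-based replay is the core of why nothing gets lost.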
This is perfect for a large app with lots of users. But what about massive scale? That's where Kafka comes in.
Future-Proofing with Kafka
Now, imagine your app grows huge: thousands of drivers, millions of users, and you want to do fancy things like analytics or logging alongside tracking. That's when Kafka enters the picture. It's a distributed streaming platform built for handling tons of real-time data.
Here's the basic plan:
- Driver sends location via WebSockets (`send-location` event).
- Server pushes it to a Kafka topic, like `driver-locations`.
- A consumer service reads from the topic and sends updates to users via WebSockets.
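One detail worth knowing about that plan: if the server keys each message by driver ID, Kafka routes all of one driver's updates to the same partition, which preserves their order. Here's a tiny sketch of that keyed-partitioning idea (the hash function is illustrative, not Kafka's actual partitioner):

```typescript
// Illustrative key -> partition mapping, mimicking how Kafka assigns
// keyed messages: same key always maps to the same partition, so a
// single driver's updates stay in order within that partition.
function partitionFor(key: string, numPartitions: number): number {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % numPartitions;
}

const partitions = 12;
const p1 = partitionFor('driver-A', partitions);
const p2 = partitionFor('driver-A', partitions);

// Every update from driver-A lands on the same partition...
console.log(p1 === p2); // true
// ...and the result always stays in range.
console.log(p1 >= 0 && p1 < partitions); // true
```

Consumers in the same consumer group then split the partitions among themselves, which is how Kafka spreads the load while keeping per-driver ordering intact.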
Kafka is overkill for small apps, but for enterprise-level scale, it's a game-changer. I'll add a Kafka setup to my GitHub repo soon, so keep an eye out!
What to Tell Frontend Devs
As a backend dev, I was scratching my head about what to tell the frontend team. Turns out, it's pretty simple:
- Driver App:
  - Connect to the WebSocket server.
  - Send `send-location` events with latitude and longitude every few seconds.
  - Optionally, show the driver's own location on a map.
- User App:
  - Connect to the WebSocket server.
  - Send a `track-driver` event with the driver's ID to join their room.
  - Listen for `receive-location` events and update the map.
That's it! The frontend devs will love how easy this is: just a few events, and the backend handles the heavy lifting.
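A practical tip here: agree on the event payloads as a typed contract, so both teams validate the same shapes. Here's a sketch of what that shared contract could look like (the type names are my own; only the event names come from the prototype):

```typescript
// Shared contract for the three events used by the prototype.
// Event names match the article; type names are illustrative.
interface SendLocationPayload {
  // driver -> server: 'send-location'
  latitude: number;
  longitude: number;
}

interface ReceiveLocationPayload {
  // server -> user: 'receive-location'
  id: string;
  latitude: number;
  longitude: number;
}

// user -> server: 'track-driver' carries just the driver's ID.
type TrackDriverPayload = string;

// Runtime guard the user app can apply before touching the map.
function isReceiveLocation(data: unknown): data is ReceiveLocationPayload {
  if (typeof data !== 'object' || data === null) return false;
  const d = data as Record<string, unknown>;
  return (
    typeof d.id === 'string' &&
    typeof d.latitude === 'number' &&
    typeof d.longitude === 'number'
  );
}

const outgoing: SendLocationPayload = { latitude: 22.57, longitude: 88.36 };
const target: TrackDriverPayload = 'driver-A';
console.log(outgoing.latitude, target); // sample driver-side payloads

console.log(isReceiveLocation({ id: 'driver-A', latitude: 22.5, longitude: 88.3 })); // true
console.log(isReceiveLocation({ latitude: 22.5 })); // false
```

Keeping this file in a shared package (or just copy-pasted on both sides) prevents the classic "backend renamed a field" bug.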
Comparing the Approaches
Let's break it down with a table to see how these methods stack up:
| Approach | Pros | Cons | Best For |
|---|---|---|---|
| Basic WebSockets | Easy to set up, works for small apps. | Not scalable, sends too much data. | Prototypes, small apps. |
| Redis Pub/Sub | Scales better, targets updates. | Misses updates if servers crash. | Medium-sized apps. |
| Redis Streams Adapter | Reliable, scalable, less code. | Slightly tricky to set up. | Large apps with many users. |
| Kafka | Handles huge scale, extra features. | Too much for small apps, needs infra. | Enterprise-level apps. |
So, that's the full story! From a basic prototype to scaling for a real delivery app, this is how you make real-time tracking work. The code's all on GitHub, so go check it out.
Next time you're waiting for your food and watching that driver dot move, you'll know what's happening behind the scenes.
Hope this clears things up! Let me know if you have questions.












