For about six months I ran a multi-agent pipeline with a message broker sitting in the middle. One agent would write a task to a queue, another would pick it up, process it, and write the result somewhere the first agent could find it. It worked. It also cost me more than I wanted to pay, failed twice at inconvenient times, and required a non-trivial amount of configuration that I had to keep in sync across every agent that touched it.
When I finally removed it, I replaced it with something much simpler: the agents just talk to each other directly.
Here is what I learned doing that.
Why I added a broker in the first place
The honest answer is that a message broker was the obvious pattern for agent communication and I did not question it. Agent A needs to tell agent B to do something. Agent A does not know where agent B is. Agent B might not be running when A sends the message. A broker solves all of this.
The broker I used was a managed Redis instance with Redis Streams. It handled delivery, it let me inspect queued messages when something went wrong, and it gave me at-least-once delivery semantics without writing retry logic myself.
None of those properties are bad. But I was running a pipeline where agents were either both running or neither running, the tasks were idempotent, and I did not actually need replay or inspection. I was paying for properties I never used and taking on infrastructure I had to maintain.
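Idempotency is what made dropping at-least-once delivery safe: a duplicated or manually resent task must be harmless. That can be as little as tracking processed task IDs. A minimal sketch, with an illustrative task shape:

```python
# Minimal idempotent task handling: skip any task whose ID has
# already been processed, so a duplicate delivery or manual resend
# never causes duplicate work. The task dict shape is illustrative.
processed = set()

def handle_task(task: dict) -> str:
    task_id = task["id"]
    if task_id in processed:
        return "skipped"          # duplicate delivery, safe to ignore
    # ... do the actual work here ...
    processed.add(task_id)
    return "done"
```

Calling it twice with the same ID does the work once and ignores the repeat.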
What the pipeline actually needed
When I stripped back what each agent actually required from the broker, it came down to two things.
The first was delivery. Agent A sends a task to agent B and agent B receives it. That is it. No persistence across restarts, no fan-out to multiple consumers, no dead-letter queue.
The second was addressing. Agent A needs a stable way to refer to agent B regardless of where agent B is running or what its current IP address is. The broker handled this by being a known central point that both agents connected to. Without it, I needed another way to address each agent.
What replaced it
I replaced the broker with Pilot Protocol. Each agent runs a local daemon that handles addressing and transport. Agents get a virtual address derived from their Ed25519 keypair, which means the address is the same across restarts without any coordination.
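Pilot's actual derivation scheme isn't documented in this post, but the idea behind a key-derived address can be sketched: the public key is the only input, so the address is deterministic and survives restarts with no registry or coordination. The `pilot:` prefix and truncation length below are made up for illustration.

```python
# Sketch: a stable virtual address derived from a public key.
# Because the key is the only input, re-deriving after a restart
# always yields the same address. The "pilot:" prefix and the
# 16-hex-char truncation are arbitrary illustration choices.
import hashlib

def virtual_address(public_key: bytes) -> str:
    digest = hashlib.sha256(public_key).hexdigest()
    return "pilot:" + digest[:16]

key = b"\x01" * 32                                   # stand-in 32-byte public key
assert virtual_address(key) == virtual_address(key)  # deterministic
```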
Installing the Pilot daemon is one line:
curl -fsSL https://pilotprotocol.network/install.sh | sh
Then on each machine:
pilotctl daemon start --hostname agent-a
pilotctl daemon start --hostname agent-b
To establish trust between them:
pilotctl handshake agent-b
And on the other side:
pilotctl approve <node_id>
After that, sending a task from agent A to agent B is one command:
pilotctl send-message agent-b --data 'run subtask: analyze document X'
Agent B receives it in its inbox directory. There is no broker in the middle. The two daemons negotiate a direct path using STUN and hole-punching, and when a direct connection is not possible they fall back to a relay automatically. Either way the application code does not need to handle the routing.
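On the receiving side, consuming the inbox can be a simple poll loop. The directory path and the one-message-per-file layout below are my assumptions; the post only says messages land in an inbox directory.

```python
# Sketch of agent B consuming its inbox directory. The path and the
# one-message-per-file layout are assumptions, not documented Pilot
# behavior.
import os
import time

def poll_inbox(inbox_dir: str):
    """Yield (filename, contents) for each message file, then delete it."""
    for name in sorted(os.listdir(inbox_dir)):
        path = os.path.join(inbox_dir, name)
        with open(path) as f:
            yield name, f.read()
        os.remove(path)           # consume the message

# A real agent would wrap this in a loop:
#   while True:
#       for name, body in poll_inbox("/path/to/inbox"):
#           handle(body)
#       time.sleep(1)
```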
What I gave up
I want to be clear about what this trade-off costs, because I have seen posts that oversell this kind of swap.
You lose message persistence. If agent B is not running when A sends a task, that message is gone. The broker buffered tasks so B could process them when it came back online. Pilot does not do this.
You lose fan-out. If I needed to broadcast a task to five agents simultaneously and have each one process it independently, a queue handles that naturally. With direct messaging I would need to send to each agent explicitly.
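Explicit fan-out is not much code, but it is now your code. A sketch that builds one `pilotctl send-message` invocation per recipient; the agent names are illustrative, and the commands are printed rather than executed:

```python
# Fan-out without a queue: one explicit send per recipient.
# Agent names are illustrative; swap print for subprocess.run(cmd)
# to actually send.
import shlex

agents = ["agent-b", "agent-c", "agent-d", "agent-e", "agent-f"]
task = "run subtask: analyze document X"

commands = [
    ["pilotctl", "send-message", agent, "--data", task]
    for agent in agents
]
for cmd in commands:
    print(shlex.join(cmd))
```

The loop also makes the cost visible: unlike a queue, a partial failure (two of five sends succeed) is yours to detect and handle.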
You lose built-in inspection. Redis Streams let me look at what was queued, replay messages for debugging, and see how far behind a consumer was. With direct messaging I have to add that observability myself if I want it.
For my specific pipeline, none of those properties mattered. Both agents were always running together. Tasks were one-to-one. Debugging was done through logs on each agent rather than through the queue.
How the pipeline works now
Agent A generates a task and sends it directly to agent B's address. Agent B processes it and sends the result back to agent A. No broker. No shared credential to authenticate with the broker. No managed service to keep running.
The agents authenticate each other through a signed handshake using Ed25519 keys. The connection is encrypted with an X25519 key exchange and AES-256-GCM, the same primitives TLS 1.3 uses. Neither agent needs to trust a shared secret or manage rotation. The trust is between the agents themselves.
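At the primitive level, a signed handshake means one side proves possession of its private key by signing a challenge the other side verifies against the known public key. Pilot's actual handshake message format is not shown in this post; this sketch only demonstrates the underlying Ed25519 operation, using the `cryptography` package.

```python
# The primitive behind a signed handshake: agent B signs a challenge
# with its Ed25519 private key; agent A verifies against B's public
# key (learned during `pilotctl handshake` / `pilotctl approve`).
# This is an illustration, not Pilot's wire format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # never leaves agent B
public_key = private_key.public_key()        # shared with agent A at approval time

challenge = b"nonce-from-agent-a"
signature = private_key.sign(challenge)

public_key.verify(signature, challenge)      # raises InvalidSignature on mismatch
print("handshake signature verified")
```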
When I restart agent B, it comes back with the same virtual address. Agent A does not need to be updated or reconfigured. The Pilot daemon re-establishes the connection automatically.
When to actually do this
A broker is the right tool when you need decoupling between producer and consumer, when tasks need to survive restarts, when you have multiple consumers competing for work from the same queue, or when you need replay for debugging or auditing. Tools like RabbitMQ, Apache Kafka, and Redis Streams are well-suited to those requirements.
Direct agent-to-peer messaging is the right tool when your agents are always running together, when delivery semantics are fire-and-forget or you handle retries yourself, and when simplicity and one fewer managed service matter more than the features a broker provides.
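"You handle retries yourself" is a few lines on the sender. A sketch with a pluggable `send` callable standing in for whatever actually invokes pilotctl; the backoff values are arbitrary:

```python
# Sender-side retry with exponential backoff. `send` is a stand-in
# for whatever performs the actual delivery; attempts and delays
# are arbitrary illustration values.
import time

def send_with_retry(send, payload, attempts=3, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == attempts - 1:
                raise                               # out of attempts, surface it
            time.sleep(base_delay * 2 ** attempt)   # 0.1s, 0.2s, 0.4s, ...
```

This covers transient failures while both agents are up; it does not replace a broker's buffering when the receiver is down for longer than you are willing to retry.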
I had a pipeline that looked like it needed a broker but actually just needed two agents to talk to each other. Removing the broker removed about $40 a month, one managed service to monitor, and roughly 200 lines of queue management code across the two agents.
Getting started
- Install: curl -fsSL https://pilotprotocol.network/install.sh | sh
- Docs: pilotprotocol.network/docs
- Protocol spec: published as an IETF Internet-Draft covering the addressing and encryption model
- GitHub: github.com/TeoSlayer/pilotprotocol
- Live network stats: polo.pilotprotocol.network
The free tier covers direct messaging, NAT traversal, and encrypted tunnels with no signup required.