I built a kill switch for looping AI coding agents
AI coding agents kept:
- retrying the same fixes
- failing the same tests
- burning tokens endlessly
So I built Loop Watchdog.
It sits between coding agents and OpenAI-compatible APIs and detects:
- repeated fix-break loops
- file churn
- retry spam
- repeating error patterns
When the loop score gets too high:
- the next model call is blocked
- the session is paused
- alerts are sent to dashboard/Slack/email
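Since the tool sits in the proxy path, the blocking step can be thought of as a gate checked before each upstream call. A minimal sketch, assuming a per-session score map and an illustrative threshold (both hypothetical, not the tool's real config):

```python
# Hypothetical gate in the proxy path: before forwarding a model call
# upstream, compare the session's loop score against a threshold.
SCORE_THRESHOLD = 0.8  # illustrative cutoff, not the tool's real default

def gate_request(session_scores: dict, session_id: str):
    """Return (allowed, reason). Pure-function sketch of the
    block-and-pause decision; the real tool would also pause the
    session and fire dashboard/Slack/email alerts here."""
    score = session_scores.get(session_id, 0.0)
    if score >= SCORE_THRESHOLD:
        # Refuse to forward the call instead of burning more tokens
        return False, f"loop score {score:.2f} >= {SCORE_THRESHOLD}"
    return True, "ok"
```

Keeping the decision a pure function of the score makes it cheap to run on every request and easy to test in isolation.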
Built with:
- Python + FastAPI
- Cloudflare Workers
- D1
- local-first architecture
Run it with:

```shell
loop-watchdog start codex
```
Then watch looping sessions get paused before more credits burn.
GitHub: https://github.com/bevinkatti/Loop-Watchdog
Would love feedback from people building with AI coding agents.