I Got Tired of Writing Boilerplate at 11pm. So I Built a Factory That Does It For Me.
Give it a repo. Give it a task. Wake up to a PR. Repeat.
You know the feeling. It's late. Your backlog is staring at you. You know what needs doing — it's not even hard — but the idea of writing another three hours of boilerplate before bed is genuinely demoralising.
So you close the laptop. Tell yourself you'll do it tomorrow.
What if there was nothing to close the laptop on? What if the work just... happened?
That's the problem ACODA FACTORY solves. And I shipped it this week.
What We're Actually Talking About
ACODA FACTORY is a self-hosted, open-source autonomous AI coding agent. You point it at a GitHub repo, describe a task, and it handles the entire dev cycle on its own:
- Analyses the codebase
- Plans the implementation
- Creates a branch
- Writes the code
- Tests and self-reviews the diff
- Opens a pull request
No hand-holding. No approvals mid-flow. Just a PR waiting for you in the morning.
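Conceptually, that lifecycle is an ordered list of steps run straight through to a PR. A minimal sketch of the shape (step names mirror the list above; the real FORGE agent is far more involved, and `JobContext` here is a hypothetical simplification):

```typescript
// Hypothetical sketch: the dev cycle as sequential steps, run with no mid-flow approvals.
interface JobContext {
  repoUrl: string;
  task: string;
  log: string[];
}

type Step = { name: string; run: (ctx: JobContext) => void };

const steps: Step[] = [
  { name: "analyse",   run: (ctx) => ctx.log.push(`analysed ${ctx.repoUrl}`) },
  { name: "plan",      run: (ctx) => ctx.log.push(`planned: ${ctx.task}`) },
  { name: "branch",    run: (ctx) => ctx.log.push("created branch") },
  { name: "implement", run: (ctx) => ctx.log.push("wrote changes") },
  { name: "review",    run: (ctx) => ctx.log.push("self-reviewed diff") },
  { name: "openPr",    run: (ctx) => ctx.log.push("opened PR") },
];

function runJob(repoUrl: string, task: string): JobContext {
  const ctx: JobContext = { repoUrl, task, log: [] };
  for (const step of steps) step.run(ctx); // every step, start to finish, unattended
  return ctx;
}
```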
This isn't a demo. In the last 48 hours it's opened four real PRs on a live production codebase. This isn't the future. It's Tuesday.
```
You:   "Add dark mode to the settings page"

ACODA: ✅ Analysing repo...
       ✅ Planning implementation (3 files)...
       ✅ Writing changes...
       ✅ Self-reviewing diff...
       ✅ PR opened → github.com/your/repo/pull/42

You:   *reviews over coffee*
```

Why Not Just Use Existing Tools?
Fair question. Here's where the existing options fall down:
Cloud-hosted coding assistants — powerful, but expensive, opaque about what they're doing with your code, and platform-locked. "Trust us with your proprietary codebase" isn't a policy, it's a prayer.
Raw LLM API chaining — you can build this yourself, sure. But you're looking at hundreds of hours just on infrastructure: retry logic, error handling, state management, concurrency. Before you write a single line of useful agent code.
ACODA FACTORY is neither. It's fully self-hosted (your server, your data, your rules), open-source (fork it, audit it, own it), and built on infrastructure that handles all that hard stuff for you.
The Architecture Actually Matters Here
Most DIY coding agents have a silent killer: they break under real-world conditions and don't tell you.
A job starts. A network blip happens. The process dies. You wake up to nothing — or worse, a half-finished branch with uncommitted chaos.
ACODA FACTORY runs on Temporal — the same workflow orchestration layer used by Stripe, Netflix, and Uber. Every coding job is a durable workflow. If your server crashes mid-implementation, the job resumes exactly where it left off when it restarts. Not from scratch. Not from a checkpoint. Exactly where it was.
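The mechanism behind that guarantee is event sourcing: Temporal durably records each completed step, and on restart it replays that history so finished work is skipped rather than redone. Here's a toy illustration of the idea — this is not actual Temporal SDK code, just the resume-from-history concept in miniature:

```typescript
// Toy event-sourced runner: completed steps are recorded in a durable history,
// so a restarted job skips them on replay and resumes exactly where it stopped.
const stepNames = ["analyse", "plan", "branch", "implement", "review", "openPr"];

type History = Set<string>;

function runResumable(history: History, crashAfter?: string): string[] {
  const executed: string[] = [];
  for (const name of stepNames) {
    if (history.has(name)) continue;  // already in history: skip on replay
    executed.push(name);              // the real work would happen here
    history.add(name);                // durably record completion
    if (name === crashAfter) throw new Error("simulated crash");
  }
  return executed;
}

// First attempt dies mid-implementation...
const history: History = new Set();
try { runResumable(history, "implement"); } catch {}
// ...the retry resumes at the next step — not from scratch.
const resumed = runResumable(history);
```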
The full stack:
| Layer | Tech | Job |
|---|---|---|
| AI Agent | FORGE (TypeScript) | Full dev lifecycle |
| Pool Manager | Rust | 3 warm agent slots, zero cold start |
| Orchestration | Temporal | Crash-resilient job durability |
| State | Postgres | Job history, costs, success rates |
| UI | Dashboard | Real-time mission control |
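The "3 warm agent slots, zero cold start" row deserves a word. The pool manager keeps agents initialised ahead of time, so a new job grabs one instantly instead of paying startup cost. A toy TypeScript sketch of the concept (the real manager is Rust and handles lifecycle, health checks, and more):

```typescript
// Toy warm pool: slots are pre-initialised at construction, so acquire() is instant.
class WarmPool<T> {
  private idle: T[] = [];
  constructor(private make: () => T, size: number) {
    for (let i = 0; i < size; i++) this.idle.push(make()); // pay startup cost up front
  }
  acquire(): T {
    // Hand out a warm slot if one is idle; otherwise fall back to a cold start.
    return this.idle.pop() ?? this.make();
  }
  release(slot: T): void {
    this.idle.push(slot); // return the agent for the next job
  }
}

let coldStarts = 0;
const pool = new WarmPool(() => ({ id: ++coldStarts }), 3);

const agent = pool.acquire(); // instant: the slot was already warm
pool.release(agent);
```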
The two-model setup — one for planning, one for writing code — is intentional. Architecture and implementation are genuinely different cognitive tasks. Splitting them mirrors how good senior engineers work, and it meaningfully improves output quality.
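In practice the split looks like this: one call produces a plan, a second call (a different model) implements each step. A hedged sketch — `ModelFn` is a stand-in for whatever LLM client you wire up, and the stub models below exist only so the example runs without an API key:

```typescript
// Illustrative planner/coder split. ModelFn stands in for a real LLM client call.
type ModelFn = (prompt: string) => string;

function runTask(task: string, planner: ModelFn, coder: ModelFn): string[] {
  // Step 1: the planner reasons about architecture and emits one step per line.
  const plan = planner(`Break this task into steps: ${task}`)
    .split("\n")
    .filter((line) => line.trim().length > 0);
  // Step 2: the coder implements each step in isolation.
  return plan.map((step) => coder(`Implement: ${step}`));
}

// Stub models so the sketch is self-contained.
const planner: ModelFn = () => "add toggle state\nwire up CSS variables";
const coder: ModelFn = (prompt) => `// diff for: ${prompt}`;

const diffs = runTask("Add dark mode", planner, coder);
```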
It Costs Nothing to Run
OpenRouter has genuinely capable free-tier models. The default config uses:
- Owl Alpha (1M context, built for agentic work) — orchestrator
- Qwen3-Coder (480B params, 262K context) — coder
- GPT-OSS-120B — fallback
Get a free OpenRouter API key, point the factory at those models, and your entire autonomous coding pipeline costs zero.
When you need more horsepower, swap to DeepSeek V3 (~$0.14/1M tokens) or Claude without touching any code. Stop renting talent. Start deploying intelligence.
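Because model selection lives in config, the swap is an `.env` change and a restart. The variable names below are illustrative — check `.env.example` for the real ones — and the model IDs follow OpenRouter's naming:

```
# Hypothetical .env fragment: upgrade the coder without touching code
ORCHESTRATOR_MODEL=openrouter/owl-alpha     # illustrative ID — free tier
CODER_MODEL=deepseek/deepseek-chat          # ~$0.14/1M tokens
```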
Quick Start (Under 5 Minutes)
```bash
git clone https://github.com/leoaicloud-source/acoda-factory
cd acoda-factory
cp .env.example .env
# Add GITHUB_TOKEN + ORCHESTRATOR_API_KEY + CODER_API_KEY
docker compose up -d
```
Submit your first job:
```bash
curl -X POST http://localhost:3020/api/forge/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "repoUrl": "https://github.com/your/repo",
    "taskDescription": "Add input validation to the login form"
  }'
```
Open http://localhost:3020. Watch it run. Check GitHub for the PR.
Bonus: ARM64 is already supported. The full stack runs on a Raspberry Pi 5 (8GB). Plug in an HDMI display for big-screen mission control, or run it on that idle mining rig. The DEPLOYMENT-ARM64.md guide covers everything.
What's Next on the Roadmap
- v0.2 — GitHub OAuth + repo picker in the UI. No more manual curl commands.
- v0.3 — Post-merge verification agent. Automatic regression fixing.
- v0.4 — Multi-repo support. Custom agent types.
- v0.5 — Mobile control layer. Self-hosted relay + iOS/Android app. Submit jobs and approve PRs from your phone, with no third-party servers in the loop.
The Real Point
Autonomous coding agents are becoming infrastructure. The question is who controls them and where your code goes while they run.
ACODA FACTORY is a bet on sovereignty. Your code stays on your servers. Your model choices are yours. No platform that can change pricing, terms, or data retention policies next quarter.
Your competitors are already looking at this. The AI playing field just got levelled — but only for the people who actually show up.
Star it. Fork it. Break it. Build on it.
👉 github.com/leoaicloud-source/acoda-factory
Built by LeoAI Labs — drop a comment below if you're building agents or have questions about the architecture. And if you want to see the full platform stack (multi-model, AWS-grade infrastructure, built for teams), check out leoai.cloud. Reach us directly: Leo@leoai.cloud