Most coding platforms train engineers to solve isolated algorithm problems.
But in real engineering, you rarely reverse linked lists.
- You debug production systems.
- You trace issues across files.
- You deal with incomplete logs, unexpected states, and systems you didn’t write.
So I built something around that.
Introducing Recticode
Recticode is a platform focused on real-world debugging challenges.
Instead of algorithm puzzles, engineers work with realistic multi-file codebases that contain subtle production-style bugs.
The 6-week system
Instead of a single hackathon, I split it into two connected phases:
Phase 1: Challenge Sprint (May 4 – May 31)
Engineers submit real debugging challenges.
Each submission is:
- multi-file (real code structure)
- contains a subtle production-style bug
- includes expected behaviour + context
- written like a real engineering system
The goal is to build a library of realistic debugging problems.
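To make that concrete, here's a hypothetical sketch (not an actual Recticode challenge) of the kind of cross-file bug a submission might plant, where the symptom surfaces in one file but the cause lives in another:

# storage.py — the cause lives here
def record_event(event, log=[]):  # BUG: the default list is created once and shared across all calls
    log.append(event)
    return log

# api.py — the symptom shows up here
# from storage import record_event   (import shown for the two-file layout)

def handle_request(event):
    # Expected: each request returns a log containing only its own event.
    # Actual: events from earlier requests leak into later responses.
    return record_event(event)

print(handle_request("a"))  # ['a']
print(handle_request("b"))  # ['a', 'b']  <- the leak, visible far from its cause

The fix is a one-liner once you see it, but spotting it means reading across files rather than staring at a single function.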
Phase 2: Debugging Championship (June 1 – June 14)
Engineers then compete to solve these challenges.
A public leaderboard tracks:
- correctness
- consistency
- difficulty-weighted performance
This turns submitted challenges into a live competition system.
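Recticode hasn't published its exact scoring formula, so treat the following as a rough sketch of how correctness, consistency, and difficulty weighting could combine; the function name, difficulty scale, and weights are all made up for illustration:

# Hypothetical leaderboard scoring — not Recticode's actual formula.
def leaderboard_score(attempts):
    """attempts: list of (solved: bool, difficulty: float, e.g. 1.0-5.0)."""
    if not attempts:
        return 0.0
    solved_difficulties = [d for ok, d in attempts if ok]
    accuracy = len(solved_difficulties) / len(attempts)  # consistency term
    weighted = sum(solved_difficulties)                  # harder solves count more
    return weighted * accuracy                           # misses drag the score down

print(leaderboard_score([(True, 3.0), (True, 5.0), (False, 2.0)]))  # 8.0 * 2/3 ≈ 5.33

Multiplying the difficulty-weighted total by accuracy is just one way to make consistency matter; the real leaderboard may combine these signals differently.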
Why I built this
I kept noticing a gap between what coding platforms teach and what real engineering actually looks like.
Most practice focuses on:
- algorithms in isolation
- clean inputs
- no system complexity
But real work is:
- debugging distributed systems
- reading unfamiliar codebases
- tracing state across multiple layers
So I wanted a format that reflects that.
What I'm hoping to learn
This is still early, but I'm mainly trying to answer:
- do engineers prefer debugging-based practice?
- can a "challenge library" become a real learning system?
- will people actually engage with this over time?
If you want to try it:
Challenge Sprint is live now: https://recticode.com
Recticode is also fully open source on GitHub.
Recticode
Practice real-world coding by fixing bugs in actual codebases, not solving toy problems.
What is this?
Recticode is a CLI-based platform where you:
- pull a coding challenge (a real mini codebase)
- identify and fix a bug or implement a feature
- run your own tests to verify your solution
- submit your fix
Instead of writing isolated functions, you work with realistic systems.
Why?
Most platforms train you to:
- solve algorithm problems from scratch
But real dev work is more like:
- reading existing code
- debugging issues
- making safe changes without breaking things
Recticode is built to train that skill.
How it works
- install the CLI
- fetch a challenge
- work locally in your editor
- run your own tests
- submit your solution
Installation
pip install recticode
Check it works:
recticode --help
Example Flow
# login
recticode login
# get a challenge
recticode start <challenge-name>
# work on the code locally...
# submit…

Submissions are open until May 31.
Feedback Welcome
Especially interested in:
- whether this feels closer to real engineering than LeetCode-style platforms
- what would make debugging challenges more useful for learning or hiring
Top comments
The distinction between solving a problem from a blank file and debugging someone else's code is the difference between being an author and being an archaeologist, and most of professional engineering is archaeology. You're dropped into a system with history, with decisions made by people who left the team, with patterns that made sense in a context you'll never fully reconstruct. Leetcode doesn't train that muscle at all.
What's interesting about the challenge library idea is that it depends entirely on the quality of the submissions. A good debugging challenge has to be wrong in a specific way — not trivially broken, not impossibly obscure, but the kind of subtle wrongness where the symptoms and the cause are in different files. That's hard to manufacture. The best debugging stories I've heard all have an element of "this only made sense once we understood the history." A synthetic challenge can't have real history; it can only simulate it. So the question is whether a well-crafted simulation of history — a multi-file codebase with a planted bug and a plausible backstory — triggers the same cognitive muscles as a real production incident, or whether the artificiality creates a different kind of puzzle entirely. Curious how you're guiding submitters toward that sweet spot where the bug feels organic to the codebase rather than like a contrived puzzle. That seems like the editorial challenge that makes or breaks the whole concept.
This is a great practical angle, @vulcanwm.
In my experience, OutOfMemoryError (OOM) crashes are a significant hurdle for many Java developers, and fixing them is a skill in itself. Mastering tools like flame graphs and JVM metrics to pinpoint leaks or overloads is a game-changer.
I’d love to see this framed as a debugging challenge for the hackathon. It's a high-impact, real-world skill.