Plateauing with Claude Code: what Anthropic's 11-behavior index reveals

After a few months of daily Claude Code sessions, I noticed my prompts had calcified. Same patterns, same delegation style, same results — just faster. I couldn't tell whether I was genuinely getting better at AI collaboration or just getting better at my existing habits.

Anthropic published a study in February 2026 measuring 11 observable collaboration behaviors across 9,830 Claude conversations. The behaviors come from Dakan & Feller's 4D AI Fluency Framework — Description, Discernment, Delegation, Diligence — though Diligence doesn't surface in chat logs. I wanted to run that same classification on my own sessions and see where I sat against the population baseline.

So I built skill-tree.

The tool pulls your Claude Code session history, runs a remote classifier (Claude Haiku on Fly.io) across all 11 behaviors, assigns one of seven archetypes, each rendered as a tarot card with curated museum art, and picks a behavior you haven't exhibited as a growth quest for your next session. The quest persists across sessions via a ~/.skill-tree/ state file and a SessionStart hook. End-to-end it takes 30–60 seconds and returns a stable URL — live example at skill-tree-ai.fly.dev/fixture/illuminator.

The part that changed how I work: the growth quest. Knowing I never use a specific discernment behavior — say, explicitly asking Claude to surface its own uncertainty — made me try it in the next session. That single feedback loop is worth more than any aggregate score.

Installs in Claude Code via:

```shell
claude plugin marketplace add robertnowell/ai-fluency-skill-cards
claude plugin install skill-tree-ai@ai-fluency-skill-cards
```

Also available as an MCP server (`npm install skill-tree-ai`) for Cursor, VS Code, and Windsurf, and as skill-tree-ai.zip for Cowork.
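For editors that register MCP servers through a JSON config, the entry might look like the sketch below. The `npx` invocation and the server key are assumptions on my part; check the repo's README for the exact command your editor expects:

```json
{
  "mcpServers": {
    "skill-tree-ai": {
      "command": "npx",
      "args": ["skill-tree-ai"]
    }
  }
}
```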

https://github.com/robertnowell/ai-fluency-skill-cards
