Most discussions around AI coding assistants focus on models, prompts, benchmarks, or editor UX.
My problem was much more boring: files.
At some point I had several AI coding assistants in the same development workflow. Not because I wanted to collect tools, but because each one was useful in a slightly different context. One was better inside the IDE. Another was useful from the terminal. Another worked better for larger refactors. Another was convenient for quick repository-level questions.
That part was fine.
The painful part was that every assistant wanted its own configuration.
One tool wanted a root instruction file. Another wanted rules in a dedicated directory. Another had its own format for MCP servers. Another supported commands. Another supported agents. Some tools had ignore files. Some had hooks. Some had permissions. Some had overlapping concepts, but not exactly the same concepts.
So the same project slowly started to contain copies of the same intent in different places.
CLAUDE.md
AGENTS.md
.cursor/rules/*.mdc
.github/copilot-instructions.md
.gemini/settings.json
.codex/config.toml
.windsurf/rules/*.md
At first this looked manageable. It was just configuration. A few markdown files. A few JSON or TOML files.
Then it became a real source of drift.
A TypeScript rule changed in one assistant but not in another. A command was updated in one place but the old version stayed somewhere else. An MCP server definition had slightly different names depending on the target. One assistant had a more recent description of the project architecture than the others.
The result was strange: I was asking different AI assistants to work on the same codebase, but they were not really working with the same understanding of the codebase.
The issue is not “which AI assistant is best”
I do not think this problem should be framed as Claude vs Cursor vs Copilot vs Codex vs Gemini vs anything else.
In practice, developers increasingly use more than one AI tool:
- one inside the IDE;
- one in the terminal;
- one for code review;
- one for large refactors;
- one for documentation or migration work;
- one used personally, another standardized by the team.
That is not unusual anymore.
The real problem is that every tool has its own configuration surface.
And these surfaces are not just “prompt files”. They can include:
- project rules;
- scoped rules;
- agent definitions;
- reusable commands;
- skills or playbooks;
- MCP servers;
- hooks;
- permissions;
- ignore files;
- global user-level configuration;
- project-level team configuration.
A single shared markdown file can help, but it does not cover the whole surface.
For small projects, duplicating a few instructions manually is probably fine. For a serious project, a monorepo, or a team setup, it becomes the same class of problem we have already solved many times in software engineering: one source of truth, many generated outputs.
I looked for an existing solution first
Before building anything, I tried the obvious path: use an existing sync/generation tool.
The idea was correct: keep a canonical configuration and generate assistant-specific files from it.
That worked for simple rules.
But once I tried to model the actual configuration I used every day, I kept hitting edge cases.
The first issue was cross-references.
If an agent references a skill, or a rule references a command, the relative path is different in every generated target. A link that works from .agentsmesh/skills/refactor/SKILL.md will not necessarily work from .claude/agents/code-reviewer.md or .cursor/rules/typescript.mdc.
So generation cannot be only “copy this file there”. It needs to understand references and rebase links per target.
The second issue was unsupported features.
Some tools have native agents. Some do not. Some have commands. Some do not. Some have hooks. Some have permissions. Some only have a general instruction file.
If a target does not support a feature natively, there are a few possible choices:
- drop the feature;
- flatten everything into a generic prompt;
- embed the feature with metadata so it can be reconstructed later.
Dropping data was not acceptable. Flattening everything made import/generate cycles lossy. I wanted round-trip behavior: import existing configs, normalize them, generate them again, and not silently lose structure.
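As a rough illustration, embedding with metadata might look like the snippet below. The `agentsmesh:` marker syntax is hypothetical, invented for this example, not AgentsMesh's documented format:

```markdown
<!-- agentsmesh:embed type=command name=release-notes -->
## Command: release-notes
Summarize merged pull requests since the last tag and draft release notes.
<!-- agentsmesh:end -->
```

A target without native commands still receives the instructions as plain prose, but the markers give the importer enough structure to reconstruct the command as a standalone file later.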
The third issue was tool velocity.
New AI coding tools appear constantly. Existing ones change their config formats. Waiting for a central maintainer to add support for every new assistant is not a great model if the ecosystem keeps moving this fast.
That is when I decided to build my own solution.
The mental model I wanted
The model I wanted was not unusual.
We already use it everywhere:
TypeScript source -> JavaScript output
.proto files -> generated SDKs
OpenAPI schema -> generated clients
Dockerfile -> image
source config -> target artifacts
I wanted the same pattern for AI assistant configuration.
There should be one canonical directory that describes the intent:
.agentsmesh/
  rules/
  commands/
  agents/
  skills/
  mcp.json
  hooks.yaml
  permissions.yaml
  ignore
Then a generator should project that intent into the native layout of each assistant.
Generated files should be treated as artifacts. You can inspect them, commit them if needed, and let tools consume them in their native format. But humans should primarily edit the canonical source.
That is the important distinction.
I did not want to force every tool into one lowest-common-denominator format. I wanted to preserve native output where possible, while keeping a canonical source that is easier to reason about.
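To make the shape concrete, a canonical rule file could look something like this. The frontmatter fields are illustrative, not a documented schema:

```markdown
---
# .agentsmesh/rules/typescript.md (fields illustrative)
scope: "src/**/*.ts"
targets: [claude, cursor, copilot]
---
Prefer explicit return types on exported functions.
Avoid `any` in new code; use `unknown` plus narrowing instead.
```

From one file like this, a generator can emit `.cursor/rules/typescript.mdc`, a section of `CLAUDE.md`, and so on, each in the target's native shape.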
Why this is harder than it looks
At first this sounds like a file converter.
It is not.
The hard parts are mostly about preserving semantics.
1. Native vs embedded capabilities
A target may support a feature natively, partially, or not at all.
For example:
- one assistant may have a native commands directory;
- another may only support commands as embedded sections in a root instruction file;
- another may not have a useful equivalent at all.
The generator needs to know that difference.
More importantly, the importer also needs to know that difference. Otherwise you can generate an embedded command, import it again, and lose the original command boundary.
That is why metadata matters. Not because metadata is elegant, but because lossless round trips are impossible without some way to preserve structure where the target format does not have a native slot.
2. Link rebasing
This was one of the first problems that made the project feel necessary.
Developers naturally split configuration into multiple files:
See ../skills/refactor/SKILL.md before changing this module.
But after generation, that file may live somewhere completely different.
So links must be rewritten depending on the target layout.
This sounds small until you realize that agents can reference skills, skills can reference supporting files, commands can reference rules, and project/global modes have different path assumptions.
If this is not handled, the generated configuration looks correct but slowly rots because internal links stop working.
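The rebasing step itself is mechanical once references are modeled explicitly. Here is a minimal sketch in TypeScript, assuming POSIX-style repo-relative paths (this is not AgentsMesh's actual code):

```typescript
import path from "node:path";

// Rewrite a relative link so it still resolves after the file that
// contains it moves from `sourceFile` to `targetFile`.
function rebaseLink(link: string, sourceFile: string, targetFile: string): string {
  // Resolve the link against the canonical file's directory...
  const absolute = path.posix.join(path.posix.dirname(sourceFile), link);
  // ...then re-express it relative to the generated file's directory.
  return path.posix.relative(path.posix.dirname(targetFile), absolute);
}

// "../skills/refactor/SKILL.md" seen from .agentsmesh/agents/code-reviewer.md
// becomes "../../.agentsmesh/skills/refactor/SKILL.md" when the agent file is
// generated into .claude/agents/code-reviewer.md.
rebaseLink(
  "../skills/refactor/SKILL.md",
  ".agentsmesh/agents/code-reviewer.md",
  ".claude/agents/code-reviewer.md"
);
```

The hard part is not this function. It is knowing which strings are links in the first place, and where each referenced file lands in each target's layout.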
3. Drift detection
If generated files are committed, somebody will eventually edit them directly.
That is not a moral failure. It is normal developer behavior.
But then the repository needs a way to detect whether the generated files still match the canonical source.
So the workflow needs a check step:
agentsmesh check
The point is not to be strict for the sake of being strict. The point is to make drift visible before different assistants start reading different versions of the project rules.
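Conceptually, the check can be as simple as regenerating into memory and comparing digests against what is on disk. A sketch, not the actual implementation:

```typescript
import { createHash } from "node:crypto";
import { existsSync, readFileSync } from "node:fs";

const sha256 = (data: string | Buffer) =>
  createHash("sha256").update(data).digest("hex");

// `generated` maps output paths to the content the generator would write now.
// Any mismatch with the files on disk is drift.
function findDrift(generated: Map<string, string>): string[] {
  const drifted: string[] = [];
  for (const [file, content] of generated) {
    const onDisk = existsSync(file) ? sha256(readFileSync(file)) : "<missing>";
    if (sha256(content) !== onDisk) drifted.push(file);
  }
  return drifted; // non-empty => fail the check
}
```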
4. Import matters as much as generate
A tool like this is not useful if adoption requires starting from a clean slate.
Most real projects already have some assistant configuration:
.cursor/rules/
CLAUDE.md
.github/copilot-instructions.md
AGENTS.md
So import needs to exist:
agentsmesh import --from cursor
agentsmesh generate
That gives a realistic migration path: start from what already exists, normalize it, then generate from the canonical structure.
5. Plugins are necessary
The AI tooling ecosystem changes too quickly for every target to be hardcoded forever.
So target support has to be data-driven and extensible.
A target should describe:
- where its files live;
- which features it supports;
- whether each feature is native, embedded, partial, or unsupported;
- how to generate its output;
- how to import existing files back.
That makes it possible to ship new target support as a plugin instead of changing core every time a new assistant appears.
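In TypeScript terms, a target could be little more than a descriptor plus two functions. The shape below is illustrative, not the actual plugin API:

```typescript
type Support = "native" | "embedded" | "partial" | "unsupported";

// Hypothetical shapes for the data flowing through a plugin.
interface GeneratedFile { path: string; content: string }
interface CanonicalConfig { rules: unknown[]; commands: unknown[] /* ... */ }

interface TargetPlugin {
  name: string;                        // e.g. "cursor"
  layout: Record<string, string>;      // feature -> output path pattern
  features: Record<string, Support>;   // rules, commands, agents, mcp, ...
  generate(canonical: CanonicalConfig): GeneratedFile[];
  import(files: GeneratedFile[]): Partial<CanonicalConfig>;
}
```

With a contract like that, adding support for a new assistant means describing where its files live and how its features map, not patching the generator core.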
What I ended up building
I built AgentsMesh around that model.
The basic workflow is intentionally boring:
npx agentsmesh init
npx agentsmesh generate
For an existing project:
npx agentsmesh import --from cursor
npx agentsmesh generate
For CI:
npx agentsmesh check
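In CI, the check step slots in wherever a linter would. A minimal GitHub Actions job might look like this (illustrative; adapt to your pipeline):

```yaml
# .github/workflows/agentsmesh.yml (illustrative)
name: agentsmesh
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx agentsmesh check   # expected to fail when generated files drift from .agentsmesh/
```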
For reviewing output before writing:
npx agentsmesh diff
For personal global config:
npx agentsmesh init --global
npx agentsmesh generate --global
The goal is not to replace any assistant.
The goal is to stop treating assistant configuration as scattered hand-written state.
This is useful outside Node.js projects
The CLI is distributed through npm, but the problem is not Node-specific.
A Rust team can have the same issue. A Python backend can have the same issue. A Java monolith can have the same issue. A Go service, Unity project, mobile app, infrastructure repo, or documentation-heavy project can all hit the same problem once several AI tools are involved.
The configuration being synchronized is not about Node.js. It is about how AI assistants understand a repository.
That includes things like:
- coding conventions;
- architecture rules;
- domain vocabulary;
- testing strategy;
- forbidden files or directories;
- review checklists;
- migration playbooks;
- commands for repetitive workflows;
- MCP server definitions;
- permissions and safety boundaries.
Every codebase has some version of this knowledge.
The question is whether it lives in one place or slowly diverges across tools.
Where I think this category is going
I do not think AI coding configuration will become simpler.
I expect the opposite.
Assistants will get more capabilities:
- deeper repo indexing;
- more tool execution;
- more agent-like workflows;
- more project memory;
- more MCP usage;
- more fine-grained permissions;
- more team-level policies.
That means configuration will matter more, not less.
A single prompt file may be enough for now in many projects. But as soon as teams start using several assistants with different capabilities, the config layer becomes infrastructure.
And infrastructure usually needs:
- a source of truth;
- reproducible generation;
- validation;
- drift detection;
- reviewable diffs;
- import/export paths;
- extension points.
That is the direction I wanted AgentsMesh to take.
When I would not use it
There are cases where this is overkill.
If you use one assistant, one project, and one instruction file, you probably do not need a canonical sync layer.
If your rules are tiny and rarely change, manual duplication might be cheaper than introducing tooling.
If your team does not commit assistant config at all, the value is lower.
But if you maintain several assistants, or you care that every assistant reads the same project rules, then the problem becomes real very quickly.
For me, the breaking point was realizing that my AI tools were not disagreeing because the models were different.
They were disagreeing because I had taught them different versions of the same project.
That is the bug I wanted to remove.
Links
AgentsMesh repository:
https://github.com/sampleXbro/agentsmesh
Documentation:
https://samplexbro.github.io/agentsmesh/