Alan West

How to escape vendor lock-in in your Git collaboration workflow

Last March I watched a small open-source project I contributed to get nuked off a major hosting platform overnight. No warning. The maintainer's account got flagged by some automated system, and suddenly months of issues, PRs, and discussions were just... gone. The Git history survived because everyone had local clones, but the conversation around the code? Vaporized.

That incident sent me down a rabbit hole. Why does our collaboration metadata — the stuff that actually makes Git workflows social — live on a single server we don't control? The repo itself is distributed. The issues, reviews, and patches are not.

The root cause: Git is decentralized, but everything else isn't

Git was designed by Linus Torvalds to be decentralized. Every clone is a complete repository. You can pull from any peer, push to any remote, work offline indefinitely. That's the whole point.

But here's what Git deliberately leaves out:

  • Identity — Git doesn't know who you really are. It trusts whatever you put in user.email.
  • Discovery — There's no way to find a repository without an out-of-band URL.
  • Collaboration metadata — Issues, pull requests, code reviews, and discussions don't exist in Git itself.
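The identity gap is easy to demonstrate with stock Git: commits record whatever author details you configure, with no verification at all. A throwaway-repo sketch:

```shell
# Stock Git happily records any identity you claim - nothing verifies
# user.name or user.email. (Runs in a temp dir; nothing is touched.)
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git config user.name  "Definitely Linus"
git config user.email "torvalds@example.com"   # unverified claim
git commit -q --allow-empty -m "spoofed author"
git log -1 --format='%an <%ae>'
# -> Definitely Linus <torvalds@example.com>
```

Centralized forges paper over this with accounts and verified emails; peer-to-peer systems replace it with signing keys.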

So we bolted those features onto centralized platforms. Which works great, until it doesn't. The platform goes down. Or removes your project. Or changes its terms. Or gets bought. Suddenly your "distributed" workflow has a single point of failure that lives in someone else's database.

I started exploring peer-to-peer alternatives a few months ago. The most interesting approach I've found stores everything — identity, discovery, collaboration — in Git itself, then gossips between nodes. The project I've been poking at is Radicle, which calls itself a sovereign code forge built on Git.

Step 1: Identity, not accounts

In a peer-to-peer Git world, you don't have an account. You have a cryptographic key. Your identity is a public key that signs everything you publish. No central registry can revoke it.

# Generate a new identity - this becomes your 'account' forever
rad auth --alias alan-west

This creates a keypair stored locally. Your DID (decentralized identifier) is derived from the public key. Nobody can take it away because nobody issued it in the first place.

Step 2: Repositories as first-class peer-to-peer objects

Once you have an identity, initializing a project for peer-to-peer collaboration looks like this:

# Inside an existing Git repo
cd my-project
rad init --name 'my-project' --description 'Yet another todo app'

# This creates a Radicle ID - a content-addressable identifier
# that you can share with anyone, no DNS or central registry needed
rad inspect --rid

The Radicle ID (RID) is the equivalent of a repository URL, but it's cryptographic. It points to your repo wherever it currently lives on the network, not to a specific server.

Anyone can clone it from any node that has it replicated:

# Clone using the RID - works as long as ANY peer in the network has it
rad clone rad:z3gqcJUoA1n9HaeesA8YpcwujzNMc7

If your node goes offline, peers who already replicated the repo can still serve it. The repo only "dies" when every replica disappears, which is a much higher bar than "the central platform decided to remove it."
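You can see the same property with nothing but stock Git. In this toy sequence, local directories stand in for network peers (purely illustrative, not Radicle's mechanism): the repo survives the loss of its original host as long as one full clone remains.

```shell
# Toy availability demo with plain Git: directories stand in for peers.
tmp=$(mktemp -d)
cd "$tmp"
git init -q original && cd original
git config user.name "Dev" && git config user.email "dev@example.com"
git commit -q --allow-empty -m "initial work"
cd "$tmp"

git clone -q original peer-a   # peer A replicates the repo
rm -rf original                # the original 'host' disappears...
git clone -q peer-a recovered  # ...but any surviving replica can serve it
git -C recovered log --oneline # history is intact
```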

Step 3: Collaboration that survives the platform

This is the part that solved my original problem. Issues and patches (the peer-to-peer equivalent of pull requests) are stored in Git refs on the repository itself.

# Open an issue - this gets committed to the repo's collaboration refs
rad issue open --title 'Memory leak in worker pool' \
               --description 'Heap grows ~5MB/hour under load'

# List issues - works offline because they're local Git data
rad issue list

The mental model that finally clicked for me: imagine every issue and review comment as a Git object stored in a special namespace, signed by the author's key, and replicated alongside the code. When you sync, you get the conversations too.
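Here's a toy version of that mental model in plain Git, using a made-up refs/collab namespace. To be clear, this is not Radicle's actual storage layout, just the underlying idea: collaboration data as ordinary Git objects that replicate with fetch and push.

```shell
# Toy sketch of 'conversations as Git data' using a MADE-UP
# refs/collab namespace - not Radicle's real on-disk format.
tmp=$(mktemp -d)
cd "$tmp"
git init -q upstream && cd upstream
git config user.name "Dev" && git config user.email "dev@example.com"
git commit -q --allow-empty -m "initial"

# Store an 'issue' as a plain Git blob and point a custom ref at it
oid=$(printf 'title: Memory leak in worker pool\n' | git hash-object -w --stdin)
git update-ref refs/collab/issues/1 "$oid"

# A peer who fetches the namespace gets the conversation with the code
cd "$tmp"
git clone -q upstream peer && cd peer
git fetch -q origin '+refs/collab/*:refs/collab/*'
git cat-file -p refs/collab/issues/1
# -> title: Memory leak in worker pool
```

Radicle's real objects are richer (structured and signed by the author's key), but the replication story is the same: sync the refs and the discussion comes along.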

For pull-request-style workflows:

# Create a patch from your current branch
git checkout -b fix-memory-leak
# ... make changes ...
git commit -am 'Fix worker pool leak by closing idle connections'

# Publish the patch - this is your 'PR'
rad patch open

# Reviewers can fetch and check it out locally
rad patch checkout <patch-id>

I haven't run this at team scale yet — my experiments have been with two or three peers — but the workflow feels familiar enough that I'd trust it for a small project.

Step 4: Running a node

To participate in the network beyond your local machine, you run a node that gossips with other nodes:

# Start the node daemon in the background
rad node start

# Check what you're seeding
rad node status

# Track a repo so your node keeps it replicated
rad node seed rad:z3gqcJUoA1n9HaeesA8YpcwujzNMc7

Nodes form a mesh. When you push to your own copy, the changes propagate to peers who are tracking your repo. There's no "main" server. The repo's availability is the union of every node that's seeding it.

Tradeoffs I've hit so far

I'd be lying if I said this is a drop-in replacement for the centralized workflow. A few honest observations:

  • Discovery is harder. Without a built-in search index, you find repos by sharing IDs out of band, which has distinctly pre-search-engine, Usenet-era vibes.
  • Web UIs are sparse. There are HTTP gateways for browsing, but the polish isn't where mainstream platforms are. If your team includes non-CLI folks, expect friction.
  • Replication is best-effort. If only you and one other peer have a repo and both go offline, it's effectively gone. Treat it like any other distributed system — your data is only as safe as the number of independent replicas.
  • CI/CD integrations are DIY. You can wire something up via the HTTP API, but you're building the pipeline yourself.

Prevention tips for whatever you choose

Even if you stick with a centralized host, the original incident taught me a few defensive habits worth applying:

  • Mirror automatically. Run a cron job that pushes to a second remote (even a self-hosted bare repo on a cheap VPS). git push --mirror is your friend.
  • Back up the metadata, not just the code. Tools that archive issues and PRs to local files exist for most major platforms — set one up before you need it.
  • Keep collaboration artifacts in-repo when you can. ADRs, design docs, and changelogs in docs/ survive any platform migration. Tickets in a third-party issue tracker do not.
  • Treat your local clone as the source of truth. If everyone on the team has a recent clone, the worst-case recovery is "we lost some PR comments," not "the project is gone."
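The mirroring tip is cheap to automate. A minimal sketch, using a local bare repo to stand in for the second remote (in practice you'd point the remote at an SSH URL on your own host):

```shell
# Minimal mirror setup: 'backup.git' stands in for a bare repo on a
# second host; swap the path for an SSH URL in real life.
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare backup.git
git init -q work && cd work
git config user.name "Dev" && git config user.email "dev@example.com"
git commit -q --allow-empty -m "initial"
git tag v0.1

git remote add backup "$tmp/backup.git"
git push -q --mirror backup   # pushes ALL refs: branches, tags, notes

git ls-remote backup          # the mirror now has every ref
# To automate, a cron line along these lines:
#   0 * * * * cd /path/to/work && git push --mirror backup
```

With --mirror, deleted branches are deleted on the mirror too, so it stays an exact copy rather than accumulating stale refs.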

Peer-to-peer code forges aren't going to dethrone the big platforms next year. But for personal projects, infrastructure repos, or anything where you genuinely care about not having a censor in the loop, the tooling is finally past the fun-science-experiment phase. I'm running a node for my own stuff now. Worst case, it's a fancy backup. Best case, it's the future.
