Every time you paste a log file into Cursor, or ask Claude Code to debug
a production error, you're probably leaking something you shouldn't.
API keys. Patient emails. Bearer tokens. Production AWS credentials.
They flow into prompts silently, by default, with no warning.
I spent this weekend building ContextDuty to fix that.
What it does
ContextDuty is a local-first context firewall. It sits between your files
and your AI assistant, scanning and redacting sensitive values before the
prompt leaves your machine.
Demo: https://asciinema.org/a/uouCCzZe7UkbomNM (vibecoded on a friend's laptop)
pip install contextduty
contextduty scan mylog.txt
contextduty redact --in mylog.txt --out clean.txt
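Conceptually, the redact step is pattern-based detection and replacement. Here is a minimal sketch of the idea; the patterns and placeholder format are illustrative assumptions, not ContextDuty's actual detector rules:

```python
import re

# Illustrative patterns only -- not ContextDuty's real detector set.
PATTERNS = {
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "bearer": re.compile(r"\bBearer\s+[A-Za-z0-9._~+/=-]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace each detected value with a [REDACTED:<type>] placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

print(redact("key=AKIAABCDEFGHIJKLMNOP contact ops@example.com"))
```

The key property is that replacement happens locally, before any text is handed to the model.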
It also runs as an MCP server, so Cursor, VS Code, and Claude can route
context through it automatically, with no manual scan step on your part.
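Registering it in an MCP client would look roughly like the standard `mcpServers` entry below. This is a guess at the wiring (the `mcp` subcommand name in particular is an assumption; check the project's README for the real invocation):

```json
{
  "mcpServers": {
    "contextduty": {
      "command": "contextduty",
      "args": ["mcp"]
    }
  }
}
```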
Current state
This is a weekend build: rough edges, limited detector coverage (emails,
API keys, AWS keys, bearer tokens, phone numbers), and North American
phone formats only. It works end-to-end, has 53 tests passing, and is
published on PyPI.
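To make the North-America-only limitation concrete: a detector along the lines of the sketch below catches `(415) 555-0100` but misses a UK number like `+44 20 7946 0958`. The pattern is illustrative, not ContextDuty's actual rule:

```python
import re

# Illustrative North American phone pattern -- not ContextDuty's real rule.
NA_PHONE = re.compile(r"\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b")

print(bool(NA_PHONE.search("(415) 555-0100")))    # North American format
print(bool(NA_PHONE.search("+44 20 7946 0958")))  # UK format, not matched
```

International coverage would mean either per-country patterns or a library-backed parser, which is why it's called out as a gap.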
What I'd love feedback on:
What detectors are you missing?
Does the policy layering model make sense for your team?
What would make this useful in your actual workflow?