Grumpy Sage

Posted on • Originally published at cybrium.ai

Four Pillars, One Platform: How Cybrium Unifies Code, Cloud, AI, and GRC


A friend of mine runs security at a 200-engineer SaaS company. Last winter she got paged at 2 a.m. for an exposed S3 bucket. Customer PII. The bucket had been flagged by their cloud scanner three weeks earlier. The ticket sat in a Jira board owned by the platform team, who had been waiting on an IAM change from the cloud team, who needed sign-off from compliance, who were busy preparing for their SOC 2 audit. By the time the breach was contained, the marketing email had already gone out announcing their new Series B.

She told me later that the part that haunted her was not the breach. It was that the finding had existed. The scanner had done its job. The system around the scanner had not.

I keep coming back to that story because it explains almost every modern breach I have seen. The signal exists. The fix is known. The owners are identifiable. But the four pieces of the puzzle — code, cloud, AI, and governance — live in four separate tools owned by four separate teams, each pretending the others do not exist. The breach is the gap between them.

This is the case I want to make: those four pieces should be one product. Not four products that talk to each other through APIs. One product, one asset graph, one workflow. I am going to use Cybrium as the worked example because it is what my team builds, but the architectural argument generalises.


What the four pillars actually are


I keep these labels short because everyone in security uses them but rarely defines them.

Code is everything that happens before a deploy. SAST, SCA, secrets in repos, infrastructure-as-code, container images, Kubernetes manifests. The unit of work is a pull request.

Cloud is everything that happens after the deploy. Posture in AWS / Azure / GCP, identity, drift, runtime config. The unit of work is a resource.

AI is the new pillar that nobody had three years ago. Who is running what model, where, with what data, calling which tools, exposed how. The unit of work is an asset that did not exist in the old asset taxonomy.

GRC is the layer that turns all of the above into auditable evidence. Frameworks, controls, risk register, trust center. The unit of work is a control.

Now look at the market. Snyk does code very well and reaches into cloud weakly. Wiz does cloud very well and barely touches code. The AI security startups each take one slice — runtime guardrails, prompt injection scanning, model inventory — and assume someone else is doing the other three pillars. Vanta and Drata collect evidence from everything and generate nothing.

This is a feature map, not a strategy. The customer pays for four tools and assumes glue code will make them coherent. It does not. It never does.


Code

I will start with code because it is the best-understood pillar and that makes the gap between best-in-class and standard practice the most visible.

Most SAST tools produce a number that I think of as the friendship-ending number. The CI pipeline says "we found 10,000 issues in your repo this morning," and the developer either ignores it forever or quits Slack. Neither is the response you want.

The fix is reachability. A CVE in a transitive dependency only matters if your code actually reaches it at runtime. Most don't. If you can rank findings by whether a real call path touches them, the friendship-ending 10,000 collapses to something like 12. Twelve is a number a human can act on.
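The triage logic above can be sketched in a few lines. This is a hypothetical data model, not cyscan's internals: the `Finding` fields and the `triage` function are invented for illustration, and "reachable" stands in for "static analysis found a call path from first-party code into the vulnerable function."

```python
# Hypothetical sketch: rank dependency CVEs by reachability.
# Field names and the triage function are invented for illustration.

from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    package: str
    reachable: bool      # does a real call path touch the vulnerable code?
    severity: str

def triage(findings):
    """Split findings into an actionable list and background noise."""
    actionable = [f for f in findings if f.reachable]
    noise = [f for f in findings if not f.reachable]
    # Surface the reachable ones first, highest severity on top.
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    actionable.sort(key=lambda f: order[f.severity])
    return actionable, noise

findings = [
    Finding("CVE-2024-0001", "left-pad", reachable=False, severity="high"),
    Finding("CVE-2024-0002", "yaml-parse", reachable=True, severity="critical"),
    Finding("CVE-2024-0003", "http-client", reachable=True, severity="medium"),
]
actionable, noise = triage(findings)
```

The point of the split is psychological as much as technical: the noise list still exists and is still queryable, but it never lands in a developer's inbox.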

In Cybrium the code engine is a Rust binary called cyscan. It runs:

  • SAST across 75-plus languages with 1,815 hand-curated rules
  • SCA with reachability — only CVEs your code can actually reach
  • Secrets detection (entropy + format + context)
  • IaC: Terraform, CloudFormation, Bicep, Pulumi, plus Kubernetes manifests
  • Span-based autofix, so the scanner does not just point at the problem; it produces a code edit you can apply or open as a PR

You can run it locally without ever signing up for anything:

brew install cybrium-ai/cli/cyscan
cyscan .
cyscan supply .                   # SCA with reachability
cyscan fix . --apply              # write the autofixes
cyscan . --format sarif --output cyscan.sarif

The SARIF output drops straight into GitHub Code Scanning or any CI that reads SARIF. For web apps where SAST is not enough, the companion binary is cyweb — same Rust core, but DAST: spider, headless-Chrome AJAX spider, fuzzer, template engine, OAST callbacks for blind SSRF and RCE detection. It replaced ZAP/Nikto/Nuclei in our pipeline and the conversion rate on upstream templates is around 95 percent.
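If you want to post-process that SARIF in CI yourself, the stdlib is enough. The sample document below is invented — real cyscan output carries more fields — but the `runs[].results[].ruleId` shape is standard SARIF 2.1.0:

```python
# Minimal sketch: summarize a SARIF file the way a CI step might,
# using only the stdlib. The sample document is invented; the
# runs[].results[].ruleId structure is standard SARIF 2.1.0.

import json
from collections import Counter

sample_sarif = json.dumps({
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "cyscan"}},
        "results": [
            {"ruleId": "secrets.aws-access-key", "level": "error"},
            {"ruleId": "sast.sql-injection", "level": "error"},
            {"ruleId": "sast.sql-injection", "level": "warning"},
        ],
    }],
})

def summarize(sarif_text):
    """Count findings per rule across all runs in a SARIF document."""
    doc = json.loads(sarif_text)
    counts = Counter()
    for run in doc["runs"]:
        for result in run["results"]:
            counts[result["ruleId"]] += 1
    return counts

counts = summarize(sample_sarif)
```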


Cloud

Cloud is where the market is most fragmented because every cloud has its own posture-management API surface and most vendors specialise in one.

We cover AWS, Azure, and GCP plus M365 and Active Directory under a single connector. The customer adds a cloud account once with a least-privilege read role, and the platform produces:

  • CSPM — cloud security posture management
  • ISPM — identity posture
  • ASPM — the wiring from repos to deployed services to cloud resources
  • Container scanning via image-registry hooks
  • Full Kubernetes scanning across the seven phases CIS calls out
  • An M365 baseline that includes the DMARC/SPF/DKIM check from cymail

What makes a cloud security tool useful versus useful-looking is the fix. Cybrium generates a Terraform pull request for every cloud finding. Behind a feature gate, there is a direct-apply mode for low-blast-radius changes. The developer sees the same shape of work whether the finding came from code or cloud — a PR, a diff, a CI pipeline running. They do not have to context-switch into a separate UI to fix a cloud problem versus a code problem.
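To make the "finding becomes a Terraform PR" idea concrete, here is a toy renderer for one finding type. The finding shape and function name are invented; the `aws_s3_bucket_public_access_block` resource it emits is standard Terraform:

```python
# Hedged sketch of "every cloud finding becomes a Terraform PR":
# render a remediation snippet for a public S3 bucket finding.
# The function and its inputs are invented; the HCL resource is standard.

def render_s3_lockdown(bucket_name: str) -> str:
    """Emit a Terraform block that blocks all public access on a bucket."""
    return f'''resource "aws_s3_bucket_public_access_block" "{bucket_name}" {{
  bucket                  = "{bucket_name}"
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}}'''

snippet = render_s3_lockdown("customer-exports")
```

The snippet then goes into a branch, a PR, and a CI run — the same shape of work a code finding produces.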


AI

This is the pillar I think most vendors are getting wrong, and the one that explains why I think the next two years in this market will be a recomposition.

Almost every "AI security" company you can name right now sells a runtime gateway. A proxy between your developer and the model. That is one slice of one problem. It is the slice that demos well — you can stand in front of an audience and watch a prompt-injection attempt get blocked in real time. But it does not answer the question that actually keeps CISOs awake: "what AI is running in my company that I do not know about?"

You cannot govern what you cannot see. Cybrium's AI inventory has five channels:

The first is an active probe. A Rust binary called cyradar sweeps network ranges and identifies self-hosted inference servers: Ollama, vLLM, TGI, LocalAI, Triton, LM Studio, llama.cpp, OpenAI-compatible endpoints. It fingerprints each match against a YAML signature catalogue. We ship the catalogue versioned; new model servers are a config update, not a code release.
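The catalogue-driven fingerprinting works roughly like this. The entries are shown as inline dicts rather than the YAML file cyradar actually ships, and the field names are invented:

```python
# Sketch of signature-catalogue fingerprinting. Entries are shown as
# dicts rather than cyradar's real YAML; field names are invented.

import re

CATALOGUE = [
    {"name": "Ollama", "path": "/api/tags", "banner": r'"models"'},
    {"name": "vLLM", "path": "/v1/models", "banner": r'"object":\s*"list"'},
]

def fingerprint(path: str, body: str):
    """Return the first catalogue entry whose path and banner regex match."""
    for sig in CATALOGUE:
        if sig["path"] == path and re.search(sig["banner"], body):
            return sig["name"]
    return None

hit = fingerprint("/api/tags", '{"models": [{"name": "llama3"}]}')
```

The payoff of this shape is the one named in the text: a new model server is a new catalogue entry, shipped as a config update rather than a code release.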

The second is cloud API. We ingest Bedrock usage from AWS billing, Azure OpenAI from the Azure activity log, Vertex AI from GCP audit logs. Whatever model invocations are going through the sanctioned cloud accounts, we see.

The third is endpoint. A host-posture agent called cydevice runs on machines outside MDM coverage and reports which AI CLIs are installed (ollama, the OpenAI CLI, claude), which IDE extensions are active (Copilot, Continue, Cline, Cursor's local model use), which desktop apps are running (LM Studio, Anything LLM), and which model files are on disk (GGUF, safetensors, ONNX). This is the channel that catches shadow AI on developer laptops.
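The model-files-on-disk check is the easiest of those to picture. Here is a toy version — the logic is invented and cydevice's real inventory is richer, but the extension list matches the formats named above:

```python
# Toy version of the "model files on disk" channel: walk a directory
# tree and report files with known model-weight extensions. The logic
# is invented for illustration; the real agent's inventory is richer.

from pathlib import Path
import tempfile

MODEL_EXTENSIONS = {".gguf", ".safetensors", ".onnx"}

def find_model_files(root: Path):
    """Return all files under root whose extension marks a model weight."""
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in MODEL_EXTENSIONS
    )

# Demo against a throwaway directory.
root = Path(tempfile.mkdtemp())
(root / "llama-3-8b.Q4_K_M.gguf").write_bytes(b"\x00")
(root / "notes.txt").write_text("not a model")
found = find_model_files(root)
```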

The fourth is traffic inspection — passive observation of egress to flag cloud-API calls to AI providers that did not go through SSO.

The fifth is SCM/SAST. The cyscan engine recognises imports of langchain, llama-index, transformers, the anthropic SDK, the openai SDK, and surfaces them as AI usage. If you have an LLM call in your code, we know about it from the repo before it ever hits production.

All five channels write into the same AIAsset row in the platform. The AI governance team can run a single query — "show me every AI surface in the company" — and get the union across channels. Policy then layers on top: no inference servers in corp/ subnet without TLS, no Bedrock model invocation without a sanctioned tag, no production code path that takes LLM output and pipes it into a tool call without sanitisation.
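The union across channels reduces to a merge keyed on asset identity. The record fields, channel names, and merge key below are invented for illustration:

```python
# Sketch of the five-channel union: each discovery channel emits
# records, and rows merge on a shared key so one asset seen by several
# channels appears once. Field names and the merge key are invented.

def merge_channels(records):
    """Fold channel records into one row per (kind, location) asset."""
    assets = {}
    for rec in records:
        key = (rec["kind"], rec["location"])
        asset = assets.setdefault(key, {**rec, "channels": set()})
        asset["channels"].add(rec["channel"])
    return assets

records = [
    {"channel": "probe",    "kind": "ollama",   "location": "10.0.0.7:11434"},
    {"channel": "traffic",  "kind": "ollama",   "location": "10.0.0.7:11434"},
    {"channel": "endpoint", "kind": "lmstudio", "location": "laptop-42"},
]
assets = merge_channels(records)
```

An asset confirmed by multiple channels carries the set of channels that saw it, which is itself useful signal: an inference server visible to the probe but absent from the cloud-billing channel is, by definition, shadow AI.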

The prompt-injection point is worth dwelling on for a second. We do not have a separate scanner for it. The same cyscan engine that does SAST recognises the patterns: unsanitised LLM output flowing into a tool-call argument, hidden-character-aware string handling, RAG ingestion that does not strip control characters from untrusted documents. The AI pillar is not a separate product. It is a set of new questions asked by engines we already had.
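An extremely simplified version of that taint pattern fits in one function. The source and sink names are invented, and a real SAST engine tracks flows across functions and through sanitisers — this only shows the shape of the check:

```python
# Extremely simplified taint sketch of the pattern described above:
# flag code where a variable assigned from an LLM call is passed
# straight into an execution sink. Source/sink names are invented;
# real engines track flows across functions and sanitisers.

import ast

LLM_SOURCES = {"chat", "complete", "generate"}
SINKS = {"run_tool", "system"}

def _call_name(fn):
    return fn.attr if isinstance(fn, ast.Attribute) else getattr(fn, "id", "")

def flag_unsanitised_flows(source: str):
    tree = ast.parse(source)
    tainted, hits = set(), []
    for node in ast.walk(tree):
        # x = client.chat(...)  ->  x is tainted
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            if _call_name(node.value.func) in LLM_SOURCES:
                tainted |= {t.id for t in node.targets if isinstance(t, ast.Name)}
        # run_tool(x) with tainted x  ->  finding
        if isinstance(node, ast.Call) and _call_name(node.func) in SINKS:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    hits.append((_call_name(node.func), arg.id, node.lineno))
    return hits

hits = flag_unsanitised_flows("reply = client.chat(prompt)\nrun_tool(reply)\n")
```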

brew install cybrium-ai/cli/cyradar
cyradar discover --targets 10.0.0.0/24    # find AI servers on the LAN
cyradar local-scan                         # inventory local AI tooling

For AI coding agents that should reach into the platform directly, we ship an MCP server — @cybrium-ai/mcp-server on npm — with ten tools. Claude Desktop, Cursor, Windsurf, and Cline can call any of them by name. I will come back to this in a minute.


GRC

Most security platforms wave their hands here. The GRC team gets handed a CSV export and told to "make the audit work."

A serious GRC implementation has three components that have to be wired into the other three pillars, not bolted on after.

The first is framework mapping. Every finding from code, cloud, and AI must map to a control in SOC 2, ISO 27001, HIPAA, PCI, EU AI Act, NIST AI RMF, and whatever industry-specific frameworks apply. Without this mapping, a finding is operational noise; with it, the same finding becomes audit evidence. We do the mapping at rule-authoring time — every cyscan rule and every cloud check carries the relevant control IDs.
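Doing the mapping at rule-authoring time means a finding can always be indexed by control with a trivial join. The rule IDs, control IDs, and data shapes below are illustrative, not Cybrium's actual catalogue:

```python
# Sketch of rule-authoring-time control mapping: each rule carries its
# framework control IDs, so findings can be indexed by control.
# Rule and control IDs here are illustrative, not a real catalogue.

from collections import defaultdict

RULES = {
    "secrets.aws-access-key": ["SOC2:CC6.1", "ISO27001:A.8.24"],
    "cloud.s3-public-bucket": ["SOC2:CC6.1", "SOC2:CC6.6"],
}

def index_by_control(findings):
    """Group affected assets under every control their rule maps to."""
    by_control = defaultdict(list)
    for finding in findings:
        for control in RULES[finding["rule"]]:
            by_control[control].append(finding["asset"])
    return by_control

findings = [
    {"rule": "cloud.s3-public-bucket", "asset": "s3://customer-exports"},
    {"rule": "secrets.aws-access-key", "asset": "repo:billing-service"},
]
by_control = index_by_control(findings)
```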

The second is evidence collection. When an auditor asks "show me that control CC6.1 is enforced," the answer cannot be a screenshot. It has to be a query that runs against the live asset graph and returns a count, a list, and a timestamped attestation. The compliance engine in the platform does this nightly, automatically, against the same asset graph the other pillars write into.
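The shape of one such nightly evidence run might look like this — the asset fields and attestation structure are invented for illustration, but the output has the three parts named above: a count, a list, and a timestamp:

```python
# Sketch of a nightly evidence run for one control: a query over the
# live asset store returning a count, the offending list, and a
# timestamped attestation. Data shapes are invented for illustration.

from datetime import datetime, timezone

ASSETS = [
    {"id": "sg-1", "type": "security_group", "open_to_world": False},
    {"id": "sg-2", "type": "security_group", "open_to_world": True},
]

def attest_cc61(assets):
    """Attest control CC6.1 against the current asset state."""
    violations = [a["id"] for a in assets if a["open_to_world"]]
    return {
        "control": "CC6.1",
        "checked": len(assets),
        "violations": violations,
        "passed": not violations,
        "attested_at": datetime.now(timezone.utc).isoformat(),
    }

evidence = attest_cc61(ASSETS)
```

Because the query runs against the same store the scanners write into, the attestation is never older than the last scan.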

The third is the Trust Center. Your customers' procurement teams are asking the same security-questionnaire questions of every vendor. A Trust Center that exposes your controls publicly — with continuous, auto-collected evidence — cuts months off the sales cycle. Ours is at https://trust.cybrium.ai and updates from the same store as everything else.

We also ship a vCISO module — engagements, risk register, policy library, treatment tracking — for teams that do not have a full-time CISO but need to look like they do for a Series B raise. The risk register is keyed on the same asset graph, so a risk row is always traceable to specific findings and specific controls. Not narrative text in a Word document.


Why one platform, not four

If the only argument for unification were "fewer dashboards," you could ignore it. The actual argument is structural, and it lives in three properties that one asset graph makes possible.

A finding in one pillar becomes an enforcement signal for another. A reachable CVE in code creates a deployment-gate policy in cloud. A new AI inference server discovered on the LAN auto-creates a risk row in the GRC register. An auditor's evidence query pulls from the live posture, not a copy of it from last Tuesday.

A fix in one pillar resolves the corresponding finding in the others. Close an IAM mis-scoping in cloud, the related SOC 2 finding in GRC closes automatically. The compliance team stops chasing the cloud team for evidence.
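The auto-close mechanic falls out of the shared graph almost for free. All identifiers below are invented; the point is that the cloud finding and the GRC finding hang off the same asset node:

```python
# Sketch of cross-pillar resolution on one graph: when the cloud
# finding on an asset is fixed, the GRC finding linked to the same
# asset closes with it. All identifiers are invented.

GRAPH = {
    "iam-role/deploy": {
        "cloud_finding": {"id": "CLD-101", "status": "open"},
        "grc_finding":   {"id": "GRC-7", "status": "open", "linked_to": "CLD-101"},
    }
}

def resolve_cloud_finding(graph, asset, finding_id):
    """Mark a cloud finding resolved and cascade to linked GRC findings."""
    node = graph[asset]
    if node["cloud_finding"]["id"] == finding_id:
        node["cloud_finding"]["status"] = "resolved"
        # Same graph node, so the dependent GRC finding closes with it.
        if node["grc_finding"]["linked_to"] == finding_id:
            node["grc_finding"]["status"] = "resolved"

resolve_cloud_finding(GRAPH, "iam-role/deploy", "CLD-101")
```

With four separate tools, this cascade would be a webhook, a retry queue, and a reconciliation job; on one graph it is a status update.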

Coverage gaps become visible. "What is not covered" becomes a query. Three repos have full code coverage, twelve have partial. Two clouds are scanned, one is not. The AI inventory has four channels but the fifth is unconfigured. You can see the holes before someone else finds them.
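"What is not covered" as a query is literally a set difference between the asset inventory and the scan records. The asset names and data shapes here are invented:

```python
# "What is not covered" as a query: compare the asset inventory
# against scan records and list the gaps. Data shapes are invented.

ASSETS = {"repo:api", "repo:web", "aws:prod", "aws:staging", "gcp:ml"}
SCANNED = {"repo:api", "aws:prod", "aws:staging"}

def coverage_gaps(assets, scanned):
    """Return every known asset that no scanner has touched."""
    return sorted(assets - scanned)

gaps = coverage_gaps(ASSETS, SCANNED)
```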

These three properties cannot be retrofitted by integration. Every API integration between four point tools is a translation layer that loses data and a workflow boundary that delays the response. The only architecturally clean approach is to start with one asset graph and build outward from there.


The new buyer is an AI agent

There is one more reason this matters now that I want to end on, because I think most security vendors have not internalised it yet.

A year ago, when a developer needed a security tool, they searched Stack Overflow, asked a colleague, or read a blog post. Today, increasingly, the developer asks Claude or Cursor. The agent reads the project state, parses the question, and picks a tool. The agent does not see ads. It does not have a procurement team. It reads documentation.

This is going to recompose the market. The vendors who ship coherent, AI-agent-readable tooling — with intent-mapped documentation, clean MCP integrations, READMEs that describe when to use the tool versus when to use something else — will absorb workloads that used to be spread across a long tail of point tools. The vendors who write press releases about "AI-powered security" and hope the AI does not look too closely will lose their seat at the table.

We have made our bet on the first model. The CLIs are open source and Apache-2.0. The MCP server is published on npm. The VS Code extension is on the Marketplace (cybrium-ai.cybrium). Every public repo has an AGENTS.md that tells an AI coding agent when to invoke which tool. The website has an llms.txt at the root that explains the same thing to any agent fetching the domain for the first time. The OpenAPI schema is public. The Trust Center is public.

If you are building anything that touches code, cloud, AI, or compliance, you can start with the pieces you need: cyscan for code, cyweb for running web apps, cyradar for AI discovery, and the MCP server for your coding agents.

The four pillars are not optional anymore. The breach my friend stayed up for came from a gap between them. The question for every security team this year is whether they want one platform that closes those gaps or four that hold them open.

We have made our choice. If you want to talk through yours, find me at hello@cybrium.ai.
