I’ve spent the last five years arguing with notified bodies about traceability, CAPA backlogs and change-control evidence. During that time every vendor slide deck and trade-show demo started using the same word: AI. To be fair, a lot of those demos are useful — but there is a big gap between marketing claims and what AI actually brings to a regulated QMS.
Below I lay out practical, practitioner-level expectations: what AI in a QMS reliably does today, what it rarely does, and the controls you need to keep a regulator or auditor happy.
What “AI” typically means in a QMS — operational, not magical
Most eQMS vendors use AI in a narrow, operational way. In practice this means:
- Natural-language search across documents (one search across the entire QMS), with semantic matching so you find the right procedure, risk assessment or CAPA even if wording differs.
- Assisted impact analysis: the system suggests which documents, products or processes a change might affect — often surfaced as a linked list you can review.
- Drafting assistance: auto-populating sections of a CAPA, nonconformance report or change request based on prior similar records.
- Triage and prioritisation: scoring incoming complaints or NCRs by keywords and historical outcomes to suggest priority.
- Pattern detection in structured fields: flagging rising trends in supplier nonconformances or repeated inspection findings.
Call this “operational AI”. It speeds up routine work and makes connected workflows usable. It does not do your root-cause analysis for you. It is controlled assistance, not a replacement for judgement.
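The “pattern detection in structured fields” point is the least magical of the lot: most of the value comes from a simple rolling count over recent records. A minimal sketch, assuming a hypothetical record layout of (supplier ID, date opened) tuples pulled from your NCR log:

```python
from collections import defaultdict
from datetime import date

def flag_rising_suppliers(ncrs, window_days=90, threshold=3, today=None):
    """Flag suppliers whose nonconformance count in a recent window
    meets or exceeds a review threshold. `ncrs` is a list of
    (supplier_id, opened_on) tuples -- an illustrative layout, not
    any specific eQMS schema."""
    today = today or date.today()
    counts = defaultdict(int)
    for supplier, opened_on in ncrs:
        if 0 <= (today - opened_on).days <= window_days:
            counts[supplier] += 1
    return sorted(s for s, n in counts.items() if n >= threshold)

ncrs = [
    ("SUP-017", date(2024, 5, 2)),
    ("SUP-017", date(2024, 5, 20)),
    ("SUP-017", date(2024, 6, 1)),
    ("SUP-093", date(2024, 6, 3)),
]
print(flag_rising_suppliers(ncrs, today=date(2024, 6, 10)))  # ['SUP-017']
```

Vendors wrap this in nicer dashboards, but the logic is roughly this simple, which is also why it is low-risk: the output is easy to check by hand during validation.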
Common marketing claims — how to read them
Vendors love short phrases. Read them carefully.
- “Automates CAPA” — realistic translation: reduces manual data entry and suggests actions; you still need a human to own root cause, accept corrective action and verify effectiveness (ISO 13485:2016 requires documented evidence of effectiveness).
- “Self-healing QMS” — unrealistic. A QMS does not repair processes; it helps people repair them faster.
- “Fully automated regulatory submissions” — partially true for pre-populated fields and export helpers; full regulatory narrative, clinical evidence and sign-off remain human responsibilities (per MDR requirements for clinical evaluation and technical documentation; Annex II is explicit about content and justification).
- “AI reviews your Technical File” — useful for spotting obvious gaps, but an AI cannot replace an expert review against Annex II/Annex IX expectations or notified body interpretation.
When a vendor uses the word AI, ask: which of the above operational capabilities are implemented, and how are outputs logged and reviewed?
What regulators and notified bodies will expect
In audits and conformity assessments I see three recurring themes:
- Traceability and reviewability: every AI-generated suggestion must be traceable to source records and show who reviewed or overruled it. Audit trails are essential — store prompts, suggested text, and final approved text.
- Validation and acceptance criteria: treat AI features as software tools used in the QMS. Define acceptance tests and performance thresholds in your software validation plan (ISO 13485:2016 clause 4.1.6 requires validation of QMS software; design control principles apply in spirit).
- Risk analysis: include AI-driven behaviours in your risk management (ISO 14971). If an AI suggestion feeds a CAPA that changes production controls, the chain of influence must be assessed.
In practice this means documenting the AI feature in your Change Control, updating your Risk Management File, and demonstrating to your notified body that users are trained and outputs are reviewed.
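To make “every AI-generated suggestion must be traceable” concrete, here is a minimal sketch of what one audit-trail entry could capture. The field names are illustrative assumptions, not taken from any particular eQMS:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AiSuggestionRecord:
    """One immutable audit-trail entry per AI suggestion: the prompt,
    the verbatim AI output, the source records it drew on, and the
    named reviewer's decision. Field names are hypothetical."""
    record_id: str       # the QMS record the suggestion targets, e.g. a CAPA
    source_refs: tuple   # documents/records the suggestion was derived from
    prompt: str          # what the user asked
    suggested_text: str  # what the AI returned, stored verbatim
    reviewer: str        # named person who reviewed the suggestion
    decision: str        # "accepted", "edited", or "rejected"
    final_text: str      # the text actually entered after review
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

entry = AiSuggestionRecord(
    record_id="CAPA-2024-031",
    source_refs=("SOP-014 rev C", "NCR-2024-112"),
    prompt="Suggest containment actions for NCR-2024-112",
    suggested_text="Quarantine lot 4412; re-inspect retained samples.",
    reviewer="J. Smith (QA)",
    decision="edited",
    final_text="Quarantine lot 4412; 100% re-inspection of retained samples.",
)
```

The point of the structure is that an auditor can walk from the final approved text back to the prompt, the source records and the person who signed it off, without reconstructing anything from memory.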
Practical controls I insist on
I push for the following minimum controls whenever we deploy AI features in the QMS:
- Prompt and output logging: save the user prompt, the AI response, and who accepted/edited it.
- Human-in-the-loop sign-off: no AI-generated text is final until a named person signs it off.
- Reproducibility tests: run the same prompt periodically and on software updates to ensure behaviour is consistent or changes are documented.
- Acceptance criteria: measurable tests for suggested mappings (e.g., precision/recall thresholds for document linkage) that you validate during rollout.
- Versioned models and update policy: vendors should state when models are updated and what validation is performed — this goes into Change Control.
These controls make the AI feature auditable and link it to your overall QMS, which is what inspectors actually look for.
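The acceptance-criteria control is easy to state and easy to test. A minimal sketch of scoring suggested document links against a hand-built reference set for one test change request (the thresholds and record IDs below are hypothetical):

```python
def precision_recall(suggested, expected):
    """Score AI-suggested document links against a hand-built
    reference set. Returns (precision, recall)."""
    suggested, expected = set(suggested), set(expected)
    tp = len(suggested & expected)  # correctly suggested links
    precision = tp / len(suggested) if suggested else 0.0
    recall = tp / len(expected) if expected else 0.0
    return precision, recall

# Hypothetical rollout gate: precision >= 0.8 and recall >= 0.9
suggested = ["SOP-014", "WI-203", "RA-007", "FRM-110"]
expected  = ["SOP-014", "WI-203", "RA-007", "RA-012"]
p, r = precision_recall(suggested, expected)
print(round(p, 2), round(r, 2))  # 0.75 0.75
```

In this run both scores fall short of the gate, so the feature would not pass rollout validation; you document the result, retune or retrain, and re-run. Repeating the same fixed test set after each vendor model update is also how the reproducibility control above becomes measurable rather than anecdotal.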
Where AI gives the most tangible ROI
If you need to prioritise, these are the areas where I’ve seen real, low-risk benefit:
- Finding the right evidence quickly — faster literature searches and semantic search across Technical Files.
- Faster impact mapping for changes — a suggested map saves hours of manual tracing.
- Reducing form friction — auto-filled fields cut the time to open a CAPA and improve data consistency.
- Trend detection for surveillance — catching repeating supplier issues sooner so you can open fewer large CAPAs later.
These are valuable because they integrate into existing workflows and preserve reviewer responsibility.
Features I remain sceptical about
- AI that claims to “decide” severity or regulatory classification without human review. Classification decisions (e.g., under MDR) have legal implications and need a named responsible person.
- Black-box recommendations with no explainability. If you cannot trace why a document was linked or why a priority was set, you will struggle in an audit.
- Claims of full automation for clinical evaluation or PMCF design. AI can assist literature screening or draft study outlines, but you must retain clinical science oversight.
Final practical advice
Treat AI features as you would any other tool that affects product safety or documentation. Document the feature in your QMS, run defined validation, preserve audit trails, and mandate human sign-off. The phrase I use with vendors now is “operational AI, controlled assistance, traceable outcome” — that’s what gets through an audit.
What AI-assisted QMS feature has actually saved you time (or caused you trouble) in your last audit?