sep83

No More Hallucinated Citations: A Domain-Specific RAG System with Ollama, ChromaDB and AI Agents

TL;DR: I built a full-stack knowledge pipeline around a corpus of 2,514 academic PDFs focused on urban art. The system combines ChromaDB vector search, Ollama-powered semantic analysis, a FastAPI REST layer, and six AI agents (slash commands in Claude Code) that orchestrate research workflows end-to-end. The result: zero hallucinated citations, dense evidence-backed documents, and a workflow that scales to any specialized domain.


The Problem: LLMs Hallucinate Academic Citations

If you've ever used an LLM to help write a research proposal or academic paper, you've probably hit this wall: the model confidently produces author names, journal titles, and publication years — none of which exist.

The standard advice is "don't use AI for citations." But that advice ignores a better question: what if you gave the model an authoritative, queryable corpus instead of relying on its training data?

That's exactly what this system does.


The Stack at a Glance

PDFs (2,514 docs, 4 languages)
        │
        ▼
  [Ingestion Pipeline]
   pdfplumber → chunking → metadata extraction
        │
        ▼
  [Dual Storage]
   MariaDB (corpus.db) ──── ChromaDB (59,030 chunks)
   metadata + citations      vector embeddings
        │
        ▼
  [Analysis Layer]
   Ollama (self-hosted LLM) — semantic relevance scoring,
   research_hint generation, debate mapping
        │
        ▼
  [REST API — FastAPI]
   /search · /consulta · /fragmentos · /autores
   /debate · /translate · /recientes · /document/{id}
        │
        ▼
  [AI Agent Layer — Claude Code slash commands]
   /mapeador · /evidencia · /auditor · /propuesta · /articulo · /vigilante

Everything runs locally. No data leaves the machine. The corpus, embeddings, LLM inference, and API are all self-hosted.


Part 1 — The Corpus

Ingestion

The corpus started with a focused collection of academic PDFs on urban art and graffiti: peer-reviewed articles, book chapters, conference proceedings, and policy documents spanning English, Spanish, Portuguese, and French.

Each document goes through a pipeline:

# Simplified ingestion flow
doc = extract_text_pdfplumber(pdf_path)
chunks = split_into_paragraphs(doc, max_tokens=512, overlap=64)

for chunk in chunks:
    metadata = extract_metadata(chunk)  # author, year, DOI, language
    embedding = embed(chunk.text)       # via Ollama nomic-embed-text
    chromadb.add(chunk.text, embedding, metadata)
    mariadb.insert(chunk, metadata)     # full text + APA citation

The chunking strategy matters enormously here. Fixed-size token splitting loses paragraph context. Instead, I split on paragraph boundaries with a 64-token overlap between chunks, which preserves semantic coherence while keeping retrieval granular.
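A minimal sketch of that chunker, using whitespace splitting as a stand-in for real tokenization (the actual pipeline counts tokens, not words):

```python
def chunk_paragraphs(text, max_tokens=512, overlap=64):
    """Pack whole paragraphs into chunks of at most max_tokens words,
    carrying an overlap-word tail into the next chunk."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], []
    for para in paragraphs:
        words = para.split()
        if current and len(current) + len(words) > max_tokens:
            chunks.append(" ".join(current))
            current = current[-overlap:]  # overlap tail preserves context
        current.extend(words)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Note that in this sketch a single paragraph longer than `max_tokens` stays whole rather than being split mid-paragraph, which is exactly the trade-off paragraph-boundary chunking makes.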

What Gets Stored

Each chunk in MariaDB carries:

| Field | Description |
|---|---|
| `paragraph` | Full text of the chunk |
| `page` | Page number in the original PDF |
| `citation_apa` | Pre-formatted APA 7 citation |
| `citation_mla` | Pre-formatted MLA citation |
| `doc_id` | Unique document identifier |
| `relevancia` | Ollama-assigned relevance score (1–5) |
| `research_hint` | ≤240-char synthesis of the chunk's contribution |
| `categoria_tematica` | Thematic category (e.g., `graffiti_core`, `tecnologia`) |
| `anio` | Publication year |
| `idioma` | Language code |

The relevancia field is generated once at ingestion time by asking Ollama to score each chunk's relevance to the corpus domain. This pre-computation means searches can filter by quality without running LLM inference at query time.
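The scoring step can be sketched as a prompt builder plus a defensive parser. The prompt wording and the commented-out Ollama client call are illustrative assumptions, not the system's actual code:

```python
import re

DOMAIN = "urban art and graffiti studies"  # assumption: the corpus domain

def build_relevance_prompt(chunk_text, domain=DOMAIN):
    """Ask the model for a bare 1-5 relevance score."""
    return (
        f"On a scale of 1 to 5, rate how relevant the following passage is "
        f"to the domain of {domain}. Reply with the number only.\n\n{chunk_text}"
    )

def parse_relevance(raw_reply):
    """Pull the first 1-5 digit out of the reply; fall back to 1."""
    match = re.search(r"[1-5]", raw_reply)
    return int(match.group()) if match else 1

# At ingestion time (hypothetical client call; adapt to your Ollama setup):
# reply = ollama.generate(model="llama3", prompt=build_relevance_prompt(chunk))
# relevancia = parse_relevance(reply["response"])
```

The defensive parser matters: small local models don't always obey "reply with the number only", so the ingestion step should never crash on a chatty reply.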

Current State

Total documents:    2,514
Chunks in ChromaDB: 59,030
RAG coverage:       99.8%
Languages:          EN (62.3%), ES (14.3%), PT (12.7%), FR (2.7%), others

Part 2 — The Retrieval Layer

Hybrid Search

A common mistake in RAG systems is relying solely on cosine similarity. Semantic similarity isn't the same as relevance — a chunk can be about the right topic but still be a weak citation (methodologically unsound, pre-2010, tangential argument).

The system uses a hybrid relevance_score that combines three signals:

relevance_score = (
    0.5 × cosine_similarity(query_embedding, chunk_embedding)
  + 0.3 × ollama_relevancia          # pre-computed quality score
  + 0.2 × recency_weight(year)       # normalized publication year
)

This produces meaningfully different rankings from pure vector search — and in practice surfaces much better citations for academic use.
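As a runnable sketch (the weights are the ones above; rescaling `relevancia` from 1–5 to [0, 1] and the year window in `recency_weight` are my assumptions):

```python
def recency_weight(year, min_year=1990, max_year=2025):
    """Map publication year to [0, 1], clamping out-of-range values."""
    year = max(min_year, min(year, max_year))
    return (year - min_year) / (max_year - min_year)

def relevance_score(cosine_sim, ollama_relevancia, year):
    """Hybrid ranking signal: 0.5 semantic + 0.3 quality + 0.2 recency."""
    return (
        0.5 * cosine_sim
        + 0.3 * (ollama_relevancia / 5)   # rescale the 1-5 score to [0, 1]
        + 0.2 * recency_weight(year)
    )
```

Keeping all three terms on the same [0, 1] scale means the weights behave as true proportions rather than being distorted by mismatched ranges.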

The API Endpoints

The FastAPI layer exposes the corpus as a set of purpose-built endpoints:

# Semantic search — fast, no LLM at query time
GET /search?q=urban+heritage+documentation&top_k=20&style=apa

# Full RAG — slower, synthesizes an answer with citations
POST /consulta
{
  "pregunta": "What methods exist for documenting ephemeral urban art?",
  "top_k": 10,
  "anio_desde": 2015,
  "idioma": "en"
}

# Verbatim fragments for direct quotation
GET /fragmentos?concepto=ephemeral+art+preservation&top_k=8

# Author and debate mapping
GET /autores?tema=digital+documentation+street+art
GET /debate?tema=vandalism+vs+heritage

# Export for bibliography sections
GET /search/export?q=graffiti+cultural+heritage&format=bib&top_k=100

# Translation of non-native-language chunks
POST /translate
{"paragraph": "...", "target_lang": "es"}

Each endpoint returns structured data including paragraph, page, citation_apa, relevance_score, research_hint, and doc_id. The doc_id is critical — it makes every citation traceable back to a specific PDF page.
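For scripting against the API outside the agent layer, request URLs can be assembled like this (the base URL and port are assumptions about the local deployment; the parameters mirror the endpoints above):

```python
import urllib.parse

BASE_URL = "http://localhost:8000"  # assumption: local FastAPI deployment

def search_url(query, top_k=20, style="apa", idioma=None):
    """Build a /search request URL; the idioma filter is optional."""
    params = {"q": query, "top_k": top_k, "style": style}
    if idioma:
        params["idioma"] = idioma
    return f"{BASE_URL}/search?{urllib.parse.urlencode(params)}"

# e.g.: requests.get(search_url("urban heritage documentation", idioma="en")).json()
```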

Example Response from /fragmentos

{
  "results": [
    {
      "paragraph": "Street art occupies a paradoxical position in urban space: simultaneously celebrated as cultural expression and prosecuted as criminal damage, its ephemerality is not accidental but constitutive of its meaning...",
      "page": 47,
      "citation_apa": "Brighenti, A. M. (2010). At the wall: Graffiti writers, urban territoriality, and the public domain. Space and Culture, 13(3), 315–332. https://doi.org/10.1177/1206331210365283",
      "relevance_score": 0.91,
      "research_hint": "Argues ephemerality is constitutive of street art's meaning, not a deficiency — critical for patrimony arguments.",
      "doc_id": "brighenti_2010_wall"
    }
  ]
}

Notice the research_hint — this is Ollama's pre-computed synthesis of why the chunk matters, not just what it says. It's what makes the agent layer possible.


Part 3 — The Agent Layer

The real productivity multiplier isn't the API — it's the agents built on top of it.

Six Claude Code slash commands orchestrate the research workflow. Each agent is a Markdown file in .claude/commands/ that instructs Claude Code to call the corpus API in a specific sequence and produce structured output files.

The Agent Map

/vigilante   — session start: check deadlines, detect new corpus documents
/mapeador    — map the corpus for a topic (calls /autores + /debate + /search ×2 + /consulta ×2)
/evidencia   — build an evidence table for a specific section
/propuesta   — full proposal generator (orchestrates mapeador + evidencia + drafting)
/articulo    — full IMRAD academic article generator
/auditor     — audit any .md file, classifying claims as [✓ VERIFIED] / [⚠ WEAK] / [✗ PENDING]

The /mapeador Agent — Eliminating 6 Manual API Calls

The most impactful agent. Before it existed, mapping the corpus for a new topic meant manually calling six endpoints, collecting outputs, and synthesizing them into a coherent picture. With /mapeador, one command does all of this:

/mapeador "ephemeral urban art documentation" B

The agent:

  1. Calls GET /autores?tema=... — maps the key authors and schools of thought
  2. Calls GET /debate?tema=... — surfaces competing positions
  3. Calls GET /search?q=...&top_k=20 with two different query formulations
  4. Calls POST /consulta twice with methodologically distinct questions
  5. Synthesizes all results into mapa-corpus-<slug>.md

Output structure:

# Corpus Map — Ephemeral Urban Art Documentation

## Key Authors
- Brighenti (2010) — territoriality and ephemerality
- Iveson (2010) — public space and street art legitimacy
...

## Academic Debate
Position A: Documentation preserves — [corpus evidence]
Position B: Documentation alters meaning — [corpus evidence]
...

## Evidence Bank (top 20 sources)
| citation_apa | relevance_score | research_hint |
|---|---|---|
...

## State of the Art Synthesis
[Generated by Ollama via /consulta — 400-600 words, cited]

## Identified Gaps
- [PENDING — no corpus support]: ...

This takes about 3 minutes and replaces what used to be 45–90 minutes of manual API work.

The /auditor Agent — Zero Hallucinations Guarantee

After any document is written, /auditor runs through every substantive claim and classifies it:

/auditor propuestas/minciencias/2026-grafpin/propuesta.md

Output example:

## Audit — 2026-05-10

| Claim | Endpoint Checked | Status | Note |
|---|---|---|---|
| "Street art is recognized as intangible cultural heritage in 12 countries" | /search?q=intangible cultural heritage street art | [✗ PENDING] | No corpus support for "12 countries" |
| "Brighenti (2010) argues ephemerality is constitutive..." | /fragmentos?concepto=ephemeral constitutive meaning | [✓ VERIFIED] | relevance_score: 0.91, page 47 |
| "Computer vision achieves 87% accuracy in graffiti style detection" | /search?q=computer vision graffiti accuracy | [⚠ WEAK] | One source, pre-2018 |

Rule: if more than 40% of claims are [✗ PENDING], the document doesn't ship. This single constraint eliminates hallucination from the research output.
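The gate itself is trivial to enforce in code. A sketch, assuming each audited claim is a dict carrying the status the agent assigned:

```python
def audit_gate(claims, max_pending_ratio=0.4):
    """Return (ship_ok, pending_ratio): fail the document when more than
    max_pending_ratio of its claims lack corpus support."""
    if not claims:
        return True, 0.0
    pending = sum(1 for c in claims if c["status"] == "PENDING")
    ratio = pending / len(claims)
    return ratio <= max_pending_ratio, ratio
```

The point of making the rule mechanical is that it can't be negotiated away under deadline pressure: the number either clears the threshold or the document goes back for more corpus work.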


Part 4 — Multi-Query RAG Strategy

One underrated technique in this system: always formulate 3–4 query variants before calling /consulta.

The same concept produces very different corpus results depending on phrasing:

queries = [
    "ephemeral art documentation methods",           # EN — technical angle
    "documentación arte urbano efímero patrimonio",  # ES — heritage angle
    "street art preservation digital archive",       # EN — infrastructure angle
    "graffiti documentation urban memory loss"       # EN — urgency angle
]

Each formulation retrieves a different subset of the corpus. The agent layer handles this automatically — /mapeador uses two /search calls and two /consulta calls with different formulations, then merges the evidence.

The practical impact: query coverage increases from ~65% (single query) to ~92% (four-query merge) for complex topics.
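The merge step can be sketched as deduplication by `doc_id` that keeps the best score per document (field names follow the response shape shown earlier; this is an illustrative sketch, not the system's actual merge code):

```python
def merge_results(result_sets):
    """Merge results from several query formulations, deduplicating by
    doc_id and keeping the highest relevance_score per document."""
    best = {}
    for results in result_sets:
        for r in results:
            key = r["doc_id"]
            if key not in best or r["relevance_score"] > best[key]["relevance_score"]:
                best[key] = r
    return sorted(best.values(), key=lambda r: r["relevance_score"], reverse=True)
```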


Part 5 — Why This Applies to Any Domain

The urban art corpus is just one instantiation. The architecture generalizes completely:

| Component | What to change for a new domain |
|---|---|
| PDF corpus | Replace with domain PDFs |
| Thematic categories | Redefine `categoria_tematica` for your taxonomy |
| Ollama relevance prompt | Update to score relevance to the new domain |
| Agent prompts | Update the research questions in each slash command |
| API filters | Add domain-specific facets (e.g., `species`, `jurisdiction`, `drug_class`) |

I've seen this pattern applied to:

  • Legal research: case law corpora with jurisdiction and year filters
  • Medical literature: clinical trial PDFs with PICO-structure extraction
  • Policy analysis: government documents with agency and date filters

The key insight is that domain specificity is a feature, not a limitation. A general-purpose RAG system optimized for everything is often not great at any one thing. A corpus built around a specific domain, with relevance scoring tuned to that domain, produces dramatically better retrieval.


Challenges and Lessons Learned

1. Chunk size is a first-class design decision

  • Too small (< 200 tokens): loses context, retrieval precision drops.
  • Too large (> 700 tokens): embeds multiple ideas, relevance scores become noisy.
  • Sweet spot for academic text: 400–512 tokens, aligned to paragraph boundaries.

2. Pre-compute expensive operations at ingestion time

Running Ollama at query time for every search would make the system unusable. Pre-computing relevancia and research_hint at ingestion means query-time latency stays under 200ms for /search and under 8 seconds for the full /consulta RAG pipeline.

3. The citation must be traceable to a page

citation_apa without page and doc_id is not an academic citation — it's a claim. Storing both means every output of the system can be verified by opening the original PDF.

4. Agents need failure modes, not just success paths

When the corpus doesn't support a claim, the agents mark it [PENDING] rather than hallucinating support. This requires explicit prompting: "if you cannot find corpus evidence, write [PENDING — verify corpus] and do not fabricate a source."

5. Multi-language corpora need language-aware retrieval

Embedding models trained primarily on English produce lower-quality embeddings for Spanish and Portuguese text. Using nomic-embed-text (multilingual-capable) and adding language filters (idioma field) to all queries significantly improves cross-language retrieval.


Results

Since deploying this system:

  • Zero hallucinated citations in output documents (validated by /auditor on every deliverable)
  • Research proposal drafts in 3–4 hours vs. 2–3 days manually
  • Evidence density: 1.8 citations per paragraph on average (up from ~0.4 with manual research)
  • Corpus growth: The system accepts new PDFs at any time — ingestion takes ~30 seconds per document

What's Next

A few things on the roadmap:

  • Citation graph: linking documents by shared references to surface clusters of highly-cited foundational work
  • Temporal drift detection: alerting when a claim that was [✓ VERIFIED] against the corpus in 2024 now has contradicting evidence from 2025+ additions
  • Cross-corpus queries: combining this corpus with data from the Grafpin platform itself (geolocated urban art documentation) to answer questions that require both academic literature and empirical field data

The Core Idea

The goal was never to automate research. It was to make the evidence layer of research reliable.

LLMs are extraordinary at synthesis, argument construction, and adapting tone for different audiences. They are unreliable as bibliographic databases. The solution isn't to avoid LLMs — it's to give them a trustworthy, queryable knowledge base so they can focus on what they're actually good at.

A domain-specific RAG corpus, properly indexed and exposed through a well-designed API, changes the LLM's role from source of truth to engine of reasoning. That's a much better place for both the model and the researcher to be.


The system described here is operational and actively used for academic research on urban art preservation. The platform it supports, Grafpin, documents geolocated street art in cities across Latin America.

Stack: Python · FastAPI · ChromaDB · MariaDB · Ollama · Claude Code · pdfplumber
