LLM Wiki — your compounding knowledge base
Knowledge, compiled.
LLM Wiki Tools turns your documents into a living, cross-referenced wiki that Claude builds and maintains for you. Upload your sources once — the LLM wiki thickens with every question you ask.
§ 01 · Overview
Transformer research, compiled
12 sources · 47 pages · Updated 2 hours ago
This wiki tracks research on transformer architectures and their scaling properties. It synthesizes findings across 12 sources.
The relationship between model size and performance follows predictable scaling laws — loss decreases as a power law of compute, dataset size, and parameters.
fig. 1 — a compiled overview page, written by Claude, cited back to sources
§ What is an LLM Wiki
The LLM Wiki is the artifact.
Most AI tools treat your documents as a search index. You ask, they retrieve fragments, you get an answer, and nothing sticks. The next question starts from zero.
An LLM Wiki inverts that. The LLM reads your sources once, then writes. It produces entity pages, concept pages, summaries, diagrams. It maintains cross-references, flags contradictions, and files good answers back as new pages. The LLM wiki compounds.
Three layers, clearly separated —
Raw sources
Your PDFs, papers, notes. Immutable. The LLM reads, never rewrites.
The wiki
LLM-authored markdown — summaries, entity pages, cross-references, tables, diagrams.
The tools
MCP tools — search, read, write, delete — so Claude orchestrates the whole thing.
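Concretely, the tool layer described above might expose something like the following MCP-style tool list. The four verbs (search, read, write, delete) come from this page; the exact tool names and parameter shapes below are illustrative assumptions, not the product's actual schema.

```json
{
  "tools": [
    {
      "name": "wiki_search",
      "description": "Full-text search across raw sources and wiki pages",
      "input": { "query": "string" }
    },
    {
      "name": "wiki_read",
      "description": "Read a source document or wiki page by path",
      "input": { "path": "string" }
    },
    {
      "name": "wiki_write",
      "description": "Create or update a wiki page (raw sources stay read-only)",
      "input": { "path": "string", "markdown": "string" }
    },
    {
      "name": "wiki_delete",
      "description": "Delete a wiki page (never a raw source)",
      "input": { "path": "string" }
    }
  ]
}
```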
§ LLM Wiki features
An LLM Wiki built for knowledge that sticks.
Every feature serves one idea: that the work of reading should become a durable LLM wiki — not a chat transcript that disappears.
Compounding
Knowledge that thickens with use
Every source you add and every question you ask makes the wiki richer. Answers are synthesized once, filed back as pages, and never re-derived from scratch.
MCP-native
Claude writes the wiki directly
Connect Claude.ai via MCP. It gains tools to search, read, write, and lint across your vault — so conversation becomes authorship.
Any source
PDFs, office docs, articles, notes
Drop in PDFs with OCR, Word, PowerPoint, Markdown, web captures. Claude reads them in place; you browse the original in a full viewer.
Citations
Every claim keeps its paper trail
Wiki pages link back to the exact page ranges and passages that support them. Open a footnote, open the PDF — the evidence is one click away.
Source fidelity
Raw sources stay immutable
Claude reads from sources, never rewrites them. The wiki is a layer on top — summaries, entity pages, cross-references — all revisable, all reversible.
Lint & maintain
Stale claims get flagged, not forgotten
Ask for a health check. Claude finds contradictions between sources, orphan pages, missing cross-references, and suggests what to read next.
§ How to use LLM Wiki
Four steps to your LLM Wiki, then conversation.
The only thing you do yourself is curate sources and ask questions. Claude handles the rest of the LLM wiki.
- 01
Create a wiki
Sign up and spin up a fresh knowledge base. You can keep several — one per project, one per subject.
- 02
Upload your sources
Drop PDFs, papers, lecture notes, decks, or articles into the vault. OCR and office conversion happen for you.
- 03
Connect Claude via MCP
Copy the MCP connector config from Settings. Paste it into Claude.ai as a custom connector and sign in.
- 04
Tell Claude what to do
"Read these papers and build an entity page for each method." The wiki compiles itself — with cross-references and citations.
{
"mcpServers": {
"llmwiki": {
"url": "https://llmwiki.tools/mcp",
"transport": "sse"
}
}
}
}
“Ingest these three papers and update the attention page.”
“Write me an entity page on Mistral with citations.”
“Lint the wiki — what contradicts what?”
§ Why it works
“The tedious part of maintaining a knowledge base is not the reading or the thinking. It is the bookkeeping — updating cross-references, keeping summaries current, noting when new data contradicts old claims.”
Humans abandon personal wikis because maintenance cost grows faster than value. LLMs don't get bored, don't forget to update a cross-reference, and can touch fifteen files in one pass. The wiki stays alive because the cost of keeping it alive drops to near zero.
§ FAQ
Plain answers.
Everything we get asked about the LLM Wiki idea, Karpathy's original proposal, and how Claude builds your LLM wiki through MCP.
What is an LLM Wiki?
An LLM Wiki is a personal, compounding knowledge base that a large language model builds and maintains on your behalf. You curate the sources and ask the questions; the LLM reads, synthesizes, writes entity and concept pages, and keeps cross-references current. Unlike RAG over raw chunks, the wiki is a persistent, structured artifact — so knowledge accrues instead of being re-derived every query.
How is this different from RAG or ChatGPT with file uploads?
RAG retrieves fragments on each query and forgets them after the turn. ChatGPT with files holds context for a single session. An LLM Wiki is different because the LLM writes a durable layer — summaries, concept pages, contradictions it found — that survives across sessions and compounds. Your second question benefits from the synthesis the LLM did for your first.
How does it connect to Claude?
LLM Wiki connects to Claude through the Model Context Protocol (MCP). You use Claude.ai as the interface; our MCP server gives it typed tools to search, read, write, and delete inside your vault. Your conversations happen in Claude; your LLM wiki lives with us on LLM Wiki Tools.
What file types are supported?
PDFs (with OCR for scanned papers), Word documents, PowerPoint decks, Markdown, plain text, and web articles. Office formats are converted in an isolated sandbox. You can always open the original in a built-in viewer alongside the wiki page it informed.
Can Claude modify my source documents?
Raw sources are immutable — Claude reads from them but cannot modify them. The wiki layer is where Claude writes: it creates markdown pages, edits them with precise replacements, and appends. You can review, revert, or delete anything at any time.
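As a sketch of what one of those precise replacements could look like as a tool call: the tool name `wiki_edit` and its field names here are hypothetical, invented for illustration; only the read-only-sources / writable-wiki split is from this page.

```json
{
  "tool": "wiki_edit",
  "input": {
    "path": "wiki/attention.md",
    "old_text": "Multi-head attention uses 8 heads.",
    "new_text": "Multi-head attention uses 8 heads in the base model and 16 in the large model."
  }
}
```

An exact-match `old_text` / `new_text` pair keeps every edit reviewable and reversible, which is what makes "revert anything at any time" cheap to support.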
Is my data used to train models?
No. Your sources and wiki pages are stored in your account and used only to serve your own conversations with Claude. Claude.ai follows its own data handling policies for the messages you send it.
Do I need to install anything locally?
No. Sign up at LLM Wiki Tools, upload sources, paste one MCP config into Claude.ai, and you are finished. There is no local setup required to run your LLM Wiki.
How does this relate to Karpathy's LLM Wiki proposal?
LLM Wiki Tools is a faithful implementation of Andrej Karpathy's LLM Wiki proposal — the three-layer model of raw sources, the LLM wiki, and the tools, with ingest / query / lint as the core operations of every LLM wiki.
§ Begin
Your wiki is waiting to be written.
Sign up, upload a handful of sources, connect Claude — then watch it compile what you know.