OpenKB: Compiling Documents into a Continuously Updated LLM Knowledge Base

OpenKB is an open-source LLM knowledge base CLI from VectifyAI. It compiles PDFs, Word files, Markdown, web pages, and other raw documents into a Markdown wiki with summaries, concept pages, and cross-links, while using PageIndex for long-document and multimodal retrieval.

It is not a traditional RAG system that chunks documents, vectorizes them, and then stitches context back together at query time. Instead, OpenKB first compiles raw documents into a structured wiki: document summaries, concept pages, cross-references, follow-up queries, and lint checks. In practice, it feels more like a knowledge-base CLI that keeps organizing your material over time.

Project link: https://github.com/VectifyAI/OpenKB

The Short Version

OpenKB is worth watching for three reasons:

  1. It outputs the knowledge base as ordinary Markdown files instead of locking it inside a dedicated database.
  2. It uses PageIndex for long PDFs, focusing on vector-database-free retrieval for long documents.
  3. It emphasizes “knowledge compilation”: the LLM generates summaries, concept pages, and cross-links instead of retrieving from scratch on every question.

That makes OpenKB better suited to long-term knowledge accumulation: paper reading, project documentation, internal company materials, technical standards, product research, and personal knowledge bases.

It is not a universal replacement. If you need high-concurrency online Q&A, complex permissions, a web admin console, enterprise audit trails, or large-scale multi-tenancy, OpenKB currently looks more like a developer tool and knowledge-base prototype than a complete enterprise knowledge platform.

What OpenKB Is

OpenKB stands for Open Knowledge Base.

It works as a CLI: it converts, organizes, summarizes, and writes documents into a set of wiki files. The official README describes it directly: OpenKB uses LLMs to compile raw documents into a structured, interlinked wiki-style knowledge base, with PageIndex providing vectorless long-document retrieval.

Supported input formats include:

  • PDF
  • Word
  • Markdown
  • PowerPoint
  • HTML
  • Excel
  • Plain text
  • Other formats that markitdown can convert

The generated knowledge base lives under wiki/ and mainly includes:

  • index.md: knowledge base overview
  • log.md: operation timeline
  • AGENTS.md: knowledge base structure and maintenance instructions
  • sources/: converted source text
  • summaries/: summaries for each document
  • concepts/: cross-document concept pages
  • explorations/: saved query results
  • reports/: lint reports

The biggest benefit of this design is transparency. You can open the Markdown files directly instead of only receiving answers through a black-box retrieval interface.

How It Differs from Traditional RAG

A typical traditional RAG pipeline looks like this:

  1. Chunk the documents.
  2. Generate embeddings.
  3. Store them in a vector database.
  4. Retrieve relevant chunks at query time.
  5. Feed those chunks to the LLM to generate an answer.

That workflow is mature and works well for Q&A systems. But it has one problem: the knowledge itself does not really accumulate. Every question repeats the work of finding chunks, assembling context, and generating an answer.
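The five steps above can be compressed into a toy sketch. To stay dependency-free, it uses bag-of-words cosine similarity in place of real embeddings and a plain list in place of a vector database; every name here is illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words vector (step 2)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Steps 1-3: chunk, embed, store.
chunks = ["PageIndex builds tree indexes", "OpenKB compiles documents into a wiki"]
store = [(c, embed(c)) for c in chunks]

# Step 4: retrieve the most similar chunk at query time.
query = "how does OpenKB compile documents"
best = max(store, key=lambda item: cosine(embed(query), item[1]))[0]

# Step 5 would feed `best` to the LLM; here we just show the retrieved context.
print(best)
```

Note that nothing persists between questions except the raw chunk store; every query starts the retrieve-assemble-answer cycle over from scratch, which is the limitation the next section addresses.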

OpenKB is closer to “organize first, ask later”:

  1. Documents enter raw/.
  2. Short documents are converted to Markdown with markitdown.
  3. Long PDFs go through PageIndex to produce tree indexes and summaries.
  4. The LLM generates document summaries.
  5. The LLM reads existing concept pages and creates or updates cross-document concepts.
  6. The knowledge base index, log, and cross-links are updated.

As a result, adding one document does more than create another searchable file. It may update a dozen wiki pages. Knowledge is written into concept pages and connected to existing material.

This is closer to how humans maintain knowledge bases: when new material arrives, you do not just archive it; you update topic pages, summarize differences, and add references.
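The routing decision in steps 2-3 can be sketched with a page-count threshold, mirroring the `pageindex_threshold` setting in OpenKB's config; the function name and return values are hypothetical, not OpenKB internals:

```python
PAGEINDEX_THRESHOLD = 20  # pages; mirrors the pageindex_threshold config key

def route_document(name: str, pages: int) -> str:
    """Decide how a new document in raw/ should be processed (sketch)."""
    if name.endswith(".pdf") and pages >= PAGEINDEX_THRESHOLD:
        return "pageindex"   # long PDF: build a tree index plus summaries
    return "markitdown"      # short document: convert straight to Markdown

print(route_document("survey.pdf", 180))  # long PDF
print(route_document("notes.md", 3))      # short document
```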

What PageIndex Solves

Long documents have always been difficult for RAG and LLM knowledge bases.

If you simply split a long PDF into many chunks, several problems appear:

  • Chapter relationships are lost.
  • Tables, images, and footnotes are hard to handle.
  • Retrieved snippets are too fragmented, so answers lack global structure.
  • Even with a large context window, stuffing an entire document into the prompt is not ideal.
  • Long summary chains can compress away important details.

OpenKB uses PageIndex for long PDFs. According to the project description, PageIndex builds tree indexes and summaries for long documents, letting the LLM reason over the document tree instead of reading the whole document directly.

The focus is not on retrieving the few text snippets with the highest vector similarity, but on helping the model use the document's hierarchy to find relevant content. For research reports, papers, manuals, prospectuses, and compliance documents, this direction makes a lot of sense.

By default, OpenKB uses the open-source PageIndex locally. If you need OCR, complex PDF handling, or faster structure generation, you can configure PAGEINDEX_API_KEY to use PageIndex Cloud.
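A toy sketch of tree-guided retrieval: instead of ranking flat chunks, the model walks a summary tree and descends into the branch whose summary best matches the question. Keyword overlap stands in for the LLM's judgment here, and the node structure is illustrative, not PageIndex's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    title: str
    summary: str
    children: list = field(default_factory=list)

def descend(node: Node, question: str) -> Node:
    """Walk the tree, entering the child whose summary overlaps the question most."""
    words = set(question.lower().split())
    while node.children:
        node = max(node.children, key=lambda c: len(words & set(c.summary.lower().split())))
    return node

doc = Node("Report", "annual report", [
    Node("Ch. 2 Risk", "risk factors and compliance exposure", [
        Node("2.3", "currency risk in overseas revenue"),
    ]),
    Node("Ch. 5 Outlook", "growth outlook and product roadmap"),
])

print(descend(doc, "what is the currency risk exposure").title)
```

The key property is that chapter structure guides the search: the model only ever reads summaries along one path, never the entire document.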

Install and Quick Start

Install OpenKB with pip:

pip install openkb

Or install the latest GitHub version:

pip install git+https://github.com/VectifyAI/OpenKB.git

For editable source installation:

git clone https://github.com/VectifyAI/OpenKB.git
cd OpenKB
pip install -e .

Create a knowledge base directory:

mkdir my-kb && cd my-kb
openkb init

Add documents:

openkb add paper.pdf
openkb add ~/papers/

Ask a question:

openkb query "What are the main findings?"

Start an interactive chat:

openkb chat

If you want OpenKB to process new files automatically, use watch mode:

openkb watch

After that, drop files into raw/, and OpenKB will update the wiki automatically.
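Watch mode can be approximated with a stdlib polling loop: scan raw/ for files not seen before and hand each new one to the add pipeline. This is an illustrative sketch of the behavior, not how openkb watch is actually implemented:

```python
import time
from pathlib import Path

def watch(raw_dir: str, handle, interval: float = 2.0, rounds: int = 1):
    """Poll raw_dir and call handle(path) once for each new file (sketch)."""
    seen = set()
    for _ in range(rounds):           # a real watcher would loop forever
        for path in sorted(Path(raw_dir).glob("*")):
            if path.is_file() and path not in seen:
                seen.add(path)
                handle(path)          # e.g. hand off to the 'add' pipeline
        time.sleep(interval)
    return seen

Path("raw").mkdir(exist_ok=True)
Path("raw/demo.md").write_text("hello")
found = watch("raw", handle=lambda p: print("adding", p.name), interval=0.0)
```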

LLM Configuration

OpenKB uses LiteLLM to support multiple model providers, including OpenAI, Claude, and Gemini.

You can set the model during initialization, or configure it in .openkb/config.yaml:

model: gpt-5.4
language: en
pageindex_threshold: 20

Model names follow LiteLLM’s provider/model format. OpenAI models can omit the provider prefix:

model: gpt-5.4

Models from providers such as Anthropic and Gemini are usually written with the provider prefix:

model: anthropic/claude-sonnet-4-6

model: gemini/gemini-3.1-pro-preview
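The provider/model convention can be illustrated with a small helper (hypothetical, not part of OpenKB or LiteLLM): a bare name defaults to OpenAI, while a prefixed name selects another provider:

```python
def parse_model(name: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model)."""
    provider, sep, model = name.partition("/")
    if not sep:                      # no prefix: treat as an OpenAI model
        return "openai", name
    return provider, model

print(parse_model("gpt-5.4"))
print(parse_model("anthropic/claude-sonnet-4-6"))
```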

Put the API key in .env:

LLM_API_KEY=your_llm_api_key

If you enable PageIndex Cloud, add:

PAGEINDEX_API_KEY=your_pageindex_api_key

Common Commands

OpenKB’s commands are developer-friendly:

  • openkb init: initialize a knowledge base.
  • openkb add <file_or_dir>: add a file or directory.
  • openkb remove <doc>: remove a document and clean up related wiki pages, images, registry entries, and PageIndex state.
  • openkb query "question": ask a one-off question against the knowledge base.
  • openkb chat: enter a multi-turn conversation.
  • openkb watch: monitor raw/ and update automatically.
  • openkb lint: check knowledge base structure and content health.
  • openkb list: list indexed documents and concepts.
  • openkb status: show knowledge base statistics.

openkb chat is better than openkb query for continuous exploration. It supports session resume, session listing, deletion, and slash commands such as /status, /list, /add <path>, /save, and /lint.

Why a Markdown Wiki Matters

Many knowledge-base tools are painful because of migration cost.

Once material enters a proprietary database, index, or format, it becomes hard to inspect, edit, back up, or migrate directly. OpenKB writes the result as ordinary Markdown, which makes it naturally compatible with existing tools.

The most direct use is opening wiki/ in Obsidian:

  • Summary pages can be read directly.
  • Concept pages can connect through [[wikilinks]].
  • Graph view can show relationships between knowledge items.
  • Query results can be saved to explorations/.
  • AGENTS.md can define how the knowledge base should be maintained.

That makes OpenKB more than a Q&A tool. It can become a knowledge-organizing pipeline for individuals or teams.
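Because the wiki is plain Markdown, the link structure is easy to inspect with ordinary tools. A short sketch that extracts [[wikilinks]] from a set of pages and builds the graph Obsidian would display (the page names are made up for illustration):

```python
import re

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # target before any | alias or # heading

def link_graph(pages: dict) -> dict:
    """Map each page name to the set of pages it links to."""
    return {name: set(WIKILINK.findall(text)) for name, text in pages.items()}

pages = {
    "concepts/rag": "Contrast with [[concepts/pageindex]] and [[summaries/openkb-readme]].",
    "concepts/pageindex": "Tree search over long PDFs; see [[concepts/rag|RAG]].",
}
print(link_graph(pages))
```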

Best-Fit Scenarios

OpenKB is especially useful for:

  • Reading papers and technical reports.
  • Organizing project documentation.
  • Building product research archives.
  • Creating documentation knowledge bases around open-source projects.
  • Organizing internal policies, meeting notes, and explanatory documents.
  • Maintaining a personal Obsidian knowledge base automatically.
  • Structuring long PDFs, PPTs, Word files, and web materials.

If you often face piles of documents and want more than “ask one question, get one answer,” OpenKB’s direction is a good fit: it gradually turns material into a browsable, reusable, and traceable knowledge base.

What to Watch Out For

First, OpenKB depends on LLM quality.

Summaries, concept pages, and cross-links are generated by models. Stronger models usually produce more stable knowledge compilation; weaker models may struggle with concept extraction, contradiction detection, and cross-document synthesis.

Second, estimate cost early.

If you import many long documents at once, LLM calls may become expensive. Start with a small dataset, check the output structure and quality, and then expand.

Third, the generated wiki still needs human review.

OpenKB can organize material, but it does not automatically guarantee factual correctness. Important knowledge bases still need humans to review summaries, concept pages, and references.

Fourth, be careful with sensitive material.

If you use cloud LLMs or PageIndex Cloud, pay attention to privacy, trade secrets, and compliance requirements. For internal materials, confirm the model provider, data retention policy, and access boundaries first.

Fifth, it is currently more of a CLI tool.

The roadmap mentions a future Web UI, database-backed storage, support for large collections, and hierarchical concept indexing. At this stage, if teammates are not comfortable with the command line, there is still some adoption friction.

Relationship with Obsidian, NotebookLM, and Enterprise RAG

OpenKB and Obsidian are best understood as an “automatic organization layer” plus a “reading and editing layer.”

Obsidian is good for humans to write, edit, browse, and link notes. OpenKB is good for turning raw documents into a wiki that can enter Obsidian.

OpenKB and NotebookLM differ more around local control and open file formats.

NotebookLM is more direct for quickly asking questions and generating summaries after dropping in materials. OpenKB is better for developers who want the organized result to remain in a local directory and continue evolving as Markdown.

OpenKB does not replace enterprise RAG; it complements it.

Enterprise RAG cares more about permissions, auditability, service deployment, access isolation, monitoring, and stable throughput. OpenKB is better for building a readable, editable, long-lived knowledge layer. If you later build online Q&A, the wiki generated by OpenKB can also become a higher-quality corpus.

If you want to try OpenKB, start like this:

  1. Create a test knowledge base directory.
  2. Add 3 to 5 documents on the same topic.
  3. Run openkb add.
  4. Open wiki/ and inspect the summaries and concept pages.
  5. Ask a few specific questions with openkb query.
  6. Run openkb lint to check knowledge-base health.
  7. Open wiki/ in Obsidian and see whether the link graph is meaningful.
  8. Once quality looks good, import a larger document collection.

Do not throw in hundreds of files at the beginning. First see whether it understands your material type well, especially tables, images, long PDFs, and multi-document concept merging.

Summary

OpenKB’s value is that it moves an LLM knowledge base one step earlier than “assemble context at query time”: organize the material into a wiki first, then ask questions, chat, lint, and keep maintaining that wiki.

This direction is not right for every Q&A system, but it is well suited to knowledge work that needs long-term accumulation. Markdown files, Obsidian compatibility, PageIndex long-document handling, multi-model support, and a CLI workflow combine into a useful tool for developers and research-oriented users.

If you have many PDFs, reports, web pages, papers, and project documents, OpenKB is worth trying. It may not immediately replace a mature enterprise knowledge base, but it can become a practical entry point for organizing material: first turn documents into readable, linked, traceable knowledge, then let the LLM work on top of that knowledge.
