qmd is a search tool for local Markdown documents, with AI Agents as its main target users.
It solves a specific problem: when a project contains many .md documents, AI coding assistants often do not know which file to read, which section to cite, or which instructions are current. Full-text grep can find keywords, but it does not capture meaning well. Dumping all documentation into the context wastes context-window space and easily drags in irrelevant content.
The idea behind qmd is to index Markdown documents first, then return the most relevant snippets through a search interface for AI to use. It can be used as a command-line tool, integrated through an SDK, or exposed as an MCP Server for clients that support MCP.
What Problem It Solves
Real projects usually have more than one or two README files.
You may have:
- Architecture notes
- API documentation
- Development conventions
- Deployment procedures
- Architecture decision records
- Troubleshooting notes
- Requirement documents
- AI usage instructions
- Toolchain notes and reminders
Humans can browse documents through directories, but AI Agents need a clear retrieval entry point. Otherwise, they may:
- Read the wrong document
- Miss key constraints
- Use outdated instructions
- Put irrelevant content into context
- Invent rules from general experience instead of the project's documents
This is where qmd is useful. It turns local Markdown documents into a searchable knowledge source, so AI can search first when it needs context, then answer or act based on matched snippets.
Search Approach
The README says qmd combines several retrieval methods:
- BM25 keyword search
- Vector search
- LLM reranking
BM25 is good for clear keywords. If you search for a function name, configuration key, error code, or file name, it is usually direct and effective.
Vector search is better for semantic questions. For example, if you ask “how does this project handle permission validation,” the documentation may not contain that exact phrase, but it may contain related descriptions about authentication, access control, and role checks.
LLM reranking is used to reorder candidate results. The first two steps find potentially relevant content, and the model then judges which snippets best match the current question.
This combination is more suitable for AI Agents than plain keyword search, because Agent questions are often task intentions rather than fixed keywords.
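As a rough illustration of how these stages can combine (this is not qmd's actual implementation), the sketch below scores a toy corpus with a minimal BM25, stands in a fake embedding ranking for the vector stage, and fuses the two lists with reciprocal rank fusion; a reranking model would then reorder the fused candidates:

```python
import math
import re
from collections import Counter

docs = [
    "Authentication middleware reads the session token and role checks.",
    "Deployment uses docker compose with a staging profile.",
    "Access control: API routes validate permissions with role checks.",
]

def tokens(text):
    return re.findall(r"\w+", text.lower())

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Minimal BM25 over word tokens (illustration only)."""
    toks = [tokens(d) for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    n = len(docs)
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for term in tokens(query):
            df = sum(1 for other in toks if term in other)
            if df == 0:
                continue  # term appears nowhere; contributes nothing
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(t) / avgdl)
            )
        scores.append(s)
    return scores

def rrf(rankings, k=60):
    """Reciprocal rank fusion over several ranked lists of doc indices."""
    fused = Counter()
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            fused[idx] += 1.0 / (k + rank + 1)
    return [idx for idx, _ in fused.most_common()]

query = "permission checks for api routes"
scores = bm25_scores(query, docs)
bm25_rank = sorted(range(len(docs)), key=lambda i: -scores[i])
vector_rank = [2, 1, 0]  # stand-in for an embedding-similarity ranking
top = rrf([bm25_rank, vector_rank])[0]
print(docs[top])  # the access-control snippet tops both lists
```

The fusion step is what lets a keyword hit and a semantic hit reinforce each other even when neither stage alone is confident.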
Why Markdown
Markdown is the most common documentation format in development projects.
It is simple enough to store in Git and structured enough to include headings, lists, code blocks, links, and tables. For AI, Markdown is also easier to parse than PDFs, web snapshots, or screenshots.
Because qmd focuses on Markdown, it can process developer documentation more directly:
- Split content by headings and paragraphs
- Preserve code blocks
- Preserve document paths
- Return snippets suitable for citation
- Let the Agent know which document an answer comes from
This is more stable than letting AI scan a repository at random, and it uses far less context than pasting every document into a prompt at once.
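The heading-based splitting described above can be sketched in a few lines. This is not qmd's actual chunker, just a minimal illustration that keeps the source path with each snippet and refuses to split inside a code fence:

```python
import re

FENCE = "`" * 3  # a literal triple backtick, built this way for readability

def split_markdown(path, text):
    """Split Markdown into heading-scoped snippets that keep their source path."""
    sections = []
    current = {"path": path, "heading": "(top)", "lines": []}
    in_fence = False
    for line in text.splitlines():
        if line.lstrip().startswith(FENCE):
            in_fence = not in_fence  # never split inside a code block
        if not in_fence and re.match(r"^#{1,6}\s", line):
            if current["lines"] or current["heading"] != "(top)":
                sections.append(current)
            current = {"path": path, "heading": line.lstrip("#").strip(), "lines": []}
        else:
            current["lines"].append(line)
    sections.append(current)
    return sections

doc = "\n".join([
    "# Setup",
    "Install deps.",
    "",
    "## Testing",
    FENCE,
    "pytest -q",
    FENCE,
])
for s in split_markdown("docs/dev.md", doc):
    print(f'{s["path"]} | {s["heading"]} | {len(s["lines"])} lines')
```

Because every snippet carries its path and heading, whatever consumes the search results can cite "docs/dev.md, section Testing" instead of an anonymous blob of text.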
Three Entry Points
qmd provides three entry points: CLI, SDK, and MCP Server.
1. CLI
The CLI is suitable for direct terminal use and for scripts.
You can index a documentation directory and then search related content with commands. For developers, the CLI is the easiest way to validate the tool: first see whether it can find the correct documents, then consider integrating it into more complex workflows.
This kind of tool is useful inside local projects. For example, before changing code you can search design documents; before debugging, search troubleshooting notes; before writing an API, search API conventions.
2. SDK
The SDK is suitable for integrating qmd into your own tools.
If you are building an internal development assistant, documentation Q&A system, code review bot, or project knowledge base, you can call the search capability through the SDK instead of asking users to run commands directly.
The SDK gives more control over:
- Search directories
- Query content
- Number of returned results
- Result format
- Whether to pass results to a model for summarization
This fits scenarios that need deeper integration.
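qmd's real SDK surface is not reproduced here, but a hypothetical client along these lines shows the knobs listed above in action. Every name in this sketch (`MarkdownSearch`, `SearchOptions`, `search`) is illustrative, and the matching is a naive keyword filter rather than the hybrid pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class SearchOptions:
    directories: list = field(default_factory=lambda: ["docs/"])
    top_k: int = 5
    fmt: str = "snippets"  # or "paths"

@dataclass
class Snippet:
    path: str
    heading: str
    text: str

class MarkdownSearch:
    """Hypothetical client; a real SDK would query a prebuilt index."""

    def __init__(self, index):
        self.index = index  # list of Snippet

    def search(self, query, opts=None):
        opts = opts or SearchOptions()
        hits = [
            s for s in self.index
            # restrict to trusted directories, then do a naive keyword match
            if any(s.path.startswith(d) for d in opts.directories)
            and any(w in s.text.lower() for w in query.lower().split())
        ][: opts.top_k]
        return [s.path for s in hits] if opts.fmt == "paths" else hits

index = [
    Snippet("docs/api.md", "Errors", "All API errors use problem-details responses."),
    Snippet("notes/tmp.md", "Scratch", "random api ideas"),
]
client = MarkdownSearch(index)
print(client.search("api errors", SearchOptions(directories=["docs/"], fmt="paths")))
```

The directory filter is doing real work here: the scratch note matches the query too, but it never reaches the results because it lives outside the trusted documentation root.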
3. MCP Server
MCP is the most valuable entry point for AI Agents.
Through MCP Server, clients that support MCP can call qmd as a document search tool. This lets an Agent search local Markdown documents before acting, instead of guessing project rules.
A typical workflow could be:
- The user asks AI to modify a feature
- AI calls qmd to search related design documents
- qmd returns the most relevant Markdown snippets
- AI modifies code based on those document constraints
This is more natural than manually pasting all rules into a new session, and it is better suited to long-term projects.
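For reference, an MCP tool invocation is a JSON-RPC 2.0 message with method "tools/call". The tool name "search" and the argument shape below are assumptions for illustration; qmd's actual tool names may differ:

```python
import json

# Shape of an MCP tool invocation (JSON-RPC 2.0, method "tools/call").
# The tool name "search" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {"query": "design constraints for the billing feature"},
    },
}
print(json.dumps(request, indent=2))
```

The MCP client builds and sends this message on the Agent's behalf; from the Agent's point of view, qmd is just another tool it can call before touching code.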
Suitable Scenarios
qmd is suitable for:
- Projects with many Markdown documents
- AI Agents that often need to look up project rules
- Teams that want AI answers to cite local documents
- Documentation spread across multiple directories
- Reusing the same retrieval capability across CLI, SDK, and MCP
- Reducing AI coding assistants’ tendency to guess project conventions
- Connecting local knowledge bases to Claude Desktop, Claude Code, or other MCP clients
If your project only has one short README, directly asking AI to read the file is enough.
But if the documentation has grown to dozens or hundreds of files, or if you want the Agent to search documents before acting, this type of indexing tool becomes meaningful.
Difference from grep
Tools such as grep and rg are excellent for exact search.
If you know you need DATABASE_URL, authMiddleware, 404, or docker compose, keyword search is usually the fastest.
qmd is better when you do not know the exact words.
For example, you may ask:
- What is the release process for this project?
- What conventions apply when adding a new API?
- Was the caching strategy documented before?
- Which documents should AI read before changing code?
- Where is the design background for a module?
These questions usually require semantic retrieval rather than matching one word. The BM25 + vector + reranking combination in qmd is intended to make these questions find the right context more easily.
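The difference is easy to demonstrate. In the sketch below, a tiny hand-written table of related terms plays the role that embeddings play in a real system; the point is only that "release process" can match a document that uses neither word:

```python
import re

# Crude semantic stand-in: a hand-written related-terms table instead of
# real embeddings. Illustration only.
RELATED = {
    "release": {"deploy", "deployment", "ship"},
    "process": {"steps", "procedure"},
}

doc = "Deployment steps: tag the commit, build the image, promote to prod."

def exact_match(query, text):
    """grep-style: every query word must appear verbatim."""
    words = set(re.findall(r"\w+", text.lower()))
    return all(w in words for w in query.lower().split())

def fuzzy_match(query, text):
    """semantic-ish: a query word may also match a related term."""
    words = set(re.findall(r"\w+", text.lower()))
    return all(
        w in words or RELATED.get(w, set()) & words
        for w in query.lower().split()
    )

print(exact_match("release process", doc))  # False: neither word appears
print(fuzzy_match("release process", doc))  # True: deployment/steps are related
```

Embeddings generalize this idea: instead of a fixed synonym table, vector similarity scores how close the query and snippet are in meaning.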
Relationship with RAG
qmd can be seen as a lightweight RAG component for Markdown documents.
It does not try to build a full Q&A system for you. It focuses on one step: finding relevant document snippets. How those snippets are used afterward can be handled by CLI, SDK, an MCP client, or your own Agent workflow.
This positioning is practical. Many projects do not need a large knowledge base system; they only need AI to search local documents more accurately and quickly, then bring the results back into the current task.
Notes for Use
First, documentation quality still matters.
A retrieval tool can only find existing content. If the documents are outdated, duplicated, or contradictory, AI may still receive wrong context. Before connecting qmd to an Agent, clean up the key documents first.
Second, do not make the index scope too broad.
Indexing every Markdown file in the repository is not always better. Dependency documentation, temporary notes, and old draft solutions can pollute results. A better approach is to define which directories are trusted documentation sources.
Third, search results should preserve sources.
When AI uses document snippets, it should know which file and section they came from. This makes human review traceable and reduces the risk of “this looks like a document conclusion, but it is only a model summary.”
Fourth, do not replace human judgment completely.
qmd can improve context recall quality, but it is not a replacement for the source of truth. Important changes still require current code, test results, and the latest requirements.
Suitable Teams
If your team has already started putting AI Agents into daily development workflows, tools like qmd can be valuable.
They are especially suitable for teams that:
- Write a lot of documentation
- Have a long project history
- Need both new people and AI to quickly understand context
- Maintain architecture decision records
- Have many Markdown convention documents
- Want AI to check rules before modifying code
Its goal is not to make AI all-knowing. It is to make AI guess less and look things up more.
Final Thought
The value of qmd is that it turns local Markdown documents into a search entry point that AI Agents can reliably call.
When project documentation moves from “instructions for humans” to “a context source searchable by both humans and AI,” AI coding assistants can follow project rules more easily.