<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Knowledge Base on KnightLi Blog</title>
        <link>https://www.knightli.com/en/tags/knowledge-base/</link>
        <description>Recent content in Knowledge Base on KnightLi Blog</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Wed, 15 Apr 2026 22:09:25 +0800</lastBuildDate><atom:link href="https://www.knightli.com/en/tags/knowledge-base/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>RAGFlow Project Notes: Features and Usage of an Open-Source RAG Engine</title>
        <link>https://www.knightli.com/en/2026/04/15/ragflow-rag-engine-guide/</link>
        <pubDate>Wed, 15 Apr 2026 22:09:25 +0800</pubDate>
        
        <guid>https://www.knightli.com/en/2026/04/15/ragflow-rag-engine-guide/</guid>
        <description>&lt;p&gt;&lt;code&gt;RAGFlow&lt;/code&gt; is an open-source RAG engine from &lt;code&gt;infiniflow&lt;/code&gt;. Its goal is not merely to provide a thin “upload documents and ask questions” shell, but to bring document parsing, chunking, retrieval, reranking, citation tracing, model configuration, agent capabilities, and API integration into one complete workflow.&lt;/p&gt;
&lt;p&gt;If you are building an enterprise knowledge base, document Q&amp;amp;A, a support assistant, internal information retrieval, or you want to give an LLM a more reliable context layer, RAGFlow is one of the open-source options worth serious attention.&lt;/p&gt;
&lt;h2 id=&#34;01-what-problem-ragflow-solves&#34;&gt;01 What Problem RAGFlow Solves
&lt;/h2&gt;&lt;p&gt;Most RAG systems run into three common issues:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Document parsing is unstable, especially for PDFs, scanned files, tables, images, and complex layouts.&lt;/li&gt;
&lt;li&gt;Chunking strategy is opaque, so retrieval may look correct while the actual context is incomplete.&lt;/li&gt;
&lt;li&gt;Answers lack trustworthy citations, making it hard for users to verify where the response came from.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;RAGFlow focuses on exactly these problems. The project README emphasizes &lt;code&gt;Deep document understanding&lt;/code&gt;, template-based chunking, chunk visualization, citation grounding, and multi-path retrieval with reranking. In other words, it cares more about “high-quality input leads to high-quality answers” than simply wiring a vector database to a chat UI.&lt;/p&gt;
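&lt;p&gt;To make “multi-path retrieval with reranking” concrete, here is a minimal, self-contained sketch of the idea: score each document along a keyword path and a vector path, fuse the two scores, and rank by the result. All names, weights, and data here are illustrative; this is not RAGFlow’s implementation.&lt;/p&gt;

```python
# Conceptual sketch of multi-path retrieval with score fusion.
# NOT RAGFlow's implementation; all names and weights are illustrative.

def fused_search(query_terms, query_vec, docs, alpha=0.5, top_k=2):
    """Combine a keyword path and a vector path, then rank by fused score."""
    def keyword_score(doc):
        # Fraction of query terms that appear in the document text.
        words = doc["text"].lower().split()
        return sum(t in words for t in query_terms) / len(query_terms)

    def vector_score(doc):
        # Cosine similarity between query and document embeddings.
        dot = sum(a * b for a, b in zip(query_vec, doc["vec"]))
        norm_q = sum(a * a for a in query_vec) ** 0.5
        norm_d = sum(b * b for b in doc["vec"]) ** 0.5
        return dot / (norm_q * norm_d)

    scored = [
        (alpha * keyword_score(d) + (1 - alpha) * vector_score(d), d["id"])
        for d in docs
    ]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

docs = [
    {"id": "a", "text": "reset your password in settings", "vec": [1.0, 0.0]},
    {"id": "b", "text": "billing and invoices overview", "vec": [0.0, 1.0]},
    {"id": "c", "text": "how to reset a forgotten password", "vec": [0.9, 0.1]},
]
print(fused_search(["reset", "password"], [1.0, 0.0], docs))  # ['a', 'c']
```

&lt;p&gt;In a real system each path would query a full-text index and a vector store respectively, and a dedicated rerank model would replace the simple fused score; the structure of “retrieve along several paths, then rerank” stays the same.&lt;/p&gt;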
&lt;h2 id=&#34;02-core-features&#34;&gt;02 Core Features
&lt;/h2&gt;&lt;h3 id=&#34;1-deep-document-understanding&#34;&gt;1. Deep Document Understanding
&lt;/h3&gt;&lt;p&gt;RAGFlow can extract knowledge from complex unstructured data. The README lists formats such as Word, PPT, Excel, TXT, images, scanned documents, structured data, and web pages.&lt;/p&gt;
&lt;p&gt;This matters a lot for enterprise knowledge bases. Real-world material is rarely clean Markdown. It is usually a mix of contracts, reports, tables, scanned PDFs, product manuals, screenshots, and web content. If parsing quality is weak, retrieval and LLM answers will both suffer.&lt;/p&gt;
&lt;h3 id=&#34;2-template-based-chunking&#34;&gt;2. Template-Based Chunking
&lt;/h3&gt;&lt;p&gt;RAGFlow provides template-based chunking. The value here is that chunking is not a black box; different document types can use different strategies.&lt;/p&gt;
&lt;p&gt;For example, articles, papers, tables, Q&amp;amp;A documents, image explanations, and contract clauses all need different chunk boundaries and granularity. Template-based chunking helps reduce problems like broken sentences, lost table context, and separated headings and body text.&lt;/p&gt;
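&lt;p&gt;The idea can be sketched in a few lines: dispatch each document type to its own splitting rule. The rules below are invented for illustration and are not RAGFlow’s actual chunking templates.&lt;/p&gt;

```python
# Illustrative sketch of template-based chunking: different document types
# get different splitting rules. These rules are invented for illustration
# and are not RAGFlow's actual templates.

def chunk_qa(text):
    # Q&A documents: keep each question/answer pair together in one chunk.
    pairs = [p.strip() for p in text.split("Q:") if p.strip()]
    return ["Q: " + p for p in pairs]

def chunk_article(text, max_words=50):
    # Articles: split on paragraphs, merging short ones up to a word budget.
    chunks, current = [], []
    for para in text.split("\n\n"):
        current.append(para)
        if sum(len(p.split()) for p in current) >= max_words:
            chunks.append("\n\n".join(current))
            current = []
    if current:
        chunks.append("\n\n".join(current))
    return chunks

TEMPLATES = {"qa": chunk_qa, "article": chunk_article}

def chunk(text, doc_type):
    # Pick the splitting strategy by document type instead of one-size-fits-all.
    return TEMPLATES[doc_type](text)
```

&lt;p&gt;The point of the dispatch table is exactly what the section describes: a Q&amp;amp;A pair or a table row is a natural retrieval unit, while an article tolerates size-based merging, so the boundary rules should differ per type.&lt;/p&gt;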
&lt;h3 id=&#34;3-traceable-citations&#34;&gt;3. Traceable Citations
&lt;/h3&gt;&lt;p&gt;RAGFlow emphasizes grounded citations, meaning answers can be traced back to source passages. It also offers chunk visualization, making it easier for people to inspect and adjust parsing and chunking results.&lt;/p&gt;
&lt;p&gt;This is especially important in production. Internal enterprise Q&amp;amp;A is not only about producing something that “looks right”; it also has to be verifiable. For policy, compliance, finance, technical documents, and customer support content, citations and traceability are close to mandatory.&lt;/p&gt;
&lt;h3 id=&#34;4-automated-rag-workflow&#34;&gt;4. Automated RAG Workflow
&lt;/h3&gt;&lt;p&gt;RAGFlow turns the RAG lifecycle into a more complete workflow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a knowledge base&lt;/li&gt;
&lt;li&gt;Upload or sync data&lt;/li&gt;
&lt;li&gt;Parse documents&lt;/li&gt;
&lt;li&gt;Review and adjust chunks&lt;/li&gt;
&lt;li&gt;Configure LLM and embedding models&lt;/li&gt;
&lt;li&gt;Run multi-path retrieval and reranking&lt;/li&gt;
&lt;li&gt;Build chat assistants&lt;/li&gt;
&lt;li&gt;Integrate through APIs into business systems&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That makes it closer to a RAG platform than a single library. For teams, both the UI and the API matter: non-engineers can maintain the knowledge base, while engineers can integrate the capability into existing systems.&lt;/p&gt;
&lt;h3 id=&#34;5-agent-mcp-and-workflow-extensions&#34;&gt;5. Agent, MCP, and Workflow Extensions
&lt;/h3&gt;&lt;p&gt;Recent RAGFlow releases already include agentic workflows, MCP support, agent memory, and code-execution components. That suggests the project is no longer limited to traditional knowledge-base Q&amp;amp;A and is also moving toward agent-oriented scenarios.&lt;/p&gt;
&lt;p&gt;A typical pattern is that an agent can use RAGFlow as a reliable enterprise knowledge layer: retrieve from the knowledge base when it needs context, generate answers with citations, and combine that with tools or workflow steps when necessary.&lt;/p&gt;
&lt;h2 id=&#34;03-basic-usage-flow&#34;&gt;03 Basic Usage Flow
&lt;/h2&gt;&lt;p&gt;According to the official quickstart documentation, the common usage path for RAGFlow can be summarized in the following steps.&lt;/p&gt;
&lt;h3 id=&#34;1-prepare-the-environment&#34;&gt;1. Prepare the Environment
&lt;/h3&gt;&lt;p&gt;The basic requirements listed in the official README are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPU &amp;gt;= 4 cores&lt;/li&gt;
&lt;li&gt;RAM &amp;gt;= 16 GB&lt;/li&gt;
&lt;li&gt;Disk &amp;gt;= 50 GB&lt;/li&gt;
&lt;li&gt;Docker &amp;gt;= 24.0.0&lt;/li&gt;
&lt;li&gt;Docker Compose &amp;gt;= v2.26.1&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you want to use the sandboxed code executor, you also need &lt;code&gt;gVisor&lt;/code&gt;. Another practical note is that the official Docker images mainly target x86 platforms; for ARM64, the project documentation recommends building the image yourself.&lt;/p&gt;
&lt;h3 id=&#34;2-clone-the-project&#34;&gt;2. Clone the Project
&lt;/h3&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;git clone https://github.com/infiniflow/ragflow.git
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;cd&lt;/span&gt; ragflow/docker
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id=&#34;3-check-vmmax_map_count&#34;&gt;3. Check &lt;code&gt;vm.max_map_count&lt;/code&gt;
&lt;/h3&gt;&lt;p&gt;RAGFlow deployment depends on components such as Elasticsearch or OpenSearch, so on Linux you usually need to verify:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sysctl vm.max_map_count
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If the value is below &lt;code&gt;262144&lt;/code&gt;, you can set it temporarily:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo sysctl -w vm.max_map_count&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;m&#34;&gt;262144&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If you want the change to persist after reboot, add it to &lt;code&gt;/etc/sysctl.conf&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;4-start-with-docker-compose&#34;&gt;4. Start with Docker Compose
&lt;/h3&gt;&lt;p&gt;In CPU mode, you can start the services directly:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;docker compose -f docker-compose.yml up -d
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If you want GPU acceleration for DeepDoc tasks, the README shows enabling &lt;code&gt;DEVICE=gpu&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt; before startup:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sed -i &lt;span class=&#34;s1&#34;&gt;&amp;#39;1i DEVICE=gpu&amp;#39;&lt;/span&gt; .env
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;docker compose -f docker-compose.yml up -d
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Then inspect the logs:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;docker logs -f docker-ragflow-cpu-1
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Once the services are ready, open the machine’s address in your browser. Under the default configuration, that is typically:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;http://IP_OF_YOUR_MACHINE
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id=&#34;5-configure-model-api-keys&#34;&gt;5. Configure Model API Keys
&lt;/h3&gt;&lt;p&gt;RAGFlow needs LLM and embedding model configuration. The README mentions choosing the default LLM factory in &lt;code&gt;service_conf.yaml.template&lt;/code&gt; and updating the corresponding &lt;code&gt;API_KEY&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In practice, you need to configure models according to your provider:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Chat model&lt;/li&gt;
&lt;li&gt;Embedding model&lt;/li&gt;
&lt;li&gt;Rerank model&lt;/li&gt;
&lt;li&gt;Multimodal model, if you want to understand images inside PDFs or DOCX files&lt;/li&gt;
&lt;/ul&gt;
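&lt;p&gt;As a rough illustration, the default-model section of &lt;code&gt;service_conf.yaml.template&lt;/code&gt; has approximately this shape. The exact keys and accepted provider names depend on your RAGFlow version, so treat this as an assumption and check the template in your own checkout:&lt;/p&gt;

```yaml
# Illustrative sketch only -- verify field names against your version's
# service_conf.yaml.template before use.
user_default_llm:
  factory: "OpenAI"          # default LLM provider ("factory")
  api_key: "sk-REPLACE_ME"   # the API_KEY mentioned in the README
  base_url: ""               # override for self-hosted or proxy endpoints
```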
&lt;h3 id=&#34;6-create-the-knowledge-base-and-upload-documents&#34;&gt;6. Create the Knowledge Base and Upload Documents
&lt;/h3&gt;&lt;p&gt;After the service starts, the typical workflow is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Log in to the Web UI.&lt;/li&gt;
&lt;li&gt;Create a dataset or knowledge base.&lt;/li&gt;
&lt;li&gt;Upload documents or configure a data source sync.&lt;/li&gt;
&lt;li&gt;Wait for parsing to finish.&lt;/li&gt;
&lt;li&gt;Inspect chunk results and adjust them when necessary.&lt;/li&gt;
&lt;li&gt;Create a chat assistant and attach the knowledge base.&lt;/li&gt;
&lt;li&gt;Test answer quality and citation sources.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you need to integrate with a business system, you can continue with the RAGFlow API or SDK and connect retrieval and chat capabilities to your own application.&lt;/p&gt;
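&lt;p&gt;For that integration step, a call to the HTTP API generally looks like the sketch below: a bearer token issued in the web UI plus a JSON body. The endpoint path and payload fields follow the general shape of RAGFlow’s HTTP API reference, but verify them against the docs for your version before relying on them.&lt;/p&gt;

```python
# Sketch of calling the RAGFlow HTTP API from your own service.
# Endpoint path and payload are assumptions based on the general shape of
# RAGFlow's HTTP API reference; check the docs for your version.
import json

BASE_URL = "http://IP_OF_YOUR_MACHINE"   # your RAGFlow host
API_KEY = "ragflow-REPLACE_ME"           # issued in the RAGFlow web UI

def build_create_dataset_request(name):
    """Assemble the request for creating a dataset (knowledge base)."""
    return {
        "url": BASE_URL + "/api/v1/datasets",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_KEY,
        },
        "body": json.dumps({"name": name}),
    }

req = build_create_dataset_request("product-manuals")
print(req["url"])
# To actually send it (requires a running RAGFlow instance):
# import requests
# resp = requests.post(req["url"], headers=req["headers"], data=req["body"])
```

&lt;p&gt;The same pattern (bearer token, JSON payload) applies to the chat and retrieval endpoints, which is what connects RAGFlow’s retrieval quality to your own application.&lt;/p&gt;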
&lt;h2 id=&#34;04-suitable-scenarios&#34;&gt;04 Suitable Scenarios
&lt;/h2&gt;&lt;p&gt;RAGFlow fits these kinds of needs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enterprise internal knowledge-base Q&amp;amp;A&lt;/li&gt;
&lt;li&gt;Product manuals, technical documentation, and FAQ retrieval&lt;/li&gt;
&lt;li&gt;Customer support and pre-sales assistants&lt;/li&gt;
&lt;li&gt;Traceable Q&amp;amp;A over contracts, reports, and policy documents&lt;/li&gt;
&lt;li&gt;Unified handling of multi-format materials&lt;/li&gt;
&lt;li&gt;Teams that want both UI-based maintenance and API integration&lt;/li&gt;
&lt;li&gt;Systems that want to use RAG as the context layer for agents&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is especially suitable when document formats are complex, citations matter, and people want to inspect or intervene in parsing results.&lt;/p&gt;
&lt;h2 id=&#34;05-what-to-watch-out-for&#34;&gt;05 What to Watch Out For
&lt;/h2&gt;&lt;p&gt;First, RAGFlow is not a lightweight script. It has real infrastructure requirements. The official recommendation is at least 4 CPU cores, 16 GB RAM, and 50 GB disk. If you only want Q&amp;amp;A over a small amount of Markdown, a full platform may be unnecessary.&lt;/p&gt;
&lt;p&gt;Second, document quality still matters. RAGFlow can improve parsing and chunking, but it cannot magically make low-quality, outdated, or contradictory source material reliable. Knowledge-base governance still matters before production.&lt;/p&gt;
&lt;p&gt;Third, model selection directly affects quality. Embedding, rerank, chat, and multimodal model choices all influence retrieval and answer quality. RAGFlow gives you the workflow, but the final result still depends on data, models, and tuning.&lt;/p&gt;
&lt;p&gt;Fourth, production deployments need careful attention to permissions and data security. Enterprise knowledge bases often contain internal documents, so deployment model, access control, logs, API keys, and model-provider data policy all need to be designed in advance.&lt;/p&gt;
&lt;h2 id=&#34;06-quick-take&#34;&gt;06 Quick Take
&lt;/h2&gt;&lt;p&gt;RAGFlow’s strength is that it turns the hardest parts of RAG into platform capabilities: complex document parsing, explainable chunking, citation grounding, multi-path retrieval, reranking, model configuration, Web UI, API access, and agent extensions.&lt;/p&gt;
&lt;p&gt;If what you need is a verifiable, maintainable enterprise knowledge base that can connect to business systems, RAGFlow is more complete than a “vector database plus a simple chat UI” setup. On the other hand, if you only need small-scale personal Q&amp;amp;A over simple data, a lighter RAG framework may be more resource-efficient.&lt;/p&gt;
&lt;h2 id=&#34;related-links&#34;&gt;Related Links
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;GitHub project: &lt;a class=&#34;link&#34; href=&#34;https://github.com/infiniflow/ragflow&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/infiniflow/ragflow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Official docs: &lt;a class=&#34;link&#34; href=&#34;https://ragflow.io/docs/dev/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://ragflow.io/docs/dev/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Online demo: &lt;a class=&#34;link&#34; href=&#34;https://cloud.ragflow.io&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://cloud.ragflow.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        </item>
        
    </channel>
</rss>
