### What problem does this PR solve?

This PR performs a comprehensive check of LightRAG to verify the correctness of its implementation. It **does not** involve Entity Resolution and Community Reports Generation. An example using the default entity types and the General chunking method shows good results in both runtime and effectiveness. Moreover, response caching is enabled so that failed tasks can be resumed.

[The-Necklace.pdf](https://github.com/user-attachments/files/22042432/The-Necklace.pdf)

After:

```bash
Begin at: Fri, 29 Aug 2025 16:48:03 GMT
Duration: 222.31 s
Progress:
16:48:04 Task has been received.
16:48:06 Page(1~7): Start to parse.
16:48:06 Page(1~7): OCR started
16:48:08 Page(1~7): OCR finished (1.89s)
16:48:11 Page(1~7): Layout analysis (3.72s)
16:48:11 Page(1~7): Table analysis (0.00s)
16:48:11 Page(1~7): Text merged (0.00s)
16:48:11 Page(1~7): Finish parsing.
16:48:12 Page(1~7): Generate 7 chunks
16:48:12 Page(1~7): Embedding chunks (0.29s)
16:48:12 Page(1~7): Indexing done (0.04s). Task done (7.84s)
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: She had no dresses, no je...
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: Her husband, already half...
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: And this life lasted ten ...
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: Then she asked, hesitatin...
16:49:30 Completed processing for f421fb06849e11f0bdd32724b93a52b2: She had no dresses, no je... after 1 gleanings, 21985 tokens.
16:49:30 Entities extraction of chunk 3 1/7 done, 12 nodes, 13 edges, 21985 tokens.
16:49:40 Completed processing for f421fb06849e11f0bdd32724b93a52b2: Finally, she replied, hes... after 1 gleanings, 22584 tokens.
16:49:40 Entities extraction of chunk 5 2/7 done, 19 nodes, 19 edges, 22584 tokens.
16:50:02 Completed processing for f421fb06849e11f0bdd32724b93a52b2: Then she asked, hesitatin... after 1 gleanings, 24610 tokens.
16:50:02 Entities extraction of chunk 0 3/7 done, 16 nodes, 28 edges, 24610 tokens.
16:50:03 Completed processing for f421fb06849e11f0bdd32724b93a52b2: And this life lasted ten ... after 1 gleanings, 24031 tokens.
16:50:04 Entities extraction of chunk 1 4/7 done, 24 nodes, 22 edges, 24031 tokens.
16:50:14 Completed processing for f421fb06849e11f0bdd32724b93a52b2: So they begged the jewell... after 1 gleanings, 24635 tokens.
16:50:14 Entities extraction of chunk 6 5/7 done, 27 nodes, 26 edges, 24635 tokens.
16:50:29 Completed processing for f421fb06849e11f0bdd32724b93a52b2: Her husband, already half... after 1 gleanings, 25758 tokens.
16:50:29 Entities extraction of chunk 2 6/7 done, 25 nodes, 35 edges, 25758 tokens.
16:51:35 Completed processing for f421fb06849e11f0bdd32724b93a52b2: The Necklace By Guy de Ma... after 1 gleanings, 27491 tokens.
16:51:35 Entities extraction of chunk 4 7/7 done, 39 nodes, 37 edges, 27491 tokens.
16:51:35 Entities and relationships extraction done, 147 nodes, 177 edges, 171094 tokens, 198.58s.
16:51:35 Entities merging done, 0.01s.
16:51:35 Relationships merging done, 0.01s.
16:51:35 ignored 7 relations due to missing entities.
16:51:35 generated subgraph for doc f421fb06849e11f0bdd32724b93a52b2 in 198.68 seconds.
16:51:35 run_graphrag f421fb06849e11f0bdd32724b93a52b2 graphrag_task_lock acquired
16:51:35 set_graph removed 0 nodes and 0 edges from index in 0.00s.
16:51:35 Get embedding of nodes: 9/147
16:51:35 Get embedding of nodes: 109/147
16:51:37 Get embedding of edges: 9/170
16:51:37 Get embedding of edges: 109/170
16:51:40 set_graph converted graph change to 319 chunks in 4.21s.
16:51:40 Insert chunks: 4/319
16:51:40 Insert chunks: 104/319
16:51:40 Insert chunks: 204/319
16:51:40 Insert chunks: 304/319
16:51:40 set_graph added/updated 147 nodes and 170 edges from index in 0.53s.
16:51:40 merging subgraph for doc f421fb06849e11f0bdd32724b93a52b2 into the global graph done in 4.79 seconds.
16:51:40 Knowledge Graph done (204.29s)
```

Before:

```bash
Begin at: Fri, 29 Aug 2025 17:00:47 GMT
processDuration: 173.38 s
Progress:
17:00:49 Task has been received.
17:00:51 Page(1~7): Start to parse.
17:00:51 Page(1~7): OCR started
17:00:53 Page(1~7): OCR finished (1.82s)
17:00:57 Page(1~7): Layout analysis (3.64s)
17:00:57 Page(1~7): Table analysis (0.00s)
17:00:57 Page(1~7): Text merged (0.00s)
17:00:57 Page(1~7): Finish parsing.
17:00:57 Page(1~7): Generate 7 chunks
17:00:57 Page(1~7): Embedding chunks (0.31s)
17:00:57 Page(1~7): Indexing done (0.03s). Task done (7.88s)
17:00:57 created task graphrag
17:01:00 Task has been received.
17:02:17 Entities extraction of chunk 1 1/7 done, 9 nodes, 9 edges, 10654 tokens.
17:02:31 Entities extraction of chunk 2 2/7 done, 12 nodes, 13 edges, 11066 tokens.
17:02:33 Entities extraction of chunk 4 3/7 done, 9 nodes, 10 edges, 10433 tokens.
17:02:42 Entities extraction of chunk 5 4/7 done, 11 nodes, 14 edges, 11290 tokens.
17:02:52 Entities extraction of chunk 6 5/7 done, 13 nodes, 15 edges, 11039 tokens.
17:02:55 Entities extraction of chunk 3 6/7 done, 14 nodes, 13 edges, 11466 tokens.
17:03:32 Entities extraction of chunk 0 7/7 done, 19 nodes, 18 edges, 13107 tokens.
17:03:32 Entities and relationships extraction done, 71 nodes, 89 edges, 79055 tokens, 149.66s.
17:03:32 Entities merging done, 0.01s.
17:03:32 Relationships merging done, 0.01s.
17:03:32 ignored 1 relations due to missing entities.
17:03:32 generated subgraph for doc b1d9d3b6848711f0aacd7ddc0714c4d3 in 149.69 seconds.
17:03:32 run_graphrag b1d9d3b6848711f0aacd7ddc0714c4d3 graphrag_task_lock acquired
17:03:32 set_graph removed 0 nodes and 0 edges from index in 0.00s.
17:03:32 Get embedding of nodes: 9/71
17:03:33 Get embedding of edges: 9/88
17:03:34 set_graph converted graph change to 161 chunks in 2.27s.
17:03:34 Insert chunks: 4/161
17:03:34 Insert chunks: 104/161
17:03:34 set_graph added/updated 71 nodes and 88 edges from index in 0.28s.
17:03:34 merging subgraph for doc b1d9d3b6848711f0aacd7ddc0714c4d3 into the global graph done in 2.60 seconds.
17:03:34 Knowledge Graph done (153.18s)
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
- [x] Performance Improvement
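The resumability claim rests on response caching: each LLM call is keyed by its input, so a re-run after a failure replays cache hits instead of re-spending tokens. Below is a minimal, hypothetical sketch of that idea in Python; `ResponseCache` and `cached_chat` are illustrative names, not RAGFlow's actual implementation:

```python
import hashlib
import sqlite3

# Illustrative sketch only: responses are cached under a hash of the model
# name and prompt, so a resumed task replays completed LLM calls from the
# cache instead of re-spending tokens.

class ResponseCache:
    def __init__(self, path: str = "llm_cache.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)"
        )

    @staticmethod
    def _key(model: str, prompt: str) -> str:
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model: str, prompt: str):
        row = self.conn.execute(
            "SELECT response FROM cache WHERE key = ?", (self._key(model, prompt),)
        ).fetchone()
        return row[0] if row else None

    def put(self, model: str, prompt: str, response: str) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?)",
            (self._key(model, prompt), response),
        )
        self.conn.commit()


def cached_chat(cache: ResponseCache, llm, model: str, prompt: str) -> str:
    """Return a cached response when available; otherwise call and record."""
    hit = cache.get(model, prompt)
    if hit is not None:
        return hit  # cache hit: a resumed run skips this LLM call entirely
    response = llm(prompt)  # `llm` is any callable wrapping the chat model
    cache.put(model, prompt, response)
    return response
```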
Document | Roadmap | Twitter | Discord | Demo
📕 Table of Contents
- 💡 What is RAGFlow?
- 🎮 Demo
- 📌 Latest Updates
- 🌟 Key Features
- 🔎 System Architecture
- 🎬 Get Started
- 🔧 Configurations
- 🔧 Build a docker image without embedding models
- 🔧 Build a docker image including embedding models
- 🔨 Launch service from source for development
- 📚 Documentation
- 📜 Roadmap
- 🏄 Community
- 🙌 Contributing
💡 What is RAGFlow?
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities backed by well-founded citations from complex, variously formatted data.
🎮 Demo
Try our demo at https://demo.ragflow.io.
🔥 Latest Updates
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
- 2025-08-01 Supports agentic workflow and MCP.
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query.
- 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
- 2025-02-28 Combined with Internet search (Tavily), supports Deep Research-like reasoning for any LLM.
- 2024-12-18 Upgrades Document Layout Analysis model in DeepDoc.
- 2024-08-22 Supports text-to-SQL statements through RAG.
🎉 Stay Tuned
⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟
🌟 Key Features
🍭 "Quality in, quality out"
- Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
- Finds "needle in a data haystack" of literally unlimited tokens.
🍱 Template-based chunking
- Intelligent and explainable.
- Plenty of template options to choose from.
🌱 Grounded citations with reduced hallucinations
- Visualization of text chunking to allow human intervention.
- Quick view of the key references and traceable citations to support grounded answers.
🍔 Compatibility with heterogeneous data sources
- Supports Word, slides, Excel, txt, images, scanned copies, structured data, web pages, and more.
🛀 Automated and effortless RAG workflow
- Streamlined RAG orchestration that caters to both personal use and large businesses.
- Configurable LLMs as well as embedding models.
- Multiple recall paired with fused re-ranking.
- Intuitive APIs for seamless integration with business.
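As a taste of those APIs, here is a minimal sketch using the ragflow-sdk Python package; treat the client name, `create_dataset`, and `upload_documents` as assumptions to verify against the SDK version you install:

```python
# pip install ragflow-sdk
# Hypothetical sketch; method names and signatures may differ across versions.
from ragflow_sdk import RAGFlow

# Connect to a running RAGFlow server with an API key from the web UI.
rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://localhost:9380")

# Create a knowledge base (dataset) and upload a document for indexing.
dataset = rag.create_dataset(name="demo_kb")
with open("The-Necklace.pdf", "rb") as f:
    dataset.upload_documents([{"display_name": "The-Necklace.pdf", "blob": f.read()}])
```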
🔎 System Architecture
🎬 Get Started
📝 Prerequisites
- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
- gVisor: Required only if you intend to use the code executor (sandbox) feature of RAGFlow.
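A quick way to verify these prerequisites on a Linux host (plain coreutils and the Docker CLI; nothing RAGFlow-specific):

```bash
docker --version          # expect Docker >= 24.0.0
docker compose version    # expect Docker Compose >= v2.26.1
nproc                     # expect >= 4 CPU cores
free -g                   # expect >= 16 GB total RAM
df -h .                   # expect >= 50 GB free disk
```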
Tip: If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
🚀 Start up the server
- Ensure `vm.max_map_count` >= 262144:

  To check the value of `vm.max_map_count`:

  ```bash
  $ sysctl vm.max_map_count
  ```

  Reset `vm.max_map_count` to a value of at least 262144 if it is not:

  ```bash
  # In this case, we set it to 262144:
  $ sudo sysctl -w vm.max_map_count=262144
  ```

  This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in /etc/sysctl.conf accordingly:

  ```
  vm.max_map_count=262144
  ```
- Clone the repo:

  ```bash
  $ git clone https://github.com/infiniflow/ragflow.git
  ```
- Start up the server using the pre-built Docker images:

  Caution: All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. If you are on an ARM64 platform, follow this guide to build a Docker image compatible with your system.

  The command below downloads the `v0.20.4-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.4-slim`, update the `RAGFLOW_IMAGE` variable accordingly in docker/.env before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.4` for the full edition `v0.20.4`.

  ```bash
  $ cd ragflow/docker

  # Use CPU for embedding and DeepDoc tasks:
  $ docker compose -f docker-compose.yml up -d

  # To use GPU to accelerate embedding and DeepDoc tasks:
  # docker compose -f docker-compose-gpu.yml up -d
  ```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|---|---|---|---|
| v0.20.4 | ≈9 | ✔️ | Stable release |
| v0.20.4-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | ✔️ | Unstable nightly build |
| nightly-slim | ≈2 | ❌ | Unstable nightly build |
- Check the server status after having the server up and running:

  ```bash
  $ docker logs -f ragflow-server
  ```

  The following output confirms a successful launch of the system:

  ```
      ____   ___    ______ ______ __
     / __ \ /   |  / ____// ____// /____  _      __
    / /_/ // /| | / / __ / /_   / // __ \| | /| / /
   / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
  /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

  * Running on all addresses (0.0.0.0)
  ```

  If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anormal` error because, at that moment, your RAGFlow may not be fully initialized.
- In your web browser, enter the IP address of your server and log in to RAGFlow.

  With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number), as the default HTTP serving port `80` can be omitted when using the default configurations.
- In service_conf.yaml.template, select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

  See llm_api_key_setup for more information.
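  For orientation, that block has roughly this shape (an illustrative fragment; the factory shown and its keys are assumptions, so check the template shipped with your release):

  ```yaml
  user_default_llm:
    factory: 'OpenAI'   # the LLM provider selected above
    api_key: 'sk-...'   # the corresponding API key
    base_url: ''        # optional custom endpoint
  ```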
The show is on!
🔧 Configurations
When it comes to system configurations, you will need to manage the following files:
- .env: Keeps the fundamental setups for the system, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- service_conf.yaml.template: Configures the back-end services. The environment variables in this file will be automatically populated when the Docker container starts. Any environment variables set within the Docker container will be available for use, allowing you to customize service behavior based on the deployment environment.
- docker-compose.yml: The system relies on docker-compose.yml to start up.

The ./docker/README file provides a detailed description of the environment settings and service configurations, which can be used as `${ENV_VARS}` in the service_conf.yaml.template file.
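As an example of that pattern, a value set in .env surfaces in service_conf.yaml.template through shell-style substitution (a hypothetical fragment; the real template's keys may differ):

```yaml
mysql:
  user: '${MYSQL_USER:-root}'      # falls back to "root" if the variable is unset
  password: '${MYSQL_PASSWORD}'    # populated from docker/.env when the container starts
  host: '${MYSQL_HOST:-mysql}'
```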
To update the default HTTP serving port (80), go to docker-compose.yml and change `80:80` to `<YOUR_SERVING_PORT>:80`.
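For instance, to serve on host port 8080 (an arbitrary example value; the `ragflow` service name is shown for illustration), the mapping would read:

```yaml
services:
  ragflow:
    ports:
      - "8080:80"   # <YOUR_SERVING_PORT>:80
```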
Updates to the above configurations require a reboot of all containers to take effect:
```bash
$ docker compose -f docker-compose.yml up -d
```
Switch doc engine from Elasticsearch to Infinity
RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to Infinity, follow these steps:
- Stop all running containers:

  ```bash
  $ docker compose -f docker/docker-compose.yml down -v
  ```

  Warning: `-v` will delete the docker container volumes, and the existing data will be cleared.
- Set `DOC_ENGINE` in docker/.env to `infinity`.
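  The relevant line in docker/.env then reads:

  ```bash
  DOC_ENGINE=infinity
  ```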
- Start the containers:

  ```bash
  $ docker compose -f docker-compose.yml up -d
  ```

  Warning: Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
🔧 Build a Docker image without embedding models
This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
🔧 Build a Docker image including embedding models
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```
🔨 Launch service from source for development
- Install `uv` and `pre-commit`, or skip this step if they are already installed:

  ```bash
  pipx install uv pre-commit
  ```
- Clone the source code and install Python dependencies:

  ```bash
  git clone https://github.com/infiniflow/ragflow.git
  cd ragflow/
  uv sync --python 3.10 --all-extras  # install RAGFlow dependent python modules
  uv run download_deps.py
  pre-commit install
  ```
- Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

  ```bash
  docker compose -f docker/docker-compose-base.yml up -d
  ```

  Add the following line to /etc/hosts to resolve all hosts specified in docker/.env to 127.0.0.1:

  ```
  127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
  ```
- If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:

  ```bash
  export HF_ENDPOINT=https://hf-mirror.com
  ```
- If your operating system does not have jemalloc, please install it as follows:

  ```bash
  # Ubuntu
  sudo apt-get install libjemalloc-dev
  # CentOS
  sudo yum install jemalloc
  ```
- Launch the backend service:

  ```bash
  source .venv/bin/activate
  export PYTHONPATH=$(pwd)
  bash docker/launch_backend_service.sh
  ```
- Install frontend dependencies:

  ```bash
  cd web
  npm install
  ```
- Launch the frontend service:

  ```bash
  npm run dev
  ```
- Stop RAGFlow front-end and back-end services after development is complete:

  ```bash
  pkill -f "ragflow_server.py|task_executor.py"
  ```
📚 Documentation
📜 Roadmap
See the RAGFlow Roadmap 2025
🏄 Community
🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.

