DOC: Miscellaneous UI and editorial updates (#7324)

### What problem does this PR solve?

### Type of change

- [x] Documentation Update

docs/faq.mdx (63 changed lines)

@@ -26,6 +26,38 @@ The "garbage in garbage out" status quo remains unchanged despite the fact that

---

### Differences between RAGFlow full edition and RAGFlow slim edition?

Each RAGFlow release is available in two editions:

- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.18.0-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.18.0`

---

### Which embedding models can be deployed locally?

RAGFlow offers two Docker image editions, `v0.18.0-slim` and `v0.18.0`:

- `infiniflow/ragflow:v0.18.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.18.0`: The RAGFlow Docker image with embedding models including:
  - Built-in embedding models:
    - `BAAI/bge-large-zh-v1.5`
    - `BAAI/bge-reranker-v2-m3`
    - `maidalun1020/bce-embedding-base_v1`
    - `maidalun1020/bce-reranker-base_v1`
  - Embedding models that will be downloaded once you select them in the RAGFlow UI:
    - `BAAI/bge-base-en-v1.5`
    - `BAAI/bge-large-en-v1.5`
    - `BAAI/bge-small-en-v1.5`
    - `BAAI/bge-small-zh-v1.5`
    - `jinaai/jina-embeddings-v2-base-en`
    - `jinaai/jina-embeddings-v2-small-en`
    - `nomic-ai/nomic-embed-text-v1.5`
    - `sentence-transformers/all-MiniLM-L6-v2`

---

### Where to find the version of RAGFlow? How to interpret it?

You can find the RAGFlow version number on the **System** page of the UI:

@@ -55,6 +87,14 @@ Where:

---

### Differences between demo.ragflow.io and a locally deployed open-source RAGFlow service?

demo.ragflow.io demonstrates the capabilities of RAGFlow Enterprise. Its DeepDoc models are pre-trained using proprietary data, and it offers much more sophisticated team permission controls. Essentially, demo.ragflow.io serves as a preview of RAGFlow's forthcoming SaaS (Software as a Service) offering.

You can deploy an open-source RAGFlow service and call it from a Python client or through RESTful APIs. However, this is not supported on demo.ragflow.io.
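
For example, a minimal sketch of calling a locally deployed RAGFlow service from Python, assuming the `ragflow-sdk` package and an API key generated on the RAGFlow UI (adjust the host and port to your deployment):

```python
from ragflow_sdk import RAGFlow  # pip install ragflow-sdk

# Connect to your own RAGFlow server (not demo.ragflow.io).
rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://localhost:9380")

# List the knowledge bases (datasets) visible to this API key.
for dataset in rag.list_datasets():
    print(dataset.name)
```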

---

### Why does it take longer for RAGFlow to parse a document than LangChain?

We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision models. This contributes to the additional time required.

@ -73,29 +113,6 @@ We officially support x86 CPU and nvidia GPU. While we also test RAGFlow on ARM6
|
||||
|
||||
---
|
||||
|
||||
### Which embedding models can be deployed locally?
|
||||
|
||||
RAGFlow offers two Docker image editions, `v0.18.0-slim` and `v0.18.0`:
|
||||
|
||||
- `infiniflow/ragflow:v0.18.0-slim` (default): The RAGFlow Docker image without embedding models.
|
||||
- `infiniflow/ragflow:v0.18.0`: The RAGFlow Docker image with embedding models including:
|
||||
- Built-in embedding models:
|
||||
- `BAAI/bge-large-zh-v1.5`
|
||||
- `BAAI/bge-reranker-v2-m3`
|
||||
- `maidalun1020/bce-embedding-base_v1`
|
||||
- `maidalun1020/bce-reranker-base_v1`
|
||||
- Embedding models that will be downloaded once you select them in the RAGFlow UI:
|
||||
- `BAAI/bge-base-en-v1.5`
|
||||
- `BAAI/bge-large-en-v1.5`
|
||||
- `BAAI/bge-small-en-v1.5`
|
||||
- `BAAI/bge-small-zh-v1.5`
|
||||
- `jinaai/jina-embeddings-v2-base-en`
|
||||
- `jinaai/jina-embeddings-v2-small-en`
|
||||
- `nomic-ai/nomic-embed-text-v1.5`
|
||||
- `sentence-transformers/all-MiniLM-L6-v2`
|
||||
|
||||
---
|
||||
|
||||
### Do you offer an API for integration with third-party applications?
|
||||
|
||||
The corresponding APIs are now available. See the [RAGFlow HTTP API Reference](./references/http_api_reference.md) or the [RAGFlow Python API Reference](./references/python_api_reference.md) for more information.
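
As a quick, hedged illustration of the Python API (names and parameters follow the Python API Reference; treat this as a sketch rather than a drop-in integration):

```python
from ragflow_sdk import RAGFlow

rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://localhost:9380")

# Create a knowledge base (dataset) and add a document to it.
dataset = rag.create_dataset(name="third_party_docs")
with open("manual.txt", "rb") as f:
    dataset.upload_documents([{"display_name": "manual.txt", "blob": f.read()}])
```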

@@ -71,7 +71,7 @@ As mentioned earlier, the **Begin** component is indispensable for an agent. Sti

### Is the uploaded file in a knowledge base?

-No. Files uploaded to an agent as input are not stored in a knowledge base and hence will not be processed using RAGFlow's built-in OCR, DLR or TSR models, or chunked using RAGFlow's built-in chunk methods.
+No. Files uploaded to an agent as input are not stored in a knowledge base and hence will not be processed using RAGFlow's built-in OCR, DLR, or TSR models, or chunked using RAGFlow's built-in chunking methods.

### How to upload a webpage or file from a URL?

@@ -22,22 +22,22 @@ _Each time a knowledge base is created, a folder with the same name is generated

## Configure knowledge base

-The following screenshot shows the configuration page of a knowledge base. A proper configuration of your knowledge base is crucial for future AI chats. For example, choosing the wrong embedding model or chunk method would cause unexpected semantic loss or mismatched answers in chats.
+The following screenshot shows the configuration page of a knowledge base. A proper configuration of your knowledge base is crucial for future AI chats. For example, choosing the wrong embedding model or chunking method would cause unexpected semantic loss or mismatched answers in chats.

![chunk method](https://github.com/infiniflow/ragflow/assets/93570324/384c671a-8f98-4650-8f9e-6bb2d2f520e4)

This section covers the following topics:

-- Select chunk method
+- Select chunking method
- Select embedding model
- Upload file
- Parse file
- Intervene with file parsing results
- Run retrieval testing
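
These choices can also be made programmatically. A minimal sketch, assuming the `ragflow-sdk` Python package and its documented `create_dataset` parameters (parameter names may vary slightly between releases):

```python
from ragflow_sdk import RAGFlow

rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://localhost:9380")

# Pick the chunking method and embedding model when creating the knowledge base;
# the embedding model cannot be changed once files have been parsed with it.
dataset = rag.create_dataset(
    name="product_manuals",
    embedding_model="BAAI/bge-large-zh-v1.5",
    chunk_method="naive",  # "naive" corresponds to the General template
)
```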

-### Select chunk method
+### Select chunking method

-RAGFlow offers multiple chunking template to facilitate chunking files of different layouts and ensure semantic integrity. In **Chunk method**, you can choose the default template that suits the layouts and formats of your files. The following table shows the descriptions and the compatible file formats of each supported chunk template:
+RAGFlow offers multiple chunking templates to facilitate chunking files of different layouts and ensure semantic integrity. In **Chunking method**, you can choose the default template that suits the layouts and formats of your files. The following table shows the descriptions and the compatible file formats of each supported chunk template:

| **Template** | Description | File format |
|--------------|-----------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|

@@ -54,9 +54,9 @@ RAGFlow offers multiple chunking template to facilitate chunking files of differ

| One | Each document is chunked in its entirety (as one). | DOCX, XLSX, XLS (Excel97~2003), PDF, TXT |
| Tag | The knowledge base functions as a tag set for the others. | XLSX, CSV/TXT |

-You can also change a file's chunk method on the **Datasets** page.
+You can also change a file's chunking method on the **Datasets** page.

-![chunk method](https://github.com/infiniflow/ragflow/assets/93570324/ac8bf3cb-f6c4-43ae-9e13-69cda2b6a743)
+![chunking method](https://github.com/infiniflow/ragflow/assets/93570324/ac8bf3cb-f6c4-43ae-9e13-69cda2b6a743)

### Select embedding model

@@ -76,13 +76,13 @@ While uploading files directly to a knowledge base seems more convenient, we *hi

### Parse file

-File parsing is a crucial topic in knowledge base configuration. The meaning of file parsing in RAGFlow is twofold: chunking files based on file layout and building embedding and full-text (keyword) indexes on these chunks. After having selected the chunk method and embedding model, you can start parsing a file:
+File parsing is a crucial topic in knowledge base configuration. The meaning of file parsing in RAGFlow is twofold: chunking files based on file layout and building embedding and full-text (keyword) indexes on these chunks. After having selected the chunking method and embedding model, you can start parsing a file:

![parse file](https://github.com/infiniflow/ragflow/assets/93570324/5311f166-6426-447f-aa1f-bd488f1cfc7b)

- Click the play button next to **UNSTART** to start file parsing.
- Click the red-cross icon and then refresh if your file parsing stalls for a long time.
-- As shown above, RAGFlow allows you to use a different chunk method for a particular file, offering flexibility beyond the default method.
+- As shown above, RAGFlow allows you to use a different chunking method for a particular file, offering flexibility beyond the default method.
- As shown above, RAGFlow allows you to enable or disable individual files, offering finer control over knowledge base-based AI chats.
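
Parsing can also be triggered outside the UI. A minimal sketch, assuming the Python SDK's asynchronous parse call (`async_parse_documents`) as listed in the API reference:

```python
from ragflow_sdk import RAGFlow

rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://localhost:9380")
dataset = rag.list_datasets(name="product_manuals")[0]

# Queue every document in the knowledge base for parsing.
doc_ids = [doc.id for doc in dataset.list_documents()]
dataset.async_parse_documents(doc_ids)
print(f"Queued {len(doc_ids)} documents for parsing")
```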

### Intervene with file parsing results

@@ -9,7 +9,7 @@ Generate a knowledge graph for your knowledge base.

---

-To enhance multi-hop question-answering, RAGFlow adds a knowledge graph construction step between data extraction and indexing, as illustrated below. This step creates additional chunks from existing ones generated by your specified chunk method.
+To enhance multi-hop question-answering, RAGFlow adds a knowledge graph construction step between data extraction and indexing, as illustrated below. This step creates additional chunks from existing ones generated by your specified chunking method.

![Image](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/knowledge_graph_as_a_middle_step.jpg)

@@ -67,8 +67,8 @@ It defaults to 0.1, with a maximum limit of 1. A higher **Threshold** means fewe

### Max cluster

-The maximum number of clusters to create. Defaults to 108, with a maximum limit of 1024.
+The maximum number of clusters to create. Defaults to 64, with a maximum limit of 1024.

### Random seed

-A random seed. Click the **+** button to change the seed value.
+A random seed. Click **+** to change the seed value.

@@ -11,7 +11,7 @@ Conduct a retrieval test on your knowledge base to check whether the intended ch

After your files are uploaded and parsed, it is recommended that you run a retrieval test before proceeding with the chat assistant configuration. Running a retrieval test is *not* a superfluous step! Just like fine-tuning a precision instrument, RAGFlow requires careful tuning to deliver optimal question answering performance. Your knowledge base settings, chat assistant configurations, and the specified large and small models can all significantly impact the final results. Running a retrieval test verifies whether the intended chunks can be recovered, allowing you to quickly identify areas for improvement or pinpoint any issue that needs addressing. For instance, when debugging your question answering system, if you know that the correct chunks can be retrieved, you can focus your efforts elsewhere. For example, in issue [#5627](https://github.com/infiniflow/ragflow/issues/5627), the problem was found to be due to the LLM's limitations.

-During a retrieval test, chunks created from your specified chunk method are retrieved using a hybrid search. This search combines weighted keyword similarity with either weighted vector cosine similarity or a weighted reranking score, depending on your settings:
+During a retrieval test, chunks created from your specified chunking method are retrieved using a hybrid search. This search combines weighted keyword similarity with either weighted vector cosine similarity or a weighted reranking score, depending on your settings:

- If no rerank model is selected, weighted keyword similarity will be combined with weighted vector cosine similarity.
- If a rerank model is selected, weighted keyword similarity will be combined with a weighted reranking score.
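
Conceptually, the hybrid score is a weighted sum of the two signals. The sketch below is illustrative only; the actual weights come from the similarity settings of your knowledge base or chat assistant:

```python
def hybrid_score(keyword_sim: float, semantic_sim: float, keyword_weight: float = 0.3) -> float:
    """Combine keyword similarity with either vector cosine similarity or a
    reranking score (both assumed to be normalized to [0, 1])."""
    return keyword_weight * keyword_sim + (1.0 - keyword_weight) * semantic_sim

# Example: strong keyword match, moderate semantic match.
print(round(hybrid_score(0.9, 0.6), 2))  # 0.69 with the default 0.3/0.7 weighting
```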

@@ -32,7 +32,7 @@ The page rank value must be an integer. Range: [0,100]

If you set the page rank value to a non-integer, say 1.7, it will be rounded down to the nearest integer, which in this case is 1.
:::
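
A tiny illustration of that rounding rule (the clamp below simply mirrors the documented [0, 100] range):

```python
def normalized_page_rank(value: float) -> int:
    # Floor non-integer input (1.7 -> 1) and keep the result within [0, 100].
    return max(0, min(100, int(value)))

print(normalized_page_rank(1.7))  # 1
print(normalized_page_rank(42))   # 42
```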

-## Mechanism
+## Scoring mechanism

If you configure a chat assistant's **similarity threshold** to 0.2, only chunks with a hybrid score greater than 0.2 x 100 = 20 will be retrieved and sent to the chat model for content generation. This initial filtering step is crucial for narrowing down relevant information.
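
A quick sketch of that filtering step (the scores are made up for illustration):

```python
similarity_threshold = 0.2
candidate_chunks = [
    {"text": "chunk A", "hybrid_score": 35.2},
    {"text": "chunk B", "hybrid_score": 19.8},
    {"text": "chunk C", "hybrid_score": 20.1},
]

# Only chunks scoring above threshold * 100 are sent to the chat model.
retrieved = [c for c in candidate_chunks if c["hybrid_score"] > similarity_threshold * 100]
print([c["text"] for c in retrieved])  # ['chunk A', 'chunk C']
```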

@@ -42,7 +42,7 @@ As a rule of thumb, consider including the following entries in your tag table:

### Create a tag set

1. Click **+ Create knowledge base** to create a knowledge base.
-2. Navigate to the **Configuration** page of the created knowledge base and choose **Tag** as the default chunk method.
+2. Navigate to the **Configuration** page of the created knowledge base and choose **Tag** as the default chunking method.
3. Navigate to the **Dataset** page and upload and parse your table file in XLSX, CSV, or TXT formats.
_A tag cloud appears under the **Tag view** section, indicating the tag set is created:_
![Image](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/tag_cloud.jpg)

@@ -229,7 +229,9 @@ This section provides instructions on setting up the RAGFlow server on Linux. If

* Running on all addresses (0.0.0.0)
```

-> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
+:::danger IMPORTANT
+If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
+:::

5. In your web browser, enter the IP address of your server and log in to RAGFlow.

@@ -285,7 +287,7 @@ To create your first knowledge base:

![create knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/110541ed-6cea-4a03-a11c-414a0948ba80)

-3. RAGFlow offers multiple chunk templates that cater to different document layouts and file formats. Select the embedding model and chunk method (template) for your knowledge base.
+3. RAGFlow offers multiple chunk templates that cater to different document layouts and file formats. Select the embedding model and chunking method (template) for your knowledge base.

:::danger IMPORTANT
Once you have selected an embedding model and used it to parse a file, you are no longer allowed to change it. The obvious reason is that we must ensure that all files in a specific knowledge base are parsed using the *same* embedding model (ensure that they are being compared in the same embedding space).

@@ -519,7 +519,7 @@ A `Document` object contains the following attributes:

- `name`: The document name. Defaults to `""`.
- `thumbnail`: The thumbnail image of the document. Defaults to `None`.
- `dataset_id`: The dataset ID associated with the document. Defaults to `None`.
-- `chunk_method` The chunk method name. Defaults to `"naive"`.
+- `chunk_method`: The chunking method name. Defaults to `"naive"`.
- `source_type`: The source type of the document. Defaults to `"local"`.
- `type`: Type or category of the document. Defaults to `""`. Reserved for future use.
- `created_by`: `str` The creator of the document. Defaults to `""`.
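
A short, hedged example of reading these attributes through the Python SDK (assuming `list_datasets` and `list_documents` as documented in this reference):

```python
from ragflow_sdk import RAGFlow

rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://localhost:9380")

for dataset in rag.list_datasets():
    for doc in dataset.list_documents():
        # chunk_method defaults to "naive" unless changed for the document.
        print(doc.name, doc.chunk_method, doc.source_type)
```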

@@ -7,14 +7,24 @@ slug: /release_notes

Key features, improvements and bug fixes in the latest releases.

:::info
Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.18.0-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.18.0`
:::

## v0.18.0

Released on April 23, 2025.

### Compatibility changes

From this release onwards, built-in rerank models have been removed because they have minimal impact on retrieval rates but significantly increase retrieval time.

### New features

- MCP server: enables access to RAGFlow's knowledge bases via MCP.
-- DeepDoc supports adopting VLM model as a processing pipeline during document layout recognition, enabling in-depth analysis of images in PDFs.
+- DeepDoc supports adopting a VLM model as a processing pipeline during document layout recognition, enabling in-depth analysis of images in PDF and DOCX files.
- OpenAI-compatible APIs: Agents can be called via OpenAI-compatible APIs.
- User registration control: administrators can enable or disable user registration through an environment variable.
- Team collaboration: Agents can be shared with team members.
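
A hedged sketch of the OpenAI-compatible entry point using the standard `openai` client; the exact route (shown here for a chat assistant, with an analogous route assumed for agents) should be checked against the HTTP API reference for your release:

```python
from openai import OpenAI

client = OpenAI(
    api_key="<RAGFLOW_API_KEY>",
    base_url="http://localhost:9380/api/v1/chats_openai/<chat_id>",  # assumed route
)

completion = client.chat.completions.create(
    model="model",  # the actual model is determined by the chat/agent configuration
    messages=[{"role": "user", "content": "What is RAGFlow?"}],
)
print(completion.choices[0].message.content)
```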

@@ -54,7 +64,7 @@ From this release onwards, if you still see RAGFlow's responses being cut short

- Accelerates knowledge graph extraction.
- Enables Tavily-based web search in the **Retrieval** agent component.
- Adds Tongyi-Qianwen QwQ models (OpenAI-compatible).
-- Supports CSV files in the **General** chunk method.
+- Supports CSV files in the **General** chunking method.

### Fixed issues

@@ -317,7 +327,7 @@ Released on October 31, 2024.

- Adds the team management functionality for all users.
- Updates the Agent UI to improve usability.
-- Adds support for Markdown chunking in the **General** chunk method.
+- Adds support for Markdown chunking in the **General** chunking method.
- Introduces an **invoke** tool within the Agent UI.
- Integrates support for Dify's knowledge base API.
- Adds support for GLM4-9B and Yi-Lightning models.

@@ -349,7 +359,7 @@ Released on September 30, 2024.

- Improves the results of multi-round dialogues.
- Enables users to remove added LLM vendors.
- Adds support for **OpenTTS** and **SparkTTS** models.
-- Implements an **Excel to HTML** toggle in the **General** chunk method, allowing users to parse a spreadsheet into either HTML tables or key-value pairs by row.
+- Implements an **Excel to HTML** toggle in the **General** chunking method, allowing users to parse a spreadsheet into either HTML tables or key-value pairs by row.
- Adds agent tools **YahooFinance** and **Jin10**.
- Adds an investment advisor agent template.

@@ -410,7 +420,7 @@ Released on August 6, 2024.

### New features

-- Supports GraphRAG as a chunk method.
+- Supports GraphRAG as a chunking method.
- Introduces Agent component **Keyword** and search tools, including **Baidu**, **DuckDuckGo**, **PubMed**, **Wikipedia**, **Bing**, and **Google**.
- Supports speech-to-text recognition for audio files.
- Supports model vendors **Gemini** and **Groq**.

@@ -425,8 +435,8 @@ Released on July 8, 2024.

- Supports Agentic RAG, enabling graph-based workflow construction for RAG and agents.
- Supports model vendors **Mistral**, **MiniMax**, **Bedrock**, and **Azure OpenAI**.
-- Supports DOCX files in the MANUAL chunk method.
-- Supports DOCX, MD, and PDF files in the Q&A chunk method.
+- Supports DOCX files in the MANUAL chunking method.
+- Supports DOCX, MD, and PDF files in the Q&A chunking method.

## v0.7.0

@@ -438,7 +448,7 @@ Released on May 31, 2024.

- Integrates reranker and embedding models: [BCE](https://github.com/netease-youdao/BCEmbedding), [BGE](https://github.com/FlagOpen/FlagEmbedding), and [Jina](https://jina.ai/embeddings/).
- Supports LLMs Baichuan and VolcanoArk.
- Implements [RAPTOR](https://arxiv.org/html/2401.18059v1) for improved text retrieval.
-- Supports HTML files in the GENERAL chunk method.
+- Supports HTML files in the GENERAL chunking method.
- Provides HTTP and Python APIs for deleting documents by ID.
- Supports ARM64 platforms.

@@ -467,7 +477,7 @@ Released on May 21, 2024.

- Supports streaming output.
- Provides HTTP and Python APIs for retrieving document chunks.
- Supports monitoring of system components, including Elasticsearch, MySQL, Redis, and MinIO.
-- Supports disabling **Layout Recognition** in the GENERAL chunk method to reduce file chunking time.
+- Supports disabling **Layout Recognition** in the GENERAL chunking method to reduce file chunking time.

### Related APIs