Compare commits


46 Commits

Author SHA1 Message Date
de24e74b4c Docs: How to use MinerU to parse PDF documents (#10763)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2025-10-23 18:56:09 +08:00
83e80e3d7f Docs: Update version references to v0.21.1 in READMEs and docs (#10761)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.21.0 to v0.21.1
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-10-23 18:55:41 +08:00
ea73f13ebf Fix: infinity rerank error. (#10760)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 17:38:54 +08:00
af6eabad0e Docs: Added v0.21.1 release notes (#10757)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-23 17:25:29 +08:00
5fb5a51b2e Fix: create KB initial embedding. (#10751)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 16:17:43 +08:00
37004ecfb3 Fix: Clicking "Stop receiving messages" in Firefox will cause the page to crash. #10752 (#10754)
### What problem does this PR solve?

Fix: Clicking "Stop receiving messages" in Firefox will cause the page
to crash. #10752
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 16:17:28 +08:00
6d333ec4bc Fix: Add video preview #9869 (#10748)
### What problem does this PR solve?

Fix: Add video preview

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 14:25:05 +08:00
ac188b0486 Feat: The default value of the parser operator's Video output format is set to text #9869 (#10745)
### What problem does this PR solve?
Feat: The default value of the parser operator's Video output format is
set to text #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-23 14:18:51 +08:00
adeb9d87e2 Bump infinity to 0.6.1 (#10749)
### What problem does this PR solve?

Bump infinity to 0.6.1

#10727 missed `docker/docker-compose-base.yml`.

### Type of change

- [x] Other (please describe):
2025-10-23 13:36:43 +08:00
d121033208 Fix: Resolved the issue where the Generate button must be refreshed after generating chunk to take effect #9869 (#10742)
### What problem does this PR solve?

Fix: Resolved the issue where a refresh was required after generating
chunks for the Generate button to take effect

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 11:54:45 +08:00
494f84cd69 Feat: Add suffix to the parser operator's video configuration #9869 (#10741)
### What problem does this PR solve?

Feat: Add suffix to the parser operator's video configuration #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-23 11:13:21 +08:00
f24d464a53 Fix: video file suffix (#10740)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 11:13:09 +08:00
484c536f2e Fix typo (#10737)
### What problem does this PR solve?

Fixed typo: changed "Chunkder" to "Chunker".

### Type of change

- [x] Documentation Update
2025-10-23 09:25:15 +08:00
f7112acd97 Feat: pipeline supports MinerU PDF parser (#10736)
### What problem does this PR solve?

Pipeline supports MinerU PDF parser.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-23 09:24:31 +08:00
de4f75dcd8 Fix: add video parser (#10735)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-23 09:24:16 +08:00
15fff5724e Fix: filename is not displayed on the overview page #9869 (#10731)
### What problem does this PR solve?

Fix: Fixed the issue where the filename is not displayed on the overview
page, and added handling logic for the Generate button when chunk=0

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-22 19:52:50 +08:00
d616354d66 Fix: model parameter (#10730)
### What problem does this PR solve?

Fix model parameter. #10729

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-22 19:52:37 +08:00
1bad24e3ab Feat: version 0.21.1 (#10718)
### What problem does this PR solve?

Update version, and remove '_canvas' suffix in agent_category.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-22 19:03:02 +08:00
4910146149 Feat: Display the video field in the parser operator #9869 (#10728)
### What problem does this PR solve?

Feat: Display the video field in the parser operator #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-22 18:59:20 +08:00
0e549e96ee bump infinity to v0.6.1 (#10727)
### What problem does this PR solve?

bump infinity to v0.6.1

### Type of change

- [x] Other (please describe): Infinity
2025-10-22 17:36:58 +08:00
318cb7d792 Fix: Optimize the style of the personal center sidebar component #9869 (#10723)
### What problem does this PR solve?

fix: Optimize the style of the personal center sidebar component

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-22 16:55:16 +08:00
4d1255b231 hotfix: Rename chunk summary's component name (#10721)
### What problem does this PR solve?

Using Indexer instead of Tokenizer

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-22 16:55:03 +08:00
b30f0be858 Refactor: How LiteLLMBase calculates total count (#10532)
### What problem does this PR solve?

How LiteLLMBase calculates total count.

### Type of change

- [x] Refactoring
2025-10-22 12:25:31 +08:00
a82e9b3d91 Fix: can't upload image in ollama model #10447 (#10717)
### What problem does this PR solve?

Fix: can't upload image in ollama model #10447

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)


### Change all `images=[]` to `images=None`

Changing `images=[]` to `images=None` avoids Python's mutable default
argument issue. If you keep `images=[]`, all calls share the same list,
so modifying it (e.g., with `images.append()`) affects later calls.
Using `images=None` and creating a new list inside the function ensures
each call is independent. This change does not affect current behavior;
it simply makes the code safer and more predictable.
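
Below is a minimal sketch of the pitfall this change avoids; the function names are illustrative, not RAGFlow's actual API:

```python
def describe_bad(tag, images=[]):      # one list object shared across ALL calls
    images.append(tag)
    return images

def describe_good(tag, images=None):   # fresh list per call
    if images is None:
        images = []
    images.append(tag)
    return images

print(describe_bad("a"))    # ['a']
print(describe_bad("b"))    # ['a', 'b']  <- state leaked from the first call
print(describe_good("a"))   # ['a']
print(describe_good("b"))   # ['b']       <- calls stay independent
```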


2025-10-22 12:24:12 +08:00
02a452993e Feat: Adjust the style of the mcp dialog #10703 (#10719)
### What problem does this PR solve?

Feat: Adjust the style of the mcp dialog #10703

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-22 12:20:19 +08:00
307cdc62ea fix: RAGFlowOSS.put() got an unexpected keyword argument 'tenant_id' (#10712)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10700

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-22 09:30:41 +08:00
2d491188b8 Refa: improve flow of GraphRAG and RAPTOR (#10709)
### What problem does this PR solve?

Improve flow of GraphRAG and RAPTOR.

### Type of change

- [x] Refactoring
2025-10-22 09:29:20 +08:00
acc0f7396e Feat: add fault-tolerant mechanism to GraphRAG (#10708)
### What problem does this PR solve?

Add fault-tolerant mechanism to GraphRAG. #10406.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-22 09:29:04 +08:00
9a4cd81891 Docs: Added token chunker and title chunker components (#10711)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-21 20:11:23 +08:00
1694f32e8e Fix: Profile page UI adjustment #9869 (#10706)
### What problem does this PR solve?

Fix: Profile page UI adjustment

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-21 20:11:07 +08:00
41fade3fe6 Fix: wrong param in manual chunk (#10710)
### What problem does this PR solve?

Change: fixed wrong parameter in manual chunk.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-21 20:10:54 +08:00
8d333f3590 Feat: Change the style of all cards according to the design #10703 (#10704)
### What problem does this PR solve?

Feat: Change the style of all cards according to the design #10703

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 20:08:55 +08:00
cd77425b87 Fix: potential negative max_tokens in RAPTOR (#10701)
### What problem does this PR solve?

Fix potential negative max_tokens in RAPTOR. #10235.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-21 15:49:51 +08:00
544c9990e3 Feat: Move the pipeline translation field to flow #9869 (#10697)
### What problem does this PR solve?

Feat: Move the pipeline translation field to flow #9869

### Type of change


- [X] New Feature (non-breaking change which adds functionality)
2025-10-21 15:23:37 +08:00
41a647fe32 Feat: A pipeline's child node can only have one node #9869 (#10695)
### What problem does this PR solve?

Feat: A pipeline's child node can only have one node #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 13:55:46 +08:00
594bf485d4 Test: update test cases for chunk retrieval pagination (#10694)
### What problem does this PR solve?

Updated test cases in test_retrieval_chunks.py to:
- Remove skip mark from page pagination test case (issues/6646 resolved)
- Add skip marks for page_size=1 tests due to new issue (issues/10692)

### Type of change

- [x] Test
2025-10-21 13:02:29 +08:00
863c3e3d9c Fix: tree merge (#10691)
### What problem does this PR solve?

Fix tree merge; solves #10636.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-21 13:02:01 +08:00
1767039be3 Feat: Display the pipeline operation sheet on the agent page #9869 (#10690)
### What problem does this PR solve?

Feat: Display the pipeline operation sheet on the agent page #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 12:59:30 +08:00
cd75fa02b1 Feat: Make knowledge base renaming automatically reflected in agent discussions, solved #10597 (#10680)
### What problem does this PR solve?
Feat: Make knowledge base renaming automatically reflected in agent
discussions, solved #10597

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 10:42:05 +08:00
cfdd37820a Feat: Support attribute filtering #8703 (#10670)
### What problem does this PR solve?

Feat: Support attribute filtering #8703

### Type of change

- [X] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: writinwaters <cai.keith@gmail.com>
2025-10-21 10:38:40 +08:00
9d12380806 Fix: Excel2HTML can't support XLS(Excel 97-2003) (#10660)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/10602

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-21 09:52:59 +08:00
866098634b Feat: setting metadata in retrieval (#10682)
### What problem does this PR solve?
Issue: [#9272](https://github.com/infiniflow/ragflow/issues/9272)
Change: support setting metadata in retrieval.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 09:52:26 +08:00
8013505daf Fix(edit-tag): Fix the bug that the edit-tag tag cannot be deleted #9869 (#10679)
### What problem does this PR solve?

fix(edit-tag): Fix the bug that the edit-tag tag cannot be deleted #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-21 09:38:36 +08:00
deb81810e9 Update message printout when start ingestion server (#10677)
### What problem does this PR solve?

```
     ____                                  __     _                                                                  
    /  _/   ____    ____ _  ___    _____  / /_   (_)  ____    ____           _____  ___    _____ _   __  ___    _____
    / /    / __ \  / __ `/ / _ \  / ___/ / __/  / /  / __ \  / __ \         / ___/ / _ \  / ___/| | / / / _ \  / ___/
 _/ /    / / / / / /_/ / /  __/ (__  ) / /_   / /  / /_/ / / / / /        (__  ) /  __/ / /    | |/ / /  __/ / /    
/___/   /_/ /_/  \__, /  \___/ /____/  \__/  /_/   \____/ /_/ /_/        /____/  \___/ /_/     |___/  \___/ /_/     
                /____/          
```

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-21 09:38:20 +08:00
6ab96287c9 Feat: Vision Model Image Enhancement in Manual/Paper/Book/One chunker (#10640)
### What problem does this PR solve?
Issue: [#7472](https://github.com/infiniflow/ragflow/issues/7472)
Change: vision model image enhancement in the Manual chunker.
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 09:36:27 +08:00
aaa4776657 Feat: Qwen-VL series supports video parsing (#10676)
### What problem does this PR solve?

Qwen-VL series supports video parsing. #10617.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-21 09:36:13 +08:00
150 changed files with 3041 additions and 1664 deletions

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -187,7 +187,7 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.21.0-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` for the full edition `v0.21.0`.
> The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` for the full edition `v0.21.1`.
```bash
$ cd ragflow/docker
@ -200,8 +200,8 @@ releases! 🌟
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|-------------------|-----------------|-----------------------|--------------------------|
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -181,7 +181,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.21.0-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.0-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0 untuk edisi lengkap v0.21.0.
> Perintah di bawah ini mengunduh edisi v0.21.1-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.1-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1 untuk edisi lengkap v0.21.1.
```bash
$ cd ragflow/docker
@ -194,8 +194,8 @@ $ docker compose -f docker-compose.yml up -d
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -160,7 +160,7 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.0-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.0-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.21.0 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0 と設定します。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.1-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.1-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.21.1 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1 と設定します。
```bash
$ cd ragflow/docker
@ -173,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -160,7 +160,7 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.21.0-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.21.0-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.21.0을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0로 설정합니다.
> 아래 명령어는 RAGFlow Docker 이미지의 v0.21.1-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.21.1-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.21.1을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1로 설정합니다.
```bash
$ cd ragflow/docker
@ -173,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -180,7 +180,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição `v0.21.0-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.21.0-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` para a edição completa `v0.21.0`.
> O comando abaixo baixa a edição `v0.21.1-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.21.1-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` para a edição completa `v0.21.1`.
```bash
$ cd ragflow/docker
@ -193,8 +193,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | ------------------------------- | ------------------------ |
| v0.21.0 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.21.0-slim | ~2 | ❌ | Lançamento estável |
| v0.21.1 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.21.1-slim | ~2 | ❌ | Lançamento estável |
| nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno |
| nightly-slim | ~2 | ❌ | _Instável_ build noturno |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -183,7 +183,7 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.21.0-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.21.0-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` 來下載 RAGFlow 鏡像的 `v0.21.0` 完整發行版。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.21.1-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.21.1-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` 來下載 RAGFlow 鏡像的 `v0.21.1` 完整發行版。
```bash
$ cd ragflow/docker
@ -196,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -183,7 +183,7 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.21.0-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.21.0-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` 来下载 RAGFlow 镜像的 `v0.21.0` 完整发行版。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.21.1-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.21.1-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` 来下载 RAGFlow 镜像的 `v0.21.1` 完整发行版。
```bash
$ cd ragflow/docker
@ -196,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -48,7 +48,7 @@ It consists of a server-side Service and a command-line client (CLI), both imple
1. Ensure the Admin Service is running.
2. Install ragflow-cli.
```bash
pip install ragflow-cli==0.21.0
pip install ragflow-cli==0.21.1
```
3. Launch the CLI client:
```bash

View File

@ -370,7 +370,7 @@ class AdminCLI(Cmd):
self.session.headers.update({
'Content-Type': 'application/json',
'Authorization': response.headers['Authorization'],
'User-Agent': 'RAGFlow-CLI/0.21.0'
'User-Agent': 'RAGFlow-CLI/0.21.1'
})
print("Authentication successful.")
return True

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow-cli"
version = "0.21.0"
version = "0.21.1"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }

View File

@ -1,24 +0,0 @@
[project]
name = "ragflow-cli"
version = "0.21.0.dev2"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"requests>=2.30.0,<3.0.0",
"beartype>=0.18.5,<0.19.0",
"pycryptodomex>=3.10.0",
"lark>=1.1.0",
]
[dependency-groups]
test = [
"pytest>=8.3.5",
"requests>=2.32.3",
"requests-toolbelt>=1.0.0",
]
[project.scripts]
ragflow-cli = "ragflow_cli.admin_client:main"

View File

@ -32,6 +32,7 @@ from api.utils.crypt import decrypt
from api.utils import (
current_timestamp,
datetime_format,
get_format_time,
get_uuid,
)
from api.utils.api_utils import (
@ -131,6 +132,7 @@ def login_admin(email: str, password: str):
login_user(user)
user.update_time = (current_timestamp(),)
user.update_date = (datetime_format(datetime.now()),)
user.last_login_time = get_format_time()
user.save()
msg = "Welcome back!"
return construct_response(data=resp, auth=user.get_id(), message=msg)

View File

@ -32,18 +32,24 @@ admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
def login():
if not request.json:
return error_response('Authorize admin failed.' ,400)
email = request.json.get("email", "")
password = request.json.get("password", "")
return login_admin(email, password)
try:
email = request.json.get("email", "")
password = request.json.get("password", "")
return login_admin(email, password)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/logout', methods=['GET'])
@login_required
def logout():
current_user.access_token = f"INVALID_{secrets.token_hex(16)}"
current_user.save()
logout_user()
return success_response(True)
try:
current_user.access_token = f"INVALID_{secrets.token_hex(16)}"
current_user.save()
logout_user()
return success_response(True)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/auth', methods=['GET'])

View File

@ -36,8 +36,13 @@ class UserMgr:
users = UserService.get_all_users()
result = []
for user in users:
result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date,
'is_active': user.is_active})
result.append({
'email': user.email,
'nickname': user.nickname,
'create_date': user.create_date,
'is_active': user.is_active,
'is_superuser': user.is_superuser,
})
return result
@staticmethod
@ -50,7 +55,6 @@ class UserMgr:
'email': user.email,
'language': user.language,
'last_login_time': user.last_login_time,
'is_authenticated': user.is_authenticated,
'is_active': user.is_active,
'is_anonymous': user.is_anonymous,
'login_channel': user.login_channel,
@ -166,7 +170,7 @@ class UserServiceMgr:
return [{
'title': r['title'],
'permission': r['permission'],
'canvas_category': r['canvas_category'].split('-')[0]
'canvas_category': r['canvas_category'].split('_')[0]
} for r in res]

View File

@ -350,7 +350,7 @@
]
},
"label": "Tokenizer",
"name": "Tokenizer"
"name": "Indexer"
},
"dragging": false,
"id": "Tokenizer:EightRocketsAppear",

View File

@ -18,12 +18,14 @@ import re
from abc import ABC
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api.db import LLMType
from api.db.services.document_service import DocumentService
from api.db.services.dialog_service import meta_filter
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
from api.utils.api_utils import timeout
from rag.app.tag import label_question
from rag.prompts.generator import cross_languages, kb_prompt
from rag.prompts.generator import cross_languages, kb_prompt, gen_meta_filter
class RetrievalParam(ToolParamBase):
@ -58,6 +60,7 @@ class RetrievalParam(ToolParamBase):
self.use_kg = False
self.cross_languages = []
self.toc_enhance = False
self.meta_data_filter={}
def check(self):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
@ -117,6 +120,21 @@ class Retrieval(ToolBase, ABC):
vars = self.get_input_elements_from_text(kwargs["query"])
vars = {k:o["value"] for k,o in vars.items()}
query = self.string_format(kwargs["query"], vars)
doc_ids=[]
if self._param.meta_data_filter!={}:
metas = DocumentService.get_meta_by_kbs(kb_ids)
if self._param.meta_data_filter.get("method") == "auto":
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT)
filters = gen_meta_filter(chat_mdl, metas, query)
doc_ids.extend(meta_filter(metas, filters))
if not doc_ids:
doc_ids = None
elif self._param.meta_data_filter.get("method") == "manual":
doc_ids.extend(meta_filter(metas, self._param.meta_data_filter["manual"]))
if not doc_ids:
doc_ids = None
if self._param.cross_languages:
query = cross_languages(kbs[0].tenant_id, None, query, self._param.cross_languages)
@ -131,6 +149,7 @@ class Retrieval(ToolBase, ABC):
self._param.top_n,
self._param.similarity_threshold,
1 - self._param.keywords_similarity_weight,
doc_ids=doc_ids,
aggs=False,
rerank_mdl=rerank_mdl,
rank_feature=label_question(query, kbs),
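
A self-contained sketch of the manual metadata-filter path added above; `meta_filter` here is an illustrative stand-in, not RAGFlow's implementation:

```python
def meta_filter(metas, conditions):
    # Keep doc ids whose metadata satisfies every condition (illustrative).
    return [doc_id for doc_id, meta in metas.items()
            if all(meta.get(c["key"]) == c["value"] for c in conditions)]

metas = {"doc1": {"author": "Alice"}, "doc2": {"author": "Bob"}}
meta_data_filter = {"method": "manual",
                    "manual": [{"key": "author", "value": "Alice"}]}

doc_ids = None
if meta_data_filter:
    if meta_data_filter.get("method") == "manual":
        # An empty result falls back to None, i.e. no doc-id restriction.
        doc_ids = meta_filter(metas, meta_data_filter["manual"]) or None

print(doc_ids)  # ['doc1'] -- passed as doc_ids to restrict retrieval
```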

View File

@ -45,7 +45,7 @@ from api.utils.api_utils import (
from api.utils.file_utils import filename_type, get_project_base_directory, thumbnail
from api.utils.web_utils import CONTENT_TYPE_MAP, html2pdf, is_valid_url
from deepdoc.parser.html_parser import RAGFlowHtmlParser
from rag.nlp import search
from rag.nlp import search, rag_tokenizer
from rag.utils.storage_factory import STORAGE_IMPL
@ -524,6 +524,21 @@ def rename():
e, file = FileService.get_by_id(informs[0].file_id)
FileService.update_by_id(file.id, {"name": req["name"]})
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
title_tks = rag_tokenizer.tokenize(req["name"])
es_body = {
"docnm_kwd": req["name"],
"title_tks": title_tks,
"title_sm_tks": rag_tokenizer.fine_grained_tokenize(title_tks),
}
if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
settings.docStoreConn.update(
{"doc_id": req["doc_id"]},
es_body,
search.index_name(tenant_id),
doc.kb_id,
)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)

View File

@ -70,6 +70,7 @@ def create():
e, t = TenantService.get_by_id(current_user.id)
if not e:
return get_data_error_result(message="Tenant not found.")
req["parser_config"] = {
"layout_recognize": "DeepDOC",
"chunk_token_num": 512,
@ -579,7 +580,7 @@ def run_graphrag():
sample_document = documents[0]
document_ids = [document["id"] for document in documents]
task_id = queue_raptor_o_graphrag_tasks(doc=sample_document, ty="graphrag", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
task_id = queue_raptor_o_graphrag_tasks(sample_doc_id=sample_document, ty="graphrag", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
if not KnowledgebaseService.update_by_id(kb.id, {"graphrag_task_id": task_id}):
logging.warning(f"Cannot save graphrag_task_id for kb {kb_id}")
@ -648,7 +649,7 @@ def run_raptor():
sample_document = documents[0]
document_ids = [document["id"] for document in documents]
task_id = queue_raptor_o_graphrag_tasks(doc=sample_document, ty="raptor", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
task_id = queue_raptor_o_graphrag_tasks(sample_doc_id=sample_document, ty="raptor", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
if not KnowledgebaseService.update_by_id(kb.id, {"raptor_task_id": task_id}):
logging.warning(f"Cannot save raptor_task_id for kb {kb_id}")
@ -717,7 +718,7 @@ def run_mindmap():
sample_document = documents[0]
document_ids = [document["id"] for document in documents]
task_id = queue_raptor_o_graphrag_tasks(doc=sample_document, ty="mindmap", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
task_id = queue_raptor_o_graphrag_tasks(sample_doc_id=sample_document, ty="mindmap", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
if not KnowledgebaseService.update_by_id(kb.id, {"mindmap_task_id": task_id}):
logging.warning(f"Cannot save mindmap_task_id for kb {kb_id}")

View File

@ -470,6 +470,20 @@ def list_docs(dataset_id, tenant_id):
required: false
default: 0
description: Unix timestamp for filtering documents created before this time. 0 means no filter.
- in: query
name: suffix
type: array
items:
type: string
required: false
description: Filter by file suffix (e.g., ["pdf", "txt", "docx"]).
- in: query
name: run
type: array
items:
type: string
required: false
description: Filter by document run status. Supports both numeric ("0", "1", "2", "3", "4") and text formats ("UNSTART", "RUNNING", "CANCEL", "DONE", "FAIL").
- in: header
name: Authorization
type: string
@ -512,63 +526,62 @@ def list_docs(dataset_id, tenant_id):
description: Processing status.
"""
if not KnowledgebaseService.accessible(kb_id=dataset_id, user_id=tenant_id):
return get_error_data_result(message=f"You don't own the dataset {dataset_id}. ")
id = request.args.get("id")
name = request.args.get("name")
return get_error_data_result(message=f"You don't own the dataset {dataset_id}. ")
if id and not DocumentService.query(id=id, kb_id=dataset_id):
return get_error_data_result(message=f"You don't own the document {id}.")
q = request.args
document_id = q.get("id")
name = q.get("name")
if document_id and not DocumentService.query(id=document_id, kb_id=dataset_id):
return get_error_data_result(message=f"You don't own the document {document_id}.")
if name and not DocumentService.query(name=name, kb_id=dataset_id):
return get_error_data_result(message=f"You don't own the document {name}.")
page = int(request.args.get("page", 1))
keywords = request.args.get("keywords", "")
page_size = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")
if request.args.get("desc") == "False":
desc = False
else:
desc = True
docs, tol = DocumentService.get_list(dataset_id, page, page_size, orderby, desc, keywords, id, name)
page = int(q.get("page", 1))
page_size = int(q.get("page_size", 30))
orderby = q.get("orderby", "create_time")
desc = str(q.get("desc", "true")).strip().lower() != "false"
keywords = q.get("keywords", "")
create_time_from = int(request.args.get("create_time_from", 0))
create_time_to = int(request.args.get("create_time_to", 0))
# filters - align with OpenAPI parameter names
suffix = q.getlist("suffix")
run_status = q.getlist("run")
create_time_from = int(q.get("create_time_from", 0))
create_time_to = int(q.get("create_time_to", 0))
# map run status (accept text or numeric) - align with API parameter
run_status_text_to_numeric = {"UNSTART": "0", "RUNNING": "1", "CANCEL": "2", "DONE": "3", "FAIL": "4"}
run_status_converted = [run_status_text_to_numeric.get(v, v) for v in run_status]
docs, total = DocumentService.get_list(
dataset_id, page, page_size, orderby, desc, keywords, document_id, name, suffix, run_status_converted
)
# time range filter (0 means no bound)
if create_time_from or create_time_to:
filtered_docs = []
for doc in docs:
doc_create_time = doc.get("create_time", 0)
if (create_time_from == 0 or doc_create_time >= create_time_from) and (create_time_to == 0 or doc_create_time <= create_time_to):
filtered_docs.append(doc)
docs = filtered_docs
docs = [
d for d in docs
if (create_time_from == 0 or d.get("create_time", 0) >= create_time_from)
and (create_time_to == 0 or d.get("create_time", 0) <= create_time_to)
]
# rename key's name
renamed_doc_list = []
# rename keys + map run status back to text for output
key_mapping = {
"chunk_num": "chunk_count",
"kb_id": "dataset_id",
"kb_id": "dataset_id",
"token_num": "token_count",
"parser_id": "chunk_method",
}
run_mapping = {
"0": "UNSTART",
"1": "RUNNING",
"2": "CANCEL",
"3": "DONE",
"4": "FAIL",
}
for doc in docs:
renamed_doc = {}
for key, value in doc.items():
if key == "run":
renamed_doc["run"] = run_mapping.get(str(value))
new_key = key_mapping.get(key, key)
renamed_doc[new_key] = value
if key == "run":
renamed_doc["run"] = run_mapping.get(value)
renamed_doc_list.append(renamed_doc)
return get_result(data={"total": tol, "docs": renamed_doc_list})
run_status_numeric_to_text = {"0": "UNSTART", "1": "RUNNING", "2": "CANCEL", "3": "DONE", "4": "FAIL"}
output_docs = []
for d in docs:
renamed_doc = {key_mapping.get(k, k): v for k, v in d.items()}
if "run" in d:
renamed_doc["run"] = run_status_numeric_to_text.get(str(d["run"]), d["run"])
output_docs.append(renamed_doc)
return get_result(data={"total": total, "docs": output_docs})
@manager.route("/datasets/<dataset_id>/documents", methods=["DELETE"]) # noqa: F821
@token_required

View File

@ -22,7 +22,7 @@ import secrets
import time
from datetime import datetime
from flask import redirect, request, session, Response
from flask import redirect, request, session, make_response
from flask_login import current_user, login_required, login_user, logout_user
from werkzeug.security import check_password_hash, generate_password_hash
@ -866,7 +866,9 @@ def forget_get_captcha():
from captcha.image import ImageCaptcha
image = ImageCaptcha(width=300, height=120, font_sizes=[50, 60, 70])
img_bytes = image.generate(captcha_text).read()
return Response(img_bytes, mimetype="image/png")
response = make_response(img_bytes)
response.headers.set("Content-Type", "image/JPEG")
return response
@manager.route("/forget/otp", methods=["POST"]) # noqa: F821

View File

@ -79,7 +79,7 @@ class DocumentService(CommonService):
@classmethod
@DB.connection_context()
def get_list(cls, kb_id, page_number, items_per_page,
orderby, desc, keywords, id, name):
orderby, desc, keywords, id, name, suffix=None, run = None):
fields = cls.get_cls_model_fields()
docs = cls.model.select(*[*fields, UserCanvas.title]).join(File2Document, on = (File2Document.document_id == cls.model.id))\
.join(File, on = (File.id == File2Document.file_id))\
@ -96,6 +96,10 @@ class DocumentService(CommonService):
docs = docs.where(
fn.LOWER(cls.model.name).contains(keywords.lower())
)
if suffix:
docs = docs.where(cls.model.suffix.in_(suffix))
if run:
docs = docs.where(cls.model.run.in_(run))
if desc:
docs = docs.order_by(cls.model.getter_by(orderby).desc())
else:
@ -667,9 +671,11 @@ class DocumentService(CommonService):
@classmethod
@DB.connection_context()
def _sync_progress(cls, docs:list[dict]):
from api.db.services.task_service import TaskService
for d in docs:
try:
tsks = Task.query(doc_id=d["id"], order_by=Task.create_time)
tsks = TaskService.query(doc_id=d["id"], order_by=Task.create_time)
if not tsks:
continue
msg = []
@ -787,21 +793,23 @@ class DocumentService(CommonService):
"cancelled": int(cancelled),
}
def queue_raptor_o_graphrag_tasks(doc, ty, priority, fake_doc_id="", doc_ids=[]):
def queue_raptor_o_graphrag_tasks(sample_doc_id, ty, priority, fake_doc_id="", doc_ids=[]):
"""
You can provide a fake_doc_id to bypass the restriction of tasks at the knowledgebase level.
Optionally, specify a list of doc_ids to determine which documents participate in the task.
"""
chunking_config = DocumentService.get_chunking_config(doc["id"])
assert ty in ["graphrag", "raptor", "mindmap"], "type should be graphrag, raptor or mindmap"
chunking_config = DocumentService.get_chunking_config(sample_doc_id["id"])
hasher = xxhash.xxh64()
for field in sorted(chunking_config.keys()):
hasher.update(str(chunking_config[field]).encode("utf-8"))
def new_task():
nonlocal doc
nonlocal sample_doc_id
return {
"id": get_uuid(),
"doc_id": fake_doc_id if fake_doc_id else doc["id"],
"doc_id": sample_doc_id["id"],
"from_page": 100000000,
"to_page": 100000000,
"task_type": ty,
@ -816,9 +824,9 @@ def queue_raptor_o_graphrag_tasks(doc, ty, priority, fake_doc_id="", doc_ids=[])
task["digest"] = hasher.hexdigest()
bulk_insert_into_db(Task, [task], True)
if ty in ["graphrag", "raptor", "mindmap"]:
task["doc_ids"] = doc_ids
DocumentService.begin2parse(doc["id"])
task["doc_id"] = fake_doc_id
task["doc_ids"] = doc_ids
DocumentService.begin2parse(sample_doc_id["id"])
assert REDIS_CONN.queue_product(get_svr_queue_name(priority), message=task), "Can't access Redis. Please check the Redis' status."
return task["id"]
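
The `suffix`/`run` filters added to `get_list` above are plain peewee `.in_()` clauses; a minimal runnable sketch against a toy model:

```python
from peewee import SqliteDatabase, Model, CharField

db = SqliteDatabase(":memory:")

class Document(Model):
    name = CharField()
    suffix = CharField()
    run = CharField()
    class Meta:
        database = db

db.create_tables([Document])
Document.create(name="a.pdf", suffix="pdf", run="3")
Document.create(name="b.txt", suffix="txt", run="1")

suffix, run = ["pdf"], ["3"]
docs = Document.select()
if suffix:
    docs = docs.where(Document.suffix.in_(suffix))
if run:
    docs = docs.where(Document.run.in_(run))
print([d.name for d in docs])  # ['a.pdf']
```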

View File

@ -210,19 +210,18 @@ class LLMBundle(LLM4Tenant):
def _clean_param(chat_partial, **kwargs):
func = chat_partial.func
sig = inspect.signature(func)
keyword_args = []
support_var_args = False
allowed_params = set()
for param in sig.parameters.values():
if param.kind == inspect.Parameter.VAR_KEYWORD or param.kind == inspect.Parameter.VAR_POSITIONAL:
if param.kind == inspect.Parameter.VAR_KEYWORD:
support_var_args = True
elif param.kind == inspect.Parameter.KEYWORD_ONLY:
keyword_args.append(param.name)
use_kwargs = kwargs
if not support_var_args:
use_kwargs = {k: v for k, v in kwargs.items() if k in keyword_args}
return use_kwargs
elif param.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD, inspect.Parameter.KEYWORD_ONLY):
allowed_params.add(param.name)
if support_var_args:
return kwargs
else:
return {k: v for k, v in kwargs.items() if k in allowed_params}
def chat(self, system: str, history: list, gen_conf: dict = {}, **kwargs) -> str:
if self.langfuse:
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="chat", model=self.llm_name, input={"system": system, "history": history})
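
A standalone sketch of the refactored parameter cleaning: pass everything through when the target accepts `**kwargs`, otherwise keep only named parameters (now including positional-or-keyword ones, which the old keyword-only filter missed):

```python
import inspect
from functools import partial

def clean_param(chat_partial, **kwargs):
    sig = inspect.signature(chat_partial.func)
    allowed, has_var_kw = set(), False
    for p in sig.parameters.values():
        if p.kind == inspect.Parameter.VAR_KEYWORD:
            has_var_kw = True
        elif p.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD,
                        inspect.Parameter.KEYWORD_ONLY):
            allowed.add(p.name)
    return kwargs if has_var_kw else {k: v for k, v in kwargs.items() if k in allowed}

def chat(system, history, *, temperature=0.7):
    return f"temp={temperature}"

print(clean_param(partial(chat, "sys"), temperature=0.2, top_p=0.9))
# {'temperature': 0.2} -- the unsupported top_p is dropped
```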

View File

@ -173,7 +173,7 @@ def filename_type(filename):
if re.match(r".*\.(wav|flac|ape|alac|wavpack|wv|mp3|aac|ogg|vorbis|opus)$", filename):
return FileType.AURAL.value
if re.match(r".*\.(jpg|jpeg|png|tif|gif|pcx|tga|exif|fpx|svg|psd|cdr|pcd|dxf|ufo|eps|ai|raw|WMF|webp|avif|apng|icon|ico|mpg|mpeg|avi|rm|rmvb|mov|wmv|asf|dat|asx|wvx|mpe|mpa|mp4)$", filename):
if re.match(r".*\.(jpg|jpeg|png|tif|gif|pcx|tga|exif|fpx|svg|psd|cdr|pcd|dxf|ufo|eps|ai|raw|WMF|webp|avif|apng|icon|ico|mpg|mpeg|avi|rm|rmvb|mov|wmv|asf|dat|asx|wvx|mpe|mpa|mp4|avi|mkv)$", filename):
return FileType.VISUAL.value
return FileType.OTHER.value

View File

@ -2987,7 +2987,7 @@
"tags": "LLM,CHAT,IMAGE2TEXT,32k",
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": true
"is_tools": false
},
{
"llm_name": "THUDM/GLM-Z1-32B-0414",

View File

@ -54,8 +54,8 @@ class RAGFlowExcelParser:
try:
file_like_object.seek(0)
try:
df = pd.read_excel(file_like_object)
return RAGFlowExcelParser._dataframe_to_workbook(df)
dfs = pd.read_excel(file_like_object, sheet_name=None)
return RAGFlowExcelParser._dataframe_to_workbook(dfs)
except Exception as ex:
logging.info(f"pandas with default engine load error: {ex}, try calamine instead")
file_like_object.seek(0)
@ -75,6 +75,10 @@ class RAGFlowExcelParser:
@staticmethod
def _dataframe_to_workbook(df):
# if contains multiple sheets use _dataframes_to_workbook
if isinstance(df, dict) and len(df) > 1:
return RAGFlowExcelParser._dataframes_to_workbook(df)
df = RAGFlowExcelParser._clean_dataframe(df)
wb = Workbook()
ws = wb.active
@ -88,6 +92,22 @@ class RAGFlowExcelParser:
ws.cell(row=row_num, column=col_num, value=value)
return wb
@staticmethod
def _dataframes_to_workbook(dfs: dict):
wb = Workbook()
default_sheet = wb.active
wb.remove(default_sheet)
for sheet_name, df in dfs.items():
df = RAGFlowExcelParser._clean_dataframe(df)
ws = wb.create_sheet(title=sheet_name)
for col_num, column_name in enumerate(df.columns, 1):
ws.cell(row=1, column=col_num, value=column_name)
for row_num, row in enumerate(df.values, 2):
for col_num, value in enumerate(row, 1):
ws.cell(row=row_num, column=col_num, value=value)
return wb
def html(self, fnm, chunk_rows=256):
from html import escape
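
A minimal sketch of the multi-sheet path, assuming a local `book.xlsx`: `pd.read_excel(..., sheet_name=None)` returns a `{sheet_name: DataFrame}` dict, which is then rebuilt into an openpyxl workbook sheet by sheet:

```python
import pandas as pd
from openpyxl import Workbook

dfs = pd.read_excel("book.xlsx", sheet_name=None)  # dict of DataFrames, one per sheet

wb = Workbook()
wb.remove(wb.active)                   # drop the default empty sheet
for name, df in dfs.items():
    ws = wb.create_sheet(title=name)
    ws.append(list(df.columns))        # header row
    for row in df.itertuples(index=False):
        ws.append(list(row))
```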

View File

@ -17,6 +17,8 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
from PIL import Image
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from api.utils.api_utils import timeout
from rag.app.picture import vision_llm_chunk as picture_vision_llm_chunk
from rag.prompts.generator import vision_llm_figure_describe_prompt
@ -32,6 +34,43 @@ def vision_figure_parser_figure_data_wrapper(figures_data_without_positions):
if isinstance(figure_data[1], Image.Image)
]
def vision_figure_parser_docx_wrapper(sections,tbls,callback=None,**kwargs):
try:
vision_model = LLMBundle(kwargs["tenant_id"], LLMType.IMAGE2TEXT)
callback(0.7, "Visual model detected. Attempting to enhance figure extraction...")
except Exception:
vision_model = None
if vision_model:
figures_data = vision_figure_parser_figure_data_wrapper(sections)
try:
docx_vision_parser = VisionFigureParser(vision_model=vision_model, figures_data=figures_data, **kwargs)
boosted_figures = docx_vision_parser(callback=callback)
tbls.extend(boosted_figures)
except Exception as e:
callback(0.8, f"Visual model error: {e}. Skipping figure parsing enhancement.")
return tbls
def vision_figure_parser_pdf_wrapper(tbls,callback=None,**kwargs):
try:
vision_model = LLMBundle(kwargs["tenant_id"], LLMType.IMAGE2TEXT)
callback(0.7, "Visual model detected. Attempting to enhance figure extraction...")
except Exception:
vision_model = None
if vision_model:
def is_figure_item(item):
return (
isinstance(item[0][0], Image.Image) and
isinstance(item[0][1], list)
)
figures_data = [item for item in tbls if is_figure_item(item)]
try:
docx_vision_parser = VisionFigureParser(vision_model=vision_model, figures_data=figures_data, **kwargs)
boosted_figures = docx_vision_parser(callback=callback)
tbls = [item for item in tbls if not is_figure_item(item)]
tbls.extend(boosted_figures)
except Exception as e:
callback(0.8, f"Visual model error: {e}. Skipping figure parsing enhancement.")
return tbls
shared_executor = ThreadPoolExecutor(max_workers=10)
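
The two wrappers share a graceful-degradation pattern: try to obtain a vision model, and skip enhancement rather than abort on failure. A self-contained sketch with illustrative names:

```python
def enhance_figures(figures, get_vision_model, callback=print):
    try:
        model = get_vision_model()  # stand-in for LLMBundle(tenant_id, IMAGE2TEXT)
        callback("Visual model detected. Attempting to enhance figure extraction...")
    except Exception:
        return figures              # no vision model: leave figures unchanged
    try:
        return [model(f) for f in figures]
    except Exception as e:
        callback(f"Visual model error: {e}. Skipping figure parsing enhancement.")
        return figures

print(enhance_figures(["fig1"], get_vision_model=lambda: str.upper))  # ['FIG1']
```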

View File

@ -97,13 +97,13 @@ SVR_HTTP_PORT=9380
ADMIN_SVR_HTTP_PORT=9381
# The RAGFlow Docker image to download.
# Defaults to the v0.21.0-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0-slim
# Defaults to the v0.21.1-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1
#
# The Docker image of the v0.21.0 edition includes built-in embedding models:
# The Docker image of the v0.21.1 edition includes built-in embedding models:
# - BAAI/bge-large-zh-v1.5
# - maidalun1020/bce-embedding-base_v1
#

View File

@ -79,8 +79,8 @@ The [.env](./.env) file contains important environment variables for Docker.
- `RAGFLOW-IMAGE`
The Docker image edition. Available editions:
- `infiniflow/ragflow:v0.21.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.0`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.21.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.1`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

View File

@ -77,7 +77,7 @@ services:
container_name: ragflow-infinity
profiles:
- infinity
image: infiniflow/infinity:v0.6.0
image: infiniflow/infinity:v0.6.1
volumes:
- infinity_data:/var/infinity
- ./infinity_conf.toml:/infinity_conf.toml

View File

@ -1,5 +1,5 @@
[general]
version = "0.6.0"
version = "0.6.1"
time_zone = "utc-8"
[network]

View File

@ -99,8 +99,8 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
- `RAGFLOW-IMAGE`
The Docker image edition. Available editions:
- `infiniflow/ragflow:v0.21.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.0`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.21.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.1`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

View File

@ -77,7 +77,7 @@ After building the infiniflow/ragflow:nightly-slim image, you are ready to launc
1. Edit Docker Compose Configuration
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.21.0-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.21.1-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
2. Launch the Service

View File

@ -30,17 +30,17 @@ The "garbage in garbage out" status quo remains unchanged despite the fact that
Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.0-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.0`
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1`
---
### Which embedding models can be deployed locally?
RAGFlow offers two Docker image editions, `v0.21.0-slim` and `v0.21.0`:
RAGFlow offers two Docker image editions, `v0.21.1-slim` and `v0.21.1`:
- `infiniflow/ragflow:v0.21.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.0`: The RAGFlow Docker image with the following built-in embedding models:
- `infiniflow/ragflow:v0.21.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.1`: The RAGFlow Docker image with the following built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`
@ -510,3 +510,27 @@ See [here](./guides/agent/best_practices/accelerate_agent_question_answering.md)
---
### How to use MinerU to parse PDF documents?
MinerU PDF document parsing is available starting from v0.21.1. To use this feature, follow these steps:
1. Before deploying ragflow-server, update your **docker/.env** file:
- Enable `HF_ENDPOINT=https://hf-mirror.com`
- Add a MinerU entry: `MINERU_EXECUTABLE=/ragflow/uv_tools/.venv/bin/mineru` (see the sample **docker/.env** snippet after these steps)
2. Start the ragflow-server and run the following commands inside the container:
```bash
mkdir uv_tools
cd uv_tools
uv venv .venv
source .venv/bin/activate
uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple
```
3. Restart the ragflow-server.
4. In the web UI, navigate to the **Configuration** page of your dataset. Click **Built-in** in the **Ingestion pipeline** section, select a chunking method that supports PDF parsing from the **Built-in** dropdown, and select **MinerU** in **PDF parser**.
5. If you use a custom ingestion pipeline instead, you must also complete the first three steps before selecting **MinerU** in the **Parsing method** section of the **Parser** component.
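For reference, a minimal sketch of the two **docker/.env** entries from step 1 (the values are exactly those given above; any other variables in your file stay unchanged):
```bash
# Use the Hugging Face mirror so model downloads work from restricted networks
HF_ENDPOINT=https://hf-mirror.com
# Point RAGFlow at the MinerU executable installed in step 2
MINERU_EXECUTABLE=/ragflow/uv_tools/.venv/bin/mineru
```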

View File

@ -0,0 +1,40 @@
---
sidebar_position: 31
slug: /chunker_title_component
---
# Title chunker component
A component that splits texts into chunks by heading level.
---
A **Title chunker** component is a text splitter that uses a specified heading level as the delimiter to define chunk boundaries and create chunks.
## Scenario
A **Title chunker** component is optional, usually placed immediately after **Parser**.
:::caution WARNING
Placing a **Title chunker** after a **Token chunker** is invalid and will cause an error. Note that this restriction is not currently enforced by the system, so you must order the components correctly yourself.
:::
## Configurations
### Hierarchy
Specifies the heading level to define chunk boundaries:
- H1
- H2
- H3 (Default)
- H4
Click **+ Add** to add heading levels here or update the corresponding **Regular Expressions** fields for custom heading patterns.
### Output
The global variable name for the output of the **Title chunker** component, which can be referenced by subsequent components in the ingestion pipeline.
- Default: `chunks`
- Type: `Array<Object>`
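To make the splitting behavior concrete, here is a minimal, hypothetical sketch of heading-level chunking over markdown-style `#` headings. It is not RAGFlow's implementation (which outputs structured `Array<Object>` chunks rather than plain strings), and the function name and signature are illustrative only:
```python
import re

def title_chunk(text: str, level: int = 3) -> list[str]:
    """Split markdown-style text at headings from H1 up to the given level (H3 by default)."""
    # Match 1..level leading '#' characters followed by whitespace, line by line.
    pattern = re.compile(rf"^#{{1,{level}}}\s", re.MULTILINE)
    starts = [m.start() for m in pattern.finditer(text)]
    if not starts:
        return [text]  # no qualifying heading: the whole text is one chunk
    chunks = []
    preamble = text[:starts[0]].strip()
    if preamble:
        chunks.append(preamble)  # content before the first heading
    for begin, end in zip(starts, starts[1:] + [len(text)]):
        chunks.append(text[begin:end].strip())
    return chunks
```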

View File

@ -0,0 +1,43 @@
---
sidebar_position: 32
slug: /chunker_token_component
---
# Token chunker component
A component that splits texts into chunks, respecting a maximum token limit and using delimiters to find optimal breakpoints.
---
A **Token chunker** component is a text splitter that creates chunks by respecting a recommended maximum token length, using delimiters to ensure logical chunk breakpoints. It splits long texts into appropriately sized, semantically related chunks.
## Scenario
A **Token chunker** component is optional, usually placed immediately after **Parser** or **Title chunker**.
## Configurations
### Recommended chunk size
The recommended maximum token limit for each created chunk. The **Token chunker** component creates chunks at specified delimiters. If this token limit is reached before a delimiter, a chunk is created at that point.
### Overlapped percent (%)
This defines the overlap percentage between chunks. An appropriate degree of overlap ensures semantic coherence without creating excessive, redundant tokens for the LLM.
- Default: 0
- Maximum: 30%
### Delimiters
Defaults to `\n`. Click the right-hand **Recycle bin** button to remove it, or click **+ Add** to add a delimiter.
### Output
The global variable name for the output of the **Token chunker** component, which can be referenced by subsequent components in the ingestion pipeline.
- Default: `chunks`
- Type: `Array<Object>`
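As an illustration, here is a minimal, hypothetical sketch of delimiter-aware, token-limited chunking with overlap. It approximates tokens as whitespace-separated words and measures overlap in delimiter-separated pieces; RAGFlow's actual splitter uses a real tokenizer and outputs structured `Array<Object>` chunks:
```python
def token_chunk(text: str, max_tokens: int = 512,
                overlap_pct: int = 0, delimiter: str = "\n") -> list[str]:
    """Cut chunks at `delimiter`, keeping each chunk under roughly `max_tokens`."""
    pieces = text.split(delimiter)
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for piece in pieces:
        n = len(piece.split())  # crude token count: whitespace-separated words
        if current and current_tokens + n > max_tokens:
            chunks.append(delimiter.join(current))
            # Carry the tail of the previous chunk forward as overlap.
            keep = int(len(current) * overlap_pct / 100)
            current = current[-keep:] if keep else []
            current_tokens = sum(len(p.split()) for p in current)
        current.append(piece)
        current_tokens += n
    if current:
        chunks.append(delimiter.join(current))
    return chunks
```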

View File

@ -48,7 +48,7 @@ You start an AI conversation by creating an assistant.
- If no target language is selected, the system will search only in the language of your query, which may cause relevant information in other languages to be missed.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
- If you are uncertain about the logic behind **Variable**, leave it *as-is*.
- As of v0.21.0, if you add custom variables here, the only way you can pass in their values is to call:
- As of v0.21.1, if you add custom variables here, the only way you can pass in their values is to call:
- HTTP method [Converse with chat assistant](../../references/http_api_reference.md#converse-with-chat-assistant), or
- Python method [Converse with chat assistant](../../references/python_api_reference.md#converse-with-chat-assistant).

View File

@ -59,7 +59,7 @@ You can also change a file's chunking method on the **Files** page.
![change chunking method](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/change_chunking_method.jpg)
<details>
<summary>From v0.21.0 onward, RAGFlow supports ingestion pipeline for customized data ingestion and cleansing workflows.</summary>
<summary>From v0.21.1 onward, RAGFlow supports ingestion pipeline for customized data ingestion and cleansing workflows.</summary>
To use a customized data pipeline:
@ -138,7 +138,7 @@ See [Run retrieval test](./run_retrieval_test.md) for details.
## Search for dataset
As of RAGFlow v0.21.0, the search feature is still in a rudimentary form, supporting only dataset search by name.
As of RAGFlow v0.21.1, the search feature is still in a rudimentary form, supporting only dataset search by name.
![search dataset](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/search_datasets.jpg)

View File

@ -35,8 +35,31 @@ RAGFlow isn't one-size-fits-all. It is built for flexibility and supports deeper
- DeepDoc: (Default) The default visual model performing OCR, TSR, and DLR tasks on PDFs, which can be time-consuming.
- Naive: Skip OCR, TSR, and DLR tasks if *all* your PDFs are plain text.
- MinerU: An experimental feature.
- A third-party visual model provided by a specific model provider.
:::danger IMPORTANT
MinerU PDF document parsing is available starting from v0.21.1. To use this feature, follow these steps:
1. Before deploying ragflow-server, update your **docker/.env** file:
- Enable `HF_ENDPOINT=https://hf-mirror.com`
- Add a MinerU entry: `MINERU_EXECUTABLE=/ragflow/uv_tools/.venv/bin/mineru`
2. Start the ragflow-server and run the following commands inside the container:
```bash
mkdir uv_tools
cd uv_tools
uv venv .venv
source .venv/bin/activate
uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple
```
3. Restart the ragflow-server.
4. In the web UI, navigate to the **Configuration** page of your dataset. Click **Built-in** in the **Ingestion pipeline** section, select a chunking method that supports PDF parsing from the **Built-in** dropdown, and select **MinerU** in **PDF parser**.
5. If you use a custom ingestion pipeline instead, you must also complete the first three steps before selecting **MinerU** in the **Parsing method** section of the **Parser** component.
:::
:::caution WARNING
Third-party visual models are marked **Experimental**, because we have not fully tested these models for the aforementioned data extraction tasks.
:::

View File

@ -87,4 +87,4 @@ RAGFlow's file management allows you to download an uploaded file:
![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)
> As of RAGFlow v0.21.0, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.21.1, bulk download is not supported, nor can you download an entire folder.

View File

@ -46,7 +46,7 @@ The Admin CLI and Admin Service form a client-server architectural suite for RAG
2. Install ragflow-cli.
```bash
pip install ragflow-cli==0.21.0
pip install ragflow-cli==0.21.1
```
3. Launch the CLI client:
@ -348,7 +348,7 @@ Listing all agents of user: lynn_inf@hotmail.com
+-----------------+-------------+------------+-----------------+
| canvas_category | canvas_type | permission | title |
+-----------------+-------------+------------+-----------------+
| agent_canvas | None | team | research_helper |
| agent | None | team | research_helper |
+-----------------+-------------+------------+-----------------+
```

View File

@ -18,7 +18,7 @@ RAGFlow ships with a built-in [Langfuse](https://langfuse.com) integration so th
Langfuse stores traces, spans and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.
:::info NOTE
• RAGFlow **≥ 0.21.0** (contains the Langfuse connector)
• RAGFlow **≥ 0.21.1** (contains the Langfuse connector)
• A Langfuse workspace (cloud or self-hosted) with a _Project Public Key_ and _Secret Key_
:::

View File

@ -66,10 +66,10 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
git clone https://github.com/infiniflow/ragflow.git
```
2. Switch to the latest, officially published release, e.g., `v0.21.0`:
2. Switch to the latest, officially published release, e.g., `v0.21.1`:
```bash
git checkout -f v0.21.0
git checkout -f v0.21.1
```
3. Update **ragflow/docker/.env**:
@ -83,14 +83,14 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
<TabItem value="slim">
```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0-slim
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1-slim
```
</TabItem>
<TabItem value="full">
```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1
```
</TabItem>
@ -114,10 +114,10 @@ No, you do not need to. Upgrading RAGFlow in itself will *not* remove your uploa
1. From an environment with Internet access, pull the required Docker image.
2. Save the Docker image to a **.tar** file.
```bash
docker save -o ragflow.v0.21.0.tar infiniflow/ragflow:v0.21.0
docker save -o ragflow.v0.21.1.tar infiniflow/ragflow:v0.21.1
```
3. Copy the **.tar** file to the target server.
4. Load the **.tar** file into Docker:
```bash
docker load -i ragflow.v0.21.0.tar
docker load -i ragflow.v0.21.1.tar
```

View File

@ -44,7 +44,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
`vm.max_map_count`. This value sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limit.
RAGFlow v0.21.0 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
RAGFlow v0.21.1 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
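For example, on Linux you can check and raise the value with the standard `sysctl` commands below (a sketch; the platform tabs that follow give the detailed instructions). Elasticsearch requires at least 262144:
```bash
# Check the current value (default: 65530)
sysctl vm.max_map_count

# Raise it for the running system
sudo sysctl -w vm.max_map_count=262144

# Persist the change across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```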
<Tabs
defaultValue="linux"
@ -184,13 +184,13 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/docker
$ git checkout -f v0.21.0
$ git checkout -f v0.21.1
```
3. Use the pre-built Docker images and start up the server:
:::tip NOTE
The command below downloads the `v0.21.0-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` for the full edition `v0.21.0`.
The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` for the full edition `v0.21.1`.
:::
```bash
@ -207,8 +207,8 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
| RAGFlow image tag | Image size (GB) | Has embedding models and Python packages? | Stable? |
| ------------------- | --------------- | ----------------------------------------- | ------------------------ |
| `v0.21.0` | &approx;9 | :heavy_check_mark: | Stable release |
| `v0.21.0-slim` | &approx;2 | ❌ | Stable release |
| `v0.21.1` | &approx;9 | :heavy_check_mark: | Stable release |
| `v0.21.1-slim` | &approx;2 | ❌ | Stable release |
| `nightly` | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| `nightly-slim` | &approx;2 | ❌ | *Unstable* nightly build |
@ -217,7 +217,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```
:::danger IMPORTANT
The embedding models included in `v0.21.0` and `nightly` are:
The embedding models included in `v0.21.1` and `nightly` are:
- BAAI/bge-large-zh-v1.5
- maidalun1020/bce-embedding-base_v1

View File

@ -19,7 +19,7 @@ import TOCInline from '@theme/TOCInline';
### Cross-language search
Cross-language search (also known as cross-lingual retrieval) is a feature introduced in version 0.21.0. It enables users to submit queries in one language (for example, English) and retrieve relevant documents written in other languages such as Chinese or Spanish. This feature is enabled by the system's default chat model, which translates queries to ensure accurate matching of semantic meaning across languages.
Cross-language search (also known as cross-lingual retrieval) is a feature introduced in version 0.21.1. It enables users to submit queries in one language (for example, English) and retrieve relevant documents written in other languages such as Chinese or Spanish. This feature is enabled by the system's default chat model, which translates queries to ensure accurate matching of semantic meaning across languages.
By enabling cross-language search, users can effortlessly access a broader range of information regardless of language barriers, significantly enhancing the system's usability and inclusiveness.

View File

@ -1198,23 +1198,24 @@ Failure:
### List documents
**GET** `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp}`
**GET** `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp}&suffix={file_suffix}&run={run_status}`
Lists documents in a specified dataset.
#### Request
- Method: GET
- URL: `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp}`
- URL: `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp}&suffix={file_suffix}&run={run_status}`
- Headers:
- `'Content-Type: application/json'`
- `'Authorization: Bearer <YOUR_API_KEY>'`
##### Request example
##### Request examples
**A basic request with pagination:**
```bash
curl --request GET \
--url http://{address}/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp} \
--url http://{address}/api/v1/datasets/{dataset_id}/documents?page=1&page_size=10 \
--header 'Authorization: Bearer <YOUR_API_KEY>'
```
@ -1236,10 +1237,34 @@ curl --request GET \
Indicates whether the retrieved documents should be sorted in descending order. Defaults to `true`.
- `id`: (*Filter parameter*), `string`
The ID of the document to retrieve.
- `create_time_from`: (*Filter parameter*), `integer`
Unix timestamp for filtering documents created after this time. 0 means no filter. Defaults to `0`.
- `create_time_to`: (*Filter parameter*), `integer`
Unix timestamp for filtering documents created before this time. 0 means no filter. Defaults to `0`.
- `suffix`: (*Filter parameter*), `array[string]`
Filter by file suffix. Supports multiple values, e.g., `pdf`, `txt`, and `docx`. Defaults to all suffixes.
- `run`: (*Filter parameter*), `array[string]`
Filter by document processing status. Supports numeric, text, and mixed formats:
- Numeric format: `["0", "1", "2", "3", "4"]`
- Text format: `["UNSTART", "RUNNING", "CANCEL", "DONE", "FAIL"]`
- Mixed format: `["UNSTART", "1", "DONE"]` (mixing numeric and text formats)
- Status mapping:
- `0` / `UNSTART`: Document not yet processed
- `1` / `RUNNING`: Document is currently being processed
- `2` / `CANCEL`: Document processing was cancelled
- `3` / `DONE`: Document processing completed successfully
- `4` / `FAIL`: Document processing failed
Defaults to all statuses.
##### Usage examples
**A request with multiple filtering parameters:**
```bash
curl --request GET \
--url 'http://{address}/api/v1/datasets/{dataset_id}/documents?suffix=pdf&run=DONE&page=1&page_size=10' \
--header 'Authorization: Bearer <YOUR_API_KEY>'
```
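The same filtered request, sketched with Python's `requests` library (placeholders mirror the curl example; the response layout follows the success example in the next section):
```python
import requests

ADDRESS = "http://<address>"        # placeholder: your RAGFlow host
API_KEY = "<YOUR_API_KEY>"          # placeholder: your API key
DATASET_ID = "<dataset_id>"         # placeholder: target dataset ID

resp = requests.get(
    f"{ADDRESS}/api/v1/datasets/{DATASET_ID}/documents",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"suffix": "pdf", "run": "DONE", "page": 1, "page_size": 10},
)
for doc in resp.json().get("data", {}).get("docs", []):
    print(doc["name"], doc["run"])
```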
#### Response
@ -1270,7 +1295,7 @@ Success:
"process_duration": 0.0,
"progress": 0.0,
"progress_msg": "",
"run": "0",
"run": "UNSTART",
"size": 7,
"source_type": "local",
"status": "1",

View File

@ -9,8 +9,8 @@ Key features, improvements and bug fixes in the latest releases.
:::info
Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.0-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.0`
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1`
:::
:::danger IMPORTANT
@ -22,6 +22,23 @@ The embedding models included in a full edition are:
These two embedding models are optimized specifically for English and Chinese, so performance may be compromised if you use them to embed documents in other languages.
:::
## v0.21.1
Released on October 23, 2025.
### New features
- Experimental: Adds support for PDF document parsing using MinerU. See [here](./faq.mdx#how-to-use-mineru-to-parse-pdf-documents).
### Improvements
- Enhances UI/UX for the dataset and personal center pages.
- Upgrades RAGFlow's document engine, [Infinity](https://github.com/infiniflow/infinity), to v0.6.1.
### Fixed issues
- An issue with video parsing.
## v0.21.0
Released on October 15, 2025.

View File

@ -105,16 +105,36 @@ class Extractor:
async def extract_all(doc_id, chunks, max_concurrency=MAX_CONCURRENT_PROCESS_AND_EXTRACT_CHUNK):
out_results = []
error_count = 0
max_errors = 3
limiter = trio.Semaphore(max_concurrency)
async def worker(chunk_key_dp: tuple[str, str], idx: int, total: int):
nonlocal error_count
async with limiter:
await self._process_single_content(chunk_key_dp, idx, total, out_results)
try:
await self._process_single_content(chunk_key_dp, idx, total, out_results)
except Exception as e:
error_count += 1
error_msg = f"Error processing chunk {idx+1}/{total}: {str(e)}"
logging.warning(error_msg)
if self.callback:
self.callback(msg=error_msg)
if error_count > max_errors:
raise Exception(f"Maximum error count ({max_errors}) reached. Last errors: {str(e)}")
async with trio.open_nursery() as nursery:
for i, ck in enumerate(chunks):
nursery.start_soon(worker, (doc_id, ck), i, len(chunks))
if error_count > 0:
warning_msg = f"Completed with {error_count} errors (out of {len(chunks)} chunks processed)"
logging.warning(warning_msg)
if self.callback:
self.callback(msg=warning_msg)
return out_results
out_results = await extract_all(doc_id, chunks, max_concurrency=MAX_CONCURRENT_PROCESS_AND_EXTRACT_CHUNK)
@ -129,8 +149,8 @@ class Extractor:
maybe_edges[tuple(sorted(k))].extend(v)
sum_token_count += token_count
now = trio.current_time()
if callback:
callback(msg=f"Entities and relationships extraction done, {len(maybe_nodes)} nodes, {len(maybe_edges)} edges, {sum_token_count} tokens, {now - start_ts:.2f}s.")
if self.callback:
self.callback(msg=f"Entities and relationships extraction done, {len(maybe_nodes)} nodes, {len(maybe_edges)} edges, {sum_token_count} tokens, {now - start_ts:.2f}s.")
start_ts = now
logging.info("Entities merging...")
all_entities_data = []
@ -138,8 +158,8 @@ class Extractor:
for en_nm, ents in maybe_nodes.items():
nursery.start_soon(self._merge_nodes, en_nm, ents, all_entities_data)
now = trio.current_time()
if callback:
callback(msg=f"Entities merging done, {now - start_ts:.2f}s.")
if self.callback:
self.callback(msg=f"Entities merging done, {now - start_ts:.2f}s.")
start_ts = now
logging.info("Relationships merging...")
@ -148,8 +168,8 @@ class Extractor:
for (src, tgt), rels in maybe_edges.items():
nursery.start_soon(self._merge_edges, src, tgt, rels, all_relationships_data)
now = trio.current_time()
if callback:
callback(msg=f"Relationships merging done, {now - start_ts:.2f}s.")
if self.callback:
self.callback(msg=f"Relationships merging done, {now - start_ts:.2f}s.")
if not len(all_entities_data) and not len(all_relationships_data):
logging.warning("Didn't extract any entities and relationships, maybe your LLM is not working")

View File

@ -56,7 +56,7 @@ env:
ragflow:
image:
repository: infiniflow/ragflow
tag: v0.21.0-slim
tag: v0.21.1-slim
pullPolicy: IfNotPresent
pullSecrets: []
# Optional service configuration overrides
@ -96,7 +96,7 @@ ragflow:
infinity:
image:
repository: infiniflow/infinity
tag: v0.6.0
tag: v0.6.1
pullPolicy: IfNotPresent
pullSecrets: []
storage:

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow"
version = "0.21.0"
version = "0.21.1"
description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license-files = ["LICENSE"]
@ -46,7 +46,7 @@ dependencies = [
"html-text==0.6.2",
"httpx[socks]>=0.28.1,<0.29.0",
"huggingface-hub>=0.25.0,<0.26.0",
"infinity-sdk==0.6.0",
"infinity-sdk==0.6.1",
"infinity-emb>=0.0.66,<0.0.67",
"itsdangerous==2.1.2",
"json-repair==0.35.0",

View File

@ -20,11 +20,14 @@ import re
from io import BytesIO
from deepdoc.parser.utils import get_text
from rag.app import naive
from rag.nlp import bullets_category, is_english,remove_contents_table, \
hierarchical_merge, make_colon_as_title, naive_merge, random_choices, tokenize_table, \
tokenize_chunks
from rag.nlp import rag_tokenizer
from deepdoc.parser import PdfParser, DocxParser, PlainParser, HtmlParser
from deepdoc.parser import PdfParser, PlainParser, HtmlParser
from deepdoc.parser.figure_parser import vision_figure_parser_pdf_wrapper,vision_figure_parser_docx_wrapper
from PIL import Image
class Pdf(PdfParser):
@ -81,13 +84,15 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
sections, tbls = [], []
if re.search(r"\.docx$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
doc_parser = DocxParser()
doc_parser = naive.Docx()
# TODO: table of contents need to be removed
sections, tbls = doc_parser(
binary if binary else filename, from_page=from_page, to_page=to_page)
filename, binary=binary, from_page=from_page, to_page=to_page)
remove_contents_table(sections, eng=is_english(
random_choices([t for t, _ in sections], k=200)))
tbls = [((None, lns), None) for lns in tbls]
tbls=vision_figure_parser_docx_wrapper(sections=sections,tbls=tbls,callback=callback,**kwargs)
# tbls = [((None, lns), None) for lns in tbls]
sections=[(item[0],item[1] if item[1] is not None else "") for item in sections if not isinstance(item[1], Image.Image)]
callback(0.8, "Finish parsing.")
elif re.search(r"\.pdf$", filename, re.IGNORECASE):
@ -96,6 +101,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
pdf_parser = PlainParser()
sections, tbls = pdf_parser(filename if not binary else binary,
from_page=from_page, to_page=to_page, callback=callback)
tbls=vision_figure_parser_pdf_wrapper(tbls=tbls,callback=callback,**kwargs)
elif re.search(r"\.txt$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")

View File

@ -23,6 +23,7 @@ from io import BytesIO
from rag.nlp import rag_tokenizer, tokenize, tokenize_table, bullets_category, title_frequency, tokenize_chunks, docx_question_level
from rag.utils import num_tokens_from_string
from deepdoc.parser import PdfParser, PlainParser, DocxParser
from deepdoc.parser.figure_parser import vision_figure_parser_pdf_wrapper,vision_figure_parser_docx_wrapper
from docx import Document
from PIL import Image
@ -252,7 +253,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
tk_cnt = num_tokens_from_string(txt)
if sec_id > -1:
last_sid = sec_id
tbls=vision_figure_parser_pdf_wrapper(tbls=tbls,callback=callback,**kwargs)
res = tokenize_table(tbls, doc, eng)
res.extend(tokenize_chunks(chunks, doc, eng, pdf_parser))
return res
@ -261,6 +262,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
docx_parser = Docx()
ti_list, tbls = docx_parser(filename, binary,
from_page=0, to_page=10000, callback=callback)
tbls=vision_figure_parser_docx_wrapper(sections=ti_list,tbls=tbls,callback=callback,**kwargs)
res = tokenize_table(tbls, doc, eng)
for text, image in ti_list:
d = copy.deepcopy(doc)

View File

@ -32,7 +32,7 @@ from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from api.utils.file_utils import extract_embed_file
from deepdoc.parser import DocxParser, ExcelParser, HtmlParser, JsonParser, MarkdownElementExtractor, MarkdownParser, PdfParser, TxtParser
from deepdoc.parser.figure_parser import VisionFigureParser, vision_figure_parser_figure_data_wrapper
from deepdoc.parser.figure_parser import VisionFigureParser,vision_figure_parser_docx_wrapper,vision_figure_parser_pdf_wrapper
from deepdoc.parser.pdf_parser import PlainParser, VisionParser
from deepdoc.parser.mineru_parser import MinerUParser
from rag.nlp import concat_img, find_codec, naive_merge, naive_merge_with_images, naive_merge_docx, rag_tokenizer, tokenize_chunks, tokenize_chunks_with_images, tokenize_table
@ -475,24 +475,13 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
if re.search(r"\.docx$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
try:
vision_model = LLMBundle(kwargs["tenant_id"], LLMType.IMAGE2TEXT)
callback(0.15, "Visual model detected. Attempting to enhance figure extraction...")
except Exception:
vision_model = None
# fix "There is no item named 'word/NULL' in the archive", referring to https://github.com/python-openxml/python-docx/issues/1105#issuecomment-1298075246
_SerializedRelationships.load_from_xml = load_from_xml_v2
sections, tables = Docx()(filename, binary)
if vision_model:
figures_data = vision_figure_parser_figure_data_wrapper(sections)
try:
docx_vision_parser = VisionFigureParser(vision_model=vision_model, figures_data=figures_data, **kwargs)
boosted_figures = docx_vision_parser(callback=callback)
tables.extend(boosted_figures)
except Exception as e:
callback(0.6, f"Visual model error: {e}. Skipping figure parsing enhancement.")
tables=vision_figure_parser_docx_wrapper(sections=sections,tbls=tables,callback=callback,**kwargs)
res = tokenize_table(tables, doc, is_english)
callback(0.8, "Finish parsing.")
@ -521,25 +510,8 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
if layout_recognizer == "DeepDOC":
pdf_parser = Pdf()
try:
vision_model = LLMBundle(kwargs["tenant_id"], LLMType.IMAGE2TEXT)
callback(0.15, "Visual model detected. Attempting to enhance figure extraction...")
except Exception:
vision_model = None
if vision_model:
sections, tables, figures = pdf_parser(filename if not binary else binary, from_page=from_page, to_page=to_page, callback=callback, separate_tables_figures=True)
callback(0.5, "Basic parsing complete. Proceeding with figure enhancement...")
try:
pdf_vision_parser = VisionFigureParser(vision_model=vision_model, figures_data=figures, **kwargs)
boosted_figures = pdf_vision_parser(callback=callback)
tables.extend(boosted_figures)
except Exception as e:
callback(0.6, f"Visual model error: {e}. Skipping figure parsing enhancement.")
tables.extend(figures)
else:
sections, tables = pdf_parser(filename if not binary else binary, from_page=from_page, to_page=to_page, callback=callback)
sections, tables = pdf_parser(filename if not binary else binary, from_page=from_page, to_page=to_page, callback=callback)
tables=vision_figure_parser_pdf_wrapper(tbls=tables,callback=callback,**kwargs)
res = tokenize_table(tables, doc, is_english)
callback(0.8, "Finish parsing.")

View File

@ -23,6 +23,7 @@ from deepdoc.parser.utils import get_text
from rag.app import naive
from rag.nlp import rag_tokenizer, tokenize
from deepdoc.parser import PdfParser, ExcelParser, PlainParser, HtmlParser
from deepdoc.parser.figure_parser import vision_figure_parser_pdf_wrapper,vision_figure_parser_docx_wrapper
class Pdf(PdfParser):
@ -57,13 +58,8 @@ class Pdf(PdfParser):
sections = [(b["text"], self.get_position(b, zoomin))
for i, b in enumerate(self.boxes)]
for (img, rows), poss in tbls:
if not rows:
continue
sections.append((rows if isinstance(rows, str) else rows[0],
[(p[0] + 1 - from_page, p[1], p[2], p[3], p[4]) for p in poss]))
return [(txt, "") for txt, _ in sorted(sections, key=lambda x: (
x[-1][0][0], x[-1][0][3], x[-1][0][1]))], None
x[-1][0][0], x[-1][0][3], x[-1][0][1]))], tbls
def chunk(filename, binary=None, from_page=0, to_page=100000,
@ -80,6 +76,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
if re.search(r"\.docx$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
sections, tbls = naive.Docx()(filename, binary)
tbls=vision_figure_parser_docx_wrapper(sections=sections,tbls=tbls,callback=callback,**kwargs)
sections = [s for s, _ in sections if s]
for (_, html), _ in tbls:
sections.append(html)
@ -89,8 +86,14 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
pdf_parser = Pdf()
if parser_config.get("layout_recognize", "DeepDOC") == "Plain Text":
pdf_parser = PlainParser()
sections, _ = pdf_parser(
sections, tbls = pdf_parser(
filename if not binary else binary, to_page=to_page, callback=callback)
tbls=vision_figure_parser_pdf_wrapper(tbls=tbls,callback=callback,**kwargs)
for (img, rows), poss in tbls:
if not rows:
continue
sections.append((rows if isinstance(rows, str) else rows[0],
[(p[0] + 1 - from_page, p[1], p[2], p[3], p[4]) for p in poss]))
sections = [s for s, _ in sections if s]
elif re.search(r"\.xlsx?$", filename, re.IGNORECASE):

View File

@ -18,12 +18,12 @@ import logging
import copy
import re
from deepdoc.parser.figure_parser import vision_figure_parser_pdf_wrapper
from api.db import ParserType
from rag.nlp import rag_tokenizer, tokenize, tokenize_table, add_positions, bullets_category, title_frequency, tokenize_chunks
from deepdoc.parser import PdfParser, PlainParser
import numpy as np
class Pdf(PdfParser):
def __init__(self):
self.model_speciess = ParserType.PAPER.value
@ -160,6 +160,9 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
pdf_parser = Pdf()
paper = pdf_parser(filename if not binary else binary,
from_page=from_page, to_page=to_page, callback=callback)
tbls=paper["tables"]
tbls=vision_figure_parser_pdf_wrapper(tbls=tbls,callback=callback,**kwargs)
paper["tables"] = tbls
else:
raise NotImplementedError("file type not supported yet(pdf supported)")

View File

@ -29,7 +29,7 @@ from rag.utils import clean_markdown_block
ocr = OCR()
# Gemini supported MIME types
VIDEO_EXTS = [".mp4", ".mov", ".avi", ".flv", ".mpeg", ".mpg", ".webm", ".wmv", ".3gp", ".3gpp"]
VIDEO_EXTS = [".mp4", ".mov", ".avi", ".flv", ".mpeg", ".mpg", ".webm", ".wmv", ".3gp", ".3gpp", ".mkv"]
def chunk(filename, binary, tenant_id, lang, callback=None, **kwargs):

View File

@ -29,6 +29,7 @@ from api.db.services.llm_service import LLMBundle
from api.utils import get_uuid
from api.utils.base64_image import image2id
from deepdoc.parser import ExcelParser
from deepdoc.parser.mineru_parser import MinerUParser
from deepdoc.parser.pdf_parser import PlainParser, RAGFlowPdfParser, VisionParser
from rag.app.naive import Docx
from rag.flow.base import ProcessBase, ProcessParamBase
@ -138,9 +139,16 @@ class ParserParam(ProcessParamBase):
"oggvorbis",
"ape"
],
"output_format": "json",
"output_format": "text",
},
"video": {
"suffix":[
"mp4",
"avi",
"mkv"
],
"output_format": "text",
},
"video": {},
}
def check(self):
@ -149,7 +157,7 @@ class ParserParam(ProcessParamBase):
pdf_parse_method = pdf_config.get("parse_method", "")
self.check_empty(pdf_parse_method, "Parse method abnormal.")
if pdf_parse_method.lower() not in ["deepdoc", "plain_text"]:
if pdf_parse_method.lower() not in ["deepdoc", "plain_text", "mineru"]:
self.check_empty(pdf_config.get("lang", ""), "PDF VLM language")
pdf_output_format = pdf_config.get("output_format", "")
@ -185,6 +193,10 @@ class ParserParam(ProcessParamBase):
if audio_config:
self.check_empty(audio_config.get("llm_id"), "Audio VLM")
video_config = self.setups.get("video", "")
if video_config:
self.check_empty(video_config.get("llm_id"), "Video VLM")
email_config = self.setups.get("email", "")
if email_config:
email_output_format = email_config.get("output_format", "")
@ -207,13 +219,34 @@ class Parser(ProcessBase):
elif conf.get("parse_method").lower() == "plain_text":
lines, _ = PlainParser()(blob)
bboxes = [{"text": t} for t, _ in lines]
elif conf.get("parse_method").lower() == "mineru":
mineru_executable = os.environ.get("MINERU_EXECUTABLE", "mineru")
pdf_parser = MinerUParser(mineru_path=mineru_executable)
if not pdf_parser.check_installation():
raise RuntimeError("MinerU not found. Please install it via: pip install -U 'mineru[core]'.")
lines, _ = pdf_parser.parse_pdf(
filepath=name,
binary=blob,
callback=self.callback,
output_dir=os.environ.get("MINERU_OUTPUT_DIR", ""),
delete_output=bool(int(os.environ.get("MINERU_DELETE_OUTPUT", 1))),
)
bboxes = []
for t, poss in lines:
box = {
"image": pdf_parser.crop(poss, 1),
"positions": [[pos[0][-1], *pos[1:]] for pos in pdf_parser.extract_positions(poss)],
"text": t,
}
bboxes.append(box)
else:
vision_model = LLMBundle(self._canvas._tenant_id, LLMType.IMAGE2TEXT, llm_name=conf.get("parse_method"), lang=self._param.setups["pdf"].get("lang"))
lines, _ = VisionParser(vision_model=vision_model)(blob, callback=self.callback)
bboxes = []
for t, poss in lines:
pn, x0, x1, top, bott = poss.split(" ")
bboxes.append({"page_number": int(pn), "x0": float(x0), "x1": float(x1), "top": float(top), "bottom": float(bott), "text": t})
for pn, x0, x1, top, bott in RAGFlowPdfParser.extract_positions(poss):
bboxes.append({"page_number": int(pn[0]), "x0": float(x0), "x1": float(x1), "top": float(top), "bottom": float(bott), "text": t})
if conf.get("output_format") == "json":
self.set_output("json", bboxes)
@ -357,6 +390,17 @@ class Parser(ProcessBase):
self.set_output("text", txt)
def _video(self, name, blob):
self.callback(random.randint(1, 5) / 100.0, "Start to work on a video.")
conf = self._param.setups["video"]
self.set_output("output_format", conf["output_format"])
cv_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.IMAGE2TEXT)
txt = cv_mdl.chat(system="", history=[], gen_conf={}, video_bytes=blob, filename=name)
self.set_output("text", txt)
def _email(self, name, blob):
self.callback(random.randint(1, 5) / 100.0, "Start to work on an email.")
@ -483,6 +527,7 @@ class Parser(ProcessBase):
"word": self._word,
"image": self._image,
"audio": self._audio,
"video": self._video,
"email": self._email,
}
try:

View File

@ -167,7 +167,7 @@ class Base(ABC):
ans = response.choices[0].message.content.strip()
if response.choices[0].finish_reason == "length":
ans = self._length_stop(ans)
return ans, self.total_token_count(response)
return ans, total_token_count_from_response(response)
def _chat_streamly(self, history, gen_conf, **kwargs):
logging.info("[HISTORY STREAMLY]" + json.dumps(history, ensure_ascii=False, indent=4))
@ -193,7 +193,7 @@ class Base(ABC):
reasoning_start = False
ans = resp.choices[0].delta.content
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
tol = num_tokens_from_string(resp.choices[0].delta.content)
@ -283,7 +283,7 @@ class Base(ABC):
for _ in range(self.max_rounds + 1):
logging.info(f"{self.tools=}")
response = self.client.chat.completions.create(model=self.model_name, messages=history, tools=self.tools, tool_choice="auto", **gen_conf)
tk_count += self.total_token_count(response)
tk_count += total_token_count_from_response(response)
if any([not response.choices, not response.choices[0].message]):
raise Exception(f"500 response structure error. Response: {response}")
@ -401,7 +401,7 @@ class Base(ABC):
answer += resp.choices[0].delta.content
yield resp.choices[0].delta.content
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
total_tokens += num_tokens_from_string(resp.choices[0].delta.content)
else:
@ -437,7 +437,7 @@ class Base(ABC):
if not resp.choices[0].delta.content:
resp.choices[0].delta.content = ""
continue
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
total_tokens += num_tokens_from_string(resp.choices[0].delta.content)
else:
@ -472,9 +472,6 @@ class Base(ABC):
yield total_tokens
def total_token_count(self, resp):
return total_token_count_from_response(resp)
def _calculate_dynamic_ctx(self, history):
"""Calculate dynamic context window size"""
@ -604,7 +601,7 @@ class BaiChuanChat(Base):
ans += LENGTH_NOTIFICATION_CN
else:
ans += LENGTH_NOTIFICATION_EN
return ans, self.total_token_count(response)
return ans, total_token_count_from_response(response)
def chat_streamly(self, system, history, gen_conf={}, **kwargs):
if system and history and history[0].get("role") != "system":
@ -627,7 +624,7 @@ class BaiChuanChat(Base):
if not resp.choices[0].delta.content:
resp.choices[0].delta.content = ""
ans = resp.choices[0].delta.content
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
total_tokens += num_tokens_from_string(resp.choices[0].delta.content)
else:
@ -691,9 +688,9 @@ class ZhipuChat(Base):
ans += LENGTH_NOTIFICATION_CN
else:
ans += LENGTH_NOTIFICATION_EN
tk_count = self.total_token_count(resp)
tk_count = total_token_count_from_response(resp)
if resp.choices[0].finish_reason == "stop":
tk_count = self.total_token_count(resp)
tk_count = total_token_count_from_response(resp)
yield ans
except Exception as e:
yield ans + "\n**ERROR**: " + str(e)
@ -812,7 +809,7 @@ class MiniMaxChat(Base):
ans += LENGTH_NOTIFICATION_CN
else:
ans += LENGTH_NOTIFICATION_EN
return ans, self.total_token_count(response)
return ans, total_token_count_from_response(response)
def chat_streamly(self, system, history, gen_conf):
if system and history and history[0].get("role") != "system":
@ -847,7 +844,7 @@ class MiniMaxChat(Base):
if "choices" in resp and "delta" in resp["choices"][0]:
text = resp["choices"][0]["delta"]["content"]
ans = text
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
total_tokens += num_tokens_from_string(text)
else:
@ -886,7 +883,7 @@ class MistralChat(Base):
ans += LENGTH_NOTIFICATION_CN
else:
ans += LENGTH_NOTIFICATION_EN
return ans, self.total_token_count(response)
return ans, total_token_count_from_response(response)
def chat_streamly(self, system, history, gen_conf={}, **kwargs):
if system and history and history[0].get("role") != "system":
@ -1110,7 +1107,7 @@ class BaiduYiyanChat(Base):
system = history[0]["content"] if history and history[0]["role"] == "system" else ""
response = self.client.do(model=self.model_name, messages=[h for h in history if h["role"] != "system"], system=system, **gen_conf).body
ans = response["result"]
return ans, self.total_token_count(response)
return ans, total_token_count_from_response(response)
def chat_streamly(self, system, history, gen_conf={}, **kwargs):
gen_conf["penalty_score"] = ((gen_conf.get("presence_penalty", 0) + gen_conf.get("frequency_penalty", 0)) / 2) + 1
@ -1124,7 +1121,7 @@ class BaiduYiyanChat(Base):
for resp in response:
resp = resp.body
ans = resp["result"]
total_tokens = self.total_token_count(resp)
total_tokens = total_token_count_from_response(resp)
yield ans
@ -1478,7 +1475,7 @@ class LiteLLMBase(ABC):
if response.choices[0].finish_reason == "length":
ans = self._length_stop(ans)
return ans, self.total_token_count(response)
return ans, total_token_count_from_response(response)
def _chat_streamly(self, history, gen_conf, **kwargs):
logging.info("[HISTORY STREAMLY]" + json.dumps(history, ensure_ascii=False, indent=4))
@ -1512,7 +1509,7 @@ class LiteLLMBase(ABC):
reasoning_start = False
ans = delta.content
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
tol = num_tokens_from_string(delta.content)
@ -1665,7 +1662,7 @@ class LiteLLMBase(ABC):
timeout=self.timeout,
)
tk_count += self.total_token_count(response)
tk_count += total_token_count_from_response(response)
if not hasattr(response, "choices") or not response.choices or not response.choices[0].message:
raise Exception(f"500 response structure error. Response: {response}")
@ -1797,7 +1794,7 @@ class LiteLLMBase(ABC):
answer += delta.content
yield delta.content
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
total_tokens += num_tokens_from_string(delta.content)
else:
@ -1846,7 +1843,7 @@ class LiteLLMBase(ABC):
delta = resp.choices[0].delta
if not hasattr(delta, "content") or delta.content is None:
continue
tol = self.total_token_count(resp)
tol = total_token_count_from_response(resp)
if not tol:
total_tokens += num_tokens_from_string(delta.content)
else:
@ -1880,17 +1877,6 @@ class LiteLLMBase(ABC):
yield total_tokens
def total_token_count(self, resp):
try:
return resp.usage.total_tokens
except Exception:
pass
try:
return resp["usage"]["total_tokens"]
except Exception:
pass
return 0
def _calculate_dynamic_ctx(self, history):
"""Calculate dynamic context window size"""

View File

@ -13,13 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import base64
import json
import os
import tempfile
import logging
from abc import ABC
from copy import deepcopy
from io import BytesIO
from pathlib import Path
from urllib.parse import urljoin
import requests
from openai import OpenAI
@ -47,7 +50,7 @@ class Base(ABC):
def describe_with_prompt(self, image, prompt=None):
raise NotImplementedError("Please implement encode method!")
def _form_history(self, system, history, images=[]):
def _form_history(self, system, history, images=None):
hist = []
if system:
hist.append({"role": "system", "content": system})
@ -75,7 +78,7 @@ class Base(ABC):
})
return pmpt
def chat(self, system, history, gen_conf, images=[], **kwargs):
def chat(self, system, history, gen_conf, images=None, **kwargs):
try:
response = self.client.chat.completions.create(
model=self.model_name,
@ -86,7 +89,7 @@ class Base(ABC):
except Exception as e:
return "**ERROR**: " + str(e), 0
def chat_streamly(self, system, history, gen_conf, images=[], **kwargs):
def chat_streamly(self, system, history, gen_conf, images=None, **kwargs):
ans = ""
tk_count = 0
try:
@ -171,6 +174,7 @@ class GptV4(Base):
def __init__(self, key, model_name="gpt-4-vision-preview", lang="Chinese", base_url="https://api.openai.com/v1", **kwargs):
if not base_url:
base_url = "https://api.openai.com/v1"
self.api_key = key
self.client = OpenAI(api_key=key, base_url=base_url)
self.model_name = model_name
self.lang = lang
@ -224,6 +228,61 @@ class QWenCV(GptV4):
base_url = "https://dashscope.aliyuncs.com/compatible-mode/v1"
super().__init__(key, model_name, lang=lang, base_url=base_url, **kwargs)
def chat(self, system, history, gen_conf, images=None, video_bytes=None, filename=""):
if video_bytes:
try:
summary, summary_num_tokens = self._process_video(video_bytes, filename)
return summary, summary_num_tokens
except Exception as e:
return "**ERROR**: " + str(e), 0
return "**ERROR**: Method chat not supported yet.", 0
def _process_video(self, video_bytes, filename):
from dashscope import MultiModalConversation
video_suffix = Path(filename).suffix or ".mp4"
with tempfile.NamedTemporaryFile(delete=False, suffix=video_suffix) as tmp:
tmp.write(video_bytes)
tmp_path = tmp.name
video_path = f"file://{tmp_path}"
messages = [
{
"role": "user",
"content": [
{
"video": video_path,
"fps": 2,
},
{
"text": "Please summarize this video in proper sentences.",
},
],
}
]
def call_api():
response = MultiModalConversation.call(
api_key=self.api_key,
model=self.model_name,
messages=messages,
)
summary = response["output"]["choices"][0]["message"].content[0]["text"]
return summary, num_tokens_from_string(summary)
try:
return call_api()
except Exception as e1:
import dashscope
dashscope.base_http_api_url = "https://dashscope-intl.aliyuncs.com/api/v1"
try:
return call_api()
except Exception as e2:
raise RuntimeError(f"Both default and intl endpoint failed.\nFirst error: {e1}\nSecond error: {e2}")
class HunyuanCV(GptV4):
_FACTORY_NAME = "Tencent Hunyuan"
@ -447,7 +506,7 @@ class OllamaCV(Base):
options["frequency_penalty"] = gen_conf["frequency_penalty"]
return options
def _form_history(self, system, history, images=[]):
def _form_history(self, system, history, images=None):
hist = deepcopy(history)
if system and hist[0]["role"] == "user":
hist.insert(0, {"role": "system", "content": system})
@ -488,7 +547,7 @@ class OllamaCV(Base):
except Exception as e:
return "**ERROR**: " + str(e), 0
def chat(self, system, history, gen_conf, images=[]):
def chat(self, system, history, gen_conf, images=None):
try:
response = self.client.chat(
model=self.model_name,
@ -502,7 +561,7 @@ class OllamaCV(Base):
except Exception as e:
return "**ERROR**: " + str(e), 0
def chat_streamly(self, system, history, gen_conf, images=[]):
def chat_streamly(self, system, history, gen_conf, images=None):
ans = ""
try:
response = self.client.chat(
@ -537,7 +596,7 @@ class GeminiCV(Base):
self.lang = lang
Base.__init__(self, **kwargs)
def _form_history(self, system, history, images=[]):
def _form_history(self, system, history, images=None):
hist = []
if system:
hist.append({"role": "user", "parts": [system, history[0]["content"]]})
@ -574,7 +633,7 @@ class GeminiCV(Base):
return res.text, total_token_count_from_response(res)
def chat(self, system, history, gen_conf, images=[], video_bytes=None, filename=""):
def chat(self, system, history, gen_conf, images=None, video_bytes=None, filename=""):
if video_bytes:
try:
summary, summary_num_tokens = self._process_video(video_bytes, filename)
@ -592,7 +651,7 @@ class GeminiCV(Base):
except Exception as e:
return "**ERROR**: " + str(e), 0
def chat_streamly(self, system, history, gen_conf, images=[]):
def chat_streamly(self, system, history, gen_conf, images=None):
ans = ""
response = None
try:
@ -616,8 +675,6 @@ class GeminiCV(Base):
def _process_video(self, video_bytes, filename):
from google import genai
from google.genai import types
import tempfile
from pathlib import Path
video_size_mb = len(video_bytes) / (1024 * 1024)
client = genai.Client(api_key=self.api_key)
@ -725,7 +782,7 @@ class NvidiaCV(Base):
total_token_count_from_response(response)
)
def chat(self, system, history, gen_conf, images=[], **kwargs):
def chat(self, system, history, gen_conf, images=None, **kwargs):
try:
response = self._request(self._form_history(system, history, images), gen_conf)
return (
@ -735,7 +792,7 @@ class NvidiaCV(Base):
except Exception as e:
return "**ERROR**: " + str(e), 0
def chat_streamly(self, system, history, gen_conf, images=[], **kwargs):
def chat_streamly(self, system, history, gen_conf, images=None, **kwargs):
total_tokens = 0
try:
response = self._request(self._form_history(system, history, images), gen_conf)
@ -801,7 +858,7 @@ class AnthropicCV(Base):
gen_conf["max_tokens"] = self.max_tokens
return gen_conf
def chat(self, system, history, gen_conf, images=[]):
def chat(self, system, history, gen_conf, images=None):
gen_conf = self._clean_conf(gen_conf)
ans = ""
try:
@ -822,7 +879,7 @@ class AnthropicCV(Base):
except Exception as e:
return ans + "\n**ERROR**: " + str(e), 0
def chat_streamly(self, system, history, gen_conf, images=[]):
def chat_streamly(self, system, history, gen_conf, images=None):
gen_conf = self._clean_conf(gen_conf)
total_tokens = 0
try:
@ -906,13 +963,13 @@ class GoogleCV(AnthropicCV, GeminiCV):
else:
return GeminiCV.describe_with_prompt(self, image, prompt)
def chat(self, system, history, gen_conf, images=[]):
def chat(self, system, history, gen_conf, images=None):
if "claude" in self.model_name:
return AnthropicCV.chat(self, system, history, gen_conf, images)
else:
return GeminiCV.chat(self, system, history, gen_conf, images)
def chat_streamly(self, system, history, gen_conf, images=[]):
def chat_streamly(self, system, history, gen_conf, images=None):
if "claude" in self.model_name:
for ans in AnthropicCV.chat_streamly(self, system, history, gen_conf, images):
yield ans

View File

@ -459,12 +459,10 @@ def tree_merge(bull, sections, depth):
return len(BULLET_PATTERN[bull])+1, text
else:
return len(BULLET_PATTERN[bull])+2, text
level_set = set()
lines = []
for section in sections:
level, text = get_level(bull, section)
if not text.strip("\n"):
continue
@ -797,8 +795,8 @@ class Node:
def __init__(self, level, depth=-1, texts=None):
self.level = level
self.depth = depth
self.texts = texts if texts is not None else []  # holds the content
self.children = []  # child nodes
self.texts = texts or []
self.children = []
def add_child(self, child_node):
self.children.append(child_node)
@ -825,35 +823,51 @@ class Node:
return f"Node(level={self.level}, texts={self.texts}, children={len(self.children)})"
def build_tree(self, lines):
stack = [self]
for line in lines:
level, text = line
node = Node(level=level, texts=[text])
if level <= self.depth or self.depth == -1:
while stack and level <= stack[-1].get_level():
stack.pop()
stack[-1].add_child(node)
stack.append(node)
else:
stack = [self]
for level, text in lines:
if self.depth != -1 and level > self.depth:
# Beyond target depth: merge content into the current leaf instead of creating deeper nodes
stack[-1].add_text(text)
return self
continue
# Move up until we find the proper parent whose level is strictly smaller than current
while len(stack) > 1 and level <= stack[-1].get_level():
stack.pop()
node = Node(level=level, texts=[text])
# Attach as child of current parent and descend
stack[-1].add_child(node)
stack.append(node)
return self
def get_tree(self):
tree_list = []
self._dfs(self, tree_list, 0, [])
self._dfs(self, tree_list, [])
return tree_list
def _dfs(self, node, tree_list, current_depth, titles):
def _dfs(self, node, tree_list, titles):
level = node.get_level()
texts = node.get_texts()
child = node.get_children()
if node.get_texts():
if 0 < node.get_level() < self.depth:
titles.extend(node.get_texts())
else:
combined_text = ["\n".join(titles + node.get_texts())]
tree_list.append(combined_text)
if level == 0 and texts:
tree_list.append("\n".join(titles+texts))
# Titles within configured depth are accumulated into the current path
if 1 <= level <= self.depth:
path_titles = titles + texts
else:
path_titles = titles
for child in node.get_children():
self._dfs(child, tree_list, current_depth + 1, titles.copy())
# Body outside the depth limit becomes its own chunk under the current title path
if level > self.depth and texts:
tree_list.append("\n".join(path_titles + texts))
# A leaf title within depth emits its title path as a chunk (header-only section)
elif not child and (1 <= level <= self.depth):
tree_list.append("\n".join(path_titles))
# Recurse into children with the updated title path
for c in child:
self._dfs(c, tree_list, path_titles)

View File

@ -388,6 +388,7 @@ class Dealer:
else:
# Don't need rerank here since Infinity normalizes each way score before fusion.
sim = [sres.field[id].get("_score", 0.0) for id in sres.ids]
sim = [s if s is not None else 0. for s in sim]
tsim = sim
vsim = sim
# Already paginated in search function

View File

@ -114,7 +114,7 @@ class RecursiveAbstractiveProcessing4TreeOrganizedRetrieval:
),
}
],
{"max_tokens": self._max_token},
{"max_tokens": max(self._max_token, 512)}, # fix issue: #10235
)
cnt = re.sub(
"(······\n由于长度的原因,回答被截断了,要继续吗?|For the content length reason, it stopped, continue?)",

View File

@ -228,9 +228,10 @@ async def collect():
canceled = False
if msg.get("doc_id", "") in [GRAPH_RAPTOR_FAKE_DOC_ID, CANVAS_DEBUG_DOC_ID]:
task = msg
if task["task_type"] in ["graphrag", "raptor", "mindmap"] and msg.get("doc_ids", []):
if task["task_type"] in ["graphrag", "raptor", "mindmap"]:
task = TaskService.get_task(msg["id"], msg["doc_ids"])
task["doc_ids"] = msg["doc_ids"]
task["doc_id"] = msg["doc_id"]
task["doc_ids"] = msg.get("doc_ids", []) or []
else:
task = TaskService.get_task(msg["id"])
@ -1052,13 +1053,14 @@ async def task_manager():
async def main():
logging.info(r"""
______ __ ______ __
/_ __/___ ______/ /__ / ____/ _____ _______ __/ /_____ _____
/ / / __ `/ ___/ //_/ / __/ | |/_/ _ \/ ___/ / / / __/ __ \/ ___/
/ / / /_/ (__ ) ,< / /____> </ __/ /__/ /_/ / /_/ /_/ / /
/_/ \__,_/____/_/|_| /_____/_/|_|\___/\___/\__,_/\__/\____/_/
____ __ _
/ _/___ ____ ____ _____/ /_(_)___ ____ ________ ______ _____ _____
/ // __ \/ __ `/ _ \/ ___/ __/ / __ \/ __ \ / ___/ _ \/ ___/ | / / _ \/ ___/
_/ // / / / /_/ / __(__ ) /_/ / /_/ / / / / (__ ) __/ / | |/ / __/ /
/___/_/ /_/\__, /\___/____/\__/_/\____/_/ /_/ /____/\___/_/ |___/\___/_/
/____/
""")
logging.info(f'TaskExecutor: RAGFlow version: {get_ragflow_version()}')
logging.info(f'RAGFlow version: {get_ragflow_version()}')
settings.init_settings()
print_rag_settings()
if sys.platform != "win32":

View File

@ -106,7 +106,7 @@ class RAGFlowOSS:
@use_prefix_path
@use_default_bucket
def put(self, bucket, fnm, binary):
def put(self, bucket, fnm, binary, tenant_id=None):
logging.debug(f"bucket name {bucket}; filename :{fnm}:")
for _ in range(1):
try:
@ -123,7 +123,7 @@ class RAGFlowOSS:
@use_prefix_path
@use_default_bucket
def rm(self, bucket, fnm):
def rm(self, bucket, fnm, tenant_id=None):
try:
self.conn.delete_object(Bucket=bucket, Key=fnm)
except Exception:
@ -131,7 +131,7 @@ class RAGFlowOSS:
@use_prefix_path
@use_default_bucket
def get(self, bucket, fnm):
def get(self, bucket, fnm, tenant_id=None):
for _ in range(1):
try:
r = self.conn.get_object(Bucket=bucket, Key=fnm)
@ -145,7 +145,7 @@ class RAGFlowOSS:
@use_prefix_path
@use_default_bucket
def obj_exist(self, bucket, fnm):
def obj_exist(self, bucket, fnm, tenant_id=None):
try:
if self.conn.head_object(Bucket=bucket, Key=fnm):
return True
@ -157,7 +157,7 @@ class RAGFlowOSS:
@use_prefix_path
@use_default_bucket
def get_presigned_url(self, bucket, fnm, expires):
def get_presigned_url(self, bucket, fnm, expires, tenant_id=None):
for _ in range(10):
try:
r = self.conn.generate_presigned_url('get_object',

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow-sdk"
version = "0.21.0"
version = "0.21.1"
description = "Python client sdk of [RAGFlow](https://github.com/infiniflow/ragflow). RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license = { text = "Apache License, Version 2.0" }

sdk/python/uv.lock generated
View File

@ -342,7 +342,7 @@ wheels = [
[[package]]
name = "ragflow-sdk"
version = "0.21.0"
version = "0.21.1"
source = { virtual = "." }
dependencies = [
{ name = "beartype" },

View File

@ -83,7 +83,7 @@ class TestChunksRetrieval:
             "ValueError('Search does not support negative slicing.')",
             marks=pytest.mark.skip,
         ),
-        pytest.param({"page": 2, "page_size": 2}, 0, 2, "", marks=pytest.mark.skip(reason="issues/6646")),
+        ({"page": 2, "page_size": 2}, 0, 2, ""),
         ({"page": 3, "page_size": 2}, 0, 0, ""),
         ({"page": "3", "page_size": 2}, 0, 0, ""),
         pytest.param(
@ -124,9 +124,9 @@ class TestChunksRetrieval:
             marks=pytest.mark.skip,
         ),
         # ({"page_size": 0}, 0, 0, ""),
-        ({"page_size": 1}, 0, 1, ""),
+        pytest.param({"page_size": 1}, 0, 1, "", marks=pytest.mark.skip(reason="issues/10692")),
         ({"page_size": 5}, 0, 4, ""),
-        ({"page_size": "1"}, 0, 1, ""),
+        pytest.param({"page_size": "1"}, 0, 1, "", marks=pytest.mark.skip(reason="issues/10692")),
         # ({"page_size": -1}, 0, 0, ""),
         pytest.param(
             {"page_size": "a"},

uv.lock (generated)

@ -31,15 +31,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9f/1c/a17fb513aeb684fb83bef5f395910f53103ab30308bbdd77fd66d6698c46/accelerate-1.9.0-py3-none-any.whl", hash = "sha256:c24739a97ade1d54af4549a65f8b6b046adc87e2b3e4d6c66516e32c53d5a8f1", size = 367073, upload-time = "2025-07-16T16:24:52.957Z" },
]
[[package]]
name = "acres"
version = "0.5.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/ec/ba/94b63a9af588fbf7bde25ce44d55456199654a92fb7b2337767198a824b0/acres-0.5.0.tar.gz", hash = "sha256:128b6447bf5df3b6210264feccbfa018b4ac5bd337358319aec6563f99db8f3a", size = 57750, upload-time = "2025-06-04T12:40:30.329Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/39/e8/806475fe4cdfd8635535d3fa11bd61d19b7cc94b61b9147ebdd2ab4cbbee/acres-0.5.0-py3-none-any.whl", hash = "sha256:fcc32b974b510897de0f041609b4234f9ff03e2e960aea088f63973fb106c772", size = 12703, upload-time = "2025-06-04T12:40:28.745Z" },
]
[[package]]
name = "aiofiles"
version = "24.1.0"
@ -300,9 +291,9 @@ dependencies = [
{ name = "python-dateutil" },
{ name = "types-python-dateutil" },
]
-sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/2e/00/0f6e8fcdb23ea632c866620cc872729ff43ed91d284c866b515c6342b173/arrow-1.3.0.tar.gz", hash = "sha256:d4540617648cb5f895730f1ad8c82a65f2dad0166f57b75f3ca54759c4d67a85" }
+sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/2e/00/0f6e8fcdb23ea632c866620cc872729ff43ed91d284c866b515c6342b173/arrow-1.3.0.tar.gz", hash = "sha256:d4540617648cb5f895730f1ad8c82a65f2dad0166f57b75f3ca54759c4d67a85", size = 131960, upload-time = "2023-09-30T22:11:18.25Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f8/ed/e97229a566617f2ae958a6b13e7cc0f585470eac730a73e9e82c32a3cdd2/arrow-1.3.0-py3-none-any.whl", hash = "sha256:c728b120ebc00eb84e01882a6f5e7927a53960aa990ce7dd2b10f39005a67f80" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f8/ed/e97229a566617f2ae958a6b13e7cc0f585470eac730a73e9e82c32a3cdd2/arrow-1.3.0-py3-none-any.whl", hash = "sha256:c728b120ebc00eb84e01882a6f5e7927a53960aa990ce7dd2b10f39005a67f80", size = 66419, upload-time = "2023-09-30T22:11:16.072Z" },
]
[[package]]
@ -827,15 +818,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626, upload-time = "2025-05-02T08:34:40.053Z" },
]
[[package]]
name = "ci-info"
version = "0.3.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/11/27/938d6ef93df09c686dcee1c7334578274320e98e7bf912a6409cf2c8c3e5/ci-info-0.3.0.tar.gz", hash = "sha256:1fd50cbd401f29adffeeb18b0489e232d16ac1a7458ac6bc316deab6ae535fb0", size = 25169, upload-time = "2022-07-27T17:22:49.365Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/13/c3/8ac768b389d5b6dda1c3ce7992b3acd2b46401f9b71439123858b17b1a2c/ci_info-0.3.0-py3-none-any.whl", hash = "sha256:e9e05d262a6c48aa03cd904475de5ce8c4da8a5435e516631c795d0487dc9e07", size = 7764, upload-time = "2022-07-27T17:22:47.196Z" },
]
[[package]]
name = "click"
version = "8.2.1"
@ -930,24 +912,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/07/1d/62f5bf92e12335eb63517f42671ed78512d48bbc69e02a942dd7b90f03f0/compressed_rtf-1.0.7-py3-none-any.whl", hash = "sha256:b7904921d78c67a0a4b7fff9fb361a00ae2b447b6edca010ce321cd98fa0fcc0", size = 7968, upload-time = "2025-03-24T23:03:57.433Z" },
]
[[package]]
name = "configobj"
version = "5.0.9"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/f5/c4/c7f9e41bc2e5f8eeae4a08a01c91b2aea3dfab40a3e14b25e87e7db8d501/configobj-5.0.9.tar.gz", hash = "sha256:03c881bbf23aa07bccf1b837005975993c4ab4427ba57f959afdd9d1a2386848", size = 101518, upload-time = "2024-09-21T12:47:46.315Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a6/c4/0679472c60052c27efa612b4cd3ddd2a23e885dcdc73461781d2c802d39e/configobj-5.0.9-py2.py3-none-any.whl", hash = "sha256:1ba10c5b6ee16229c79a05047aeda2b55eb4e80d7c7d8ecf17ec1ca600c79882", size = 35615, upload-time = "2024-11-26T14:03:32.972Z" },
]
[[package]]
name = "configparser"
version = "7.2.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/8b/ac/ea19242153b5e8be412a726a70e82c7b5c1537c83f61b20995b2eda3dcd7/configparser-7.2.0.tar.gz", hash = "sha256:b629cc8ae916e3afbd36d1b3d093f34193d851e11998920fdcfc4552218b7b70", size = 51273, upload-time = "2025-03-08T16:04:09.339Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/09/fe/f61e7129e9e689d9e40bbf8a36fb90f04eceb477f4617c02c6a18463e81f/configparser-7.2.0-py3-none-any.whl", hash = "sha256:fee5e1f3db4156dcd0ed95bc4edfa3580475537711f67a819c966b389d09ce62", size = 17232, upload-time = "2025-03-08T16:04:07.743Z" },
]
[[package]]
name = "contourpy"
version = "1.3.2"
@ -1507,19 +1471,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c1/8b/5fe2cc11fee489817272089c4203e679c63b570a5aaeb18d852ae3cbba6a/et_xmlfile-2.0.0-py3-none-any.whl", hash = "sha256:7a91720bc756843502c3b7504c77b8fe44217c85c537d85037f0f536151b2caa", size = 18059, upload-time = "2024-10-25T17:25:39.051Z" },
]
[[package]]
name = "etelemetry"
version = "0.3.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "ci-info" },
{ name = "packaging" },
{ name = "requests" },
]
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/83/27/f997c9da0e179986fadd6c8474d16743f1b3697c129c2fcd1e739cd038c2/etelemetry-0.3.1-py3-none-any.whl", hash = "sha256:a64f09bcd55cbfa5684e4d9fb6d1d6a018ab99d2ea28e638435c4c26e6814a6b" },
]
[[package]]
name = "events"
version = "0.5"
@ -2742,15 +2693,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/20/b0/36bd937216ec521246249be3bf9855081de4c5e06a0c9b4219dbeda50373/importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd", size = 27656, upload-time = "2025-04-27T15:29:00.214Z" },
]
[[package]]
name = "importlib-resources"
version = "6.5.2"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/cf/8c/f834fbf984f691b4f7ff60f50b514cc3de5cc08abfc3295564dd89c5e2e7/importlib_resources-6.5.2.tar.gz", hash = "sha256:185f87adef5bcc288449d98fb4fba07cea78bc036455dd44c5fc4a2fe78fed2c", size = 44693, upload-time = "2025-01-03T18:51:56.698Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a4/ed/1f1afb2e9e7f38a545d628f864d562a5ae64fe6f7a10e28ffb9b185b4e89/importlib_resources-6.5.2-py3-none-any.whl", hash = "sha256:789cfdc3ed28c78b67a06acb8126751ced69a3d5f79c095a98298cd8a760ccec", size = 37461, upload-time = "2025-01-03T18:51:54.306Z" },
]
[[package]]
name = "infinity-emb"
version = "0.0.66"
@ -2767,7 +2709,7 @@ wheels = [
[[package]]
name = "infinity-sdk"
version = "0.6.0"
version = "0.6.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "numpy" },
@ -2784,7 +2726,7 @@ dependencies = [
{ name = "thrift" },
]
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f4/12/1ce243cbede6da5fc28e5462d90d96b13995446b3a90889287d31255b36e/infinity_sdk-0.6.0-py3-none-any.whl", hash = "sha256:e379853ffc44acba428572d633032e6c9bb842d1f08e9cad690916f52a8c6ba8", size = 75256, upload-time = "2025-10-14T12:05:13.918Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/44/0e/7a596a41a79d15bb6c87e76862aa287bb98243a40fa7a31096b57df01613/infinity_sdk-0.6.1-py3-none-any.whl", hash = "sha256:b9cb1f7fee28569de8b763c353aa299fa141af70e67ceadc70562c84237229e4", size = 75260, upload-time = "2025-10-21T13:11:06.265Z" },
]
[[package]]
@ -3133,15 +3075,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0c/29/0348de65b8cc732daa3e33e67806420b2ae89bdce2b04af740289c5c6c8c/loguru-0.7.3-py3-none-any.whl", hash = "sha256:31a33c10c8e1e10422bfd431aeb5d351c7cf7fa671e3c4df004162264b28220c", size = 61595, upload-time = "2024-12-06T11:20:54.538Z" },
]
[[package]]
name = "looseversion"
version = "1.3.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/64/7e/f13dc08e0712cc2eac8e56c7909ce2ac280dbffef2ffd87bd5277ce9d58b/looseversion-1.3.0.tar.gz", hash = "sha256:ebde65f3f6bb9531a81016c6fef3eb95a61181adc47b7f949e9c0ea47911669e", size = 8799, upload-time = "2023-07-05T16:07:51.173Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/4e/74/d5405b9b3b12e9176dff223576d7090bc161092878f533fd0dc23dd6ae1d/looseversion-1.3.0-py2.py3-none-any.whl", hash = "sha256:781ef477b45946fc03dd4c84ea87734b21137ecda0e1e122bcb3c8d16d2a56e0", size = 8237, upload-time = "2023-07-05T16:07:49.782Z" },
]
[[package]]
name = "lxml"
version = "5.3.0"
@ -3748,50 +3681,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/eb/8d/776adee7bbf76365fdd7f2552710282c79a4ead5d2a46408c9043a2b70ba/networkx-3.5-py3-none-any.whl", hash = "sha256:0030d386a9a06dee3565298b4a734b68589749a544acbb6c412dc9e2489ec6ec", size = 2034406, upload-time = "2025-05-29T11:35:04.961Z" },
]
[[package]]
name = "nibabel"
version = "5.3.2"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "importlib-resources", marker = "python_full_version < '3.12'" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "typing-extensions" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/d9/61/33036cb89f1ec1fedbc4039602345d830b27cbd8a5c7bf28c2e5b5de3ea2/nibabel-5.3.2.tar.gz", hash = "sha256:0bdca6503b1c784b446c745a4542367de7756cfba0d72143b91f9ffb78be569b", size = 4504842, upload-time = "2024-10-23T14:19:55.866Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/43/b2/dc384197be44e2a640bb43311850e23c2c30f3b82ce7c8cdabbf0e53045e/nibabel-5.3.2-py3-none-any.whl", hash = "sha256:52970a5a8a53b1b55249cba4d9bcfaa8cc57e3e5af35a29d7352237e8680a6f8", size = 3293839, upload-time = "2024-10-23T14:19:52.65Z" },
]
[[package]]
name = "nipype"
version = "1.10.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "acres" },
{ name = "click" },
{ name = "etelemetry" },
{ name = "filelock" },
{ name = "looseversion" },
{ name = "networkx", version = "3.4.2", source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }, marker = "python_full_version < '3.11'" },
{ name = "networkx", version = "3.5", source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }, marker = "python_full_version >= '3.11'" },
{ name = "nibabel" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "prov" },
{ name = "puremagic" },
{ name = "pydot" },
{ name = "python-dateutil" },
{ name = "rdflib" },
{ name = "scipy" },
{ name = "simplejson" },
{ name = "traits" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/e1/1a/7ff53f5802d37085a55d7c6df7c6ebebbc8a044930628ca21f7e661c1983/nipype-1.10.0.tar.gz", hash = "sha256:19e5d6cefa70997198f78bc665ef4d3d3cb53325b5b98a72e51aefadaf6b3e0e", size = 2919807, upload-time = "2025-03-19T23:30:07.473Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/91/53/c5ad0140e2e4c4d92ae45558587e26b2ebc62e39eafa30b74cb052d9375b/nipype-1.10.0-py3-none-any.whl", hash = "sha256:56ced3272e77952e330f13e28328a8fe2e8a69587ca89bc34234f7d06f8319bb", size = 3200685, upload-time = "2025-03-19T23:30:05.357Z" },
]
[[package]]
name = "nltk"
version = "3.9.1"
@ -4473,15 +4362,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/00/2f/804f58f0b856ab3bf21617cccf5b39206e6c4c94c2cd227bde125ea6105f/parameterized-0.9.0-py2.py3-none-any.whl", hash = "sha256:4e0758e3d41bea3bbd05ec14fc2c24736723f243b28d702081aef438c9372b1b", size = 20475, upload-time = "2023-03-27T02:01:09.31Z" },
]
[[package]]
name = "pathlib"
version = "1.0.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/ac/aa/9b065a76b9af472437a0059f77e8f962fe350438b927cb80184c32f075eb/pathlib-1.0.1.tar.gz", hash = "sha256:6940718dfc3eff4258203ad5021090933e5c04707d5ca8cc9e73c94a7894ea9f", size = 49298, upload-time = "2014-09-03T15:41:57.18Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/78/f9/690a8600b93c332de3ab4a344a4ac34f00c8f104917061f779db6a918ed6/pathlib-1.0.1-py3-none-any.whl", hash = "sha256:f35f95ab8b0f59e6d354090350b44a80a80635d22efdedfa84c7ad1cf0a74147", size = 14363, upload-time = "2022-05-04T13:37:20.585Z" },
]
[[package]]
name = "patsy"
version = "1.0.1"
@ -4817,21 +4697,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3a/fa/4c3ac5527ed2e5f3577167ecd5f8180ffcdc8bdd59c9f143409c19706456/protobuf-5.27.2-py3-none-any.whl", hash = "sha256:54330f07e4949d09614707c48b06d1a22f8ffb5763c159efd5c0928326a91470", size = 164772, upload-time = "2024-06-25T20:54:52.196Z" },
]
[[package]]
name = "prov"
version = "2.1.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "networkx", version = "3.4.2", source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }, marker = "python_full_version < '3.11'" },
{ name = "networkx", version = "3.5", source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }, marker = "python_full_version >= '3.11'" },
{ name = "pydot" },
{ name = "python-dateutil" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/c6/bb/442f2e478061543c9c229f48c2d3a43cb0a77642584edecac126bc1ade99/prov-2.1.1.tar.gz", hash = "sha256:7d012b164f5bbb42e118ed9d25788ab012d09082b722bc9dd4e811a309ea57f5", size = 136802, upload-time = "2025-06-24T22:01:50.767Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/76/17/5703ad2380e57ecceb2700e30646ba0d856d9b90c9f33b01c68a3e298e3a/prov-2.1.1-py3-none-any.whl", hash = "sha256:04f74f9151b68f0bda68c943e111b1275207b19e197689043644a1b355a9d035", size = 425860, upload-time = "2025-06-24T22:01:49.485Z" },
]
[[package]]
name = "psutil"
version = "7.0.0"
@ -4891,15 +4756,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7b/08/9c66c269b0d417a0af9fb969535f0371b8c538633535a7a6a5ca3f9231e2/psycopg2_binary-2.9.9-cp312-cp312-win_amd64.whl", hash = "sha256:81ff62668af011f9a48787564ab7eded4e9fb17a4a6a74af5ffa6a457400d2ab", size = 1163864, upload-time = "2023-10-28T09:37:28.155Z" },
]
[[package]]
name = "puremagic"
version = "1.30"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/dd/7f/9998706bc516bdd664ccf929a1da6c6e5ee06e48f723ce45aae7cf3ff36e/puremagic-1.30.tar.gz", hash = "sha256:f9ff7ac157d54e9cf3bff1addfd97233548e75e685282d84ae11e7ffee1614c9", size = 314785, upload-time = "2025-07-04T18:48:36.061Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/91/ed/1e347d85d05b37a8b9a039ca832e5747e1e5248d0bd66042783ef48b4a37/puremagic-1.30-py3-none-any.whl", hash = "sha256:5eeeb2dd86f335b9cfe8e205346612197af3500c6872dffebf26929f56e9d3c1", size = 43304, upload-time = "2025-07-04T18:48:34.801Z" },
]
[[package]]
name = "py"
version = "1.11.0"
@ -5156,18 +5012,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ca/8f/86d7931c62013a5a7ebf4e1642a87d4a6050c0f570e714f61b0df1984c62/pydivert-2.1.0-py2.py3-none-any.whl", hash = "sha256:382db488e3c37c03ec9ec94e061a0b24334d78dbaeebb7d4e4d32ce4355d9da1", size = 104718, upload-time = "2017-10-20T21:36:56.726Z" },
]
[[package]]
name = "pydot"
version = "4.0.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "pyparsing" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/50/35/b17cb89ff865484c6a20ef46bf9d95a5f07328292578de0b295f4a6beec2/pydot-4.0.1.tar.gz", hash = "sha256:c2148f681c4a33e08bf0e26a9e5f8e4099a82e0e2a068098f32ce86577364ad5", size = 162594, upload-time = "2025-06-17T20:09:56.454Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7e/32/a7125fb28c4261a627f999d5fb4afff25b523800faed2c30979949d6facd/pydot-4.0.1-py3-none-any.whl", hash = "sha256:869c0efadd2708c0be1f916eb669f3d664ca684bc57ffb7ecc08e70d5e93fee6", size = 37087, upload-time = "2025-06-17T20:09:55.25Z" },
]
[[package]]
name = "pyee"
version = "13.0.0"
@ -5560,20 +5404,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ba/3a/2ae996277b4b50f17d61f0603efd8253cb2d79cc7ae159468007b586396d/pywin32-311-cp312-cp312-win_arm64.whl", hash = "sha256:e286f46a9a39c4a18b319c28f59b61de793654af2f395c102b4f819e584b5852", size = 8710102, upload-time = "2025-07-14T20:13:24.682Z" },
]
[[package]]
name = "pyxnat"
version = "1.6.3"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "lxml" },
{ name = "pathlib" },
{ name = "requests" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/7f/24/c8737985e65d8adbbf51970b2a75cf54b5376d68d251159d9b7c5c9673b6/pyxnat-1.6.3.tar.gz", hash = "sha256:ddd074f35f7b35b5dccb6f713b20cf083c79d6e0d3d9cafbcaabb7c661b0cc68", size = 82466, upload-time = "2025-02-04T19:03:53.801Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1c/df/257c0f0af8e624daa924a3899f88e6465f162d72ada3fb0b96df9e61a2d6/pyxnat-1.6.3-py3-none-any.whl", hash = "sha256:a6d84dd24486eab9731a5de5df4fb486021b095665083c2fb1d33ac1e719d3c5", size = 95408, upload-time = "2025-02-04T19:03:51.707Z" },
]
[[package]]
name = "pyyaml"
version = "6.0.2"
@ -5637,7 +5467,7 @@ wheels = [
[[package]]
name = "ragflow"
version = "0.21.0"
version = "0.21.1"
source = { virtual = "." }
dependencies = [
{ name = "akshare" },
@ -5849,7 +5679,7 @@ requires-dist = [
{ name = "httpx", extras = ["socks"], specifier = ">=0.28.1,<0.29.0" },
{ name = "huggingface-hub", specifier = ">=0.25.0,<0.26.0" },
{ name = "infinity-emb", specifier = ">=0.0.66,<0.0.67" },
{ name = "infinity-sdk", specifier = "==0.6.0" },
{ name = "infinity-sdk", specifier = "==0.6.1" },
{ name = "itsdangerous", specifier = "==2.1.2" },
{ name = "json-repair", specifier = "==0.35.0" },
{ name = "langfuse", specifier = ">=2.60.0" },
@ -5985,19 +5815,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1e/30/53f41b7b728a48da8974075f56c57200d7b11e4e9fa93be3cabf8218dc0c/ranx-0.3.20-py3-none-any.whl", hash = "sha256:e056e4d5981b0328b045868cc7064fc57a545f36009fbe9bb602295ec33335de", size = 99318, upload-time = "2024-07-01T17:40:27.095Z" },
]
[[package]]
name = "rdflib"
version = "7.2.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "isodate", marker = "python_full_version < '3.11'" },
{ name = "pyparsing" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/8d/99/d2fec85e5f6bdfe4367dea143119cb4469bf48710487939df0abf7e22003/rdflib-7.2.1.tar.gz", hash = "sha256:cf9b7fa25234e8925da8b1fb09700f8349b5f0f100e785fb4260e737308292ac", size = 4873802, upload-time = "2025-09-19T02:33:36.492Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/31/98/7fa830bb4b9da21905683a5352aa0a01a1f3082328ae976aad341e980c23/rdflib-7.2.1-py3-none-any.whl", hash = "sha256:1a175bc1386a167a42fbfaba003bfa05c164a2a3ca3cb9c0c97f9c9638ca6ac2", size = 565423, upload-time = "2025-09-19T02:33:30.889Z" },
]
[[package]]
name = "readability-lxml"
version = "0.8.1"
@ -6595,54 +6412,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" },
]
[[package]]
name = "simplejson"
version = "3.20.2"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/41/f4/a1ac5ed32f7ed9a088d62a59d410d4c204b3b3815722e2ccfb491fa8251b/simplejson-3.20.2.tar.gz", hash = "sha256:5fe7a6ce14d1c300d80d08695b7f7e633de6cd72c80644021874d985b3393649", size = 85784, upload-time = "2025-09-26T16:29:36.64Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/78/09/2bf3761de89ea2d91bdce6cf107dcd858892d0adc22c995684878826cc6b/simplejson-3.20.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:6d7286dc11af60a2f76eafb0c2acde2d997e87890e37e24590bb513bec9f1bc5", size = 94039, upload-time = "2025-09-26T16:27:29.283Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0f/33/c3277db8931f0ae9e54b9292668863365672d90fb0f632f4cf9829cb7d68/simplejson-3.20.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c01379b4861c3b0aa40cba8d44f2b448f5743999aa68aaa5d3ef7049d4a28a2d", size = 75894, upload-time = "2025-09-26T16:27:30.378Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/fa/ea/ae47b04d03c7c8a7b7b1a8b39a6e27c3bd424e52f4988d70aca6293ff5e5/simplejson-3.20.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a16b029ca25645b3bc44e84a4f941efa51bf93c180b31bd704ce6349d1fc77c1", size = 76116, upload-time = "2025-09-26T16:27:31.42Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/4b/42/6c9af551e5a8d0f171d6dce3d9d1260068927f7b80f1f09834e07887c8c4/simplejson-3.20.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e22a5fb7b1437ffb057e02e1936a3bfb19084ae9d221ec5e9f4cf85f69946b6", size = 138827, upload-time = "2025-09-26T16:27:32.486Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/2b/22/5e268bbcbe9f75577491e406ec0a5536f5b2fa91a3b52031fea51cd83e1d/simplejson-3.20.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d8b6ff02fc7b8555c906c24735908854819b0d0dc85883d453e23ca4c0445d01", size = 146772, upload-time = "2025-09-26T16:27:34.036Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/71/b4/800f14728e2ad666f420dfdb57697ca128aeae7f991b35759c09356b829a/simplejson-3.20.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2bfc1c396ad972ba4431130b42307b2321dba14d988580c1ac421ec6a6b7cee3", size = 134497, upload-time = "2025-09-26T16:27:35.211Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c1/b9/c54eef4226c6ac8e9a389bbe5b21fef116768f97a2dc1a683c716ffe66ef/simplejson-3.20.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a97249ee1aee005d891b5a211faf58092a309f3d9d440bc269043b08f662eda", size = 138172, upload-time = "2025-09-26T16:27:36.44Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/09/36/4e282f5211b34620f1b2e4b51d9ddaab5af82219b9b7b78360a33f7e5387/simplejson-3.20.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f1036be00b5edaddbddbb89c0f80ed229714a941cfd21e51386dc69c237201c2", size = 140272, upload-time = "2025-09-26T16:27:37.605Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/aa/b0/94ad2cf32f477c449e1f63c863d8a513e2408d651c4e58fe4b6a7434e168/simplejson-3.20.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:5d6f5bacb8cdee64946b45f2680afa3f54cd38e62471ceda89f777693aeca4e4", size = 140468, upload-time = "2025-09-26T16:27:39.015Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e5/46/827731e4163be3f987cb8ee90f5d444161db8f540b5e735355faa098d9bc/simplejson-3.20.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:8db6841fb796ec5af632f677abf21c6425a1ebea0d9ac3ef1a340b8dc69f52b8", size = 148700, upload-time = "2025-09-26T16:27:40.171Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c7/28/c32121064b1ec2fb7b5d872d9a1abda62df064d35e0160eddfa907118343/simplejson-3.20.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c0a341f7cc2aae82ee2b31f8a827fd2e51d09626f8b3accc441a6907c88aedb7", size = 141323, upload-time = "2025-09-26T16:27:41.324Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/46/b6/c897c54326fe86dd12d101981171a49361949f4728294f418c3b86a1af77/simplejson-3.20.2-cp310-cp310-win32.whl", hash = "sha256:27f9c01a6bc581d32ab026f515226864576da05ef322d7fc141cd8a15a95ce53", size = 74377, upload-time = "2025-09-26T16:27:42.533Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ad/87/a6e03d4d80cca99c1fee4e960f3440e2f21be9470e537970f960ca5547f1/simplejson-3.20.2-cp310-cp310-win_amd64.whl", hash = "sha256:c0a63ec98a4547ff366871bf832a7367ee43d047bcec0b07b66c794e2137b476", size = 76081, upload-time = "2025-09-26T16:27:43.945Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b9/3e/96898c6c66d9dca3f9bd14d7487bf783b4acc77471b42f979babbb68d4ca/simplejson-3.20.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:06190b33cd7849efc413a5738d3da00b90e4a5382fd3d584c841ac20fb828c6f", size = 92633, upload-time = "2025-09-26T16:27:45.028Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/6b/a2/cd2e10b880368305d89dd540685b8bdcc136df2b3c76b5ddd72596254539/simplejson-3.20.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4ad4eac7d858947a30d2c404e61f16b84d16be79eb6fb316341885bdde864fa8", size = 75309, upload-time = "2025-09-26T16:27:46.142Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/5d/02/290f7282eaa6ebe945d35c47e6534348af97472446951dce0d144e013f4c/simplejson-3.20.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b392e11c6165d4a0fde41754a0e13e1d88a5ad782b245a973dd4b2bdb4e5076a", size = 75308, upload-time = "2025-09-26T16:27:47.542Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/43/91/43695f17b69e70c4b0b03247aa47fb3989d338a70c4b726bbdc2da184160/simplejson-3.20.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51eccc4e353eed3c50e0ea2326173acdc05e58f0c110405920b989d481287e51", size = 143733, upload-time = "2025-09-26T16:27:48.673Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9b/4b/fdcaf444ac1c3cbf1c52bf00320c499e1cf05d373a58a3731ae627ba5e2d/simplejson-3.20.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:306e83d7c331ad833d2d43c76a67f476c4b80c4a13334f6e34bb110e6105b3bd", size = 153397, upload-time = "2025-09-26T16:27:49.89Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c4/83/21550f81a50cd03599f048a2d588ffb7f4c4d8064ae091511e8e5848eeaa/simplejson-3.20.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f820a6ac2ef0bc338ae4963f4f82ccebdb0824fe9caf6d660670c578abe01013", size = 141654, upload-time = "2025-09-26T16:27:51.168Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/cf/54/d76c0e72ad02450a3e723b65b04f49001d0e73218ef6a220b158a64639cb/simplejson-3.20.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21e7a066528a5451433eb3418184f05682ea0493d14e9aae690499b7e1eb6b81", size = 144913, upload-time = "2025-09-26T16:27:52.331Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3f/49/976f59b42a6956d4aeb075ada16ad64448a985704bc69cd427a2245ce835/simplejson-3.20.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:438680ddde57ea87161a4824e8de04387b328ad51cfdf1eaf723623a3014b7aa", size = 144568, upload-time = "2025-09-26T16:27:53.41Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/60/c7/30bae30424ace8cd791ca660fed454ed9479233810fe25c3f3eab3d9dc7b/simplejson-3.20.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:cac78470ae68b8d8c41b6fca97f5bf8e024ca80d5878c7724e024540f5cdaadb", size = 146239, upload-time = "2025-09-26T16:27:54.502Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/79/3e/7f3b7b97351c53746e7b996fcd106986cda1954ab556fd665314756618d2/simplejson-3.20.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:7524e19c2da5ef281860a3d74668050c6986be15c9dd99966034ba47c68828c2", size = 154497, upload-time = "2025-09-26T16:27:55.885Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1d/48/7241daa91d0bf19126589f6a8dcbe8287f4ed3d734e76fd4a092708947be/simplejson-3.20.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0e9b6d845a603b2eef3394eb5e21edb8626cd9ae9a8361d14e267eb969dbe413", size = 148069, upload-time = "2025-09-26T16:27:57.039Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e6/f4/ef18d2962fe53e7be5123d3784e623859eec7ed97060c9c8536c69d34836/simplejson-3.20.2-cp311-cp311-win32.whl", hash = "sha256:47d8927e5ac927fdd34c99cc617938abb3624b06ff86e8e219740a86507eb961", size = 74158, upload-time = "2025-09-26T16:27:58.265Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/35/fd/3d1158ecdc573fdad81bf3cc78df04522bf3959758bba6597ba4c956c74d/simplejson-3.20.2-cp311-cp311-win_amd64.whl", hash = "sha256:ba4edf3be8e97e4713d06c3d302cba1ff5c49d16e9d24c209884ac1b8455520c", size = 75911, upload-time = "2025-09-26T16:27:59.292Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9d/9e/1a91e7614db0416885eab4136d49b7303de20528860ffdd798ce04d054db/simplejson-3.20.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:4376d5acae0d1e91e78baeba4ee3cf22fbf6509d81539d01b94e0951d28ec2b6", size = 93523, upload-time = "2025-09-26T16:28:00.356Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/5e/2b/d2413f5218fc25608739e3d63fe321dfa85c5f097aa6648dbe72513a5f12/simplejson-3.20.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f8fe6de652fcddae6dec8f281cc1e77e4e8f3575249e1800090aab48f73b4259", size = 75844, upload-time = "2025-09-26T16:28:01.756Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ad/f1/efd09efcc1e26629e120fef59be059ce7841cc6e1f949a4db94f1ae8a918/simplejson-3.20.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:25ca2663d99328d51e5a138f22018e54c9162438d831e26cfc3458688616eca8", size = 75655, upload-time = "2025-09-26T16:28:03.037Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/97/ec/5c6db08e42f380f005d03944be1af1a6bd501cc641175429a1cbe7fb23b9/simplejson-3.20.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12a6b2816b6cab6c3fd273d43b1948bc9acf708272074c8858f579c394f4cbc9", size = 150335, upload-time = "2025-09-26T16:28:05.027Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/81/f5/808a907485876a9242ec67054da7cbebefe0ee1522ef1c0be3bfc90f96f6/simplejson-3.20.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac20dc3fcdfc7b8415bfc3d7d51beccd8695c3f4acb7f74e3a3b538e76672868", size = 158519, upload-time = "2025-09-26T16:28:06.5Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/66/af/b8a158246834645ea890c36136584b0cc1c0e4b83a73b11ebd9c2a12877c/simplejson-3.20.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:db0804d04564e70862ef807f3e1ace2cc212ef0e22deb1b3d6f80c45e5882c6b", size = 148571, upload-time = "2025-09-26T16:28:07.715Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/20/05/ed9b2571bbf38f1a2425391f18e3ac11cb1e91482c22d644a1640dea9da7/simplejson-3.20.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:979ce23ea663895ae39106946ef3d78527822d918a136dbc77b9e2b7f006237e", size = 152367, upload-time = "2025-09-26T16:28:08.921Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/81/2c/bad68b05dd43e93f77994b920505634d31ed239418eb6a88997d06599983/simplejson-3.20.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a2ba921b047bb029805726800819675249ef25d2f65fd0edb90639c5b1c3033c", size = 150205, upload-time = "2025-09-26T16:28:10.086Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/69/46/90c7fc878061adafcf298ce60cecdee17a027486e9dce507e87396d68255/simplejson-3.20.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:12d3d4dc33770069b780cc8f5abef909fe4a3f071f18f55f6d896a370fd0f970", size = 151823, upload-time = "2025-09-26T16:28:11.329Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ab/27/b85b03349f825ae0f5d4f780cdde0bbccd4f06c3d8433f6a3882df887481/simplejson-3.20.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:aff032a59a201b3683a34be1169e71ddda683d9c3b43b261599c12055349251e", size = 158997, upload-time = "2025-09-26T16:28:12.917Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/71/ad/d7f3c331fb930638420ac6d236db68e9f4c28dab9c03164c3cd0e7967e15/simplejson-3.20.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:30e590e133b06773f0dc9c3f82e567463df40598b660b5adf53eb1c488202544", size = 154367, upload-time = "2025-09-26T16:28:14.393Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f0/46/5c67324addd40fa2966f6e886cacbbe0407c03a500db94fb8bb40333fcdf/simplejson-3.20.2-cp312-cp312-win32.whl", hash = "sha256:8d7be7c99939cc58e7c5bcf6bb52a842a58e6c65e1e9cdd2a94b697b24cddb54", size = 74285, upload-time = "2025-09-26T16:28:15.931Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/fa/c9/5cc2189f4acd3a6e30ffa9775bf09b354302dbebab713ca914d7134d0f29/simplejson-3.20.2-cp312-cp312-win_amd64.whl", hash = "sha256:2c0b4a67e75b945489052af6590e7dca0ed473ead5d0f3aad61fa584afe814ab", size = 75969, upload-time = "2025-09-26T16:28:17.017Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/05/5b/83e1ff87eb60ca706972f7e02e15c0b33396e7bdbd080069a5d1b53cf0d8/simplejson-3.20.2-py3-none-any.whl", hash = "sha256:3b6bb7fb96efd673eac2e4235200bfffdc2353ad12c54117e1e4e2fc485ac017", size = 57309, upload-time = "2025-09-26T16:29:35.312Z" },
]
[[package]]
name = "six"
version = "1.16.0"
@ -7248,38 +7017,6 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540, upload-time = "2024-11-24T20:12:19.698Z" },
]
[[package]]
name = "traits"
version = "7.0.2"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/9e/ba/33e199bfae748e802f68a857035fb003089c176897bf43e2cf38ff167740/traits-7.0.2.tar.gz", hash = "sha256:a563515809cb3911975de5a54209855f0b6fdb7ca6912a5e81de26529f70428c", size = 9534785, upload-time = "2025-01-24T20:52:59.954Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/06/5c/6aa6aef1472a79accd4c077cc8eccf3c3a2acc4b42ece2c48f5651f2f915/traits-7.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cb59a033260dfa3aacfe484307a91f318a1fa801f5e8c8293fe22834fa4b30a7", size = 5034452, upload-time = "2025-01-24T20:55:25.02Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/73/0a/8387ff6f32898c334b2a96b465a8790633cec3c2270893210946d43de0d3/traits-7.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f5c18d5f4aea2988b15bc10e2ac9f4eb49531d1ec380857f3046a7ba14509e4b", size = 5034825, upload-time = "2025-01-24T20:56:04.238Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8f/15/a04a5e1cd0c2e2979365e1ac3a674ec0f16a5af36d19809c869985e63f7a/traits-7.0.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11950d519b113e9a34d5a99fca112866d8c36aa8fce85edadf52995ad03de07e", size = 5110401, upload-time = "2025-01-24T20:57:19.172Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b3/da/58d58c3495b2bfee03975d95799d5a8ac771a2f510d579935122c02d26dc/traits-7.0.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d50b42061cb8f34119b6b7abe703982c6fa157a2fe4e10a5b9ab9f93c340d5e3", size = 5121856, upload-time = "2025-01-24T20:57:20.949Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/fe/74/66ed1b2511c0a457f716f6c718abf807db58c76292cbd69ecf4390519fea/traits-7.0.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:53fbd8a0adf42d235e6a73bd3fbb3f7190a28302d151c9a25967ff6f12b918cd", size = 5109296, upload-time = "2025-01-24T20:57:23.835Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9e/30/60efe8a3fe454fd7b939695d556cdee7943b1ced19fc40f9b4f2a240211c/traits-7.0.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0b48be9fb0b9e5a733e9fa5a542b0751237822e20b52fac80b5796cc606af509", size = 5117788, upload-time = "2025-01-24T20:57:27.096Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/62/ef/e884bd2c05d52415acb0344ed3847f1c3835d1651a4189a17e06fa2363fa/traits-7.0.2-cp310-cp310-win32.whl", hash = "sha256:5b98600b9f40e980e0cc5b1f0ade5fb1c1f1c19d25afc2b33ea30773015eb3e5", size = 5033760, upload-time = "2025-01-24T21:01:04.683Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d2/71/a630ee815843e3d87484c9a0368f81eb993e862aa4cb9c20822deee7e9a3/traits-7.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:def3ab01e7d636aceda9dc6ca2abf71f2a992f9ec993c7ea200157c1ca983ae7", size = 5036225, upload-time = "2025-01-24T21:01:07.817Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/73/db/da628e34564a89f68d6b3ff5caee8a0a932858a4a3e1bf0d077d9f6d053c/traits-7.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:33fd20c3bc29fbb1f51ddb23f63173bf59a2fdafd300e5f4790352d76e4cf68e", size = 5034488, upload-time = "2025-01-24T20:55:26.853Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e9/4e/d64ad9fb725ff1b943432c5df32c64abb28ad17f66e976d6ce6aaa1b54d5/traits-7.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:018d4f7cbd5e18cb34bafc915134c29aa8568bccd35d9aa9102e2af9ef66cb80", size = 5034832, upload-time = "2025-01-24T20:56:06.125Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3f/80/f32ade6b131c69d2a3451edfa5c9f23056c3c9889b1d7918890ff6dad273/traits-7.0.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa323b634abd9c7f049892d86296bc1c46bad6ad80121fefeaf12039002d58ff", size = 5119215, upload-time = "2025-01-24T20:57:31.594Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/be/d6/0c7c2c12a53698906e86a0076d13ee3d529a5c0a44468e89cb8a91186f22/traits-7.0.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:209bfb95c094cd315f77fc610ae65af86ec0de075be2d84e6e6290ff2f860715", size = 5130753, upload-time = "2025-01-24T20:57:34.737Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8b/09/070aef46f818eaab7afdada8647b303facb14d4d5f931c1fb560cfc24e1b/traits-7.0.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4f38eee0b94f9fbab2f086200e35f835ad1563ba7e078a044cb268ce50542565", size = 5117762, upload-time = "2025-01-24T20:57:36.764Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/85/99/fb239d5fe1ac2931c284496995998abc72f6af0ca32cfdb70095b883fab9/traits-7.0.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:135dc11da393f5dec1ecaf6981f0608976354435f7be53b9e9175a9c8a118127", size = 5126325, upload-time = "2025-01-24T20:57:38.638Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/73/48/6c1484be7d5b322c57415c9b6d39c7419ad4ee1eb52b288ddfa3893caf31/traits-7.0.2-cp311-cp311-win32.whl", hash = "sha256:c588571d981d1254d9abf8bd2f8e449f82f31ebe8f951853290910ae2f03dc84", size = 5033773, upload-time = "2025-01-24T21:01:09.598Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/73/f4/d8cb863aaacfe1633d2b636647bcc70b1cd2e258e4a83e71eae995a34ed4/traits-7.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:98a880b6adab40d66ce0eda1c6f4fdcf178bb182d28d0fb71d3755c36065dd39", size = 5036235, upload-time = "2025-01-24T21:01:12.296Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7e/6c/9b3be8e459627267de56029a0c91e9a9c9a082353cd5b9ec1edd2f4738a5/traits-7.0.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:bccfafbda22346f0278f010458e819f0a58a95f242f91e14014b055580a15cd8", size = 5035260, upload-time = "2025-01-24T20:55:28.536Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/35/0c/990486e972614dd0173ea647b80c30c30d3ad4819befa9ec94f4a8a421b6/traits-7.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d9899ee203fd379fb0e07aebc178940d62d5790dc311263d5c3a577f3baf7dfa", size = 5035240, upload-time = "2025-01-24T20:56:08.856Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/11/7c/458041d4b345ddd351451303353acbc72a36cbc47649eedb29863a37f119/traits-7.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2938cccfea2da2fdce6cc7ec1e605c923e66610df1b223cf24a4b23ba97375de", size = 5121555, upload-time = "2025-01-24T20:57:41.688Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/77/f3/7736bf1bee46c6fd1c488e180236067c91490cf2aea235ed851bcf2151e2/traits-7.0.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f696c4d4d03b333e8f8beec206d80d4998ce6b4801deb74c258dbc4415f92345", size = 5135379, upload-time = "2025-01-24T20:57:45.797Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f0/07/e80f6663d460f80f09b443175cb8118b74ca3b7bd164f1ec5c44e1da2047/traits-7.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:c49384b12ecaf39b9ab156e1c7d31960206e15071a9917596ab3c265d7bb99aa", size = 5120513, upload-time = "2025-01-24T20:57:49.354Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f2/8b/0716f7b8f34e1b57b39f81472460f4e02491dde02fbc114bac42cf0acd85/traits-7.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6932e5a784000368aa3948890bf55c4aba10494d4a45e9bb6c2b228644f2e67c", size = 5130509, upload-time = "2025-01-24T20:57:51.933Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c5/bf/e0135ce54d5604c57caad8866ac56a05265943a1b3a438277fb6ee10b0f6/traits-7.0.2-cp312-cp312-win32.whl", hash = "sha256:f434da460be8b3eb9f9f35143af116622cd313fa346c0df37b026d318c88ad29", size = 5034118, upload-time = "2025-01-24T21:01:14.04Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a7/2b/49423d5b269dfc095e09ecbb41b987b224f4154716d91da063cebaf963a0/traits-7.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:497463a437cb8cd4bb2ed27ae4e4491a8ed3d4d8515803476c94ce952a17af54", size = 5036464, upload-time = "2025-01-24T21:01:16.256Z" },
]
[[package]]
name = "transformers"
version = "4.36.2"


@ -1,5 +1,5 @@
 import { transformFile2Base64 } from '@/utils/file-util';
-import { Pencil, Upload, XIcon } from 'lucide-react';
+import { Pencil, Plus, XIcon } from 'lucide-react';
 import {
   ChangeEventHandler,
   forwardRef,
@ -12,10 +12,14 @@ import { Avatar, AvatarFallback, AvatarImage } from './ui/avatar';
 import { Button } from './ui/button';
 import { Input } from './ui/input';
-type AvatarUploadProps = { value?: string; onChange?: (value: string) => void };
+type AvatarUploadProps = {
+  value?: string;
+  onChange?: (value: string) => void;
+  tips?: string;
+};
 export const AvatarUpload = forwardRef<HTMLInputElement, AvatarUploadProps>(
-  function AvatarUpload({ value, onChange }, ref) {
+  function AvatarUpload({ value, onChange, tips }, ref) {
     const { t } = useTranslation();
     const [avatarBase64Str, setAvatarBase64Str] = useState(''); // Avatar Image base64
@ -47,9 +51,9 @@ export const AvatarUpload = forwardRef<HTMLInputElement, AvatarUploadProps>(
       <div className="flex justify-start items-end space-x-2">
         <div className="relative group">
           {!avatarBase64Str ? (
-            <div className="w-[64px] h-[64px] grid place-content-center border border-dashed rounded-md">
+            <div className="w-[64px] h-[64px] grid place-content-center border border-dashed bg-bg-input rounded-md">
               <div className="flex flex-col items-center">
-                <Upload />
+                <Plus />
                 <p>{t('common.upload')}</p>
               </div>
             </div>
@ -86,8 +90,8 @@ export const AvatarUpload = forwardRef<HTMLInputElement, AvatarUploadProps>(
             ref={ref}
           />
         </div>
-        <div className="margin-1 text-muted-foreground">
-          {t('knowledgeConfiguration.photoTip')}
+        <div className="margin-1 text-text-secondary">
+          {tips ?? t('knowledgeConfiguration.photoTip')}
         </div>
       </div>
     );
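Note: AvatarUpload now takes an optional `tips` prop that overrides the default `knowledgeConfiguration.photoTip` hint. A minimal usage sketch — the import path, component name, and tip copy below are illustrative, not taken from the repo:

import { useState } from 'react';
import { AvatarUpload } from '@/components/avatar-upload';

// Hypothetical form field: pass `tips` for a custom hint, or omit it
// to fall back to t('knowledgeConfiguration.photoTip').
export function AgentAvatarField() {
  const [avatar, setAvatar] = useState('');
  return (
    <AvatarUpload
      value={avatar}
      onChange={setAvatar}
      tips="Square images of at least 128x128 px look best."
    />
  );
}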


@ -0,0 +1,17 @@
import { cn } from '@/lib/utils';
import { PropsWithChildren } from 'react';

type CardContainerProps = { className?: string } & PropsWithChildren;

export function CardContainer({ children, className }: CardContainerProps) {
  return (
    <section
      className={cn(
        'grid gap-6 sm:grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 2xl:grid-cols-5',
        className,
      )}
    >
      {children}
    </section>
  );
}
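Note: CardContainer only supplies the responsive grid classes (one column on small screens, up to five on 2xl); items render themselves. A sketch of the intended usage — the `DatasetGrid` name and the Card import path are ours:

import { CardContainer } from '@/components/card-container';
import { Card } from '@/components/ui/card';

// Illustrative list page: the container decides the column count,
// each item only renders its own card.
export function DatasetGrid({ names }: { names: string[] }) {
  return (
    <CardContainer className="p-6">
      {names.map((name) => (
        <Card key={name} className="p-4">
          {name}
        </Card>
      ))}
    </CardContainer>
  );
}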


@ -51,8 +51,8 @@ export function Collapse({
onOpenChange={handleOpenChange}
disabled={disabled}
>
<CollapsibleTrigger className="w-full">
<section className="flex justify-between items-center pb-2">
<CollapsibleTrigger className={'w-full'}>
<section className="flex justify-between items-center">
<div className="flex items-center gap-1">
<IconFontFill
name={`more`}
@ -60,12 +60,18 @@ export function Collapse({
'rotate-90': !currentOpen,
})}
></IconFontFill>
-          {title}
+          <div
+            className={cn('text-text-secondary', {
+              'text-text-primary': open,
+            })}
+          >
+            {title}
+          </div>
</div>
<div>{rightContent}</div>
</section>
</CollapsibleTrigger>
-      <CollapsibleContent>{children}</CollapsibleContent>
+      <CollapsibleContent className="pt-2">{children}</CollapsibleContent>
</Collapsible>
);
}
@ -94,7 +100,7 @@ export function NodeCollapsible<T extends any[]>({
>
{nextItems.slice(0, 3).map(children)}
<CollapsibleContent className={nextClassName}>
-        {nextItems.slice(3).map(children)}
+        {nextItems.slice(3).map((x, idx) => children(x, idx + 3))}
</CollapsibleContent>
{nextItems.length > 3 && (
<CollapsibleTrigger
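Note: the `idx + 3` change fixes the render-callback index in NodeCollapsible. The first three items render outside CollapsibleContent, so the collapsed remainder must continue the original numbering; previously it restarted at 0 and produced duplicate indexes (and duplicate React keys when the callback used them). A minimal sketch of the corrected numbering:

// The first three items keep indexes 0..2; the collapsed slice must
// continue at 3 instead of restarting at 0, or the keys collide.
const items = ['a', 'b', 'c', 'd', 'e'];
const render = (x: string, i: number) => ({ key: i, label: x });

const visible = items.slice(0, 3).map(render); // keys 0, 1, 2
const collapsed = items.slice(3).map((x, i) => render(x, i + 3)); // keys 3, 4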


@ -56,13 +56,13 @@ export function ConfirmDeleteDialog({
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel onClick={onCancel}>
-            {t('common.cancel')}
+            {t('common.no')}
</AlertDialogCancel>
<AlertDialogAction
className="bg-state-error text-text-primary"
onClick={onOk}
>
-            {t('common.ok')}
+            {t('common.yes')}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>


@ -1,5 +1,4 @@
import { PlusOutlined } from '@ant-design/icons';
-import { TweenOneGroup } from 'rc-tween-one';
import React, { useEffect, useRef, useState } from 'react';
import { X } from 'lucide-react';
@ -57,7 +56,7 @@ const EditTag = React.forwardRef<HTMLDivElement, EditTagsProps>(
<HoverCard key={tag}>
<HoverCardContent side="top">{tag}</HoverCardContent>
<HoverCardTrigger asChild>
<div className="w-fit flex items-center justify-center gap-2 border-dashed border px-1 rounded-sm bg-bg-card">
<div className="w-fit flex items-center justify-center gap-2 border-dashed border px-2 py-1 rounded-sm bg-bg-card">
<div className="flex gap-2 items-center">
<div className="max-w-80 overflow-hidden text-ellipsis">
{tag}
@ -84,11 +83,11 @@ const EditTag = React.forwardRef<HTMLDivElement, EditTagsProps>(
return (
<div>
-        {inputVisible ? (
+        {inputVisible && (
<Input
ref={inputRef}
type="text"
className="h-8 bg-bg-card"
className="h-8 bg-bg-card mb-1"
value={inputValue}
onChange={handleInputChange}
onBlur={handleInputConfirm}
@ -98,36 +97,20 @@ const EditTag = React.forwardRef<HTMLDivElement, EditTagsProps>(
}
}}
/>
-        ) : (
-          <Button
-            variant="dashed"
-            className="w-fit flex items-center justify-center gap-2 bg-bg-card"
-            onClick={showInput}
-            style={tagPlusStyle}
-          >
-            <PlusOutlined />
-          </Button>
         )}
-        {Array.isArray(tagChild) && tagChild.length > 0 && (
-          <TweenOneGroup
-            className="flex gap-2 flex-wrap mt-2"
-            enter={{
-              scale: 0.8,
-              opacity: 0,
-              type: 'from',
-              duration: 100,
-            }}
-            onEnd={(e) => {
-              if (e.type === 'appear' || e.type === 'enter') {
-                (e.target as any).style = 'display: inline-block';
-              }
-            }}
-            leave={{ opacity: 0, width: 0, scale: 0, duration: 200 }}
-            appear={false}
-          >
-            {tagChild}
-          </TweenOneGroup>
-        )}
+        <div className="flex gap-2 py-1">
+          {Array.isArray(tagChild) && tagChild.length > 0 && <>{tagChild}</>}
+          {!inputVisible && (
+            <Button
+              variant="dashed"
+              className="w-fit flex items-center justify-center gap-2 bg-bg-card"
+              onClick={showInput}
+              style={tagPlusStyle}
+            >
+              <PlusOutlined />
+            </Button>
+          )}
+        </div>
</div>
);
},


@ -24,7 +24,6 @@ export function HomeCard({
}: IProps) {
return (
<Card
className="bg-bg-card border-colors-outline-neutral-standard"
onClick={() => {
// navigateToSearch(data?.id);
onClick?.();


@ -0,0 +1,48 @@
import { ThemeEnum } from '@/constants/common';
import { Moon, Sun } from 'lucide-react';
import { FC, useCallback } from 'react';
import { useIsDarkTheme, useTheme } from './theme-provider';
import { Button } from './ui/button';

const ThemeToggle: FC = () => {
  const { setTheme } = useTheme();
  const isDarkTheme = useIsDarkTheme();

  const handleThemeChange = useCallback(
    (checked: boolean) => {
      setTheme(checked ? ThemeEnum.Dark : ThemeEnum.Light);
    },
    [setTheme],
  );

  return (
    <Button
      type="button"
      onClick={() => handleThemeChange(!isDarkTheme)}
      className="relative inline-flex h-6 w-14 items-center rounded-full transition-colors p-0.5 border-none focus:border-none bg-bg-card hover:bg-bg-card"
      // aria-label={isDarkTheme ? 'Switch to light mode' : 'Switch to dark mode'}
    >
      <div className="inline-flex h-full w-full items-center">
        <div
          className={`inline-flex transform items-center justify-center rounded-full transition-transform ${
            isDarkTheme
              ? ' text-text-disabled h-4 w-5'
              : ' text-text-primary bg-bg-base h-full w-8 flex-1'
          }`}
        >
          <Sun />
        </div>
        <div
          className={`inline-flex transform items-center justify-center rounded-full transition-transform ${
            isDarkTheme
              ? ' text-text-primary bg-bg-base h-full w-8 flex-1'
              : 'text-text-disabled h-4 w-5'
          }`}
        >
          <Moon />
        </div>
      </div>
    </Button>
  );
};

export default ThemeToggle;
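Note: the toggle reads and writes the theme through the surrounding provider (`useTheme` / `useIsDarkTheme`), so it can sit in any header bar. A usage sketch with an assumed import path:

import ThemeToggle from '@/components/theme-toggle';

// Drop-in usage inside any component rendered under the ThemeProvider.
export function HeaderActions() {
  return (
    <div className="flex items-center gap-2">
      <ThemeToggle />
    </div>
  );
}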


@ -8,7 +8,10 @@ const Card = React.forwardRef<
>(({ className, ...props }, ref) => (
<div
ref={ref}
-    className={cn('rounded-lg bg-bg-card shadow-sm', className)}
+    className={cn(
+      'rounded-lg border-border-default border shadow-sm bg-bg-input',
+      className,
+    )}
{...props}
/>
));


@ -73,7 +73,7 @@ const DialogFooter = ({
}: React.HTMLAttributes<HTMLDivElement>) => (
<div
className={cn(
-      'flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2',
+      'flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-4',
className,
)}
{...props}


@ -106,8 +106,10 @@ const FormLabel = React.forwardRef<
htmlFor={formItemId}
{...props}
>
-    {required && <span className="text-destructive">*</span>}
-    {props.children}
+    <section>
+      {required && <span className="text-destructive">*</span>}
+      {props.children}
+    </section>
{tooltip && <FormTooltip tooltip={tooltip}></FormTooltip>}
</Label>
);


@ -7,7 +7,7 @@ import * as React from 'react';
import { cn } from '@/lib/utils';
const labelVariants = cva(
-  'text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70',
+  'text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70 text-text-secondary',
);
const Label = React.forwardRef<


@ -140,6 +140,7 @@ const Modal: ModalType = ({
</div>
);
}, [
+    disabled,
footer,
cancelText,
t,
@ -158,7 +159,7 @@ const Modal: ModalType = ({
onClick={() => maskClosable && onOpenChange?.(false)}
>
<DialogPrimitive.Content
-          className={`relative w-[700px] ${full ? 'max-w-full' : sizeClasses[size]} ${className} bg-colors-background-neutral-standard rounded-lg shadow-lg border transition-all focus-visible:!outline-none`}
+          className={`relative w-[700px] ${full ? 'max-w-full' : sizeClasses[size]} ${className} bg-bg-base rounded-lg shadow-lg border border-border-default transition-all focus-visible:!outline-none`}
onClick={(e) => e.stopPropagation()}
>
{/* title */}


@ -94,9 +94,9 @@ export const useShowDeleteConfirm = () => {
title: title ?? t('common.deleteModalTitle'),
icon: <ExclamationCircleFilled />,
content,
-    okText: t('common.ok'),
+    okText: t('common.yes'),
okType: 'danger',
-    cancelText: t('common.cancel'),
+    cancelText: t('common.no'),
async onOk() {
try {
const ret = await onOk?.();


@ -257,25 +257,32 @@ export const useSendMessageWithSse = (
         .getReader();
       while (true) {
-        const x = await reader?.read();
-        if (x) {
-          const { done, value } = x;
-          if (done) {
-            resetAnswer();
-            break;
-          }
-          try {
-            const val = JSON.parse(value?.data || '');
-            const d = val?.data;
-            if (typeof d !== 'boolean') {
-              setAnswer({
-                ...d,
-                conversationId: body?.conversation_id,
-                chatBoxId: body.chatBoxId,
-              });
-            }
-          } catch (e) {
-            // Swallow parse errors silently
-          }
-        }
+        try {
+          const x = await reader?.read();
+          if (x) {
+            const { done, value } = x;
+            if (done) {
+              resetAnswer();
+              break;
+            }
+            try {
+              const val = JSON.parse(value?.data || '');
+              const d = val?.data;
+              if (typeof d !== 'boolean') {
+                setAnswer({
+                  ...d,
+                  conversationId: body?.conversation_id,
+                  chatBoxId: body.chatBoxId,
+                });
+              }
+            } catch (e) {
+              // Swallow parse errors silently
+            }
+          }
+        } catch (e) {
+          if (e instanceof DOMException && e.name === 'AbortError') {
+            console.log('Request was aborted by user or logic.');
+            break;
+          }
+        }
       }


@ -126,29 +126,36 @@ export const useSendMessageBySSE = (url: string = api.completeConversation) => {
         .getReader();
       while (true) {
-        const x = await reader?.read();
-        if (x) {
-          const { done, value } = x;
-          if (done) {
-            console.info('done');
-            resetAnswerList();
-            break;
-          }
-          try {
-            const val = JSON.parse(value?.data || '');
-            console.info('data:', val);
-            if (val.code === 500) {
-              message.error(val.message);
-            }
-            setAnswerList((list) => {
-              const nextList = [...list];
-              nextList.push(val);
-              return nextList;
-            });
-          } catch (e) {
-            console.warn(e);
-          }
-        }
+        try {
+          const x = await reader?.read();
+          if (x) {
+            const { done, value } = x;
+            if (done) {
+              console.info('done');
+              resetAnswerList();
+              break;
+            }
+            try {
+              const val = JSON.parse(value?.data || '');
+              console.info('data:', val);
+              if (val.code === 500) {
+                message.error(val.message);
+              }
+              setAnswerList((list) => {
+                const nextList = [...list];
+                nextList.push(val);
+                return nextList;
+              });
+            } catch (e) {
+              console.warn(e);
+            }
+          }
+        } catch (e) {
+          if (e instanceof DOMException && e.name === 'AbortError') {
+            console.log('Request was aborted by user or logic.');
+            break;
+          }
+        }
       }
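Note: both hooks receive the same fix. Aborting the stream (e.g. clicking "Stop receiving messages", #10752) rejects the pending `reader.read()` with a DOMException named AbortError; previously that rejection escaped the loop and crashed the page in Firefox. A standalone sketch of the pattern, with names of our own choosing:

// Abort-aware read loop: an aborted fetch rejects the pending read()
// with an AbortError DOMException, which should end the loop quietly.
async function drain(
  reader: ReadableStreamDefaultReader<string>,
  onChunk: (chunk: string) => void,
) {
  while (true) {
    try {
      const { done, value } = await reader.read();
      if (done) break; // stream finished normally
      if (value !== undefined) onChunk(value);
    } catch (e) {
      if (e instanceof DOMException && e.name === 'AbortError') {
        break; // stopped by the user; not an error
      }
      throw e; // anything else is a real failure
    }
  }
}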


@ -83,12 +83,19 @@ export interface IFlowTemplate {
   canvas_type: string;
   create_date: string;
   create_time: number;
-  description: string;
+  canvas_category?: string;
   dsl: DSL;
   id: string;
-  title: string;
   update_date: string;
   update_time: number;
+  description: {
+    en: string;
+    zh: string;
+  };
+  title: {
+    en: string;
+    zh: string;
+  };
 }
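Note: `title` and `description` are now locale-keyed objects rather than plain strings, so template consumers must pick a language variant. A hedged sketch — the helper is ours, not part of the interface:

// Illustrative accessor for the new locale-keyed template fields:
// prefer the Chinese variant under a zh-* UI language, else English.
type Localized = { en: string; zh: string };

function pickLocale(field: Localized, lang: string): string {
  return lang.toLowerCase().startsWith('zh') ? field.zh : field.en;
}

// e.g. pickLocale(template.title, i18n.language)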
export interface IGenerateForm {


@ -6,8 +6,9 @@ export default {
selectAll: 'Select All',
delete: 'Delete',
deleteModalTitle: 'Are you sure to delete this item?',
-  ok: 'Yes',
-  cancel: 'No',
+  ok: 'Ok',
+  cancel: 'Cancel',
+  yes: 'Yes',
+  no: 'No',
total: 'Total',
rename: 'Rename',
@ -137,7 +138,7 @@ export default {
completed: 'Completed',
datasetLog: 'Dataset Log',
created: 'Created',
-  learnMore: 'Learn More',
+  learnMore: 'Built-in pipeline introduction',
general: 'General',
chunkMethodTab: 'Chunk Method',
testResults: 'Test Results',
@ -429,7 +430,7 @@ export default {
`,
useRaptor: 'RAPTOR',
useRaptorTip:
-    'Enable RAPTOR for multi-hop question-answering tasks. See https://ragflow.io/docs/dev/enable_raptor for details.',
+    'RAPTOR can be used for multi-hop question-answering tasks. Navigate to the Files page, click Generate > RAPTOR to enable it. See https://ragflow.io/docs/dev/enable_raptor for details.',
prompt: 'Prompt',
promptTip:
'Use the system prompt to describe the task for the LLM, specify how it should respond, and outline other miscellaneous requirements. The system prompt is often used in conjunction with keys (variables), which serve as various data inputs for the LLM. Use a forward slash `/` or the (x) button to show the keys to use.',
@ -697,7 +698,7 @@ This auto-tagging feature enhances retrieval by adding another layer of domain-s
system: 'System',
logout: 'Log out',
api: 'API',
-  username: 'Username',
+  username: 'Name',
usernameMessage: 'Please input your username!',
photo: 'Your photo',
photoDescription: 'This will be displayed on your profile.',
@ -1533,8 +1534,8 @@ This delimiter is used to split the input text into several text pieces echo of
'Your users will see this welcome message at the beginning.',
modeTip: 'The mode defines how the workflow is initiated.',
mode: 'Mode',
-  conversational: 'conversational',
-  task: 'task',
+  conversational: 'Conversational',
+  task: 'Task',
beginInputTip:
'By defining input parameters, this content can be accessed by other components in subsequent processes.',
query: 'Query variables',
@ -1605,107 +1606,6 @@ This delimiter is used to split the input text into several text pieces echo of
ceateAgent: 'Agent flow',
createPipeline: 'Ingestion pipeline',
chooseAgentType: 'Choose Agent Type',
},
llmTools: {
bad_calculator: {
name: 'Calculator',
description:
'A tool to calculate the sum of two numbers (will give wrong answer)',
params: {
a: 'The first number',
b: 'The second number',
},
},
},
modal: {
okText: 'Confirm',
cancelText: 'Cancel',
},
mcp: {
export: 'Export',
import: 'Import',
url: 'URL',
serverType: 'Server Type',
addMCP: 'Add MCP',
editMCP: 'Edit MCP',
toolsAvailable: 'tools available',
mcpServers: 'MCP Servers',
customizeTheListOfMcpServers: 'Customize the list of MCP servers',
},
search: {
searchApps: 'Search Apps',
createSearch: 'Create Search',
searchGreeting: 'How can I help you today?',
profile: 'Hide Profile',
locale: 'Locale',
embedCode: 'Embed code',
id: 'ID',
copySuccess: 'Copy Success',
welcomeBack: 'Welcome back',
searchSettings: 'Search Settings',
name: 'Name',
avatar: 'Avatar',
description: 'Description',
datasets: 'Datasets',
rerankModel: 'Rerank Model',
AISummary: 'AI Summary',
enableWebSearch: 'Enable Web Search',
enableRelatedSearch: 'Enable Related Search',
showQueryMindmap: 'Show Query Mindmap',
embedApp: 'Embed App',
relatedSearch: 'Related Search',
descriptionValue: 'You are an intelligent assistant.',
okText: 'Save',
cancelText: 'Cancel',
chooseDataset: 'Please select a dataset first',
},
language: {
english: 'English',
chinese: 'Chinese',
spanish: 'Spanish',
french: 'French',
german: 'German',
japanese: 'Japanese',
korean: 'Korean',
vietnamese: 'Vietnamese',
},
pagination: {
total: 'Total {{total}}',
page: '{{page}} / Page',
},
dataflowParser: {
result: 'Result',
parseSummary: 'Parse Summary',
parseSummaryTip: 'Parser: deepdoc',
rerunFromCurrentStep: 'Rerun From Current Step',
rerunFromCurrentStepTip: 'Changes detected. Click to re-run.',
confirmRerun: 'Confirm Rerun Process',
confirmRerunModalContent: `
<p class="text-sm text-text-disabled font-medium mb-2">
You are about to rerun the process starting from the <strong class="text-text-primary">{{step}}</strong> step.
</p>
<p class="text-sm mb-3 text-text-secondary">This will:</p>
<ul class="list-disc list-inside space-y-1 text-sm text-text-secondary">
<li>Overwrite existing results from the current step onwards</li>
<li>Create a new log entry for tracking</li>
<li>Previous steps will remain unchanged</li>
</ul>`,
changeStepModalTitle: 'Step Switch Warning',
changeStepModalContent: `
<p>You are currently editing the results of this stage.</p>
<p>If you switch to a later stage, your changes will be lost. </p>
<p>To keep them, please click Rerun to re-run the current stage.</p> `,
changeStepModalConfirmText: 'Switch Anyway',
changeStepModalCancelText: 'Cancel',
unlinkPipelineModalTitle: 'Unlink Ingestion pipeline',
unlinkPipelineModalContent: `
<p>Once unlinked, this Dataset will no longer be connected to the current Ingestion pipeline.</p>
<p>Files that are already being parsed will continue until completion.</p>
<p>Files that are not yet parsed will no longer be processed.</p> <br/>
<p>Are you sure you want to proceed?</p> `,
unlinkPipelineModalConfirmText: 'Unlink',
},
dataflow: {
parser: 'Parser',
parserDescription:
'Extracts raw text and structure from files for downstream processing.',
@ -1723,7 +1623,6 @@ This delimiter is used to split the input text into several text pieces echo of
extractorDescription:
'Use an LLM to extract structured insights from document chunks—such as summaries, classifications, etc.',
outputFormat: 'Output format',
lang: 'Language',
fileFormats: 'File format',
fileFormatOptions: {
pdf: 'PDF',
@ -1734,6 +1633,7 @@ This delimiter is used to split the input text into several text pieces echo of
word: 'Word',
slides: 'PPT',
audio: 'Audio',
video: 'Video',
},
fields: 'Field',
addParser: 'Add Parser',
@ -1743,9 +1643,9 @@ This delimiter is used to split the input text into several text pieces echo of
searchMethod: 'Search method',
searchMethodTip: `Defines how the content can be searched — by full-text, embedding, or both.
The Indexer will store the content in the corresponding data structures for the selected methods.`,
begin: 'File',
// file: 'File',
parserMethod: 'Parsing method',
systemPrompt: 'System Prompt',
// systemPrompt: 'System Prompt',
systemPromptPlaceholder:
'Enter system prompt for image analysis, if empty the system default value will be used',
exportJson: 'Export JSON',
@ -1820,9 +1720,108 @@ Important structured information may include: names, dates, locations, events, k
imageParseMethodOptions: {
ocr: 'OCR',
},
note: 'Note',
noteDescription: 'Note',
notePlaceholder: 'Please enter a note',
},
llmTools: {
bad_calculator: {
name: 'Calculator',
description:
'A tool to calculate the sum of two numbers (will give wrong answer)',
params: {
a: 'The first number',
b: 'The second number',
},
},
},
modal: {
okText: 'Confirm',
cancelText: 'Cancel',
},
mcp: {
export: 'Export',
import: 'Import',
url: 'URL',
serverType: 'Server Type',
addMCP: 'Add MCP',
editMCP: 'Edit MCP',
toolsAvailable: 'tools available',
mcpServers: 'MCP Servers',
customizeTheListOfMcpServers: 'Customize the list of MCP servers',
cachedTools: 'cached tools',
},
search: {
searchApps: 'Search Apps',
createSearch: 'Create Search',
searchGreeting: 'How can I help you today?',
profile: 'Hide Profile',
locale: 'Locale',
embedCode: 'Embed code',
id: 'ID',
copySuccess: 'Copy Success',
welcomeBack: 'Welcome back',
searchSettings: 'Search Settings',
name: 'Name',
avatar: 'Avatar',
description: 'Description',
datasets: 'Datasets',
rerankModel: 'Rerank Model',
AISummary: 'AI Summary',
enableWebSearch: 'Enable Web Search',
enableRelatedSearch: 'Enable Related Search',
showQueryMindmap: 'Show Query Mindmap',
embedApp: 'Embed App',
relatedSearch: 'Related Search',
descriptionValue: 'You are an intelligent assistant.',
okText: 'Save',
cancelText: 'Cancel',
chooseDataset: 'Please select a dataset first',
},
language: {
english: 'English',
chinese: 'Chinese',
spanish: 'Spanish',
french: 'French',
german: 'German',
japanese: 'Japanese',
korean: 'Korean',
vietnamese: 'Vietnamese',
},
pagination: {
total: 'Total {{total}}',
page: '{{page}} / Page',
},
dataflowParser: {
result: 'Result',
parseSummary: 'Parse Summary',
parseSummaryTip: 'Parser: deepdoc',
parserMethod: 'Parser Method',
outputFormat: 'Output Format',
rerunFromCurrentStep: 'Rerun From Current Step',
rerunFromCurrentStepTip: 'Changes detected. Click to re-run.',
confirmRerun: 'Confirm Rerun Process',
confirmRerunModalContent: `
<p class="text-sm text-text-disabled font-medium mb-2">
You are about to rerun the process starting from the <strong class="text-text-primary">{{step}}</strong> step.
</p>
<p class="text-sm mb-3 text-text-secondary">This will:</p>
<ul class="list-disc list-inside space-y-1 text-sm text-text-secondary">
<li>Overwrite existing results from the current step onwards</li>
<li>Create a new log entry for tracking</li>
<li>Previous steps will remain unchanged</li>
</ul>`,
changeStepModalTitle: 'Step Switch Warning',
changeStepModalContent: `
<p>You are currently editing the results of this stage.</p>
<p>If you switch to a later stage, your changes will be lost. </p>
<p>To keep them, please click Rerun to re-run the current stage.</p> `,
changeStepModalConfirmText: 'Switch Anyway',
changeStepModalCancelText: 'Cancel',
unlinkPipelineModalTitle: 'Unlink Ingestion pipeline',
unlinkPipelineModalContent: `
<p>Once unlinked, this Dataset will no longer be connected to the current Ingestion pipeline.</p>
<p>Files that are already being parsed will continue until completion.</p>
<p>Files that are not yet parsed will no longer be processed.</p> <br/>
<p>Are you sure you want to proceed?</p> `,
unlinkPipelineModalConfirmText: 'Unlink',
},
datasetOverview: {
downloadTip: 'Files being downloaded from data sources. ',
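For reference, the {{total}} and {{page}} slots in the pagination strings above are standard i18next interpolation placeholders; a minimal react-i18next usage sketch (the component name is hypothetical):

import { useTranslation } from 'react-i18next';

export function PaginationSummary({ total, page }: { total: number; page: number }) {
  const { t } = useTranslation();
  // The values passed here replace the {{total}} / {{page}} placeholders.
  return (
    <span>
      {t('pagination.total', { total })} {t('pagination.page', { page })}
    </span>
  );
}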

View File

@ -6,8 +6,10 @@ export default {
selectAll: '全选',
delete: '删除',
deleteModalTitle: '确定删除吗?',
ok: '',
cancel: '',
ok: '确认',
cancel: '取消',
yes: '是',
no: '否',
total: '总共',
rename: '重命名',
name: '名称',
@ -125,7 +127,7 @@ export default {
completed: '已完成',
datasetLog: '知识库日志',
created: '创建于',
learnMore: '了解更多',
learnMore: '内置pipeline简介',
general: '通用',
chunkMethodTab: '切片方法',
testResults: '测试结果',
@ -423,7 +425,7 @@ export default {
`,
useRaptor: '使用召回增强 RAPTOR 策略',
useRaptorTip:
'为多跳问答任务启用 RAPTOR,详情请见 : https://ragflow.io/docs/dev/enable_raptor。',
'RAPTOR 常应用于复杂的多跳问答任务。如需打开,请跳转至知识库的文件页面,点击生成 > RAPTOR 开启。详见: https://ragflow.io/docs/dev/enable_raptor。',
prompt: '提示词',
promptMessage: '提示词是必填项',
promptText: `请总结以下段落。 小心数字,不要编造。 段落如下:
@ -1511,114 +1513,6 @@ General实体和关系提取提示来自 GitHub - microsoft/graphrag基于
createFromTemplate: '从模板创建',
importJsonFile: '导入 JSON 文件',
chooseAgentType: '选择智能体类型',
},
footer: {
profile: 'All rights reserved @ React',
},
layout: {
file: 'file',
knowledge: 'knowledge',
chat: 'chat',
},
llmTools: {
bad_calculator: {
name: '计算器',
description: '用于计算两个数的和的工具(会给出错误答案)',
params: {
a: '第一个数',
b: '第二个数',
},
},
},
modal: {
okText: '确认',
cancelText: '取消',
},
mcp: {
export: '导出',
import: '导入',
url: 'URL',
serverType: '服务器类型',
addMCP: '添加 MCP',
editMCP: '编辑 MCP',
toolsAvailable: '可用的工具',
mcpServers: 'MCP 服务器',
customizeTheListOfMcpServers: '自定义 MCP 服务器列表',
},
search: {
searchApps: '搜索',
createSearch: '创建查询',
searchGreeting: '今天我能为你做些什么?',
profile: '隐藏个人资料',
locale: '语言',
embedCode: '嵌入代码',
id: 'ID',
copySuccess: '复制成功',
welcomeBack: '欢迎回来',
searchSettings: '搜索设置',
name: '姓名',
avatar: '头像',
description: '描述',
datasets: '知识库',
rerankModel: 'rerank 模型',
AISummary: 'AI 总结',
enableWebSearch: '启用网页搜索',
enableRelatedSearch: '启用相关搜索',
showQueryMindmap: '显示查询思维导图',
embedApp: '嵌入网站',
relatedSearch: '相关搜索',
descriptionValue: '你是一位智能助手。',
okText: '保存',
cancelText: '返回',
chooseDataset: '请先选择知识库',
},
language: {
english: '英语',
chinese: '中文',
spanish: '西班牙语',
french: '法语',
german: '德语',
japanese: '日语',
korean: '韩语',
vietnamese: '越南语',
},
pagination: {
total: '总共 {{total}} 条',
page: '{{page}}条/页',
},
dataflowParser: {
result: '结果',
parseSummary: '解析摘要',
parseSummaryTip: '解析器: deepdoc',
rerunFromCurrentStep: '从当前步骤重新运行',
rerunFromCurrentStepTip: '已修改,点击重新运行。',
confirmRerun: '确认重新运行流程',
confirmRerunModalContent: `
<p class="text-sm text-text-disabled font-medium mb-2">
您即将从 <strong class="text-text-primary">{{step}}</strong> 步骤开始重新运行该过程
</p>
<p class="text-sm mb-3 text-text-secondary">这将:</p>
<ul class="list-disc list-inside space-y-1 text-sm text-text-secondary">
<li>从当前步骤开始覆盖现有结果</li>
<li>创建新的日志条目进行跟踪</li>
<li>之前的步骤将保持不变</li>
</ul>`,
changeStepModalTitle: '切换步骤警告',
changeStepModalContent: `
<p>您目前正在编辑此阶段的结果。</p>
<p>如果您切换到后续阶段,您的更改将会丢失。</p>
<p>要保留这些更改,请点击“重新运行”以重新运行当前阶段。</p> `,
changeStepModalConfirmText: '继续切换',
changeStepModalCancelText: '取消',
unlinkPipelineModalTitle: '解绑pipeline',
unlinkPipelineModalContent: `
<p>一旦取消链接,该数据集将不再连接到当前数据管道。</p>
<p>正在解析的文件将继续解析,直到完成。</p>
<p>尚未解析的文件将不再被处理。</p> <br/>
<p>你确定要继续吗?</p> `,
unlinkPipelineModalConfirmText: '解绑',
},
dataflow: {
parser: '解析器',
parserDescription: '从文件中提取原始文本和结构以供下游处理。',
tokenizer: '分词器',
@ -1635,7 +1529,6 @@ General实体和关系提取提示来自 GitHub - microsoft/graphrag基于
extractorDescription:
'使用 LLM 从文档块(例如摘要、分类等)中提取结构化见解。',
outputFormat: '输出格式',
lang: '语言',
fileFormats: '文件格式',
fields: '字段',
addParser: '增加解析器',
@ -1646,9 +1539,7 @@ General实体和关系提取提示来自 GitHub - microsoft/graphrag基于
searchMethodTip: `决定该数据集启用的搜索方式,可选择全文、向量,或两者兼有。
Tokenizer 会根据所选方式将内容存储为对应的数据结构。`,
filenameEmbdWeight: '文件名嵌入权重',
begin: '文件',
parserMethod: '解析方法',
systemPrompt: '系统提示词',
systemPromptPlaceholder:
'请输入用于图像分析的系统提示词,若为空则使用系统缺省值',
exportJson: '导出 JSON',
@ -1709,9 +1600,115 @@ Tokenizer 会根据所选方式将内容存储为对应的数据结构。`,
cancel: '取消',
filenameEmbeddingWeight: '文件名嵌入权重',
switchPromptMessage: '提示词将发生变化,请确认是否放弃已有提示词?',
note: '注释',
noteDescription: '注释',
notePlaceholder: '请输入注释',
},
footer: {
profile: 'All rights reserved @ React',
},
layout: {
file: 'file',
knowledge: 'knowledge',
chat: 'chat',
},
llmTools: {
bad_calculator: {
name: '计算器',
description: '用于计算两个数的和的工具(会给出错误答案)',
params: {
a: '第一个数',
b: '第二个数',
},
},
},
modal: {
okText: '确认',
cancelText: '取消',
},
mcp: {
export: '导出',
import: '导入',
url: 'URL',
serverType: '服务器类型',
addMCP: '添加 MCP',
editMCP: '编辑 MCP',
toolsAvailable: '可用的工具',
mcpServers: 'MCP 服务器',
customizeTheListOfMcpServers: '自定义 MCP 服务器列表',
cachedTools: '缓存工具',
},
search: {
searchApps: '搜索',
createSearch: '创建查询',
searchGreeting: '今天我能为你做些什么?',
profile: '隐藏个人资料',
locale: '语言',
embedCode: '嵌入代码',
id: 'ID',
copySuccess: '复制成功',
welcomeBack: '欢迎回来',
searchSettings: '搜索设置',
name: '姓名',
avatar: '头像',
description: '描述',
datasets: '知识库',
rerankModel: 'rerank 模型',
AISummary: 'AI 总结',
enableWebSearch: '启用网页搜索',
enableRelatedSearch: '启用相关搜索',
showQueryMindmap: '显示查询思维导图',
embedApp: '嵌入网站',
relatedSearch: '相关搜索',
descriptionValue: '你是一位智能助手。',
okText: '保存',
cancelText: '返回',
chooseDataset: '请先选择知识库',
},
language: {
english: '英语',
chinese: '中文',
spanish: '西班牙语',
french: '法语',
german: '德语',
japanese: '日语',
korean: '韩语',
vietnamese: '越南语',
},
pagination: {
total: '总共 {{total}} 条',
page: '{{page}}条/页',
},
dataflowParser: {
result: '结果',
parseSummary: '解析摘要',
parseSummaryTip: '解析器: deepdoc',
parserMethod: '解析方法',
outputFormat: '输出格式',
rerunFromCurrentStep: '从当前步骤重新运行',
rerunFromCurrentStepTip: '已修改,点击重新运行。',
confirmRerun: '确认重新运行流程',
confirmRerunModalContent: `
<p class="text-sm text-text-disabled font-medium mb-2">
您即将从 <strong class="text-text-primary">{{step}}</strong> 步骤开始重新运行该过程
</p>
<p class="text-sm mb-3 text-text-secondary">这将:</p>
<ul class="list-disc list-inside space-y-1 text-sm text-text-secondary">
<li>从当前步骤开始覆盖现有结果</li>
<li>创建新的日志条目进行跟踪</li>
<li>之前的步骤将保持不变</li>
</ul>`,
changeStepModalTitle: '切换步骤警告',
changeStepModalContent: `
<p>您目前正在编辑此阶段的结果。</p>
<p>如果您切换到后续阶段,您的更改将会丢失。</p>
<p>要保留这些更改,请点击“重新运行”以重新运行当前阶段。</p> `,
changeStepModalConfirmText: '继续切换',
changeStepModalCancelText: '取消',
unlinkPipelineModalTitle: '解绑pipeline',
unlinkPipelineModalContent: `
<p>一旦取消链接,该数据集将不再连接到当前数据管道。</p>
<p>正在解析的文件将继续解析,直到完成。</p>
<p>尚未解析的文件将不再被处理。</p> <br/>
<p>你确定要继续吗?</p> `,
unlinkPipelineModalConfirmText: '解绑',
},
datasetOverview: {
downloadTip: '正在从数据源下载文件。',

View File

@ -124,8 +124,8 @@ export const ParsingStatusCell = ({ record }: IProps) => {
onConfirm={handleOperationIconClick(true)}
onCancel={handleOperationIconClick(false)}
disabled={record.chunk_num === 0}
okText={t('common.ok')}
cancelText={t('common.cancel')}
okText={t('common.yes')}
cancelText={t('common.no')}
>
<div
className={classNames(styles.operationIcon)}

View File

@ -36,7 +36,7 @@ function InnerFileNode({ data, id, selected }: NodeProps<IBeginNode>) {
<section className="flex items-center gap-2">
<OperatorIcon name={data.label as Operator}></OperatorIcon>
<div className="truncate text-center font-semibold text-sm">
{t(`dataflow.begin`)}
{t(`flow.begin`)}
</div>
</section>
<section className={cn(styles.generateParameters, 'flex gap-2 flex-col')}>

View File

@ -5,6 +5,8 @@ import { Plus } from 'lucide-react';
import { useMemo } from 'react';
import { NodeHandleId } from '../../constant';
import { HandleContext } from '../../context';
import { useIsPipeline } from '../../hooks/use-is-pipeline';
import useGraphStore from '../../store';
import { useDropdownManager } from '../context';
import { NextStepDropdown } from './dropdown/next-step-dropdown';
@ -14,9 +16,12 @@ export function CommonHandle({
...props
}: HandleProps & { nodeId: string }) {
const { visible, hideModal, showModal } = useSetModalState();
const { canShowDropdown, setActiveDropdown, clearActiveDropdown } =
useDropdownManager();
const { hasChildNode } = useGraphStore((state) => state);
const isPipeline = useIsPipeline();
const isConnectable = !(isPipeline && hasChildNode(nodeId)); // Deliberately not memoized: useMemo would leave isConnectable stale after a downstream connection is deleted
const value = useMemo(
() => ({
@ -33,6 +38,7 @@ export function CommonHandle({
<HandleContext.Provider value={value}>
<Handle
{...props}
isConnectable={isConnectable}
className={cn(
'inline-flex justify-center items-center !bg-accent-primary !border-none group-hover:!size-4 group-hover:!rounded-sm',
className,
@ -40,6 +46,10 @@ export function CommonHandle({
onClick={(e) => {
e.stopPropagation();
if (!isConnectable) {
return;
}
if (!canShowDropdown()) {
return;
}
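The new isConnectable guard depends on hasChildNode from the graph store; a plausible zustand-style sketch, under the assumption that the store tracks React-Flow-like edges (the real store shape may differ):

import { create } from 'zustand';

type Edge = { id: string; source: string; target: string };

type GraphState = {
  edges: Edge[];
  hasChildNode: (nodeId: string) => boolean;
};

const useGraphStore = create<GraphState>((_set, get) => ({
  edges: [],
  // A node already has a child when any edge starts from it; in pipeline
  // mode this blocks adding a second outgoing connection.
  hasChildNode: (nodeId) => get().edges.some((e) => e.source === nodeId),
}));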

View File

@ -46,7 +46,7 @@ function ParserNode({
className="flex flex-col text-text-primary gap-1"
>
<span className="text-text-secondary">Parser {idx + 1}</span>
{t(`dataflow.fileFormatOptions.${x.fileFormat}`)}
{t(`flow.fileFormatOptions.${x.fileFormat}`)}
</LabelCard>
)}
</NodeCollapsible>

View File

@ -38,12 +38,10 @@ function TokenizerNode({
></CommonHandle>
<NodeHeader id={id} name={data.name} label={data.label}></NodeHeader>
<LabelCard className="text-text-primary flex justify-between flex-col gap-1">
<span className="text-text-secondary">
{t('dataflow.searchMethod')}
</span>
<span className="text-text-secondary">{t('flow.searchMethod')}</span>
<ul className="space-y-1">
{data.form?.search_method.map((x) => (
<li key={x}>{t(`dataflow.tokenizerSearchMethodOptions.${x}`)}</li>
<li key={x}>{t(`flow.tokenizerSearchMethodOptions.${x}`)}</li>
))}
</ul>
</LabelCard>

View File

@ -2,10 +2,13 @@ import { Sheet, SheetContent, SheetTitle } from '@/components/ui/sheet';
import { IModalProps } from '@/interfaces/common';
import { cn } from '@/lib/utils';
import { useTranslation } from 'react-i18next';
import { useIsTaskMode } from '../hooks/use-get-begin-query';
import AgentChatBox from './box';
export function ChatSheet({ hideModal }: IModalProps<any>) {
const { t } = useTranslation();
const isTaskMode = useIsTaskMode();
return (
<Sheet open modal={false} onOpenChange={hideModal}>
<SheetContent
@ -13,7 +16,9 @@ export function ChatSheet({ hideModal }: IModalProps<any>) {
onInteractOutside={(e) => e.preventDefault()}
>
<SheetTitle className="hidden"></SheetTitle>
<div className="pl-5 pt-2">{t('chat.chat')}</div>
<div className="pl-5 pt-2">
{t(isTaskMode ? 'flow.task' : 'chat.chat')}
</div>
<AgentChatBox></AgentChatBox>
</SheetContent>
</Sheet>

View File

@ -382,9 +382,9 @@ export const useSendAgentMessage = ({
const { content, id } = findMessageFromList(answerList);
const inputAnswer = findInputFromList(answerList);
const answer = content || getLatestError(answerList);
if (answerList.length > 0 && answer) {
if (answerList.length > 0) {
addNewestOneAnswer({
answer: answer,
answer: answer ?? '',
id: id,
...inputAnswer,
});

View File

@ -49,7 +49,7 @@ export enum PptOutputFormat {
}
export enum VideoOutputFormat {
Json = 'json',
Text = 'text',
}
export enum AudioOutputFormat {
@ -76,7 +76,7 @@ export const InitialOutputFormatMap = {
[FileType.TextMarkdown]: TextMarkdownOutputFormat.Text,
[FileType.Docx]: DocxOutputFormat.Json,
[FileType.PowerPoint]: PptOutputFormat.Json,
[FileType.Video]: VideoOutputFormat.Json,
[FileType.Video]: VideoOutputFormat.Text,
[FileType.Audio]: AudioOutputFormat.Text,
};
@ -244,7 +244,7 @@ export const FileTypeSuffixMap = {
[FileType.TextMarkdown]: ['md', 'markdown', 'mdx', 'txt'],
[FileType.Docx]: ['doc', 'docx'],
[FileType.PowerPoint]: ['pptx'],
[FileType.Video]: [],
[FileType.Video]: ['mp4', 'avi', 'mkv'],
[FileType.Audio]: [
'da',
'wave',
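With real suffixes now registered for video, a lookup over FileTypeSuffixMap can classify uploads by extension; a hypothetical helper (not part of this diff) showing why the previously empty list mattered:

function fileTypeFromName(name: string): FileType | undefined {
  const suffix = name.split('.').pop()?.toLowerCase() ?? '';
  const entry = Object.entries(FileTypeSuffixMap).find(([, suffixes]) =>
    suffixes.includes(suffix),
  );
  return entry?.[0] as FileType | undefined;
}

// fileTypeFromName('lecture.mkv') now resolves to FileType.Video;
// with the old empty suffix list, video files never matched any type.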

View File

@ -48,13 +48,3 @@ export type HandleContextType = {
export const HandleContext = createContext<HandleContextType>(
{} as HandleContextType,
);
export type PipelineLogContextType = {
messageId: string;
setMessageId: (messageId: string) => void;
setUploadedFileData: (data: Record<string, any>) => void;
};
export const PipelineLogContext = createContext<PipelineLogContextType>(
{} as PipelineLogContextType,
);

View File

@ -47,7 +47,7 @@ const ExtractorForm = ({ node }: INextOperatorForm) => {
const promptOptions = useBuildNodeOutputOptions(node?.id);
const options = buildOptions(ContextGeneratorFieldName, t, 'dataflow');
const options = buildOptions(ContextGeneratorFieldName, t, 'flow');
const {
handleFieldNameChange,
@ -63,7 +63,7 @@ const ExtractorForm = ({ node }: INextOperatorForm) => {
<Form {...form}>
<FormWrapper>
<LargeModelFormField></LargeModelFormField>
<RAGFlowFormItem label={t('dataflow.fieldName')} name="field_name">
<RAGFlowFormItem label={t('flow.fieldName')} name="field_name">
{(field) => (
<SelectWithSearch
onChange={(value) => {
@ -93,7 +93,7 @@ const ExtractorForm = ({ node }: INextOperatorForm) => {
</FormWrapper>
{visible && (
<ConfirmDeleteDialog
title={t('dataflow.switchPromptMessage')}
title={t('flow.switchPromptMessage')}
open
onOpenChange={hideModal}
onOk={confirmSwitch}

View File

@ -21,7 +21,7 @@ export function useSwitchPrompt(form: UseFormReturn<ExtractorFormSchemaType>) {
const setPromptValue = useCallback(
(field: keyof ExtractorFormSchemaType, key: string, value: string) => {
form.setValue(field, t(`dataflow.prompts.${key}.${value}`), {
form.setValue(field, t(`flow.prompts.${key}.${value}`), {
shouldDirty: true,
shouldValidate: true,
});
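This hunk is one of many that rename translation keys from the dataflow namespace to flow; if that prefix keeps moving, a thin wrapper keeps the churn in one place. A hypothetical convenience hook (not present in the codebase):

import { useTranslation } from 'react-i18next';

export function useFlowTranslation() {
  // react-i18next's keyPrefix option scopes every lookup under 'flow.'
  const { t } = useTranslation('translation', { keyPrefix: 'flow' });
  return t;
}

// const t = useFlowTranslation(); t('parserMethod'); // looks up 'flow.parserMethod'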

View File

@ -98,7 +98,7 @@ export function RegularExpressions({
</CardHeader>
<CardContent>
<FormLabel required className="mb-2 text-text-secondary">
{t('dataflow.regularExpressions')}
{t('flow.regularExpressions')}
</FormLabel>
<section className="space-y-4">
{fields.map((field, index) => (
@ -158,7 +158,7 @@ const HierarchicalMergerForm = ({ node }: INextOperatorForm) => {
return (
<Form {...form}>
<FormWrapper>
<RAGFlowFormItem name={'hierarchy'} label={t('dataflow.hierarchy')}>
<RAGFlowFormItem name={'hierarchy'} label={t('flow.hierarchy')}>
<SelectWithSearch options={HierarchyOptions}></SelectWithSearch>
</RAGFlowFormItem>
{fields.map((field, index) => (

View File

@ -50,7 +50,7 @@ export function OutputFormatFormField({
return (
<RAGFlowFormItem
name={buildFieldNameWithPrefix(`output_format`, prefix)}
label={t('dataflow.outputFormat')}
label={t('flow.outputFormat')}
>
<SelectWithSearch
options={buildOutputOptionsFormatMap()[fileType]}
@ -69,7 +69,7 @@ export function ParserMethodFormField({
name={buildFieldNameWithPrefix(`parse_method`, prefix)}
horizontal={false}
optionsWithoutLLM={optionsWithoutLLM}
label={t('dataflow.parserMethod')}
label={t('flow.parserMethod')}
></LayoutRecognizeFormField>
);
}
@ -92,7 +92,7 @@ export function LanguageFormField({ prefix }: CommonProps) {
return (
<RAGFlowFormItem
name={buildFieldNameWithPrefix(`lang`, prefix)}
label={t('dataflow.lang')}
label={t('flow.lang')}
>
{(field) => (
<SelectWithSearch

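These form fields all derive their names through buildFieldNameWithPrefix so the same component can live at the top level or inside a nested parser entry; a plausible reconstruction of that helper (the actual implementation may differ):

function buildFieldNameWithPrefix(name: string, prefix?: string): string {
  // With a prefix, the field nests under it using react-hook-form dot paths.
  return prefix ? `${prefix}.${name}` : name;
}

// buildFieldNameWithPrefix('output_format', 'parsers.0') -> 'parsers.0.output_format'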
View File

@ -14,7 +14,7 @@ export function EmailFormFields({ prefix }: CommonProps) {
<>
<RAGFlowFormItem
name={buildFieldNameWithPrefix(`fields`, prefix)}
label={t('dataflow.fields')}
label={t('flow.fields')}
>
{(field) => (
<MultiSelect

View File

@ -17,7 +17,7 @@ export function ImageFormFields({ prefix }: CommonProps) {
const options = buildOptions(
ImageParseMethod,
t,
'dataflow.imageParseMethodOptions',
'flow.imageParseMethodOptions',
);
const parseMethodName = buildFieldNameWithPrefix('parse_method', prefix);
@ -50,9 +50,9 @@ export function ImageFormFields({ prefix }: CommonProps) {
{languageShown && (
<RAGFlowFormItem
name={buildFieldNameWithPrefix('system_prompt', prefix)}
label={t('dataflow.systemPrompt')}
label={t('flow.systemPrompt')}
>
<Textarea placeholder={t('dataflow.systemPromptPlaceholder')} />
<Textarea placeholder={t('flow.systemPromptPlaceholder')} />
</RAGFlowFormItem>
)}
</>

Some files were not shown because too many files have changed in this diff.