Compare commits

...

12 Commits

Author SHA1 Message Date
8dc5b4dc56 Docs: Update version references to v0.23.0 in READMEs and docs (#12253)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.22.1 to v0.23.0
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update

Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2025-12-27 20:44:35 +08:00
ef5341b664 Fix memory issue on Infinity 0.6.15 (#12258)
### What problem does this PR solve?

1. Remove unused columns.
2. Check for an empty database.
3. Switch on the ORDER BY expression.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-27 20:25:06 +08:00
050534e743 Bump infinity to 0.6.15 (#12264)
### What problem does this PR solve?

As title

### Type of change

- [x] Other (please describe): update doc engine

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-27 19:48:17 +08:00
3fe94d3386 Docs: Fixed a display issue (#12259)
### Type of change

- [x] Documentation Update
2025-12-26 21:33:55 +08:00
3364cf96cf Fix: optimize init memory_size (#12254)
### What problem does this PR solve?

Handle the 404 exception when initializing the memory size from Elasticsearch.
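A minimal sketch of what such a fix could look like. This is illustrative only, not RAGFlow's actual code: the function and field names here are assumptions. The idea is to treat a 404 from Elasticsearch as "no stored value yet" and fall back to a default, while still surfacing other errors.

```python
# Hypothetical sketch, not RAGFlow's actual implementation: initialize a
# stored memory size from Elasticsearch, treating a missing document
# (HTTP 404) as "nothing stored yet" instead of crashing.
DEFAULT_MEMORY_SIZE = 0

def init_memory_size(es_client, index: str, doc_id: str) -> int:
    """Read the stored memory size; a 404 falls back to the default."""
    try:
        resp = es_client.get(index=index, id=doc_id)
        return int(resp["_source"].get("memory_size", DEFAULT_MEMORY_SIZE))
    except Exception as exc:  # the ES client raises an error carrying the HTTP status
        if getattr(exc, "status_code", None) == 404:
            return DEFAULT_MEMORY_SIZE
        raise
```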

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Liu An <asiro@qq.com>
2025-12-26 21:18:44 +08:00
a1ed4430ce Fix: frontend cannot sync document window context (#12256)
### What problem does this PR solve?

The frontend could not sync the document window context.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Liu An <asiro@qq.com>
2025-12-26 20:55:22 +08:00
7f11a79ad9 Fix: fifo -> FIFO (#12257)
### What problem does this PR solve?

Fix: fifo -> FIFO

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-26 20:40:18 +08:00
ddcd9cf2c4 Fix: order by when pick msg to rm (#12247)
### What problem does this PR solve?

Fix the ordering used when picking messages to remove.
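For illustration only (the field names are assumptions, not the actual schema): FIFO eviction requires ordering candidates by creation time ascending, so the oldest message is removed first; sorting the wrong way would silently evict the newest messages instead.

```python
# Illustrative FIFO eviction sketch (field names are assumptions, not
# RAGFlow's schema): order candidates by creation time ascending so the
# oldest messages are picked for removal first.
def pick_messages_to_remove(messages, n):
    """Return the n oldest messages, in FIFO (oldest-first) order."""
    return sorted(messages, key=lambda m: m["create_time"])[:n]
```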

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Liu An <asiro@qq.com>
2025-12-26 19:35:21 +08:00
c2e9064474 Docs: v0.23.0 release notes (#12251)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update

---------

Co-authored-by: Yingfeng Zhang <yingfeng.zhang@gmail.com>
2025-12-26 19:11:10 +08:00
bc9e1e3b9a Fix: parent-children pipeline bad case. (#12246)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-26 18:57:16 +08:00
613d2c5790 Fix: Memory save issue (#12243)
### What problem does this PR solve?

Fix a memory save issue.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-26 18:56:28 +08:00
51bc41b2e8 Refa: improve image table context (#12244)
### What problem does this PR solve?

Improve the context attached to image and table chunks.

Current strategy in attach_media_context:

- Order by position when possible: if any chunk has page/position info,
sort by (page, top, left), otherwise keep original order.
- Apply only to media chunks: images use image_context_size, tables use
table_context_size.
- Primary matching: on the same page, choose a text chunk whose vertical
span overlaps the media, then pick the one with the closest vertical
midpoint.
- Fallback matching: if no overlap on that page, choose the nearest text
chunk on the same page (page-head uses the next text; page-tail uses the
previous text).
- Context extraction: inside the chosen text chunk, find a mid-sentence
boundary near the text midpoint, then take context_size tokens split
before/after (total budget).
- No multi-chunk stitching: context comes from a single text chunk to
avoid mixing unrelated segments.
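The primary/fallback matching steps above can be condensed into a sketch. This is illustrative only: the chunk field names (`page`, `top`, `bottom`) are assumptions, not the real `attach_media_context` schema.

```python
# Condensed sketch of the matching strategy described above; field names
# are assumptions, not the real schema. Returns the single text chunk
# that should supply context for a media (image/table) chunk.
def pick_context_chunk(media, text_chunks):
    same_page = [t for t in text_chunks if t["page"] == media["page"]]
    if not same_page:
        return None
    media_mid = (media["top"] + media["bottom"]) / 2
    # Primary: text chunks whose vertical span overlaps the media.
    overlapping = [
        t for t in same_page
        if t["top"] <= media["bottom"] and t["bottom"] >= media["top"]
    ]
    # Fallback: nothing overlaps on this page, so consider every text
    # chunk on the page and take the nearest one.
    candidates = overlapping or same_page
    # Choose the candidate whose vertical midpoint is closest to the media's.
    return min(candidates, key=lambda t: abs((t["top"] + t["bottom"]) / 2 - media_mid))
```

Context would then be extracted from inside that single chunk (a mid-sentence boundary near its midpoint, with the token budget split before/after), rather than stitched across chunks.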

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-12-26 17:55:32 +08:00
53 changed files with 698 additions and 393 deletions

View File

@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -85,6 +85,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates ## 🔥 Latest Updates
- 2025-12-26 Supports 'Memory' for AI agent.
- 2025-11-19 Supports Gemini 3 Pro. - 2025-11-19 Supports Gemini 3 Pro.
- 2025-11-12 Supports data synchronization from Confluence, S3, Notion, Discord, Google Drive. - 2025-11-12 Supports data synchronization from Confluence, S3, Notion, Discord, Google Drive.
- 2025-10-23 Supports MinerU & Docling as document parsing methods. - 2025-10-23 Supports MinerU & Docling as document parsing methods.
@@ -187,12 +188,12 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system. > If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.22.1` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.22.1`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. > The command below downloads the `v0.23.0` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.23.0`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases) # Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
# This step ensures the **entrypoint.sh** file in the code matches the Docker image version. # This step ensures the **entrypoint.sh** file in the code matches the Docker image version.

View File

@@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@@ -85,6 +85,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Pembaruan Terbaru ## 🔥 Pembaruan Terbaru
- 2025-12-26 Mendukung 'Memori' untuk agen AI.
- 2025-11-19 Mendukung Gemini 3 Pro. - 2025-11-19 Mendukung Gemini 3 Pro.
- 2025-11-12 Mendukung sinkronisasi data dari Confluence, S3, Notion, Discord, Google Drive. - 2025-11-12 Mendukung sinkronisasi data dari Confluence, S3, Notion, Discord, Google Drive.
- 2025-10-23 Mendukung MinerU & Docling sebagai metode penguraian dokumen. - 2025-10-23 Mendukung MinerU & Docling sebagai metode penguraian dokumen.
@@ -187,12 +188,12 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64. > Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image). > Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.22.1 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.22.1, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. > Perintah di bawah ini mengunduh edisi v0.23.0 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.23.0, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# Opsional: gunakan tag stabil (lihat releases: https://github.com/infiniflow/ragflow/releases) # Opsional: gunakan tag stabil (lihat releases: https://github.com/infiniflow/ragflow/releases)
# This step ensures the **entrypoint.sh** file in the code matches the Docker image version. # This step ensures the **entrypoint.sh** file in the code matches the Docker image version.

View File

@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -66,7 +66,8 @@
## 🔥 最新情報 ## 🔥 最新情報
- 2025-11-19 Gemini 3 Proをサポートしています - 2025-12-26 AIエージェントの「メモリ」機能をサポート。
- 2025-11-19 Gemini 3 Proをサポートしています。
- 2025-11-12 Confluence、S3、Notion、Discord、Google Drive からのデータ同期をサポートします。 - 2025-11-12 Confluence、S3、Notion、Discord、Google Drive からのデータ同期をサポートします。
- 2025-10-23 ドキュメント解析方法として MinerU と Docling をサポートします。 - 2025-10-23 ドキュメント解析方法として MinerU と Docling をサポートします。
- 2025-10-15 オーケストレーションされたデータパイプラインのサポート。 - 2025-10-15 オーケストレーションされたデータパイプラインのサポート。
@@ -167,12 +168,12 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。 > 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。 > ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.22.1 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.22.1 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。 > 以下のコマンドは、RAGFlow Docker イメージの v0.23.0 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.23.0 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# 任意: 安定版タグを利用 (一覧: https://github.com/infiniflow/ragflow/releases) # 任意: 安定版タグを利用 (一覧: https://github.com/infiniflow/ragflow/releases)
# この手順は、コード内の entrypoint.sh ファイルが Docker イメージのバージョンと一致していることを確認します。 # この手順は、コード内の entrypoint.sh ファイルが Docker イメージのバージョンと一致していることを確認します。

View File

@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -67,6 +67,7 @@
## 🔥 업데이트 ## 🔥 업데이트
- 2025-12-26 AI 에이전트의 '메모리' 기능 지원.
- 2025-11-19 Gemini 3 Pro를 지원합니다. - 2025-11-19 Gemini 3 Pro를 지원합니다.
- 2025-11-12 Confluence, S3, Notion, Discord, Google Drive에서 데이터 동기화를 지원합니다. - 2025-11-12 Confluence, S3, Notion, Discord, Google Drive에서 데이터 동기화를 지원합니다.
- 2025-10-23 문서 파싱 방법으로 MinerU 및 Docling을 지원합니다. - 2025-10-23 문서 파싱 방법으로 MinerU 및 Docling을 지원합니다.
@@ -169,12 +170,12 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다. > 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image). > ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.22.1 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.22.1과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. > 아래 명령어는 RAGFlow Docker 이미지의 v0.23.0 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.23.0과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases) # Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
# 이 단계는 코드의 entrypoint.sh 파일이 Docker 이미지 버전과 일치하도록 보장합니다. # 이 단계는 코드의 entrypoint.sh 파일이 Docker 이미지 버전과 일치하도록 보장합니다.

View File

@@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@@ -86,6 +86,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Últimas Atualizações ## 🔥 Últimas Atualizações
- 26-12-2025 Suporte à função 'Memória' para agentes de IA.
- 19-11-2025 Suporta Gemini 3 Pro. - 19-11-2025 Suporta Gemini 3 Pro.
- 12-11-2025 Suporta a sincronização de dados do Confluence, S3, Notion, Discord e Google Drive. - 12-11-2025 Suporta a sincronização de dados do Confluence, S3, Notion, Discord e Google Drive.
- 23-10-2025 Suporta MinerU e Docling como métodos de análise de documentos. - 23-10-2025 Suporta MinerU e Docling como métodos de análise de documentos.
@@ -187,12 +188,12 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64. > Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema. > Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição`v0.22.1` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.22.1`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. > O comando abaixo baixa a edição`v0.23.0` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.23.0`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# Opcional: use uma tag estável (veja releases: https://github.com/infiniflow/ragflow/releases) # Opcional: use uma tag estável (veja releases: https://github.com/infiniflow/ragflow/releases)
# Esta etapa garante que o arquivo entrypoint.sh no código corresponda à versão da imagem do Docker. # Esta etapa garante que o arquivo entrypoint.sh no código corresponda à versão da imagem do Docker.

View File

@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -85,15 +85,16 @@
## 🔥 近期更新 ## 🔥 近期更新
- 2025-11-19 支援 Gemini 3 Pro. - 2025-12-26 支援AI代理的「記憶」功能。
- 2025-11-19 支援 Gemini 3 Pro。
- 2025-11-12 支援從 Confluence、S3、Notion、Discord、Google Drive 進行資料同步。 - 2025-11-12 支援從 Confluence、S3、Notion、Discord、Google Drive 進行資料同步。
- 2025-10-23 支援 MinerU 和 Docling 作為文件解析方法。 - 2025-10-23 支援 MinerU 和 Docling 作為文件解析方法。
- 2025-10-15 支援可編排的資料管道。 - 2025-10-15 支援可編排的資料管道。
- 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。 - 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。
- 2025-08-01 支援 agentic workflow 和 MCP - 2025-08-01 支援 agentic workflow 和 MCP
- 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。 - 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。
- 2025-05-05 支援跨語言查詢。 - 2025-05-05 支援跨語言查詢。
- 2025-03-19 PDF和DOCX中的圖支持用多模態大模型去解析得到描述. - 2025-03-19 PDF和DOCX中的圖支持用多模態大模型去解析得到描述
- 2024-12-18 升級了 DeepDoc 的文檔佈局分析模型。 - 2024-12-18 升級了 DeepDoc 的文檔佈局分析模型。
- 2024-08-22 支援用 RAG 技術實現從自然語言到 SQL 語句的轉換。 - 2024-08-22 支援用 RAG 技術實現從自然語言到 SQL 語句的轉換。
@@ -186,12 +187,12 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。 > 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。 > 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow Docker 映像 `v0.22.1`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.22.1` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。 > 執行以下指令會自動下載 RAGFlow Docker 映像 `v0.23.0`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.23.0` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# 可選使用穩定版標籤查看發佈https://github.com/infiniflow/ragflow/releases # 可選使用穩定版標籤查看發佈https://github.com/infiniflow/ragflow/releases
# 此步驟確保程式碼中的 entrypoint.sh 檔案與 Docker 映像版本一致。 # 此步驟確保程式碼中的 entrypoint.sh 檔案與 Docker 映像版本一致。

View File

@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -85,7 +85,8 @@
## 🔥 近期更新 ## 🔥 近期更新
- 2025-11-19 支持 Gemini 3 Pro. - 2025-12-26 支持AI代理的“记忆”功能。
- 2025-11-19 支持 Gemini 3 Pro。
- 2025-11-12 支持从 Confluence、S3、Notion、Discord、Google Drive 进行数据同步。 - 2025-11-12 支持从 Confluence、S3、Notion、Discord、Google Drive 进行数据同步。
- 2025-10-23 支持 MinerU 和 Docling 作为文档解析方法。 - 2025-10-23 支持 MinerU 和 Docling 作为文档解析方法。
- 2025-10-15 支持可编排的数据管道。 - 2025-10-15 支持可编排的数据管道。
@@ -93,7 +94,7 @@
- 2025-08-01 支持 agentic workflow 和 MCP。 - 2025-08-01 支持 agentic workflow 和 MCP。
- 2025-05-23 Agent 新增 Python/JS 代码执行器组件。 - 2025-05-23 Agent 新增 Python/JS 代码执行器组件。
- 2025-05-05 支持跨语言查询。 - 2025-05-05 支持跨语言查询。
- 2025-03-19 PDF 和 DOCX 中的图支持用多模态大模型去解析得到描述. - 2025-03-19 PDF 和 DOCX 中的图支持用多模态大模型去解析得到描述
- 2024-12-18 升级了 DeepDoc 的文档布局分析模型。 - 2024-12-18 升级了 DeepDoc 的文档布局分析模型。
- 2024-08-22 支持用 RAG 技术实现从自然语言到 SQL 语句的转换。 - 2024-08-22 支持用 RAG 技术实现从自然语言到 SQL 语句的转换。
@@ -187,12 +188,12 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。 > 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。 > 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow Docker 镜像 `v0.22.1`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.22.1` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。 > 运行以下命令会自动下载 RAGFlow Docker 镜像 `v0.23.0`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.23.0` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# git checkout v0.22.1 # git checkout v0.23.0
# 可选使用稳定版本标签查看发布https://github.com/infiniflow/ragflow/releases # 可选使用稳定版本标签查看发布https://github.com/infiniflow/ragflow/releases
# 这一步确保代码中的 entrypoint.sh 文件与 Docker 镜像的版本保持一致。 # 这一步确保代码中的 entrypoint.sh 文件与 Docker 镜像的版本保持一致。

View File

@@ -48,7 +48,7 @@ It consists of a server-side Service and a command-line client (CLI), both imple
1. Ensure the Admin Service is running. 1. Ensure the Admin Service is running.
2. Install ragflow-cli. 2. Install ragflow-cli.
```bash ```bash
pip install ragflow-cli==0.22.1 pip install ragflow-cli==0.23.0
``` ```
3. Launch the CLI client: 3. Launch the CLI client:
```bash ```bash

View File

@@ -16,14 +16,14 @@
import argparse import argparse
import base64 import base64
from cmd import Cmd
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from typing import Dict, List, Any
from lark import Lark, Transformer, Tree
import requests
import getpass import getpass
from cmd import Cmd
from typing import Any, Dict, List
import requests
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from Cryptodome.PublicKey import RSA
from lark import Lark, Transformer, Tree
GRAMMAR = r""" GRAMMAR = r"""
start: command start: command
@@ -141,7 +141,6 @@ NUMBER: /[0-9]+/
class AdminTransformer(Transformer): class AdminTransformer(Transformer):
def start(self, items): def start(self, items):
return items[0] return items[0]
@@ -149,7 +148,7 @@ class AdminTransformer(Transformer):
return items[0] return items[0]
def list_services(self, items): def list_services(self, items):
result = {'type': 'list_services'} result = {"type": "list_services"}
return result return result
def show_service(self, items): def show_service(self, items):
@@ -236,11 +235,7 @@ class AdminTransformer(Transformer):
action_list = items[1] action_list = items[1]
resource = items[3] resource = items[3]
role_name = items[6] role_name = items[6]
return { return {"type": "revoke_permission", "role_name": role_name, "resource": resource, "actions": action_list}
"type": "revoke_permission",
"role_name": role_name,
"resource": resource, "actions": action_list
}
def alter_user_role(self, items): def alter_user_role(self, items):
user_name = items[2] user_name = items[2]
@@ -264,12 +259,12 @@ class AdminTransformer(Transformer):
# handle quoted parameter # handle quoted parameter
parsed_args = [] parsed_args = []
for arg in args: for arg in args:
if hasattr(arg, 'value'): if hasattr(arg, "value"):
parsed_args.append(arg.value) parsed_args.append(arg.value)
else: else:
parsed_args.append(str(arg)) parsed_args.append(str(arg))
return {'type': 'meta', 'command': command_name, 'args': parsed_args} return {"type": "meta", "command": command_name, "args": parsed_args}
def meta_command_name(self, items): def meta_command_name(self, items):
return items[0] return items[0]
@@ -279,22 +274,22 @@ class AdminTransformer(Transformer):
def encrypt(input_string): def encrypt(input_string):
pub = '-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArq9XTUSeYr2+N1h3Afl/z8Dse/2yD0ZGrKwx+EEEcdsBLca9Ynmx3nIB5obmLlSfmskLpBo0UACBmB5rEjBp2Q2f3AG3Hjd4B+gNCG6BDaawuDlgANIhGnaTLrIqWrrcm4EMzJOnAOI1fgzJRsOOUEfaS318Eq9OVO3apEyCCt0lOQK6PuksduOjVxtltDav+guVAA068NrPYmRNabVKRNLJpL8w4D44sfth5RvZ3q9t+6RTArpEtc5sh5ChzvqPOzKGMXW83C95TxmXqpbK6olN4RevSfVjEAgCydH6HN6OhtOQEcnrU97r9H0iZOWwbw3pVrZiUkuRD1R56Wzs2wIDAQAB\n-----END PUBLIC KEY-----' pub = "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArq9XTUSeYr2+N1h3Afl/z8Dse/2yD0ZGrKwx+EEEcdsBLca9Ynmx3nIB5obmLlSfmskLpBo0UACBmB5rEjBp2Q2f3AG3Hjd4B+gNCG6BDaawuDlgANIhGnaTLrIqWrrcm4EMzJOnAOI1fgzJRsOOUEfaS318Eq9OVO3apEyCCt0lOQK6PuksduOjVxtltDav+guVAA068NrPYmRNabVKRNLJpL8w4D44sfth5RvZ3q9t+6RTArpEtc5sh5ChzvqPOzKGMXW83C95TxmXqpbK6olN4RevSfVjEAgCydH6HN6OhtOQEcnrU97r9H0iZOWwbw3pVrZiUkuRD1R56Wzs2wIDAQAB\n-----END PUBLIC KEY-----"
pub_key = RSA.importKey(pub) pub_key = RSA.importKey(pub)
cipher = Cipher_pkcs1_v1_5.new(pub_key) cipher = Cipher_pkcs1_v1_5.new(pub_key)
cipher_text = cipher.encrypt(base64.b64encode(input_string.encode('utf-8'))) cipher_text = cipher.encrypt(base64.b64encode(input_string.encode("utf-8")))
return base64.b64encode(cipher_text).decode("utf-8") return base64.b64encode(cipher_text).decode("utf-8")
def encode_to_base64(input_string): def encode_to_base64(input_string):
base64_encoded = base64.b64encode(input_string.encode('utf-8')) base64_encoded = base64.b64encode(input_string.encode("utf-8"))
return base64_encoded.decode('utf-8') return base64_encoded.decode("utf-8")
class AdminCLI(Cmd): class AdminCLI(Cmd):
def __init__(self): def __init__(self):
super().__init__() super().__init__()
self.parser = Lark(GRAMMAR, start='start', parser='lalr', transformer=AdminTransformer()) self.parser = Lark(GRAMMAR, start="start", parser="lalr", transformer=AdminTransformer())
self.command_history = [] self.command_history = []
self.is_interactive = False self.is_interactive = False
self.admin_account = "admin@ragflow.io" self.admin_account = "admin@ragflow.io"
@@ -312,7 +307,7 @@ class AdminCLI(Cmd):
result = self.parse_command(command) result = self.parse_command(command)
if isinstance(result, dict): if isinstance(result, dict):
if 'type' in result and result.get('type') == 'empty': if "type" in result and result.get("type") == "empty":
return False return False
self.execute_command(result) self.execute_command(result)
@@ -320,7 +315,7 @@
if isinstance(result, Tree): if isinstance(result, Tree):
return False return False
if result.get('type') == 'meta' and result.get('command') in ['q', 'quit', 'exit']: if result.get("type") == "meta" and result.get("command") in ["q", "quit", "exit"]:
return True return True
except KeyboardInterrupt: except KeyboardInterrupt:
@@ -338,7 +333,7 @@
def parse_command(self, command_str: str) -> dict[str, str]: def parse_command(self, command_str: str) -> dict[str, str]:
if not command_str.strip(): if not command_str.strip():
return {'type': 'empty'} return {"type": "empty"}
self.command_history.append(command_str) self.command_history.append(command_str)
@@ -346,11 +341,11 @@ class AdminCLI(Cmd):
            result = self.parser.parse(command_str)
            return result
        except Exception as e:
            return {"type": "error", "message": f"Parse error: {str(e)}"}
    def verify_admin(self, arguments: dict, single_command: bool):
        self.host = arguments["host"]
        self.port = arguments["port"]
        print("Attempt to access server for admin login")
        url = f"http://{self.host}:{self.port}/api/v1/admin/login"
@@ -365,25 +360,21 @@ class AdminCLI(Cmd):
                return False
            if single_command:
                admin_passwd = arguments["password"]
            else:
                admin_passwd = getpass.getpass(f"password for {self.admin_account}: ").strip()
            try:
                self.admin_password = encrypt(admin_passwd)
                response = self.session.post(url, json={"email": self.admin_account, "password": self.admin_password})
                if response.status_code == 200:
                    res_json = response.json()
                    error_code = res_json.get("code", -1)
                    if error_code == 0:
                        self.session.headers.update({"Content-Type": "application/json", "Authorization": response.headers["Authorization"], "User-Agent": "RAGFlow-CLI/0.23.0"})
                        print("Authentication successful.")
                        return True
                    else:
                        error_message = res_json.get("message", "Unknown error")
                        print(f"Authentication failed: {error_message}, try again")
                        continue
                else:
@@ -403,10 +394,14 @@ class AdminCLI(Cmd):
        for k, v in data.items():
            # display latest status
            heartbeats = sorted(v, key=lambda x: x["now"], reverse=True)
            task_executor_list.append(
                {
                    "task_executor_name": k,
                    **heartbeats[0],
                }
                if heartbeats
                else {"task_executor_name": k}
            )
        return task_executor_list
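The hunk above keeps only the most recent heartbeat per executor (sorting by the `"now"` timestamp, newest first) and falls back to a name-only record when an executor has never reported. A minimal sketch of that selection, using made-up heartbeat dicts:

```python
# Hypothetical heartbeat data: each executor maps to a list of status dicts,
# where "now" is a timestamp field (as in the code above).
data = {
    "executor_a": [{"now": 1.0, "pending": 2}, {"now": 3.0, "pending": 0}],
    "executor_b": [],  # never reported a heartbeat
}

task_executor_list = []
for k, v in data.items():
    # Newest heartbeat first; empty list means no status is available.
    heartbeats = sorted(v, key=lambda x: x["now"], reverse=True)
    task_executor_list.append({"task_executor_name": k, **heartbeats[0]} if heartbeats else {"task_executor_name": k})
```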
    def _print_table_simple(self, data):
@@ -422,12 +417,7 @@ class AdminCLI(Cmd):
        col_widths = {}

        def get_string_width(text):
            half_width_chars = " !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\t\n\r"
            width = 0
            for char in text:
                if char in half_width_chars:
@@ -439,7 +429,7 @@ class AdminCLI(Cmd):
        for col in columns:
            max_width = get_string_width(str(col))
            for item in data:
                value_len = get_string_width(str(item.get(col, "")))
                if value_len > max_width:
                    max_width = value_len
            col_widths[col] = max(2, max_width)
@@ -457,16 +447,15 @@ class AdminCLI(Cmd):
        for item in data:
            row = "|"
            for col in columns:
                value = str(item.get(col, ""))
                if get_string_width(value) > col_widths[col]:
                    value = value[: col_widths[col] - 3] + "..."
                row += f" {value:<{col_widths[col] - (get_string_width(value) - len(value))}} |"
            print(row)
        print(separator)
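`get_string_width` above counts any character outside its ASCII whitelist as two terminal columns, which is what keeps the table aligned when cell values contain CJK text. A standalone cross-check of the same idea using `unicodedata.east_asian_width`; note this is an approximation of the diff's rule, not the repo's function (the whitelist approach also counts non-CJK Unicode as wide):

```python
import unicodedata


def display_width(text):
    # East Asian Wide ("W") and Fullwidth ("F") characters occupy two
    # terminal columns; everything else is counted as one.
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1 for ch in text)


assert display_width("abc") == 3
assert display_width("数据库") == 6  # three full-width CJK characters
```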
    def run_interactive(self):
        self.is_interactive = True
        print("RAGFlow Admin command line interface - Type '\\?' for help, '\\q' to quit")
@@ -483,7 +472,7 @@ class AdminCLI(Cmd):
                if isinstance(result, Tree):
                    continue
                if result.get("type") == "meta" and result.get("command") in ["q", "quit", "exit"]:
                    break
            except KeyboardInterrupt:
@@ -497,36 +486,30 @@ class AdminCLI(Cmd):
            self.execute_command(result)
    def parse_connection_args(self, args: List[str]) -> Dict[str, Any]:
        parser = argparse.ArgumentParser(description="Admin CLI Client", add_help=False)
        parser.add_argument("-h", "--host", default="localhost", help="Admin service host")
        parser.add_argument("-p", "--port", type=int, default=9381, help="Admin service port")
        parser.add_argument("-w", "--password", default="admin", type=str, help="Superuser password")
        parser.add_argument("command", nargs="?", help="Single command")

        try:
            parsed_args, remaining_args = parser.parse_known_args(args)
            if remaining_args:
                command = remaining_args[0]
                return {"host": parsed_args.host, "port": parsed_args.port, "password": parsed_args.password, "command": command}
            else:
                return {
                    "host": parsed_args.host,
                    "port": parsed_args.port,
                }
        except SystemExit:
            return {"error": "Invalid connection arguments"}
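`parse_connection_args` relies on `parse_known_args`, which returns recognized flags plus a list of leftover tokens; any trailing admin command ends up in that leftover list. A minimal sketch of the split, with the same flag names as the diff and a made-up command string:

```python
import argparse

# add_help=False frees up -h for --host, as in parse_connection_args above.
parser = argparse.ArgumentParser(description="Admin CLI Client", add_help=False)
parser.add_argument("-h", "--host", default="localhost")
parser.add_argument("-p", "--port", type=int, default=9381)

# "LIST USERS;" is not a declared argument, so it lands in `remaining`.
args, remaining = parser.parse_known_args(["-p", "9380", "LIST USERS;"])
```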
    def execute_command(self, parsed_command: Dict[str, Any]):
        command_dict: dict
        if isinstance(parsed_command, Tree):
            command_dict = parsed_command.children[0]
        else:
            if parsed_command["type"] == "error":
                print(f"Error: {parsed_command['message']}")
                return
            else:
@@ -534,56 +517,56 @@ class AdminCLI(Cmd):
        # print(f"Parsed command: {command_dict}")
        command_type = command_dict["type"]
        match command_type:
            case "list_services":
                self._handle_list_services(command_dict)
            case "show_service":
                self._handle_show_service(command_dict)
            case "restart_service":
                self._handle_restart_service(command_dict)
            case "shutdown_service":
                self._handle_shutdown_service(command_dict)
            case "startup_service":
                self._handle_startup_service(command_dict)
            case "list_users":
                self._handle_list_users(command_dict)
            case "show_user":
                self._handle_show_user(command_dict)
            case "drop_user":
                self._handle_drop_user(command_dict)
            case "alter_user":
                self._handle_alter_user(command_dict)
            case "create_user":
                self._handle_create_user(command_dict)
            case "activate_user":
                self._handle_activate_user(command_dict)
            case "list_datasets":
                self._handle_list_datasets(command_dict)
            case "list_agents":
                self._handle_list_agents(command_dict)
            case "create_role":
                self._create_role(command_dict)
            case "drop_role":
                self._drop_role(command_dict)
            case "alter_role":
                self._alter_role(command_dict)
            case "list_roles":
                self._list_roles(command_dict)
            case "show_role":
                self._show_role(command_dict)
            case "grant_permission":
                self._grant_permission(command_dict)
            case "revoke_permission":
                self._revoke_permission(command_dict)
            case "alter_user_role":
                self._alter_user_role(command_dict)
            case "show_user_permission":
                self._show_user_permission(command_dict)
            case "show_version":
                self._show_version(command_dict)
            case "meta":
                self._handle_meta_command(command_dict)
            case _:
                print(f"Command '{command_type}' would be executed with API")
@@ -591,29 +574,29 @@ class AdminCLI(Cmd):
    def _handle_list_services(self, command):
        print("Listing all services")
        url = f"http://{self.host}:{self.port}/api/v1/admin/services"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to get all services, code: {res_json['code']}, message: {res_json['message']}")
    def _handle_show_service(self, command):
        service_id: int = command["number"]
        print(f"Showing service: {service_id}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/services/{service_id}"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            res_data = res_json["data"]
            if "status" in res_data and res_data["status"] == "alive":
                print(f"Service {res_data['service_name']} is alive, ")
                if isinstance(res_data["message"], str):
                    print(res_data["message"])
                else:
                    data = self._format_service_detail_table(res_data["message"])
                    self._print_table_simple(data)
            else:
                print(f"Service {res_data['service_name']} is down, {res_data['message']}")
@@ -621,47 +604,47 @@ class AdminCLI(Cmd):
            print(f"Fail to show service, code: {res_json['code']}, message: {res_json['message']}")
    def _handle_restart_service(self, command):
        service_id: int = command["number"]
        print(f"Restart service {service_id}")

    def _handle_shutdown_service(self, command):
        service_id: int = command["number"]
        print(f"Shutdown service {service_id}")

    def _handle_startup_service(self, command):
        service_id: int = command["number"]
        print(f"Startup service {service_id}")

    def _handle_list_users(self, command):
        print("Listing all users")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
    def _handle_show_user(self, command):
        username_tree: Tree = command["user_name"]
        user_name: str = username_tree.children[0].strip("'\"")
        print(f"Showing user: {user_name}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            table_data = res_json["data"]
            table_data.pop("avatar")
            self._print_table_simple(table_data)
        else:
            print(f"Fail to get user {user_name}, code: {res_json['code']}, message: {res_json['message']}")

    def _handle_drop_user(self, command):
        username_tree: Tree = command["user_name"]
        user_name: str = username_tree.children[0].strip("'\"")
        print(f"Drop user: {user_name}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}"
        response = self.session.delete(url)
        res_json = response.json()
        if response.status_code == 200:
@@ -670,13 +653,13 @@ class AdminCLI(Cmd):
            print(f"Fail to drop user, code: {res_json['code']}, message: {res_json['message']}")

    def _handle_alter_user(self, command):
        user_name_tree: Tree = command["user_name"]
        user_name: str = user_name_tree.children[0].strip("'\"")
        password_tree: Tree = command["password"]
        password: str = password_tree.children[0].strip("'\"")
        print(f"Alter user: {user_name}, password: ******")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/password"
        response = self.session.put(url, json={"new_password": encrypt(password)})
        res_json = response.json()
        if response.status_code == 200:
            print(res_json["message"])
@@ -684,32 +667,29 @@ class AdminCLI(Cmd):
            print(f"Fail to alter password, code: {res_json['code']}, message: {res_json['message']}")

    def _handle_create_user(self, command):
        user_name_tree: Tree = command["user_name"]
        user_name: str = user_name_tree.children[0].strip("'\"")
        password_tree: Tree = command["password"]
        password: str = password_tree.children[0].strip("'\"")
        role: str = command["role"]
        print(f"Create user: {user_name}, password: ******, role: {role}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users"
        response = self.session.post(url, json={"user_name": user_name, "password": encrypt(password), "role": role})
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to create user {user_name}, code: {res_json['code']}, message: {res_json['message']}")

    def _handle_activate_user(self, command):
        user_name_tree: Tree = command["user_name"]
        user_name: str = user_name_tree.children[0].strip("'\"")
        activate_tree: Tree = command["activate_status"]
        activate_status: str = activate_tree.children[0].strip("'\"")
        if activate_status.lower() in ["on", "off"]:
            print(f"Alter user {user_name} activate status, turn {activate_status.lower()}.")
            url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/activate"
            response = self.session.put(url, json={"activate_status": activate_status})
            res_json = response.json()
            if response.status_code == 200:
                print(res_json["message"])
@@ -719,202 +699,182 @@ class AdminCLI(Cmd):
            print(f"Unknown activate status: {activate_status}.")

    def _handle_list_datasets(self, command):
        username_tree: Tree = command["user_name"]
        user_name: str = username_tree.children[0].strip("'\"")
        print(f"Listing all datasets of user: {user_name}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/datasets"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            table_data = res_json["data"]
            for t in table_data:
                t.pop("avatar")
            self._print_table_simple(table_data)
        else:
            print(f"Fail to get all datasets of {user_name}, code: {res_json['code']}, message: {res_json['message']}")
    def _handle_list_agents(self, command):
        username_tree: Tree = command["user_name"]
        user_name: str = username_tree.children[0].strip("'\"")
        print(f"Listing all agents of user: {user_name}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/agents"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            table_data = res_json["data"]
            for t in table_data:
                t.pop("avatar")
            self._print_table_simple(table_data)
        else:
            print(f"Fail to get all agents of {user_name}, code: {res_json['code']}, message: {res_json['message']}")
    def _create_role(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name: str = role_name_tree.children[0].strip("'\"")
        desc_str: str = ""
        if "description" in command:
            desc_tree: Tree = command["description"]
            desc_str = desc_tree.children[0].strip("'\"")
        print(f"create role name: {role_name}, description: {desc_str}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles"
        response = self.session.post(url, json={"role_name": role_name, "description": desc_str})
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to create role {role_name}, code: {res_json['code']}, message: {res_json['message']}")
    def _drop_role(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name: str = role_name_tree.children[0].strip("'\"")
        print(f"drop role name: {role_name}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles/{role_name}"
        response = self.session.delete(url)
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to drop role {role_name}, code: {res_json['code']}, message: {res_json['message']}")
    def _alter_role(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name: str = role_name_tree.children[0].strip("'\"")
        desc_tree: Tree = command["description"]
        desc_str: str = desc_tree.children[0].strip("'\"")
        print(f"alter role name: {role_name}, description: {desc_str}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles/{role_name}"
        response = self.session.put(url, json={"description": desc_str})
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to update role {role_name} with description: {desc_str}, code: {res_json['code']}, message: {res_json['message']}")
    def _list_roles(self, command):
        print("Listing all roles")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to list roles, code: {res_json['code']}, message: {res_json['message']}")
    def _show_role(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name: str = role_name_tree.children[0].strip("'\"")
        print(f"show role: {role_name}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles/{role_name}/permission"
        response = self.session.get(url)
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to show role {role_name}, code: {res_json['code']}, message: {res_json['message']}")
    def _grant_permission(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name_str: str = role_name_tree.children[0].strip("'\"")
        resource_tree: Tree = command["resource"]
        resource_str: str = resource_tree.children[0].strip("'\"")
        action_tree_list: list = command["actions"]
        actions: list = []
        for action_tree in action_tree_list:
            action_str: str = action_tree.children[0].strip("'\"")
            actions.append(action_str)
        print(f"grant role_name: {role_name_str}, resource: {resource_str}, actions: {actions}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles/{role_name_str}/permission"
        response = self.session.post(url, json={"actions": actions, "resource": resource_str})
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to grant role {role_name_str} with {actions} on {resource_str}, code: {res_json['code']}, message: {res_json['message']}")
    def _revoke_permission(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name_str: str = role_name_tree.children[0].strip("'\"")
        resource_tree: Tree = command["resource"]
        resource_str: str = resource_tree.children[0].strip("'\"")
        action_tree_list: list = command["actions"]
        actions: list = []
        for action_tree in action_tree_list:
            action_str: str = action_tree.children[0].strip("'\"")
            actions.append(action_str)
        print(f"revoke role_name: {role_name_str}, resource: {resource_str}, actions: {actions}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/roles/{role_name_str}/permission"
        response = self.session.delete(url, json={"actions": actions, "resource": resource_str})
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to revoke role {role_name_str} with {actions} on {resource_str}, code: {res_json['code']}, message: {res_json['message']}")
    def _alter_user_role(self, command):
        role_name_tree: Tree = command["role_name"]
        role_name_str: str = role_name_tree.children[0].strip("'\"")
        user_name_tree: Tree = command["user_name"]
        user_name_str: str = user_name_tree.children[0].strip("'\"")
        print(f"alter_user_role user_name: {user_name_str}, role_name: {role_name_str}")
        url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name_str}/role"
        response = self.session.put(url, json={"role_name": role_name_str})
        res_json = response.json()
        if response.status_code == 200:
            self._print_table_simple(res_json["data"])
        else:
            print(f"Fail to alter user: {user_name_str} to role {role_name_str}, code: {res_json['code']}, message: {res_json['message']}")
def _show_user_permission(self, command): def _show_user_permission(self, command):
user_name_tree: Tree = command['user_name'] user_name_tree: Tree = command["user_name"]
user_name_str: str = user_name_tree.children[0].strip("'\"") user_name_str: str = user_name_tree.children[0].strip("'\"")
print(f"show_user_permission user_name: {user_name_str}") print(f"show_user_permission user_name: {user_name_str}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name_str}/permission' url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name_str}/permission"
response = self.session.get(url) response = self.session.get(url)
res_json = response.json() res_json = response.json()
if response.status_code == 200: if response.status_code == 200:
self._print_table_simple(res_json['data']) self._print_table_simple(res_json["data"])
else: else:
print( print(f"Fail to show user: {user_name_str} permission, code: {res_json['code']}, message: {res_json['message']}")
f"Fail to show user: {user_name_str} permission, code: {res_json['code']}, message: {res_json['message']}")
def _show_version(self, command): def _show_version(self, command):
print("show_version") print("show_version")
url = f'http://{self.host}:{self.port}/api/v1/admin/version' url = f"http://{self.host}:{self.port}/api/v1/admin/version"
response = self.session.get(url) response = self.session.get(url)
res_json = response.json() res_json = response.json()
if response.status_code == 200: if response.status_code == 200:
self._print_table_simple(res_json['data']) self._print_table_simple(res_json["data"])
else: else:
print(f"Fail to show version, code: {res_json['code']}, message: {res_json['message']}") print(f"Fail to show version, code: {res_json['code']}, message: {res_json['message']}")
def _handle_meta_command(self, command): def _handle_meta_command(self, command):
meta_command = command['command'] meta_command = command["command"]
args = command.get('args', []) args = command.get("args", [])
if meta_command in ['?', 'h', 'help']: if meta_command in ["?", "h", "help"]:
self.show_help() self.show_help()
elif meta_command in ['q', 'quit', 'exit']: elif meta_command in ["q", "quit", "exit"]:
print("Goodbye!") print("Goodbye!")
else: else:
print(f"Meta command '{meta_command}' with args {args}") print(f"Meta command '{meta_command}' with args {args}")
@@ -950,16 +910,16 @@ def main():
     cli = AdminCLI()
     args = cli.parse_connection_args(sys.argv)
-    if 'error' in args:
+    if "error" in args:
         print("Error: Invalid connection arguments")
         return
-    if 'command' in args:
-        if 'password' not in args:
+    if "command" in args:
+        if "password" not in args:
             print("Error: password is missing")
             return
         if cli.verify_admin(args, single_command=True):
-            command: str = args['command']
+            command: str = args["command"]
             # print(f"Run single command: {command}")
             cli.run_single_command(command)
         else:
@@ -974,5 +934,5 @@ def main():
     cli.cmdloop()

-if __name__ == '__main__':
+if __name__ == "__main__":
     main()


@@ -1,6 +1,6 @@
 [project]
 name = "ragflow-cli"
-version = "0.22.1"
+version = "0.23.0"
 description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
 authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
 license = { text = "Apache License, Version 2.0" }

admin/client/uv.lock generated

@@ -196,7 +196,7 @@ wheels = [
 [[package]]
 name = "ragflow-cli"
-version = "0.22.1"
+version = "0.23.0"
 source = { virtual = "." }
 dependencies = [
     { name = "beartype" },


@@ -223,13 +223,8 @@ def init_memory_size_cache():
     if not memory_list:
         logging.info("No memory found, no need to init memory size.")
     else:
-        memory_size_map = MessageService.calculate_memory_size(
-            memory_ids=[m.id for m in memory_list],
-            uid_list=[m.tenant_id for m in memory_list],
-        )
-        for memory in memory_list:
-            memory_size = memory_size_map.get(memory.id, 0)
-            set_memory_size_cache(memory.id, memory_size)
+        for m in memory_list:
+            get_memory_size_cache(m.id, m.tenant_id)
     logging.info("Memory size cache init done.")


@@ -1206,7 +1206,7 @@ class RAGFlowPdfParser:
         start = timer()
         self._text_merge()
         self._concat_downward()
-        self._naive_vertical_merge(zoomin)
+        #self._naive_vertical_merge(zoomin)
         if callback:
             callback(0.92, "Text merged ({:.2f}s)".format(timer() - start))


@@ -128,11 +128,11 @@ ADMIN_SVR_HTTP_PORT=9381
 SVR_MCP_PORT=9382

 # The RAGFlow Docker image to download. v0.22+ doesn't include embedding models.
-RAGFLOW_IMAGE=infiniflow/ragflow:v0.22.1
+RAGFLOW_IMAGE=infiniflow/ragflow:v0.23.0
 # If you cannot download the RAGFlow Docker image:
-# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:v0.22.1
-# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:v0.22.1
+# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:v0.23.0
+# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:v0.23.0
 #
 # - For the `nightly` edition, uncomment either of the following:
 # RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly


@@ -77,7 +77,7 @@ The [.env](./.env) file contains important environment variables for Docker.

 - `SVR_HTTP_PORT`
   The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
 - `RAGFLOW-IMAGE`
-  The Docker image edition. Defaults to `infiniflow/ragflow:v0.22.1`. The RAGFlow Docker image does not include embedding models.
+  The Docker image edition. Defaults to `infiniflow/ragflow:v0.23.0`. The RAGFlow Docker image does not include embedding models.

 > [!TIP]


@@ -72,7 +72,7 @@ services:
   infinity:
     profiles:
       - infinity
-    image: infiniflow/infinity:v0.6.13
+    image: infiniflow/infinity:v0.6.15
     volumes:
       - infinity_data:/var/infinity
       - ./infinity_conf.toml:/infinity_conf.toml


@@ -1,5 +1,5 @@
 [general]
-version = "0.6.13"
+version = "0.6.15"
 time_zone = "utc-8"

 [network]


@@ -99,7 +99,7 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
 - `SVR_HTTP_PORT`
   The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
 - `RAGFLOW-IMAGE`
-  The Docker image edition. Defaults to `infiniflow/ragflow:v0.22.1` (the RAGFlow Docker image without embedding models).
+  The Docker image edition. Defaults to `infiniflow/ragflow:v0.23.0` (the RAGFlow Docker image without embedding models).

 :::tip NOTE
 If you cannot download the RAGFlow Docker image, try the following mirrors.


@@ -47,7 +47,7 @@ After building the infiniflow/ragflow:nightly image, you are ready to launch a f

 1. Edit Docker Compose Configuration

-   Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.22.1` to `infiniflow/ragflow:nightly` to use the pre-built image.
+   Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.23.0` to `infiniflow/ragflow:nightly` to use the pre-built image.

 2. Launch the Service


@@ -0,0 +1,48 @@
---
sidebar_position: -6
slug: /auto_metadata
---
# Auto-extract metadata
Automatically extract metadata from uploaded files.
---
RAGFlow v0.23.0 introduces the Auto-metadata feature, which uses large language models to automatically extract and generate metadata for files—eliminating the need for manual entry. In a typical RAG pipeline, metadata serves two key purposes:
- During the retrieval stage: Filters out irrelevant documents, narrowing the search scope to improve retrieval accuracy.
- During the generation stage: If a text chunk is retrieved, its associated metadata is also passed to the LLM, providing richer contextual information about the source document to aid answer generation.
:::danger WARNING
Enabling auto-metadata extraction consumes significant memory, computational resources, and tokens.
:::
## Procedure
1. On your dataset's **Configuration** page, select an indexing model, which is used by the knowledge graph, RAPTOR, auto-metadata, auto-keyword, and auto-question features for this dataset.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/indexing_model.png)
2. Click **Auto metadata** **>** **Settings** to go to the configuration page for automatic metadata generation rules.
_The configuration page for rules on automatically generating metadata appears._
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/auto_metadata_settings.png)
3. Click **+** to add a new field and enter the configuration page.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/metadata_field_settings.png)
4. Enter a field name, such as Author, and add a description and examples in the Description section. This provides context to the large language model (LLM) for more accurate value extraction. If left blank, the LLM will extract values based only on the field name.
5. To restrict the LLM to generating metadata from a predefined list, enable the Restrict to defined values mode and manually add the allowed values. The LLM will then only generate results from this preset range.
6. Once configured, turn on the Auto-metadata switch on the Configuration page. All newly uploaded files will have these rules applied during parsing. For files that have already been processed, you must re-parse them to trigger metadata generation. You can then use the filter function to check the metadata generation status of your files.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/enable_auto_metadata.png)
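To make the role of the field name, description, examples, and the **Restrict to defined values** option concrete, here is a hypothetical sketch of how such a field definition could be turned into an extraction prompt. The field schema, helper name, and prompt wording are illustrative assumptions, not RAGFlow's actual template:

```python
# Hypothetical metadata field definition mirroring the settings described above.
field = {
    "name": "Author",
    "description": "The person or organization that wrote the document.",
    "examples": ["Jane Doe", "InfiniFlow"],
    "allowed_values": None,  # set a list to emulate "Restrict to defined values"
}

def build_extraction_prompt(field, document_text):
    """Assemble an LLM prompt from a field definition (illustrative only)."""
    lines = [f"Extract the value of the metadata field '{field['name']}'."]
    if field.get("description"):
        lines.append(f"Field description: {field['description']}")
    if field.get("examples"):
        lines.append("Examples: " + ", ".join(field["examples"]))
    if field.get("allowed_values"):
        # With restriction enabled, the LLM may only answer from the preset range.
        lines.append("Answer strictly with one of: " + ", ".join(field["allowed_values"]))
    lines.append("Document:\n" + document_text)
    return "\n".join(lines)

prompt = build_extraction_prompt(field, "Compliance Handbook, written by Jane Doe, 2025.")
print(prompt)
```

If the description or examples are left empty, the prompt degrades to the field name alone, which is why the doc recommends filling them in for more accurate extraction.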


@@ -0,0 +1,34 @@
---
sidebar_position: -4
slug: /configure_child_chunking_strategy
---
# Configure child chunking strategy
Set parent-child chunking strategy to improve retrieval.
---
A persistent challenge in practical RAG applications lies in a structural tension within the traditional "chunk-embed-retrieve" pipeline: a single text chunk is tasked with both semantic matching (recall) and contextual understanding (utilization)—two inherently conflicting objectives. Recall demands fine-grained, precise chunks, while answer generation requires coherent, informationally complete context.
To resolve this tension, RAGFlow previously introduced the Table of Contents (TOC) enhancement feature, which uses a large language model (LLM) to generate document structure and automatically supplements missing context during retrieval based on that TOC. In version 0.23.0, this capability has been systematically integrated into the Ingestion Pipeline, and a novel parent-child chunking mechanism has been introduced.
Under this mechanism, a document is first segmented into larger parent chunks, each maintaining a relatively complete semantic unit to ensure logical and background integrity. Each parent chunk can then be further subdivided into multiple child chunks for precise recall. During retrieval, the system first locates the most relevant text segments based on the child chunks while automatically associating and recalling their parent chunk. This approach maintains high recall relevance while providing ample semantic background for the generation phase.
For instance, when processing a *Compliance Handbook*, a user query about "liability for breach" might precisely retrieve a child chunk stating, "The penalty for breach is 20% of the total contract value," but without context, it cannot clarify whether this clause applies to "minor breach" or "material breach." Leveraging the parent-child chunking mechanism, the system returns this child chunk along with its parent chunk, which contains the complete section of the clause. This allows the LLM to make accurate judgments based on broader context, avoiding misinterpretation.
Through this dual-layer structure of "precise localization + contextual supplementation," RAGFlow ensures retrieval accuracy while significantly enhancing the reliability and completeness of generated answers.
## Procedure
1. On your dataset's **Configuration** page, find the **Child chunk are used for retrieval** toggle:
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/child_chunking.png)
2. Set the delimiter for child chunks.
3. This configuration also applies to the **Chunker** component in ingestion pipeline settings:
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/child_chunking_parser.png)
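The "precise localization + contextual supplementation" flow described above can be sketched as a minimal, self-contained illustration. The splitting rules, keyword scoring, and sample document below are our assumptions for demonstration, not RAGFlow's implementation:

```python
def split_parent_child(text: str, parent_delim: str = "\n\n", child_delim: str = ". "):
    """Split text into parent chunks, then each parent into child chunks."""
    parents = [p.strip() for p in text.split(parent_delim) if p.strip()]
    index = []  # (child_text, parent_id) pairs used for recall
    for pid, parent in enumerate(parents):
        for child in parent.split(child_delim):
            child = child.strip()
            if child:
                index.append((child, pid))
    return parents, index

def retrieve(query: str, parents, index):
    """Toy keyword recall over child chunks; each hit carries its parent chunk."""
    terms = set(query.lower().split())
    hits = []
    for child, pid in index:
        score = len(terms & set(child.lower().split()))
        if score:
            hits.append((score, child, parents[pid]))
    hits.sort(reverse=True)
    return [(child, parent) for _, child, parent in hits]

doc = ("Material breach terms. The penalty for breach is 20% of the total contract value. "
       "Notice must be given in writing.\n\n"
       "Minor breach terms. A minor breach incurs a written warning only.")
parents, index = split_parent_child(doc)
child, parent = retrieve("penalty for breach", parents, index)[0]
print(child)   # the precisely matched child chunk
print(parent)  # its parent chunk, supplying the surrounding clause
```

The child chunk pins down the "20%" clause; returning its parent alongside it is what lets the LLM tell material from minor breach.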


@@ -133,7 +133,7 @@ See [Run retrieval test](./run_retrieval_test.md) for details.

 ## Search for dataset

-As of RAGFlow v0.22.1, the search feature is still in a rudimentary form, supporting only dataset search by name.
+As of RAGFlow v0.23.0, the search feature is still in a rudimentary form, supporting only dataset search by name.

 ![search dataset](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/search_datasets.jpg)


@@ -0,0 +1,47 @@
---
sidebar_position: -5
slug: /manage_metadata
---
# Manage metadata
Manage metadata for your dataset and for your individual documents.
---
From v0.23.0 onwards, RAGFlow allows you to manage metadata both at the dataset level and for individual files.
## Procedure
1. Click on **Metadata** within your dataset to access the **Manage Metadata** page.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/click_metadata.png)
2. On the **Manage Metadata** page, you can do either of the following:
- Edit Values: You can modify existing values. If you rename two values to be identical, they will be automatically merged.
- Delete: You can delete specific values or entire fields. These changes will apply to all associated files.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/manage_metadata.png)
3. To manage metadata for a single file, navigate to the file's details page as shown below. Click on the parsing method (e.g., **General**), then select **Set Metadata** to view or edit the file's metadata. Here, you can add, delete, or modify metadata fields for this specific file. Any edits made here will be reflected in the global statistics on the main Metadata management page for the knowledge base.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/set_metadata.png)
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/edit_metadata.png)
4. The filtering function operates at two levels: knowledge base management and retrieval. Within the dataset, click the Filter button to view the number of files associated with each value under existing metadata fields. By selecting specific values, you can display all linked files.
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/filter_metadata.png)
5. Metadata filtering is also supported during the retrieval stage. In Chat, for example, you can set metadata filtering rules after configuring a knowledge base:
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/metadata_filtering_rules.png)
- **Automatic** Mode: The system automatically filters documents based on the user's query and the existing metadata in the knowledge base.
- **Semi-automatic** Mode: Users first define the filtering scope at the field level (e.g., for **Author**), and then the system automatically filters within that preset range.
- **Manual** Mode: Users manually set precise, value-specific filter conditions, supported by operators such as **Equals**, **Not equals**, **In**, **Not in**, and more.
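A toy sketch of how Manual mode's rules could be evaluated — the operator names mirror the UI labels above, while the record format and helper function are illustrative assumptions:

```python
# Operators named after the UI labels: Equals, Not equals, In, Not in.
OPS = {
    "equals": lambda value, target: value == target,
    "not equals": lambda value, target: value != target,
    "in": lambda value, target: value in target,
    "not in": lambda value, target: value not in target,
}

def filter_docs(docs, rules):
    """Keep documents whose metadata satisfies every (field, op, target) rule."""
    kept = []
    for doc in docs:
        meta = doc["meta"]
        if all(field in meta and OPS[op](meta[field], target) for field, op, target in rules):
            kept.append(doc["name"])
    return kept

docs = [
    {"name": "a.pdf", "meta": {"Author": "Lynn", "Year": "2025"}},
    {"name": "b.pdf", "meta": {"Author": "Kim", "Year": "2024"}},
    {"name": "c.pdf", "meta": {"Author": "Lynn", "Year": "2023"}},
]
print(filter_docs(docs, [("Author", "equals", "Lynn"), ("Year", "in", {"2024", "2025"})]))  # ['a.pdf']
```

Semi-automatic mode would fix the `field` part of each rule while letting the system choose the targets; Automatic mode infers both from the query.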


@@ -1,5 +1,5 @@
 ---
-sidebar_position: -4
+sidebar_position: -3
 slug: /select_pdf_parser
 ---


@@ -0,0 +1,25 @@
---
sidebar_position: -8
slug: /set_context_window
---
# Set context window size
Set context window size for images and tables to improve long-context RAG performances.
---
RAGFlow leverages built-in DeepDoc, along with external document models like MinerU and Docling, to parse document layouts. In previous versions, images and tables extracted based on document layout were treated as independent chunks. Consequently, if a search query did not directly match the content of an image or table, these elements would not be retrieved. However, real-world documents frequently interweave charts and tables with surrounding text, which often describes them. Therefore, recalling charts based on this contextual text is an essential capability.
To address this, RAGFlow 0.23.0 introduces the **Image & table context window** feature. Inspired by key principles of the research-focused, open-source multimodal RAG project RAG-Anything, this functionality allows surrounding text and adjacent visuals to be grouped into a single chunk based on a user-configurable window size. This ensures they are retrieved together, significantly improving the recall accuracy for charts and tables.
## Procedure
1. On your dataset's **Configuration** page, find the **Image & table context window** slider:
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/image_table_context_window.png)
2. Adjust the number of context tokens according to your needs.
*The number in the red box indicates that approximately **N tokens** of text from above and below the image/table will be captured and inserted into the image or table chunk as contextual information. The capture process intelligently optimizes boundaries at punctuation marks to preserve semantic integrity.*
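The window-capture behavior can be sketched roughly as follows. The whitespace "tokens", window size, and punctuation snapping below are simplifications of ours, not RAGFlow's implementation:

```python
import re

def capture_context(text: str, start: int, end: int, window_tokens: int = 10):
    """Return (before, after) context around text[start:end], trimmed at punctuation."""
    before_words = text[:start].split()
    after_words = text[end:].split()
    before = " ".join(before_words[-window_tokens:])
    after = " ".join(after_words[:window_tokens])
    # Snap the leading cut forward to just after sentence-ending punctuation,
    # so the captured context does not start mid-sentence.
    m = re.search(r"[.!?]\s", before)
    if m:
        before = before[m.end():]
    # Snap the trailing cut back to the last sentence-ending punctuation.
    m = None
    for m in re.finditer(r"[.!?]", after):
        pass
    if m:
        after = after[:m.end()]
    return before.strip(), after.strip()

text = ("Quarterly results improved. Table 3 context follows. [TABLE] "
        "Revenue grew 12% year over year. Costs were flat.")
i = text.index("[TABLE]")
before, after = capture_context(text, i, i + len("[TABLE]"), window_tokens=6)
chunk = f"{before}\n[TABLE]\n{after}"
print(chunk)
```

Because the surrounding sentences travel inside the table chunk, a query matching only the prose ("revenue grew") can still recall the table.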


@@ -5,7 +5,7 @@ slug: /set_metadata

 # Set metadata

-Add metadata to an uploaded file
+Manually add metadata to an uploaded file

 ---
@@ -29,4 +29,4 @@ Ensure that your metadata is in JSON format; otherwise, your updates will not be

 ### Can I set metadata for multiple documents at once?

-No, you must set metadata *individually* for each document, as RAGFlow does not support batch setting of metadata. If you still consider this feature essential, please [raise an issue](https://github.com/infiniflow/ragflow/issues) explaining your use case and its importance.
+From v0.23.0 onwards, you can set metadata for each document individually or have the LLM auto-generate metadata for multiple files. See [Extract metadata](./auto_metadata.md) for details.


@@ -87,4 +87,4 @@ RAGFlow's file management allows you to download an uploaded file:

 ![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)

-> As of RAGFlow v0.22.1, bulk download is not supported, nor can you download an entire folder.
+> As of RAGFlow v0.23.0, bulk download is not supported, nor can you download an entire folder.


@@ -46,7 +46,7 @@ The Admin CLI and Admin Service form a client-server architectural suite for RAG

 2. Install ragflow-cli.

    ```bash
-   pip install ragflow-cli==0.22.1
+   pip install ragflow-cli==0.23.0
    ```

 3. Launch the CLI client:


@@ -338,14 +338,14 @@ Application startup complete.
 ```

 ### 5.2 Integrating RAGFlow with vLLM chat/embedding/rerank models via the WebUI

-setting->model providers->search->vllm->add ,configure as follow:<br>
+setting -> model providers -> search -> vllm -> add, then configure as follows:

 ![add vllm](https://github.com/user-attachments/assets/6f1d9f1a-3507-465b-87a3-4427254fff86)

-select vllm chat model as default llm model as follow:<br>
+Select the vLLM chat model as the default LLM model as follows:

 ![chat](https://github.com/user-attachments/assets/05efbd4b-2c18-4c6b-8d1c-52bae712372d)

 ### 5.3 Chat with the vLLM chat model

-create chat->create conversations-chat as follow:<br>
+Create chat -> create conversations -> chat as follows:

 ![chat](https://github.com/user-attachments/assets/dc1885f6-23a9-48f1-8850-d5f59b5e8f67)


@@ -60,16 +60,16 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
    git pull
    ```

-3. Switch to the latest, officially published release, e.g., `v0.22.1`:
+3. Switch to the latest, officially published release, e.g., `v0.23.0`:

    ```bash
-   git checkout -f v0.22.1
+   git checkout -f v0.23.0
    ```

 4. Update **ragflow/docker/.env**:

    ```bash
-   RAGFLOW_IMAGE=infiniflow/ragflow:v0.22.1
+   RAGFLOW_IMAGE=infiniflow/ragflow:v0.23.0
    ```

 5. Update the RAGFlow image and restart RAGFlow:

@@ -90,10 +90,10 @@ No, you do not need to. Upgrading RAGFlow in itself will *not* remove your uploa
 1. From an environment with Internet access, pull the required Docker image.
 2. Save the Docker image to a **.tar** file.

    ```bash
-   docker save -o ragflow.v0.22.1.tar infiniflow/ragflow:v0.22.1
+   docker save -o ragflow.v0.23.0.tar infiniflow/ragflow:v0.23.0
    ```

 3. Copy the **.tar** file to the target server.
 4. Load the **.tar** file into Docker:

    ```bash
-   docker load -i ragflow.v0.22.1.tar
+   docker load -i ragflow.v0.23.0.tar
    ```


@@ -46,7 +46,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
   `vm.max_map_count`. This value sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limitation.

-  RAGFlow v0.22.1 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
+  RAGFlow v0.23.0 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.

 <Tabs
   defaultValue="linux"
@@ -186,7 +186,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
    ```bash
    $ git clone https://github.com/infiniflow/ragflow.git
    $ cd ragflow/docker
-   $ git checkout -f v0.22.1
+   $ git checkout -f v0.23.0
    ```

 3. Use the pre-built Docker images and start up the server:

@@ -202,7 +202,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
    | RAGFlow image tag | Image size (GB) | Stable?                  |
    | ----------------- | --------------- | ------------------------ |
-   | v0.22.1           | &approx;2       | Stable release           |
+   | v0.23.0           | &approx;2       | Stable release           |
    | nightly           | &approx;2       | _Unstable_ nightly build |

    ```mdx-code-block


@@ -7,6 +7,61 @@ slug: /release_notes

Key features, improvements and bug fixes in the latest releases.
## v0.23.0
Released on December 29, 2025.
### New features
- Memory
- Implements a **Memory** interface for managing memory.
- Supports configuring context via the **Retrieval** or **Message** component.
- Agent
- Improves the **Agent** component's performance by refactoring the underlying architecture.
- The **Agent** component can now output structured data for use in downstream components.
- Supports using webhook to trigger agent execution.
- Supports voice input/output.
- Supports configuring multiple **Retrieval** components per **Agent** component.
- Ingestion pipeline
- Supports extracting table of contents in the **Transformer** component to improve long-context RAG performance.
- Dataset
- Supports configuring context window for images and tables.
- Introduces parent-child chunking strategy.
- Supports auto-generation of metadata during file parsing.
- Chat: Supports voice input.
### Improvements
- Bumps RAGFlow's document engine, [Infinity](https://github.com/infiniflow/infinity), to v0.6.15 (backward compatible).
### Data sources
- Google Cloud Storage
- Gmail
- Dropbox
- WebDAV
- Airtable
### Model support
- GPT-5.2
- GPT-5.2 Pro
- GPT-5.1
- GPT-5.1 Instant
- Claude Opus 4.5
- MiniMax M2
- GLM-4.7
- A MinerU configuration interface
- AI Badgr (model provider)
### API changes
#### HTTP API
- [Converse with Agent](./references/http_api_reference.md#converse-with-agent) returns complete execution trace logs.
- [Create chat completion](./references/http_api_reference.md#create-chat-completion) supports metadata-based filtering.
- [Converse with chat assistant](./references/http_api_reference.md#converse-with-chat-assistant) supports metadata-based filtering.
## v0.22.1

Released on November 19, 2025.


@@ -56,7 +56,7 @@ env:

 ragflow:
   image:
     repository: infiniflow/ragflow
-    tag: v0.22.1
+    tag: v0.23.0
     pullPolicy: IfNotPresent
     pullSecrets: []
   # Optional service configuration overrides
@@ -96,7 +96,7 @@ ragflow:

 infinity:
   image:
     repository: infiniflow/infinity
-    tag: v0.6.13
+    tag: v0.6.15
     pullPolicy: IfNotPresent
     pullSecrets: []
   storage:


@@ -168,7 +168,7 @@ class MessageService:
         order_by = OrderByExpr()
         order_by.desc("valid_at")
-        res = settings.msgStoreConn.search(
+        res, count = settings.msgStoreConn.search(
             select_fields=["memory_id", "content", "content_embed"],
             highlight_fields=[],
             condition={},
@@ -177,8 +177,10 @@ class MessageService:
             offset=0, limit=2048*len(memory_ids),
             index_names=index_names, memory_ids=memory_ids, agg_fields=[], hide_forgotten=False
         )
-        if not res:
+        if count == 0:
             return {}
         docs = settings.msgStoreConn.get_fields(res, ["memory_id", "content", "content_embed"])
         size_dict = {}
         for doc in docs.values():
@@ -198,7 +200,7 @@ class MessageService:
         message_list = settings.msgStoreConn.get_fields(res, select_fields)
         current_size = 0
         ids_to_remove = []
-        for message in message_list:
+        for message in message_list.values():
             if current_size < size_to_delete:
                 current_size += cls.calculate_message_size(message)
                 ids_to_remove.append(message["message_id"])
@@ -210,7 +212,7 @@ class MessageService:
         order_by = OrderByExpr()
         order_by.asc("valid_at")
         res = settings.msgStoreConn.search(
-            select_fields=["memory_id", "content", "content_embed"],
+            select_fields=select_fields,
             highlight_fields=[],
             condition={},
             match_expressions=[],
@@ -222,7 +224,7 @@ class MessageService:
         for doc in docs.values():
             if current_size < size_to_delete:
                 current_size += cls.calculate_message_size(doc)
-                ids_to_remove.append(doc["memory_id"])
+                ids_to_remove.append(doc["message_id"])
             else:
                 return ids_to_remove, current_size
         return ids_to_remove, current_size
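The hunk above changes the search call to unpack a `(result, count)` tuple and branch on `count == 0` instead of the result's truthiness. A minimal sketch of the idea (the `search` wrapper here is hypothetical, not the RAGFlow API): returning an explicit total lets callers distinguish "zero matches" from an empty or `None` payload without re-inspecting the response shape.

```python
# Hypothetical store: a list of docs; search returns (hits, authoritative total).
def search(store, query):
    hits = [doc for doc in store if query in doc["content"]]
    return hits, len(hits)

store = [{"content": "alpha"}, {"content": "beta"}]
res, count = search(store, "alpha")
if count == 0:  # mirrors the `if count == 0:` check in the diff
    summary = {}
else:
    summary = {"matched": count}
```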

View File

@@ -127,6 +127,11 @@ class ESConnection(ESConnectionBase):
             index_names = index_names.split(",")
         assert isinstance(index_names, list) and len(index_names) > 0
         assert "_id" not in condition
+        exist_index_list = [idx for idx in index_names if self.index_exist(idx)]
+        if not exist_index_list:
+            return None
         bool_query = Q("bool", must=[], must_not=[])
         if hide_forgotten:
             # filter not forget
@@ -214,7 +219,7 @@ class ESConnection(ESConnectionBase):
         for i in range(ATTEMPT_TIME):
             try:
                 #print(json.dumps(q, ensure_ascii=False))
-                res = self.es.search(index=index_names,
+                res = self.es.search(index=exist_index_list,
                                      body=q,
                                      timeout="600s",
                                      # search_type="dfs_query_then_fetch",
@@ -223,14 +228,14 @@ class ESConnection(ESConnectionBase):
                 if str(res.get("timed_out", "")).lower() == "true":
                     raise Exception("Es Timeout.")
                 self.logger.debug(f"ESConnection.search {str(index_names)} res: " + str(res))
-                return res
+                return res, self.get_total(res)
             except ConnectionTimeout:
                 self.logger.exception("ES request timeout")
                 self._connect()
                 continue
             except NotFoundError as e:
                 self.logger.debug(f"ESConnection.search {str(index_names)} query: " + str(q) + str(e))
-                return None
+                return None, 0
             except Exception as e:
                 self.logger.exception(f"ESConnection.search {str(index_names)} query: " + str(q) + str(e))
                 raise e
@@ -239,8 +244,8 @@ class ESConnection(ESConnectionBase):
         raise Exception("ESConnection.search timeout.")

     def get_forgotten_messages(self, select_fields: list[str], index_name: str, memory_id: str, limit: int=512):
-        bool_query = Q("bool", must_not=[])
-        bool_query.must_not.append(Q("term", forget_at=None))
+        bool_query = Q("bool", must=[])
+        bool_query.must.append(Q("exists", field="forget_at"))
         bool_query.filter.append(Q("term", memory_id=memory_id))
         # from old to new
         order_by = OrderByExpr()
@@ -248,7 +253,15 @@ class ESConnection(ESConnectionBase):
         # build search
         s = Search()
         s = s.query(bool_query)
-        s = s.sort(order_by)
+        orders = list()
+        for field, order in order_by.fields:
+            order = "asc" if order == 0 else "desc"
+            if field.endswith("_int") or field.endswith("_flt"):
+                order_info = {"order": order, "unmapped_type": "float"}
+            else:
+                order_info = {"order": order, "unmapped_type": "text"}
+            orders.append({field: order_info})
+        s = s.sort(*orders)
         s = s[:limit]
         q = s.to_dict()
         # search
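The last hunk replaces `s.sort(order_by)` with explicitly built Elasticsearch sort clauses. A self-contained sketch of that construction (no ES connection required): each `(field, order)` pair becomes a sort dict, and `unmapped_type` keeps the query from failing on indices that lack the field. The `_int`/`_flt` suffix check follows the field-naming convention visible in the diff.

```python
# Build Elasticsearch sort clauses: order 0 = ascending, anything else = descending.
def build_orders(fields):
    orders = []
    for field, order in fields:
        direction = "asc" if order == 0 else "desc"
        # Typed-field suffixes sort as floats; everything else falls back to text.
        unmapped = "float" if field.endswith(("_int", "_flt")) else "text"
        orders.append({field: {"order": direction, "unmapped_type": unmapped}})
    return orders

orders = build_orders([("valid_at", 0), ("score_flt", 1)])
```

The resulting dicts can be splatted into `elasticsearch-dsl`'s `Search.sort(*orders)` exactly as the new code does.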

View File

@@ -22,7 +22,6 @@
 from infinity.errors import ErrorCode
 from common.decorator import singleton
 import pandas as pd
-from common.constants import PAGERANK_FLD, TAG_FLD
 from common.doc_store.doc_store_base import MatchExpr, MatchTextExpr, MatchDenseExpr, FusionExpr, OrderByExpr
 from common.doc_store.infinity_conn_base import InfinityConnectionBase
 from common.time_utils import date_string_to_timestamp
@@ -150,8 +149,6 @@ class InfinityConnection(InfinityConnectionBase):
         if match_expressions:
             if score_func not in output:
                 output.append(score_func)
-            if PAGERANK_FLD not in output:
-                output.append(PAGERANK_FLD)
         output = [f for f in output if f != "_score"]
         if limit <= 0:
             # ElasticSearch default limit is 10000
@@ -192,17 +189,6 @@ class InfinityConnection(InfinityConnectionBase):
                     str_minimum_should_match = str(int(minimum_should_match * 100)) + "%"
                     matchExpr.extra_options["minimum_should_match"] = str_minimum_should_match
-                # Add rank_feature support
-                if rank_feature and "rank_features" not in matchExpr.extra_options:
-                    # Convert rank_feature dict to Infinity's rank_features string format
-                    # Format: "field^feature_name^weight,field^feature_name^weight"
-                    rank_features_list = []
-                    for feature_name, weight in rank_feature.items():
-                        # Use TAG_FLD as the field containing rank features
-                        rank_features_list.append(f"{TAG_FLD}^{feature_name}^{weight}")
-                    if rank_features_list:
-                        matchExpr.extra_options["rank_features"] = ",".join(rank_features_list)
                 for k, v in matchExpr.extra_options.items():
                     if not isinstance(v, str):
                         matchExpr.extra_options[k] = str(v)
@@ -225,14 +211,13 @@ class InfinityConnection(InfinityConnectionBase):
                 self.logger.debug(f"INFINITY search FusionExpr: {json.dumps(matchExpr.__dict__)}")

         order_by_expr_list = list()
-        # todo use order_by after infinity fixed bug
-        # if order_by.fields:
-        #     for order_field in order_by.fields:
-        #         order_field_name = self.convert_condition_and_order_field(order_field[0])
-        #         if order_field[1] == 0:
-        #             order_by_expr_list.append((order_field_name, SortType.Asc))
-        #         else:
-        #             order_by_expr_list.append((order_field_name, SortType.Desc))
+        if order_by.fields:
+            for order_field in order_by.fields:
+                order_field_name = self.convert_condition_and_order_field(order_field[0])
+                if order_field[1] == 0:
+                    order_by_expr_list.append((order_field_name, SortType.Asc))
+                else:
+                    order_by_expr_list.append((order_field_name, SortType.Desc))
         total_hits_count = 0
         # Scatter search tables and gather the results
@@ -284,7 +269,7 @@ class InfinityConnection(InfinityConnectionBase):
             self.connPool.release_conn(inf_conn)
         res = self.concat_dataframes(df_list, output)
         if match_expressions:
-            res["_score"] = res[score_column] + res[PAGERANK_FLD]
+            res["_score"] = res[score_column]
             res = res.sort_values(by="_score", ascending=False).reset_index(drop=True)
         res = res.head(limit)
         self.logger.debug(f"INFINITY search final result: {str(res)}")
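The final hunk drops the `PAGERANK_FLD` term from scoring, so the fused score column alone becomes `_score`. A small pandas sketch of that post-search ranking step (column names here are illustrative stand-ins for `score_column`):

```python
import pandas as pd

# Gathered results from the scatter search, one score column per row.
df = pd.DataFrame({"id": [1, 2, 3], "SCORE": [0.2, 0.9, 0.5]})
df["_score"] = df["SCORE"]  # previously: df[score_column] + df[PAGERANK_FLD]
df = df.sort_values(by="_score", ascending=False).reset_index(drop=True)
top = df.head(2)  # keep the `limit` best-scoring rows
```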

View File

@ -1,6 +1,6 @@
[project] [project]
name = "ragflow" name = "ragflow"
version = "0.22.1" version = "0.23.0"
description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data." description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }] authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license-files = ["LICENSE"] license-files = ["LICENSE"]
@ -46,7 +46,7 @@ dependencies = [
"groq==0.9.0", "groq==0.9.0",
"grpcio-status==1.67.1", "grpcio-status==1.67.1",
"html-text==0.6.2", "html-text==0.6.2",
"infinity-sdk==0.6.13", "infinity-sdk==0.6.15",
"infinity-emb>=0.0.66,<0.0.67", "infinity-emb>=0.0.66,<0.0.67",
"jira==3.10.5", "jira==3.10.5",
"json-repair==0.35.0", "json-repair==0.35.0",

View File

@ -92,9 +92,9 @@ class Splitter(ProcessBase):
continue continue
split_sec = re.split(r"(%s)" % custom_pattern, c, flags=re.DOTALL) split_sec = re.split(r"(%s)" % custom_pattern, c, flags=re.DOTALL)
if split_sec: if split_sec:
for txt in split_sec: for j in range(0, len(split_sec), 2):
docs.append({ docs.append({
"text": txt, "text": split_sec[j],
"mom": c "mom": c
}) })
else: else:
@ -155,9 +155,9 @@ class Splitter(ProcessBase):
split_sec = re.split(r"(%s)" % custom_pattern, c["text"], flags=re.DOTALL) split_sec = re.split(r"(%s)" % custom_pattern, c["text"], flags=re.DOTALL)
if split_sec: if split_sec:
c["mom"] = c["text"] c["mom"] = c["text"]
for txt in split_sec: for j in range(0, len(split_sec), 2):
cc = deepcopy(c) cc = deepcopy(c)
cc["text"] = txt cc["text"] = split_sec[j]
docs.append(cc) docs.append(cc)
else: else:
docs.append(c) docs.append(c)
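Why both hunks step by 2: `re.split` with a capturing group returns the text pieces at even indices and the matched delimiters themselves at odd indices, so iterating `range(0, len(parts), 2)` keeps only the content between delimiters instead of emitting the delimiters as chunks of their own.

```python
import re

# A capturing group makes re.split keep the separators in the result list.
parts = re.split(r"(;;)", "first;;second;;third", flags=re.DOTALL)
# parts == ['first', ';;', 'second', ';;', 'third']
texts = [parts[j] for j in range(0, len(parts), 2)]  # drop the ';;' separators
```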

View File

@ -376,6 +376,7 @@ def attach_media_context(chunks, table_context_size=0, image_context_size=0):
order chunks before collecting context; otherwise keep original order. order chunks before collecting context; otherwise keep original order.
""" """
from . import rag_tokenizer from . import rag_tokenizer
if not chunks or (table_context_size <= 0 and image_context_size <= 0): if not chunks or (table_context_size <= 0 and image_context_size <= 0):
return chunks return chunks
@ -418,6 +419,51 @@ def attach_media_context(chunks, table_context_size=0, image_context_size=0):
sentences.append(buf) sentences.append(buf)
return sentences return sentences
def get_bounds_by_page(ck):
bounds = {}
try:
if ck.get("position_int"):
for pos in ck["position_int"]:
if not pos or len(pos) < 5:
continue
pn, _, _, top, bottom = pos
if pn is None or top is None:
continue
top_val = float(top)
bottom_val = float(bottom) if bottom is not None else top_val
if bottom_val < top_val:
top_val, bottom_val = bottom_val, top_val
pn = int(pn)
if pn in bounds:
bounds[pn] = (min(bounds[pn][0], top_val), max(bounds[pn][1], bottom_val))
else:
bounds[pn] = (top_val, bottom_val)
else:
pn = None
if ck.get("page_num_int"):
pn = ck["page_num_int"][0]
elif ck.get("page_number") is not None:
pn = ck.get("page_number")
if pn is None:
return bounds
top = None
if ck.get("top_int"):
top = ck["top_int"][0]
elif ck.get("top") is not None:
top = ck.get("top")
if top is None:
return bounds
bottom = ck.get("bottom")
pn = int(pn)
top_val = float(top)
bottom_val = float(bottom) if bottom is not None else top_val
if bottom_val < top_val:
top_val, bottom_val = bottom_val, top_val
bounds[pn] = (top_val, bottom_val)
except Exception:
return {}
return bounds
def trim_to_tokens(text, token_budget, from_tail=False): def trim_to_tokens(text, token_budget, from_tail=False):
if token_budget <= 0 or not text: if token_budget <= 0 or not text:
return "" return ""
@ -442,6 +488,55 @@ def attach_media_context(chunks, table_context_size=0, image_context_size=0):
collected = list(reversed(collected)) collected = list(reversed(collected))
return "".join(collected) return "".join(collected)
def find_mid_sentence_index(sentences):
if not sentences:
return 0
total = sum(max(0, num_tokens_from_string(s)) for s in sentences)
if total <= 0:
return max(0, len(sentences) // 2)
target = total / 2.0
best_idx = 0
best_diff = None
cum = 0
for i, s in enumerate(sentences):
cum += max(0, num_tokens_from_string(s))
diff = abs(cum - target)
if best_diff is None or diff < best_diff:
best_diff = diff
best_idx = i
return best_idx
def collect_context_from_sentences(sentences, boundary_idx, token_budget):
prev_ctx = []
remaining_prev = token_budget
for s in reversed(sentences[:boundary_idx + 1]):
if remaining_prev <= 0:
break
tks = num_tokens_from_string(s)
if tks <= 0:
continue
if tks > remaining_prev:
s = trim_to_tokens(s, remaining_prev, from_tail=True)
tks = num_tokens_from_string(s)
prev_ctx.append(s)
remaining_prev -= tks
prev_ctx.reverse()
next_ctx = []
remaining_next = token_budget
for s in sentences[boundary_idx + 1:]:
if remaining_next <= 0:
break
tks = num_tokens_from_string(s)
if tks <= 0:
continue
if tks > remaining_next:
s = trim_to_tokens(s, remaining_next, from_tail=False)
tks = num_tokens_from_string(s)
next_ctx.append(s)
remaining_next -= tks
return prev_ctx, next_ctx
def extract_position(ck): def extract_position(ck):
pn = None pn = None
top = None top = None
@ -481,7 +576,14 @@ def attach_media_context(chunks, table_context_size=0, image_context_size=0):
else: else:
ordered_indices = [idx for idx, _ in indexed] ordered_indices = [idx for idx, _ in indexed]
total = len(ordered_indices) text_bounds = []
for idx, ck in indexed:
if not is_text_chunk(ck):
continue
bounds = get_bounds_by_page(ck)
if bounds:
text_bounds.append((idx, bounds))
for sorted_pos, idx in enumerate(ordered_indices): for sorted_pos, idx in enumerate(ordered_indices):
ck = chunks[idx] ck = chunks[idx]
token_budget = image_context_size if is_image_chunk(ck) else table_context_size if is_table_chunk(ck) else 0 token_budget = image_context_size if is_image_chunk(ck) else table_context_size if is_table_chunk(ck) else 0
@ -489,45 +591,51 @@ def attach_media_context(chunks, table_context_size=0, image_context_size=0):
continue continue
prev_ctx = [] prev_ctx = []
remaining_prev = token_budget
for prev_idx in range(sorted_pos - 1, -1, -1):
if remaining_prev <= 0:
break
neighbor_idx = ordered_indices[prev_idx]
if not is_text_chunk(chunks[neighbor_idx]):
break
txt = get_text(chunks[neighbor_idx])
if not txt:
continue
tks = num_tokens_from_string(txt)
if tks <= 0:
continue
if tks > remaining_prev:
txt = trim_to_tokens(txt, remaining_prev, from_tail=True)
tks = num_tokens_from_string(txt)
prev_ctx.append(txt)
remaining_prev -= tks
prev_ctx.reverse()
next_ctx = [] next_ctx = []
remaining_next = token_budget media_bounds = get_bounds_by_page(ck)
for next_idx in range(sorted_pos + 1, total): best_idx = None
if remaining_next <= 0: best_dist = None
break candidate_count = 0
neighbor_idx = ordered_indices[next_idx] if media_bounds and text_bounds:
if not is_text_chunk(chunks[neighbor_idx]): for text_idx, bounds in text_bounds:
break for pn, (t_top, t_bottom) in bounds.items():
txt = get_text(chunks[neighbor_idx]) if pn not in media_bounds:
if not txt: continue
continue m_top, m_bottom = media_bounds[pn]
tks = num_tokens_from_string(txt) if m_bottom < t_top or m_top > t_bottom:
if tks <= 0: continue
continue candidate_count += 1
if tks > remaining_next: m_mid = (m_top + m_bottom) / 2.0
txt = trim_to_tokens(txt, remaining_next, from_tail=False) t_mid = (t_top + t_bottom) / 2.0
tks = num_tokens_from_string(txt) dist = abs(m_mid - t_mid)
next_ctx.append(txt) if best_dist is None or dist < best_dist:
remaining_next -= tks best_dist = dist
best_idx = text_idx
if best_idx is None and media_bounds:
media_page = min(media_bounds.keys())
page_order = []
for ordered_idx in ordered_indices:
pn, _, _ = extract_position(chunks[ordered_idx])
if pn == media_page:
page_order.append(ordered_idx)
if page_order and idx in page_order:
pos_in_page = page_order.index(idx)
if pos_in_page == 0:
for neighbor in page_order[pos_in_page + 1:]:
if is_text_chunk(chunks[neighbor]):
best_idx = neighbor
break
elif pos_in_page == len(page_order) - 1:
for neighbor in reversed(page_order[:pos_in_page]):
if is_text_chunk(chunks[neighbor]):
best_idx = neighbor
break
if best_idx is not None:
base_text = get_text(chunks[best_idx])
sentences = split_sentences(base_text)
if sentences:
boundary_idx = find_mid_sentence_index(sentences)
prev_ctx, next_ctx = collect_context_from_sentences(sentences, boundary_idx, token_budget)
if not prev_ctx and not next_ctx: if not prev_ctx and not next_ctx:
continue continue
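The core of the new matching above is page geometry: a media chunk is paired with the text chunk whose vertical span overlaps it on the same page, preferring the smallest midpoint distance. A self-contained sketch of just that geometric test, with illustrative bounds as `(top, bottom)` tuples:

```python
# Two vertical spans overlap unless one ends before the other begins.
def overlaps(a, b):
    (a_top, a_bottom), (b_top, b_bottom) = a, b
    return not (a_bottom < b_top or a_top > b_bottom)

# Tie-break among overlapping candidates by distance between midpoints.
def midpoint_distance(a, b):
    return abs(sum(a) / 2.0 - sum(b) / 2.0)

media = (100.0, 140.0)
candidates = {"t1": (0.0, 90.0), "t2": (120.0, 300.0), "t3": (130.0, 150.0)}
best = min(
    (name for name, b in candidates.items() if overlaps(media, b)),
    key=lambda name: midpoint_distance(media, candidates[name]),
)
```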

View File

@ -1,13 +1,17 @@
Extract important structured information from the given content. ## Role: Metadata extraction expert
Output ONLY a valid JSON string with no additional text. ## Constraints:
If no important structured information is found, output an empty JSON object: {}. - Core Directive: Extract important structured information from the given content. Output ONLY a valid JSON string. No Markdown (e.g., ```json), no explanations, and no notes.
- Schema Parsing: In the `properties` object provided in Schema, the attribute name (e.g., 'author') is the target Key. Extract values based on the `description`; if no `description` is provided, refer to the key's literal meaning.
- Extraction Rules: Extract only when there is an explicit semantic correlation. If multiple values or data points match a field's definition, extract and include all of them. Strictly follow the Schema below and only output matched key-value pairs. If the content is irrelevant or no matching information is identified, you **MUST** output {}.
- Data Source: Extraction must be based solely on content below. Semantic mapping (synonyms) is allowed, but strictly prohibit hallucinations or fabricated facts.
Important structured information structure as following: ## Enum Rules (Triggered ONLY if an enum list is present):
- Value Lock: All extracted values MUST strictly match the provided enum list.
- Normalization: Map synonyms or variants in the text back to the standard enum value (e.g., "Dec" to "December").
- Fallback: Output {} if no explicit match or synonym is identified.
## Schema for extraction:
{{ schema }} {{ schema }}
--------------------------- ## Content to analyze:
The given content as following:
{{ content }} {{ content }}
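The rewritten prompt demands bare JSON with no Markdown fences. A hedged sketch of defensive parsing on the caller side (the helper name is illustrative, not from the repo): strip a stray ```json wrapper if the model emits one anyway, and fall back to `{}` on anything unparsable, matching the prompt's own fallback rule.

```python
import json

def parse_llm_json(raw: str) -> dict:
    text = raw.strip()
    # Tolerate a fenced ```json ... ``` block despite the prompt's instruction.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    try:
        obj = json.loads(text)
        return obj if isinstance(obj, dict) else {}
    except json.JSONDecodeError:
        return {}
```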

View File

@ -1,6 +1,6 @@
[project] [project]
name = "ragflow-sdk" name = "ragflow-sdk"
version = "0.22.1" version = "0.23.0"
description = "Python client sdk of [RAGFlow](https://github.com/infiniflow/ragflow). RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding." description = "Python client sdk of [RAGFlow](https://github.com/infiniflow/ragflow). RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }] authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license = { text = "Apache License, Version 2.0" } license = { text = "Apache License, Version 2.0" }

sdk/python/uv.lock (generated)

View File

@@ -353,7 +353,7 @@
 [[package]]
 name = "ragflow-sdk"
-version = "0.22.1"
+version = "0.23.0"
 source = { virtual = "." }
 dependencies = [
     { name = "beartype" },

uv.lock (generated)

View File

@@ -3051,7 +3051,7 @@
 [[package]]
 name = "infinity-sdk"
-version = "0.6.13"
+version = "0.6.15"
 source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
 dependencies = [
     { name = "datrie" },
@@ -3068,9 +3068,9 @@
     { name = "sqlglot", extra = ["rs"] },
     { name = "thrift" },
 ]
-sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/03/de/56fdc0fa962d5a8e0aa68d16f5321b2d88d79fceb7d0d6cfdde338b65d05/infinity_sdk-0.6.13.tar.gz", hash = "sha256:faf7bc23de7fa549a3842753eddad54ae551ada9df4fff25421658a7fa6fa8c2", size = 29518902, upload-time = "2025-12-24T10:00:01.483Z" }
+sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/f9/e3/c7433ce0017fba9cd833bc2f2d0208acfdfaf4e635594f7257976bb7230e/infinity_sdk-0.6.15.tar.gz", hash = "sha256:b3159acb1b026e1868ac90a480d8259748655df82a32acdd838279b867b5f587", size = 29518841, upload-time = "2025-12-27T10:39:09.676Z" }
 wheels = [
-    { url = "https://pypi.tuna.tsinghua.edu.cn/packages/f4/a0/8f1e134fdf4ca8bebac7b62caace1816953bb5ffc720d9f0004246c8c38d/infinity_sdk-0.6.13-py3-none-any.whl", hash = "sha256:c08a523d2c27e9a7e6e88be640970530b4661a67c3e9dc3e1aa89533a822fd78", size = 29737403, upload-time = "2025-12-24T09:56:16.93Z" },
+    { url = "https://pypi.tuna.tsinghua.edu.cn/packages/0f/2c/427702ff4231f8965053b8c585b32cfd7571e0515e05fe5e95ddf2c56030/infinity_sdk-0.6.15-py3-none-any.whl", hash = "sha256:06f8a7f50c9817f17aac9d3cafe08f3478423b02b233bd608d17317e23588dc7", size = 29737429, upload-time = "2025-12-27T10:41:58.352Z" },
 ]

 [[package]]
@@ -6082,7 +6082,7 @@
 [[package]]
 name = "ragflow"
-version = "0.22.1"
+version = "0.23.0"
 source = { virtual = "." }
 dependencies = [
     { name = "aiosmtplib" },
@@ -6255,7 +6255,7 @@
     { name = "grpcio-status", specifier = "==1.67.1" },
     { name = "html-text", specifier = "==0.6.2" },
     { name = "infinity-emb", specifier = ">=0.0.66,<0.0.67" },
-    { name = "infinity-sdk", specifier = "==0.6.13" },
+    { name = "infinity-sdk", specifier = "==0.6.15" },
     { name = "jira", specifier = "==3.10.5" },
     { name = "json-repair", specifier = "==0.35.0" },
     { name = "langfuse", specifier = ">=2.60.0" },

View File

@@ -220,15 +220,22 @@ export function ChunkMethodDialog({
   async function onSubmit(data: z.infer<typeof FormSchema>) {
     console.log('🚀 ~ onSubmit ~ data:', data);
+    const parserConfig = data.parser_config;
+    const imageTableContextWindow = Number(
+      parserConfig?.image_table_context_window || 0,
+    );
     const nextData = {
       ...data,
       parser_config: {
-        ...data.parser_config,
+        ...parserConfig,
+        image_table_context_window: imageTableContextWindow,
+        image_context_size: imageTableContextWindow,
+        table_context_size: imageTableContextWindow,
         // Unset children delimiter if this option is not enabled
-        children_delimiter: data.parser_config.enable_children
-          ? data.parser_config.children_delimiter
+        children_delimiter: parserConfig.enable_children
+          ? parserConfig.children_delimiter
           : '',
-        pages: data.parser_config?.pages?.map((x: any) => [x.from, x.to]) ?? [],
+        pages: parserConfig?.pages?.map((x: any) => [x.from, x.to]) ?? [],
       },
     };
     console.log('🚀 ~ onSubmit ~ nextData:', nextData);
@@ -249,6 +256,10 @@ export function ChunkMethodDialog({
       parser_config: fillDefaultParserValue({
         pages: pages.length > 0 ? pages : [{ from: 1, to: 1024 }],
         ...omit(parserConfig, 'pages'),
+        image_table_context_window:
+          parserConfig?.image_table_context_window ??
+          parserConfig?.image_context_size ??
+          parserConfig?.table_context_size,
         // graphrag: {
         //   use_graphrag: get(
         //     parserConfig,

View File

@ -44,6 +44,9 @@ export interface IParserConfig {
raptor?: Raptor; raptor?: Raptor;
graphrag?: GraphRag; graphrag?: GraphRag;
image_context_window?: number; image_context_window?: number;
image_table_context_window?: number;
image_context_size?: number;
table_context_size?: number;
mineru_parse_method?: 'auto' | 'txt' | 'ocr'; mineru_parse_method?: 'auto' | 'txt' | 'ocr';
mineru_formula_enable?: boolean; mineru_formula_enable?: boolean;
mineru_table_enable?: boolean; mineru_table_enable?: boolean;

View File

@ -8,6 +8,9 @@ export interface IChangeParserConfigRequestBody {
auto_questions?: number; auto_questions?: number;
html4excel?: boolean; html4excel?: boolean;
toc_extraction?: boolean; toc_extraction?: boolean;
image_table_context_window?: number;
image_context_size?: number;
table_context_size?: number;
} }
export interface IChangeParserRequestBody { export interface IChangeParserRequestBody {

View File

@ -99,7 +99,7 @@ export default {
llmTooltip: '分析对话内容,提取关键信息,并生成结构化的记忆摘要。', llmTooltip: '分析对话内容,提取关键信息,并生成结构化的记忆摘要。',
embeddingModelTooltip: embeddingModelTooltip:
'将文本转换为数值向量,用于语义相似度搜索和记忆检索。', '将文本转换为数值向量,用于语义相似度搜索和记忆检索。',
embeddingModelError: '记忆类型为必填项,且"row"类型不可删除。', embeddingModelError: '记忆类型为必填项,且"原始"类型不可删除。',
memoryTypeTooltip: `原始: 用户与代理之间的原始对话内容(默认必需)。 memoryTypeTooltip: `原始: 用户与代理之间的原始对话内容(默认必需)。
语义记忆: 关于用户和世界的通用知识和事实。 语义记忆: 关于用户和世界的通用知识和事实。
情景记忆: 带时间戳的特定事件和经历记录。 情景记忆: 带时间戳的特定事件和经历记录。

View File

@ -73,6 +73,8 @@ export function SavingButton() {
...values, ...values,
parser_config: { parser_config: {
...values.parser_config, ...values.parser_config,
image_table_context_window:
values.parser_config.image_table_context_window,
image_context_size: image_context_size:
values.parser_config.image_table_context_window, values.parser_config.image_table_context_window,
table_context_size: table_context_size:

View File

@ -85,6 +85,7 @@ export const ReparseDialog = ({
onCancel={() => handleCancel()} onCancel={() => handleCancel()}
hidden={hidden} hidden={hidden}
open={visible} open={visible}
okButtonText={t('common.confirm')}
content={{ content={{
title: t(`knowledgeDetails.parseFileTip`), title: t(`knowledgeDetails.parseFileTip`),
node: ( node: (

View File

@ -204,7 +204,7 @@ export function MemoryTable({
return ( return (
<div className="w-full"> <div className="w-full">
<Table rootClassName="max-h-[calc(100vh-222px)]"> <Table rootClassName="max-h-[calc(100vh-282px)]">
<TableHeader> <TableHeader>
{table.getHeaderGroups().map((headerGroup) => ( {table.getHeaderGroups().map((headerGroup) => (
<TableRow key={headerGroup.id}> <TableRow key={headerGroup.id}>
@ -327,7 +327,7 @@ export function MemoryTable({
</Modal> </Modal>
)} )}
<div className="flex items-center justify-end py-4 absolute bottom-3 right-3"> <div className="flex items-center justify-end absolute bottom-3 right-3">
<RAGFlowPagination <RAGFlowPagination
{...pick(pagination, 'current', 'pageSize')} {...pick(pagination, 'current', 'pageSize')}
total={total} total={total}

View File

@ -18,10 +18,10 @@ export const advancedSettingsFormSchema = {
user_prompt: z.string().optional(), user_prompt: z.string().optional(),
}; };
export const defaultAdvancedSettingsForm = { export const defaultAdvancedSettingsForm = {
permissions: 'me', permissions: '',
storage_type: 'table', storage_type: '',
forgetting_policy: 'FIFO', forgetting_policy: '',
temperature: 0.7, temperature: 0,
system_prompt: '', system_prompt: '',
user_prompt: '', user_prompt: '',
}; };
@ -95,7 +95,7 @@ export const AdvancedSettingsForm = () => {
// placeholder: t('memory.config.storageTypePlaceholder'), // placeholder: t('memory.config.storageTypePlaceholder'),
options: [ options: [
// { label: 'LRU', value: 'LRU' }, // { label: 'LRU', value: 'LRU' },
{ label: 'FIFO', value: 'fifo' }, { label: 'FIFO', value: 'FIFO' },
], ],
required: false, required: false,
}} }}

View File

@ -89,7 +89,7 @@ export const MemoryModelForm = () => {
<RenderField <RenderField
field={{ field={{
name: 'memory_size', name: 'memory_size',
label: t('memory.config.memorySize'), label: t('memory.config.memorySize') + ' (Bytes)',
type: FormFieldType.Number, type: FormFieldType.Number,
horizontal: true, horizontal: true,
// placeholder: t('memory.config.memorySizePlaceholder'), // placeholder: t('memory.config.memorySizePlaceholder'),

View File

@ -5,12 +5,13 @@ import {
FormFieldConfig, FormFieldConfig,
FormFieldType, FormFieldType,
} from '@/components/dynamic-form'; } from '@/components/dynamic-form';
import { Button } from '@/components/ui/button';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'; import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
import { Input } from '@/components/ui/input'; import { Input } from '@/components/ui/input';
import { Separator } from '@/components/ui/separator'; import { Separator } from '@/components/ui/separator';
import { RunningStatus } from '@/constants/knowledge'; import { RunningStatus } from '@/constants/knowledge';
import { t } from 'i18next'; import { t } from 'i18next';
import { CirclePause, Loader2, Repeat } from 'lucide-react'; import { CirclePause, Repeat } from 'lucide-react';
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'; import { useCallback, useEffect, useMemo, useRef, useState } from 'react';
import { FieldValues } from 'react-hook-form'; import { FieldValues } from 'react-hook-form';
import { import {
@ -177,17 +178,18 @@ const SourceDetailPage = () => {
/> />
</div> </div>
<div className="max-w-[1200px] flex justify-end"> <div className="max-w-[1200px] flex justify-end">
<button <Button
type="button" type="button"
onClick={onSubmit} onClick={onSubmit}
disabled={addLoading} disabled={addLoading}
className="flex items-center justify-center min-w-[100px] px-4 py-2 bg-primary text-white rounded-md disabled:opacity-60" loading={addLoading}
> >
{addLoading && <Loader2 className="mr-2 h-4 w-4 animate-spin" />} {t('common.confirm')}
{/* {addLoading && <Loader2 className="mr-2 h-4 w-4 animate-spin" />}
{addLoading {addLoading
? t('modal.loadingText', { defaultValue: 'Submitting...' }) ? t('modal.loadingText', { defaultValue: 'Submitting...' })
: t('modal.okText', { defaultValue: 'Submit' })} : t('modal.okText', { defaultValue: 'Submit' })} */}
</button> </Button>
</div> </div>
<section className="flex flex-col gap-2"> <section className="flex flex-col gap-2">
<div className="text-2xl text-text-primary mb-2"> <div className="text-2xl text-text-primary mb-2">