Mirror of https://github.com/infiniflow/ragflow.git (synced 2026-01-04 03:25:30 +08:00)

Compare commits: 60 commits, 5e8cd693a5...v0.20.3
Commits in range (SHA1, oldest entries last): d55f44601a, abb6359547, f55ff590d7, 7d3bb3a2f9, e6cb74b27f, 00f54c207e, d0dc56166c, e15e39f183, 33f3e05b75, b8bfbac2e5, d5729e598f, f2c5ad170d, 0aa3c4cdae, f123587538, a41a646909, 787e0c6786, 05ee1be1e9, a0d630365c, b5b8032a56, ccb9f0b0d7, a0ab619aeb, 32349481ef, 2b9ed935f3, 188c0f614b, dad97869b6, 57c8a37285, 9d0fed601d, fe32952825, 5808aef28c, ca720bd811, ba11312766, c8bbf7452d, b08650bc4c, fb77f9917b, d874683ae4, f9e5caa8ed, 99df0766fe, 3b50688228, ffc095bd50, 799c57287c, eef43fa25c, 5a4dfecfbe, 7f237fee16, 30ae78755b, 2114e966d8, 562349eb02, 618d6bc924, 762aa4b8c4, 9cd09488ca, f2806a8332, b6e34e3aa7, 3ee9653170, 6d1078b538, 6e862553cb, b1baa91ff0, b55c3d07dc, 2b3318cd3d, 434b55be70, 98b4c67292, 3d645ff31a
@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -190,7 +190,7 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.

-> The command below downloads the `v0.20.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` for the full edition `v0.20.1`.
+> The command below downloads the `v0.20.3-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.3-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3` for the full edition `v0.20.3`.

```bash
$ cd ragflow/docker
```

@@ -203,8 +203,8 @@ releases! 🌟
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|-------------------|-----------------|-----------------------|--------------------------|
-| v0.20.1 | ≈9 | :heavy_check_mark: | Stable release |
-| v0.20.1-slim | ≈2 | ❌ | Stable release |
+| v0.20.3 | ≈9 | :heavy_check_mark: | Stable release |
+| v0.20.3-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ≈2 | ❌ | _Unstable_ nightly build |
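For the edit the note above describes (pointing `RAGFLOW_IMAGE` in **docker/.env** at a different edition before running `docker compose`), here is a small illustrative helper. The change is normally made by hand in an editor; this script, its name, and its use are our own sketch, not part of RAGFlow.

```python
from pathlib import Path

def set_ragflow_image(env_path: str, image: str) -> None:
    """Rewrite the RAGFLOW_IMAGE line of docker/.env to point at another edition."""
    path = Path(env_path)
    lines = path.read_text(encoding="utf-8").splitlines()
    lines = [f"RAGFLOW_IMAGE={image}" if ln.startswith("RAGFLOW_IMAGE=") else ln
             for ln in lines]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")

# e.g. switch to the full edition before `docker compose ... up -d`:
# set_ragflow_image("docker/.env", "infiniflow/ragflow:v0.20.3")
```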
@@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">

@@ -181,7 +181,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).

-> Perintah di bawah ini mengunduh edisi v0.20.1-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.1-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1 untuk edisi lengkap v0.20.1.
+> Perintah di bawah ini mengunduh edisi v0.20.3-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.3-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3 untuk edisi lengkap v0.20.3.

```bash
$ cd ragflow/docker
```

@@ -194,8 +194,8 @@ $ docker compose -f docker-compose.yml up -d
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.1 | ≈9 | :heavy_check_mark: | Stable release |
-| v0.20.1-slim | ≈2 | ❌ | Stable release |
+| v0.20.3 | ≈9 | :heavy_check_mark: | Stable release |
+| v0.20.3-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ≈2 | ❌ | _Unstable_ nightly build |
@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -160,7 +160,7 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。

-> 以下のコマンドは、RAGFlow Docker イメージの v0.20.1-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.1-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.1 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1 と設定します。
+> 以下のコマンドは、RAGFlow Docker イメージの v0.20.3-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.3-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.3 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3 と設定します。

```bash
$ cd ragflow/docker
```

@@ -173,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.1 | ≈9 | :heavy_check_mark: | Stable release |
-| v0.20.1-slim | ≈2 | ❌ | Stable release |
+| v0.20.3 | ≈9 | :heavy_check_mark: | Stable release |
+| v0.20.3-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ≈2 | ❌ | _Unstable_ nightly build |
@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -160,7 +160,7 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).

-> 아래 명령어는 RAGFlow Docker 이미지의 v0.20.1-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.1-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.1을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1로 설정합니다.
+> 아래 명령어는 RAGFlow Docker 이미지의 v0.20.3-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.3-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.3을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3로 설정합니다.

```bash
$ cd ragflow/docker
```

@@ -173,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.1 | ≈9 | :heavy_check_mark: | Stable release |
-| v0.20.1-slim | ≈2 | ❌ | Stable release |
+| v0.20.3 | ≈9 | :heavy_check_mark: | Stable release |
+| v0.20.3-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ≈2 | ❌ | _Unstable_ nightly build |
@@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">

@@ -180,7 +180,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.

-> O comando abaixo baixa a edição `v0.20.1-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.20.1-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` para a edição completa `v0.20.1`.
+> O comando abaixo baixa a edição `v0.20.3-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.20.3-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3` para a edição completa `v0.20.3`.

```bash
$ cd ragflow/docker
```

@@ -193,8 +193,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | ------------------------------- | ------------------------ |
-| v0.20.1 | ~9 | :heavy_check_mark: | Lançamento estável |
-| v0.20.1-slim | ~2 | ❌ | Lançamento estável |
+| v0.20.3 | ~9 | :heavy_check_mark: | Lançamento estável |
+| v0.20.3-slim | ~2 | ❌ | Lançamento estável |
| nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno |
| nightly-slim | ~2 | ❌ | _Instável_ build noturno |
@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -183,7 +183,7 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。

-> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.20.1-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.20.1-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` 來下載 RAGFlow 鏡像的 `v0.20.1` 完整發行版。
+> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.20.3-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.20.3-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3` 來下載 RAGFlow 鏡像的 `v0.20.3` 完整發行版。

```bash
$ cd ragflow/docker
```

@@ -196,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.1 | ≈9 | :heavy_check_mark: | Stable release |
-| v0.20.1-slim | ≈2 | ❌ | Stable release |
+| v0.20.3 | ≈9 | :heavy_check_mark: | Stable release |
+| v0.20.3-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ≈2 | ❌ | _Unstable_ nightly build |
@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.3">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -183,7 +183,7 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。

-> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.20.1-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.20.1-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` 来下载 RAGFlow 镜像的 `v0.20.1` 完整发行版。
+> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.20.3-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.20.3-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3` 来下载 RAGFlow 镜像的 `v0.20.3` 完整发行版。

```bash
$ cd ragflow/docker
```

@@ -196,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.1 | ≈9 | :heavy_check_mark: | Stable release |
-| v0.20.1-slim | ≈2 | ❌ | Stable release |
+| v0.20.3 | ≈9 | :heavy_check_mark: | Stable release |
+| v0.20.3-slim | ≈2 | ❌ | Stable release |
| nightly | ≈9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ≈2 | ❌ | _Unstable_ nightly build |
@@ -484,7 +484,7 @@ class Canvas:
                threads.append(exe.submit(FileService.parse, file["name"], FileService.get_blob(file["created_by"], file["id"]), True, file["created_by"]))
            return [th.result() for th in threads]

-    def tool_use_callback(self, agent_id: str, func_name: str, params: dict, result: Any):
+    def tool_use_callback(self, agent_id: str, func_name: str, params: dict, result: Any, elapsed_time=None):
        agent_ids = agent_id.split("-->")
        agent_name = self.get_component_name(agent_ids[0])
        path = agent_name if len(agent_ids) < 2 else agent_name+"-->"+"-->".join(agent_ids[1:])
@@ -493,16 +493,16 @@ class Canvas:
            if bin:
                obj = json.loads(bin.encode("utf-8"))
                if obj[-1]["component_id"] == agent_ids[0]:
-                    obj[-1]["trace"].append({"path": path, "tool_name": func_name, "arguments": params, "result": result})
+                    obj[-1]["trace"].append({"path": path, "tool_name": func_name, "arguments": params, "result": result, "elapsed_time": elapsed_time})
                else:
                    obj.append({
                        "component_id": agent_ids[0],
-                        "trace": [{"path": path, "tool_name": func_name, "arguments": params, "result": result}]
+                        "trace": [{"path": path, "tool_name": func_name, "arguments": params, "result": result, "elapsed_time": elapsed_time}]
                    })
            else:
                obj = [{
                    "component_id": agent_ids[0],
-                    "trace": [{"path": path, "tool_name": func_name, "arguments": params, "result": result}]
+                    "trace": [{"path": path, "tool_name": func_name, "arguments": params, "result": result, "elapsed_time": elapsed_time}]
                }]
            REDIS_CONN.set_obj(f"{self.task_id}-{self.message_id}-logs", obj, 60*10)
        except Exception as e:
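A minimal standalone sketch of the trace structure this callback accumulates (our simplification, not the canvas code itself): one entry per component, each holding an ordered list of tool-call records that now also carry `elapsed_time`.

```python
from typing import Any

def record_tool_call(trace_log: list, component_id: str, path: str,
                     tool_name: str, arguments: dict, result: Any,
                     elapsed_time=None) -> None:
    """Append a tool-call record, grouping consecutive calls by component."""
    entry = {"path": path, "tool_name": tool_name, "arguments": arguments,
             "result": result, "elapsed_time": elapsed_time}
    if trace_log and trace_log[-1]["component_id"] == component_id:
        # Same component as the previous record: extend its trace.
        trace_log[-1]["trace"].append(entry)
    else:
        # New component: open a fresh entry, mirroring the obj.append branch.
        trace_log.append({"component_id": component_id, "trace": [entry]})

log: list = []
record_tool_call(log, "Agent:demo", "Agent:demo", "Retrieval",
                 {"query": "fibonacci"}, "3 chunks", elapsed_time=0.42)
print(log)
```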
@@ -22,7 +22,7 @@ from functools import partial
from typing import Any

import json_repair

+from timeit import default_timer as timer
from agent.tools.base import LLMToolPluginCallSession, ToolParamBase, ToolBase, ToolMeta
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
@@ -215,8 +215,9 @@ class Agent(LLM, ToolBase):
        hist = deepcopy(history)
        last_calling = ""
        if len(hist) > 3:
+            st = timer()
            user_request = full_question(messages=history, chat_mdl=self.chat_mdl)
-            self.callback("Multi-turn conversation optimization", {}, user_request)
+            self.callback("Multi-turn conversation optimization", {}, user_request, elapsed_time=timer()-st)
        else:
            user_request = history[-1]["content"]

@@ -244,7 +245,7 @@ class Agent(LLM, ToolBase):

        def complete():
            nonlocal hist
-            need2cite = self._canvas.get_reference()["chunks"] and self._id.find("-->") < 0
+            need2cite = self._param.cite and self._canvas.get_reference()["chunks"] and self._id.find("-->") < 0
            cited = False
            if hist[0]["role"] == "system" and need2cite:
                if len(hist) < 7:
@@ -263,12 +264,13 @@ class Agent(LLM, ToolBase):
            if not need2cite or cited:
                return

+            st = timer()
            txt = ""
            for delta_ans in self._gen_citations(entire_txt):
                yield delta_ans, 0
                txt += delta_ans

-            self.callback("gen_citations", {}, txt)
+            self.callback("gen_citations", {}, txt, elapsed_time=timer()-st)

        def append_user_content(hist, content):
            if hist[-1]["role"] == "user":
@@ -276,8 +278,9 @@ class Agent(LLM, ToolBase):
            else:
                hist.append({"role": "user", "content": content})

+        st = timer()
        task_desc = analyze_task(self.chat_mdl, prompt, user_request, tool_metas)
-        self.callback("analyze_task", {}, task_desc)
+        self.callback("analyze_task", {}, task_desc, elapsed_time=timer()-st)
        for _ in range(self._param.max_rounds + 1):
            response, tk = next_step(self.chat_mdl, hist, tool_metas, task_desc)
            # self.callback("next_step", {}, str(response)[:256]+"...")
@@ -303,9 +306,10 @@ class Agent(LLM, ToolBase):

                    thr.append(executor.submit(use_tool, name, args))

+                st = timer()
                reflection = reflect(self.chat_mdl, hist, [th.result() for th in thr])
                append_user_content(hist, reflection)
-                self.callback("reflection", {}, str(reflection))
+                self.callback("reflection", {}, str(reflection), elapsed_time=timer()-st)

        except Exception as e:
            logging.exception(msg=f"Wrong JSON argument format in LLM ReAct response: {e}")
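The same three-line pattern (`st = timer()` before a step, `elapsed_time=timer()-st` in the callback after it) recurs around every instrumented step above. A small context manager, which is our own suggestion and not part of the codebase, would capture it once:

```python
from contextlib import contextmanager
from timeit import default_timer as timer

@contextmanager
def timed():
    """Yield a zero-argument callable returning seconds elapsed since entry."""
    st = timer()
    yield lambda: timer() - st

# Usage mirroring the diff: time a step, then report it to a callback.
with timed() as elapsed:
    result = sum(i * i for i in range(100_000))  # stand-in for analyze_task/reflect
print(f"step took {elapsed():.4f}s, result={result}")
```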
@@ -479,7 +479,7 @@ class ComponentBase(ABC):

    def get_input_elements_from_text(self, txt: str) -> dict[str, dict[str, str]]:
        res = {}
-        for r in re.finditer(self.variable_ref_patt, txt, flags=re.IGNORECASE):
+        for r in re.finditer(self.variable_ref_patt, txt, flags=re.IGNORECASE|re.DOTALL):
            exp = r.group(1)
            cpn_id, var_nm = exp.split("@") if exp.find("@")>0 else ("", exp)
            res[exp] = {
@@ -529,8 +529,12 @@ class ComponentBase(ABC):
    @staticmethod
    def string_format(content: str, kv: dict[str, str]) -> str:
        for n, v in kv.items():
+            def repl(_match, val=v):
+                return str(val) if val is not None else ""
            content = re.sub(
-                r"\{%s\}" % re.escape(n), v, content
+                r"\{%s\}" % re.escape(n),
+                repl,
+                content
            )
        return content
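Both ComponentBase fixes are easy to reproduce in isolation. Passing a plain string as the `re.sub` replacement makes backslashes in the value be parsed as escape sequences, and without `re.DOTALL` a dot-based pattern cannot match a reference that spans lines. The pattern below is a simplified stand-in; the real `variable_ref_patt` is not shown in this diff.

```python
import re

# 1) A plain-string replacement chokes on backslashes; a callable does not.
value = r"C:\data\new"  # \d is an invalid escape in a replacement template
try:
    print(re.sub(r"\{path\}", value, "dir={path}"))
except re.error as e:
    print("plain-string replacement fails:", e)  # bad escape \d
print(re.sub(r"\{path\}", lambda _m, v=value: v, "dir={path}"))  # dir=C:\data\new

# 2) re.DOTALL lets '.' cross newlines inside a multi-line reference.
txt = "{Agent:X@\ncontent}"
print(re.findall(r"\{(.+?)\}", txt))                   # [] -- '.' stops at newline
print(re.findall(r"\{(.+?)\}", txt, flags=re.DOTALL))  # ['Agent:X@\ncontent']
```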
@@ -145,7 +145,7 @@ class LLM(ComponentBase):
        prompt = self.string_format(prompt, args)
        for m in msg:
            m["content"] = self.string_format(m["content"], args)
-        if self._canvas.get_reference()["chunks"]:
+        if self._param.cite and self._canvas.get_reference()["chunks"]:
            prompt += citation_prompt()

        return prompt, msg
@@ -54,6 +54,8 @@ class Message(ComponentBase):
            if k in kwargs:
                continue
            v = v["value"]
+            if not v:
+                v = ""
            ans = ""
            if isinstance(v, partial):
                for t in v():
@@ -94,6 +96,8 @@ class Message(ComponentBase):
                continue

            v = self._canvas.get_variable_value(exp)
+            if not v:
+                v = ""
            if isinstance(v, partial):
                cnt = ""
                for t in v():
agent/templates/knowledge_base_report.json (new file, 327 lines)
@@ -0,0 +1,327 @@
{
  "id": 20,
  "title": "Report Agent Using Knowledge Base",
  "description": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
  "canvas_type": "Agent",
  "dsl": {
    "components": {
      "Agent:NewPumasLick": {
        "downstream": ["Message:OrangeYearsShine"],
        "obj": {
          "component_name": "Agent",
          "params": {
            "delay_after_error": 1,
            "description": "",
            "exception_comment": "",
            "exception_default_value": "",
            "exception_goto": [],
            "exception_method": null,
            "frequencyPenaltyEnabled": false,
            "frequency_penalty": 0.5,
            "llm_id": "qwen3-235b-a22b-instruct-2507@Tongyi-Qianwen",
            "maxTokensEnabled": true,
            "max_retries": 3,
            "max_rounds": 3,
            "max_tokens": 128000,
            "mcp": [],
            "message_history_window_size": 12,
            "outputs": {
              "content": {"type": "string", "value": ""}
            },
            "parameter": "Precise",
            "presencePenaltyEnabled": false,
            "presence_penalty": 0.5,
            "prompts": [
              {"content": "# User Query\n {sys.query}", "role": "user"}
            ],
            "sys_prompt": "## Role & Task\nYou are a **\u201cKnowledge Base Retrieval Q\\&A Agent\u201d** whose goal is to break down the user\u2019s question into retrievable subtasks, and then produce a multi-source-verified, structured, and actionable research report using the internal knowledge base.\n## Execution Framework (Detailed Steps & Key Points)\n1. **Assessment & Decomposition**\n * Actions:\n * Automatically extract: main topic, subtopics, entities (people/organizations/products/technologies), time window, geographic/business scope.\n * Output as a list: N facts/data points that must be collected (*N* ranges from 5\u201320 depending on question complexity).\n2. **Query Type Determination (Rule-Based)**\n * Example rules:\n * If the question involves a single issue but requests \u201cmethod comparison/multiple explanations\u201d \u2192 use **depth-first**.\n * If the question can naturally be split into \u22653 independent sub-questions \u2192 use **breadth-first**.\n * If the question can be answered by a single fact/specification/definition \u2192 use **simple query**.\n3. **Research Plan Formulation**\n * Depth-first: define 3\u20135 perspectives (methodology/stakeholders/time dimension/technical route, etc.), assign search keywords, target document types, and output format for each perspective.\n * Breadth-first: list subtasks, prioritize them, and assign search terms.\n * Simple query: directly provide the search sentence and required fields.\n4. **Retrieval Execution**\n * After retrieval: perform coverage check (does it contain the key facts?) and quality check (source diversity, authority, latest update time).\n * If standards are not met, automatically loop: rewrite queries (synonyms/cross-domain terms) and retry \u22643 times, or flag as requiring external search.\n5. **Integration & Reasoning**\n * Build the answer using a **fact\u2013evidence\u2013reasoning** chain. For each conclusion, attach 1\u20132 strongest pieces of evidence.\n---\n## Quality Gate Checklist (Verify at Each Stage)\n* **Stage 1 (Decomposition)**:\n * [ ] Key concepts and expected outputs identified\n * [ ] Required facts/data points listed\n* **Stage 2 (Retrieval)**:\n * [ ] Meets quality standards (see above)\n * [ ] If not met: execute query iteration\n* **Stage 3 (Generation)**:\n * [ ] Each conclusion has at least one direct evidence source\n * [ ] State assumptions/uncertainties\n * [ ] Provide next-step suggestions or experiment/retrieval plans\n * [ ] Final length and depth match user expectations (comply with word count/format if specified)\n---\n## Core Principles\n1. **Strict reliance on the knowledge base**: answers must be **fully bounded** by the content retrieved from the knowledge base.\n2. **No fabrication**: do not generate, infer, or create information that is not explicitly present in the knowledge base.\n3. **Accuracy first**: prefer incompleteness over inaccurate content.\n4. **Output format**:\n * Hierarchically clear modular structure\n * Logical grouping according to the MECE principle\n * Professionally presented formatting\n * Step-by-step cognitive guidance\n * Reasonable use of headings and dividers for clarity\n * *Italicize* key parameters\n * **Bold** critical information\n5. **LaTeX formula requirements**:\n * Inline formulas: start and end with `$`\n * Block formulas: start and end with `$$`, each `$$` on its own line\n * Block formula content must comply with LaTeX math syntax\n * Verify formula correctness\n---\n## Additional Notes (Interaction & Failure Strategy)\n* If the knowledge base does not cover critical facts: explicitly inform the user (with sample wording)\n* For time-sensitive issues: enforce time filtering in the search request, and indicate the latest retrieval date in the answer.\n* Language requirement: answer in the user\u2019s preferred language\n",
            "temperature": "0.1",
            "temperatureEnabled": true,
            "tools": [
              {
                "component_name": "Retrieval",
                "name": "Retrieval",
                "params": {
                  "cross_languages": [],
                  "description": "",
                  "empty_response": "",
                  "kb_ids": [],
                  "keywords_similarity_weight": 0.7,
                  "outputs": {
                    "formalized_content": {"type": "string", "value": ""}
                  },
                  "rerank_id": "",
                  "similarity_threshold": 0.2,
                  "top_k": 1024,
                  "top_n": 8,
                  "use_kg": false
                }
              }
            ],
            "topPEnabled": false,
            "top_p": 0.75,
            "user_prompt": "",
            "visual_files_var": ""
          }
        },
        "upstream": ["begin"]
      },
      "Message:OrangeYearsShine": {
        "downstream": [],
        "obj": {
          "component_name": "Message",
          "params": {
            "content": ["{Agent:NewPumasLick@content}"]
          }
        },
        "upstream": ["Agent:NewPumasLick"]
      },
      "begin": {
        "downstream": ["Agent:NewPumasLick"],
        "obj": {
          "component_name": "Begin",
          "params": {
            "enablePrologue": true,
            "inputs": {},
            "mode": "conversational",
            "prologue": "\u4f60\u597d\uff01 \u6211\u662f\u4f60\u7684\u52a9\u7406\uff0c\u6709\u4ec0\u4e48\u53ef\u4ee5\u5e2e\u5230\u4f60\u7684\u5417\uff1f"
          }
        },
        "upstream": []
      }
    },
    "globals": {
      "sys.conversation_turns": 0,
      "sys.files": [],
      "sys.query": "",
      "sys.user_id": ""
    },
    "graph": {
      "edges": [
        {
          "data": {"isHovered": false},
          "id": "xy-edge__beginstart-Agent:NewPumasLickend",
          "source": "begin",
          "sourceHandle": "start",
          "target": "Agent:NewPumasLick",
          "targetHandle": "end"
        },
        {
          "data": {"isHovered": false},
          "id": "xy-edge__Agent:NewPumasLickstart-Message:OrangeYearsShineend",
          "markerEnd": "logo",
          "source": "Agent:NewPumasLick",
          "sourceHandle": "start",
          "style": {"stroke": "rgba(91, 93, 106, 1)", "strokeWidth": 1},
          "target": "Message:OrangeYearsShine",
          "targetHandle": "end",
          "type": "buttonEdge",
          "zIndex": 1001
        },
        {
          "data": {"isHovered": false},
          "id": "xy-edge__Agent:NewPumasLicktool-Tool:AllBirdsNailend",
          "selected": false,
          "source": "Agent:NewPumasLick",
          "sourceHandle": "tool",
          "target": "Tool:AllBirdsNail",
          "targetHandle": "end"
        }
      ],
      "nodes": [
        {
          "data": {
            "form": {
              "enablePrologue": true,
              "inputs": {},
              "mode": "conversational",
              "prologue": "\u4f60\u597d\uff01 \u6211\u662f\u4f60\u7684\u52a9\u7406\uff0c\u6709\u4ec0\u4e48\u53ef\u4ee5\u5e2e\u5230\u4f60\u7684\u5417\uff1f"
            },
            "label": "Begin",
            "name": "begin"
          },
          "dragging": false,
          "id": "begin",
          "measured": {"height": 48, "width": 200},
          "position": {"x": -9.569875358221438, "y": 205.84018385864917},
          "selected": false,
          "sourcePosition": "left",
          "targetPosition": "right",
          "type": "beginNode"
        },
        {
          "data": {
            "form": {
              "content": ["{Agent:NewPumasLick@content}"]
            },
            "label": "Message",
            "name": "Response"
          },
          "dragging": false,
          "id": "Message:OrangeYearsShine",
          "measured": {"height": 56, "width": 200},
          "position": {"x": 734.4061285881053, "y": 199.9706031723009},
          "selected": false,
          "sourcePosition": "right",
          "targetPosition": "left",
          "type": "messageNode"
        },
        {
          "data": {
            "form": {
              "delay_after_error": 1,
              "description": "",
              "exception_comment": "",
              "exception_default_value": "",
              "exception_goto": [],
              "exception_method": null,
              "frequencyPenaltyEnabled": false,
              "frequency_penalty": 0.5,
              "llm_id": "qwen3-235b-a22b-instruct-2507@Tongyi-Qianwen",
              "maxTokensEnabled": true,
              "max_retries": 3,
              "max_rounds": 3,
              "max_tokens": 128000,
              "mcp": [],
              "message_history_window_size": 12,
              "outputs": {
                "content": {"type": "string", "value": ""}
              },
              "parameter": "Precise",
              "presencePenaltyEnabled": false,
              "presence_penalty": 0.5,
              "prompts": [
                {"content": "# User Query\n {sys.query}", "role": "user"}
              ],
              "sys_prompt": "## Role & Task\nYou are a **\u201cKnowledge Base Retrieval Q\\&A Agent\u201d** whose goal is to break down the user\u2019s question into retrievable subtasks, and then produce a multi-source-verified, structured, and actionable research report using the internal knowledge base.\n## Execution Framework (Detailed Steps & Key Points)\n1. **Assessment & Decomposition**\n * Actions:\n * Automatically extract: main topic, subtopics, entities (people/organizations/products/technologies), time window, geographic/business scope.\n * Output as a list: N facts/data points that must be collected (*N* ranges from 5\u201320 depending on question complexity).\n2. **Query Type Determination (Rule-Based)**\n * Example rules:\n * If the question involves a single issue but requests \u201cmethod comparison/multiple explanations\u201d \u2192 use **depth-first**.\n * If the question can naturally be split into \u22653 independent sub-questions \u2192 use **breadth-first**.\n * If the question can be answered by a single fact/specification/definition \u2192 use **simple query**.\n3. **Research Plan Formulation**\n * Depth-first: define 3\u20135 perspectives (methodology/stakeholders/time dimension/technical route, etc.), assign search keywords, target document types, and output format for each perspective.\n * Breadth-first: list subtasks, prioritize them, and assign search terms.\n * Simple query: directly provide the search sentence and required fields.\n4. **Retrieval Execution**\n * After retrieval: perform coverage check (does it contain the key facts?) and quality check (source diversity, authority, latest update time).\n * If standards are not met, automatically loop: rewrite queries (synonyms/cross-domain terms) and retry \u22643 times, or flag as requiring external search.\n5. **Integration & Reasoning**\n * Build the answer using a **fact\u2013evidence\u2013reasoning** chain. For each conclusion, attach 1\u20132 strongest pieces of evidence.\n---\n## Quality Gate Checklist (Verify at Each Stage)\n* **Stage 1 (Decomposition)**:\n * [ ] Key concepts and expected outputs identified\n * [ ] Required facts/data points listed\n* **Stage 2 (Retrieval)**:\n * [ ] Meets quality standards (see above)\n * [ ] If not met: execute query iteration\n* **Stage 3 (Generation)**:\n * [ ] Each conclusion has at least one direct evidence source\n * [ ] State assumptions/uncertainties\n * [ ] Provide next-step suggestions or experiment/retrieval plans\n * [ ] Final length and depth match user expectations (comply with word count/format if specified)\n---\n## Core Principles\n1. **Strict reliance on the knowledge base**: answers must be **fully bounded** by the content retrieved from the knowledge base.\n2. **No fabrication**: do not generate, infer, or create information that is not explicitly present in the knowledge base.\n3. **Accuracy first**: prefer incompleteness over inaccurate content.\n4. **Output format**:\n * Hierarchically clear modular structure\n * Logical grouping according to the MECE principle\n * Professionally presented formatting\n * Step-by-step cognitive guidance\n * Reasonable use of headings and dividers for clarity\n * *Italicize* key parameters\n * **Bold** critical information\n5. **LaTeX formula requirements**:\n * Inline formulas: start and end with `$`\n * Block formulas: start and end with `$$`, each `$$` on its own line\n * Block formula content must comply with LaTeX math syntax\n * Verify formula correctness\n---\n## Additional Notes (Interaction & Failure Strategy)\n* If the knowledge base does not cover critical facts: explicitly inform the user (with sample wording)\n* For time-sensitive issues: enforce time filtering in the search request, and indicate the latest retrieval date in the answer.\n* Language requirement: answer in the user\u2019s preferred language\n",
              "temperature": "0.1",
              "temperatureEnabled": true,
              "tools": [
                {
                  "component_name": "Retrieval",
                  "name": "Retrieval",
                  "params": {
                    "cross_languages": [],
                    "description": "",
                    "empty_response": "",
                    "kb_ids": [],
                    "keywords_similarity_weight": 0.7,
                    "outputs": {
                      "formalized_content": {"type": "string", "value": ""}
                    },
                    "rerank_id": "",
                    "similarity_threshold": 0.2,
                    "top_k": 1024,
                    "top_n": 8,
                    "use_kg": false
                  }
                }
              ],
              "topPEnabled": false,
              "top_p": 0.75,
              "user_prompt": "",
              "visual_files_var": ""
            },
            "label": "Agent",
            "name": "Knowledge Base Agent"
          },
          "dragging": false,
          "id": "Agent:NewPumasLick",
          "measured": {"height": 84, "width": 200},
          "position": {"x": 347.00048227952215, "y": 186.49109364794631},
          "selected": false,
          "sourcePosition": "right",
          "targetPosition": "left",
          "type": "agentNode"
        },
        {
          "data": {
            "form": {
              "description": "This is an agent for a specific task.",
              "user_prompt": "This is the order you need to send to the agent."
            },
            "label": "Tool",
            "name": "flow.tool_10"
          },
          "dragging": false,
          "id": "Tool:AllBirdsNail",
          "measured": {"height": 48, "width": 200},
          "position": {"x": 220.24819746977118, "y": 403.31576836482583},
          "selected": false,
          "sourcePosition": "right",
          "targetPosition": "left",
          "type": "toolNode"
        }
      ]
    },
    "history": [],
    "memory": [],
    "messages": [],
    "path": [],
    "retrieval": []
  },
  "avatar": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAYAAABXAvmHAAAH0klEQVR4nO2ZC1BU1wGG/3uRp/IygG+DGK0GOjE1U6cxI4tT03Y0E+kENbaJbKpj60wzgNMwnTjuEtu0miGasY+0krI202kMVEnVxtoOLG00oVa0LajVBDcSEI0REFBgkZv/3GWXfdzdvctuHs7kmzmec9//d+45914XCXc4Xwjk1+59VJGGF7C5QAFSWBvgyWmWLl7IKiny6QNL173B5YjB84bOyrpKA4B1DLySdQpLKAiZGtZ7a/KMVoQJz6UfEZyhTWwaEBmssiLvCueu6BJg8EwFqGTTAC+uvNWC9w82sRWcux/JwaSHstjywcogRt4RG0KExwWG4QsVYCebKSwe3L5lR9OOWjyzfg2WL/0a1/jncO3b2FHxGnKeWYqo+Giu8UEMrWJKWBACPMY/DG+63txhvnKshUu+DF2/hayMDFRsL+VScDb++AVc6OjAuInxXPJl2tfnIikrzUyJMi7qQmLRhOEr2fOFbX/7P6STF7BqoWevfdij4NWGQfx+57OYO2sG1wSnsek8Nm15EU8sikF6ouelXz9ph7JwDqYt+5IIZaGEkauDIrH4wPBmhjexCSEws+VdVG1M4NIoj+2xYzBuJtavWcEl/VS8dggx/ZdQvcGzQwp+cxOXsu5RBQQMVkYJM4LA/Txh+ELFMWFVPARS5kFiabZdx8Olh7l17BzdvhzZmROhdJ3j6D/nIyBgOCMlLAgA9xmF4TMV4BSbrgnrLiBl5rOsRCRRbDUsBzQFiJjY91PCBj9w+yiP1lXWsTLAjc9YQGB9I8+Yx1oTiUWFvW9QgDo2PdASaDp/EQ8/sRnhcPTVcuTMncXwQQVESL9DidscaPW+QEtAICRu9PSxFTpJiePV8AI9AsTvXZBY/Pa+wJ9ApNApIILm8S5Y4QXXQwhYFH6csemDP4G3G5v579i5d04mknknQhDYS4HCrCVr/mC3D305KnbCEpvVIia5Onw6WaWw+KAl0Np+FUXbdiMcyoqfUoeRHoFrJ1uRtnBG1/9Mf/3LtElp+VwF2wcd7woJib1vUPwMH4GWQCQJJtBa/V9cPmFD8uQUpMdNGDhY8bNYrobh8acHu270/l0ImJWRt64Wn6WACN9z5gq2lXwPW8pfweT0icP/fH23vO9QLYq3/QKyLBmFQI3CUcT9NdESEEPItKsSN3r7MBaSJoxHWZERM6ZmMLy2gDP8/pd/og418dTL37hFSUpMUC5f+UiWZcnY9s5+ixCwUiCXx2iiJdDNx6f4pgkH8Q3lbxK7h8+enoHha1cRNdMp8axiHxo6+/5bVdk8DSROYIW1X7QEIom3wHD3gEf4vu1bVYEJZeWQ0zJQvmcfyiv2QZak6raG/QWfK4Ez9mTc5v8xPMJfuojoxXmIX/9DOMe+FCWbcHu4BJJ0YEwCx0824bFNW9HesB+CqYu+jepfPYcHF+aoPXS8sQl/+vU2bgmOU2C+qRc9/YrrPPbGBtzavd0nvCxLxui4pJrBm911PFwak4CYA80cj+JCAiGUzYkmxrSY4N2c3GLi6UEIFL/wRxxqkhmHnTEpDQcrfq6ea+hcE8bNy3GFzyq4H22HW1Kd4WMSkg1jmsSRpKj0Rzhy4gNUv/y8Gjrv8SJK3OWScA+fMn/ysVPPvTmeh6nh1TcxBUJ+jEaKYr7N36x7h+Edj0pB6+WrLokn87+BrTt/p4ZPzZ6MM7/8R2//h33vOcNzdwgBMwVMbGvySQmo4a0NqOZccU7YmGXLEfPQUlUid/XT6B8YdIU/99vjsPcOdEhDsfOd4QVCwKB8yp8SWuG1njbTl83DpMWz1PCKAswuWPDI0e8WebyAJBbxNdrF7cls+hBpAb3h3XtehL/3+4u7D35rQwpP4YFTwMJ91rHpQyQFQgmf9sAMNL9Ur4afv/FBjIuPVj+n4YVTwMD96tj0IVICoYYXv/q1VJ1Sl8UveQyaRwErvOB6B5SwKhqP00gI6A0vhsycJ7/KIzxhyHqGN0ADbnNAAYOicRfCFdAb/p50Gbfuc/wy5w1D5lOghk0fuG0USlgVr7sQjoDe8C8WxKGKPy2KjzlvAQb02/sCbh+FApngX1QUtyeSuwDi0hxFByV7L+LIf3r5kvpp4PBr07Hqvn71Y85bgOG6WS2ggA1+4D6eUKKQApVsqngI6KSkqh9HzsoM/3zg8Oz5VQ9E8wjf30YFDGdkeAsCwH18oYRZGXk7C4HuYxcwe6rjQsFovzaEvoFxqNkTOPzMjGikJso8wsF77XYkLx6dAwxWxvBmBIH7aUMJi8J3w0DnTVz7dyvX6KPzVBt+kL8cmzesRq9ps2Z48bRJmOIapS7E4zM2lXNt5CcU6ID7+ocSZkqY2NRN6ysnsHbJEpR8ZwV6t5Yg+iuLELf2KVd48VwXQf3BQGUMb4ZOuH9gKFEIYJfiNrEDcXZHHV4q3YRv5i7ikgM94RlETNgihrcgBHhccCiRCf7VhBK5rAPyr9I/Y/WKPEyfksH/9NjQ2dODhsYzwcLXsypkeBtCRGLRDUUMAMyKHxEx4dtrzyP97nQMygripiQiKi4aSbPvQmKW7+OXF69ntYvBa1iPCYklZEZECsGm4ja0Ops7EJsaj4SprlU+8IJiqIjAFga3Ikx4vvAYkTGALxyWFArlsnbBC9Sz6mI5zWKNRGh3JJY7mjte4GOz+r4tkRbxQQAAAABJRU5ErkJggg=="
}
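A hedged sketch of how a canvas template like the one above could be sanity-checked before use. The key names follow the JSON file itself; the loader function and its checks are our assumption, not a RAGFlow API.

```python
import json

def load_template(path: str) -> dict:
    """Load an agent-template JSON and verify the structural fields used above."""
    with open(path, encoding="utf-8") as f:
        tpl = json.load(f)
    dsl = tpl["dsl"]
    components, graph = dsl["components"], dsl["graph"]
    # Every edge must reference a declared component or a graph node (e.g. Tool nodes).
    node_ids = set(components) | {n["id"] for n in graph["nodes"]}
    for edge in graph["edges"]:
        assert edge["source"] in node_ids and edge["target"] in node_ids, edge["id"]
    # downstream/upstream links between components must be mutually consistent.
    for cid, comp in components.items():
        for d in comp["downstream"]:
            assert cid in components[d]["upstream"], (cid, d)
    return tpl

# tpl = load_template("agent/templates/knowledge_base_report.json")
```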
@@ -206,7 +206,7 @@
          "enablePrologue": true,
          "inputs": {},
          "mode": "conversational",
-          "prologue": "Hi! I'm your SQL assistant, what can I do for you?"
+          "prologue": "Hi! I'm your SQL assistant. What can I do for you?"
        }
      },
      "upstream": []
@@ -319,7 +319,7 @@
          "enablePrologue": true,
          "inputs": {},
          "mode": "conversational",
-          "prologue": "Hi! I'm your SQL assistant, what can I do for you?"
+          "prologue": "Hi! I'm your SQL assistant. What can I do for you?"
        },
        "label": "Begin",
        "name": "begin"
@@ -24,6 +24,7 @@ from api.utils import hash_str2int
from rag.llm.chat_model import ToolCallSession
from rag.prompts.prompts import kb_prompt
from rag.utils.mcp_tool_call_conn import MCPToolCallSession
+from timeit import default_timer as timer


class ToolParameter(TypedDict):
@@ -49,12 +50,13 @@ class LLMToolPluginCallSession(ToolCallSession):

    def tool_call(self, name: str, arguments: dict[str, Any]) -> Any:
        assert name in self.tools_map, f"LLM tool {name} does not exist"
+        st = timer()
        if isinstance(self.tools_map[name], MCPToolCallSession):
            resp = self.tools_map[name].tool_call(name, arguments, 60)
        else:
            resp = self.tools_map[name].invoke(**arguments)

-        self.callback(name, arguments, resp)
+        self.callback(name, arguments, resp, elapsed_time=timer()-st)
        return resp

    def get_tool_obj(self, name):
@@ -67,9 +67,17 @@ class CodeExecParam(ToolParamBase):
            "description": """
This tool has a sandbox that can execute code written in 'Python'/'Javascript'. It recieves a piece of code and return a Json string.
Here's a code example for Python(`main` function MUST be included):
-def main(arg1: str, arg2: str) -> dict:
+def main() -> dict:
+    \"\"\"
+    Generate Fibonacci numbers within 100.
+    \"\"\"
+    def fibonacci_recursive(n):
+        if n <= 1:
+            return n
+        else:
+            return fibonacci_recursive(n-1) + fibonacci_recursive(n-2)
    return {
-        "result": arg1 + arg2,
+        "result": fibonacci_recursive(100),
    }

Here's a code example for Javascript(`main` function MUST be included and exported):
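As an aside on the new Python example in that description: naive recursion is exponential, so `fibonacci_recursive(100)` would not finish in practice, and it computes the 100th Fibonacci number rather than the "numbers within 100" the docstring promises. A sketch of what the docstring actually describes, kept in the same `main()` shape (our correction, not the upstream template):

```python
def main() -> dict:
    """Generate Fibonacci numbers within 100 (all values <= 100), iteratively."""
    fibs = [0, 1]
    while fibs[-1] + fibs[-2] <= 100:
        fibs.append(fibs[-1] + fibs[-2])
    return {"result": fibs}  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

print(main())
```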
@@ -79,6 +79,17 @@ class ExeSQL(ToolBase, ABC):

    @timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60))
    def _invoke(self, **kwargs):
+
+        def convert_decimals(obj):
+            from decimal import Decimal
+            if isinstance(obj, Decimal):
+                return float(obj)  # or str(obj)
+            elif isinstance(obj, dict):
+                return {k: convert_decimals(v) for k, v in obj.items()}
+            elif isinstance(obj, list):
+                return [convert_decimals(item) for item in obj]
+            return obj
+
        sql = kwargs.get("sql")
        if not sql:
            raise Exception("SQL for `ExeSQL` MUST not be empty.")
@@ -122,7 +133,11 @@ class ExeSQL(ToolBase, ABC):
            single_res = pd.DataFrame([i for i in cursor.fetchmany(self._param.max_records)])
            single_res.columns = [i[0] for i in cursor.description]

-            sql_res.append(single_res.to_dict(orient='records'))
+            for col in single_res.columns:
+                if pd.api.types.is_datetime64_any_dtype(single_res[col]):
+                    single_res[col] = single_res[col].dt.strftime('%Y-%m-%d')
+
+            sql_res.append(convert_decimals(single_res.to_dict(orient='records')))
            formalized_content.append(single_res.to_markdown(index=False, floatfmt=".6f"))

        self.set_output("json", sql_res)
@@ -130,4 +145,4 @@ class ExeSQL(ToolBase, ABC):
        return self.output("formalized_content")

    def thoughts(self) -> str:
-        return "Query sent—waiting for the data."
+        return "Query sent—waiting for the data."
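The motivation for `convert_decimals` and the date formatting is that the stdlib JSON encoder rejects both `Decimal` and date/datetime objects, which SQL cursors commonly return. A minimal reproduction (assuming typical cursor output types):

```python
import json
from datetime import date
from decimal import Decimal

row = {"price": Decimal("9.99"), "day": date(2025, 1, 4)}
try:
    json.dumps(row)
except TypeError as e:
    print("raw row fails:", e)  # Object of type Decimal is not JSON serializable

# After the conversions the diff applies, serialization succeeds:
safe = {"price": float(row["price"]), "day": row["day"].strftime("%Y-%m-%d")}
print(json.dumps(safe))  # {"price": 9.99, "day": "2025-01-04"}
```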
@@ -86,10 +86,16 @@ class Retrieval(ToolBase, ABC):
                kb_ids.append(id)
                continue
            kb_nm = self._canvas.get_variable_value(id)
-            e, kb = KnowledgebaseService.get_by_name(kb_nm, self._canvas._tenant_id)
-            if not e:
-                raise Exception(f"Dataset({kb_nm}) does not exist.")
-            kb_ids.append(kb.id)
+            # if kb_nm is a list
+            kb_nm_list = kb_nm if isinstance(kb_nm, list) else [kb_nm]
+            for nm_or_id in kb_nm_list:
+                e, kb = KnowledgebaseService.get_by_name(nm_or_id,
+                                                         self._canvas._tenant_id)
+                if not e:
+                    e, kb = KnowledgebaseService.get_by_id(nm_or_id)
+                if not e:
+                    raise Exception(f"Dataset({nm_or_id}) does not exist.")
+                kb_ids.append(kb.id)

        filtered_kb_ids: list[str] = list(set([kb_id for kb_id in kb_ids if kb_id]))

@@ -108,7 +114,9 @@ class Retrieval(ToolBase, ABC):
        if self._param.rerank_id:
            rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id)

-        query = kwargs["query"]
+        vars = self.get_input_elements_from_text(kwargs["query"])
+        vars = {k: o["value"] for k, o in vars.items()}
+        query = self.string_format(kwargs["query"], vars)
        if self._param.cross_languages:
            query = cross_languages(kbs[0].tenant_id, None, query, self._param.cross_languages)
@@ -29,6 +29,7 @@ from api.db.db_models import close_connection
from api.db.services import UserService
from api.utils import CustomJSONEncoder, commands

+from flask_mail import Mail
from flask_session import Session
from flask_login import LoginManager
from api import settings
@@ -40,6 +41,7 @@ __all__ = ["app"]
Request.json = property(lambda self: self.get_json(force=True, silent=True))

app = Flask(__name__)
+smtp_mail_server = Mail()

# Add this at the beginning of your file to configure Swagger UI
swagger_config = {
@@ -146,16 +148,16 @@ def load_user(web_request):
    if authorization:
        try:
            access_token = str(jwt.loads(authorization))

            if not access_token or not access_token.strip():
                logging.warning("Authentication attempt with empty access token")
                return None

            # Access tokens should be UUIDs (32 hex characters)
            if len(access_token.strip()) < 32:
                logging.warning(f"Authentication attempt with invalid token format: {len(access_token)} chars")
                return None

            user = UserService.query(
                access_token=access_token, status=StatusEnum.VALID.value
            )
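The token checks in `load_user` boil down to a small predicate: a usable access token is non-empty after stripping and at least 32 characters, matching the UUID-hex format noted in the comment. A standalone sketch (the function name is ours):

```python
import logging

def looks_like_access_token(token) -> bool:
    """Pre-filter tokens before querying the user table, as in load_user()."""
    if not token or not token.strip():
        logging.warning("Authentication attempt with empty access token")
        return False
    if len(token.strip()) < 32:  # access tokens are 32-hex-char UUIDs
        logging.warning("Authentication attempt with invalid token format: %d chars",
                        len(token))
        return False
    return True

assert not looks_like_access_token("   ")
assert looks_like_access_token("0123456789abcdef0123456789abcdef")
```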
@@ -74,11 +74,11 @@ def rm():
@login_required
def save():
    req = request.json
+    req["user_id"] = current_user.id
    if not isinstance(req["dsl"], str):
        req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
    req["dsl"] = json.loads(req["dsl"])
    if "id" not in req:
-        req["user_id"] = current_user.id
        if UserCanvasService.query(user_id=current_user.id, title=req["title"].strip()):
            return get_data_error_result(message=f"{req['title'].strip()} already exists.")
        req["id"] = get_uuid()
@@ -115,6 +115,12 @@ def getsse(canvas_id):
    if not objs:
        return get_data_error_result(message='Authentication error: API key is invalid!"')
    tenant_id = objs[0].tenant_id
+    if not UserCanvasService.query(user_id=tenant_id, id=canvas_id):
+        return get_json_result(
+            data=False,
+            message='Only owner of canvas authorized for this operation.',
+            code=RetCode.OPERATING_ERROR
+        )
    e, c = UserCanvasService.get_by_id(canvas_id)
    if not e or c.user_id != tenant_id:
        return get_data_error_result(message="canvas not found.")
@@ -23,15 +23,18 @@ from flask_login import current_user, login_required

from api import settings
from api.db import LLMType, ParserType
+from api.db.services.dialog_service import meta_filter
from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
+from api.db.services.search_service import SearchService
from api.db.services.user_service import UserTenantService
from api.utils.api_utils import get_data_error_result, get_json_result, server_error_response, validate_request
from rag.app.qa import beAdoc, rmPrefix
from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts import cross_languages, keyword_extraction
+from rag.prompts.prompts import gen_meta_filter
from rag.settings import PAGERANK_FLD
from rag.utils import rmSpace

@@ -288,13 +291,26 @@ def retrieval_test():
    if isinstance(kb_ids, str):
        kb_ids = [kb_ids]
    doc_ids = req.get("doc_ids", [])
-    similarity_threshold = float(req.get("similarity_threshold", 0.0))
-    vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
    use_kg = req.get("use_kg", False)
    top = int(req.get("top_k", 1024))
    langs = req.get("cross_languages", [])
    tenant_ids = []

+    if req.get("search_id", ""):
+        search_config = SearchService.get_detail(req.get("search_id", "")).get("search_config", {})
+        meta_data_filter = search_config.get("meta_data_filter", {})
+        metas = DocumentService.get_meta_by_kbs(kb_ids)
+        if meta_data_filter.get("method") == "auto":
+            chat_mdl = LLMBundle(current_user.id, LLMType.CHAT, llm_name=search_config.get("chat_id", ""))
+            filters = gen_meta_filter(chat_mdl, metas, question)
+            doc_ids.extend(meta_filter(metas, filters))
+            if not doc_ids:
+                doc_ids = None
+        elif meta_data_filter.get("method") == "manual":
+            doc_ids.extend(meta_filter(metas, meta_data_filter["manual"]))
+            if not doc_ids:
+                doc_ids = None
+
    try:
        tenants = UserTenantService.query(user_id=current_user.id)
        for kb_id in kb_ids:
@@ -327,7 +343,9 @@ def retrieval_test():

            labels = label_question(question, [kb])
            ranks = settings.retrievaler.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
-                                                   similarity_threshold, vector_similarity_weight, top,
+                                                   float(req.get("similarity_threshold", 0.0)),
+                                                   float(req.get("vector_similarity_weight", 0.3)),
+                                                   top,
                                                   doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"),
                                                   rank_feature=labels
                                                   )
@ -17,22 +17,19 @@ import json
|
||||
import re
|
||||
import traceback
|
||||
from copy import deepcopy
|
||||
|
||||
import trio
|
||||
from flask import Response, request
|
||||
from flask_login import current_user, login_required
|
||||
|
||||
from api import settings
|
||||
from api.db import LLMType
|
||||
from api.db.db_models import APIToken
|
||||
from api.db.services.conversation_service import ConversationService, structure_answer
|
||||
from api.db.services.dialog_service import DialogService, ask, chat
|
||||
from api.db.services.knowledgebase_service import KnowledgebaseService
|
||||
from api.db.services.dialog_service import DialogService, ask, chat, gen_mindmap
|
||||
from api.db.services.llm_service import LLMBundle
|
||||
from api.db.services.user_service import UserTenantService, TenantService
|
||||
from api.db.services.search_service import SearchService
|
||||
from api.db.services.tenant_llm_service import TenantLLMService
|
||||
from api.db.services.user_service import TenantService, UserTenantService
|
||||
from api.utils.api_utils import get_data_error_result, get_json_result, server_error_response, validate_request
|
||||
from graphrag.general.mind_map_extractor import MindMapExtractor
|
||||
from rag.app.tag import label_question
|
||||
from rag.prompts.prompt_template import load_prompt
|
||||
from rag.prompts.prompts import chunks_format
|
||||
|
||||
|
||||
@ -66,8 +63,14 @@ def set_conversation():
|
||||
e, dia = DialogService.get_by_id(req["dialog_id"])
|
||||
if not e:
|
||||
return get_data_error_result(message="Dialog not found")
|
||||
conv = {"id": conv_id, "dialog_id": req["dialog_id"], "name": name, "message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}],"user_id": current_user.id,
|
||||
"reference":[],}
|
||||
conv = {
|
||||
"id": conv_id,
|
||||
"dialog_id": req["dialog_id"],
|
||||
"name": name,
|
||||
"message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}],
|
||||
"user_id": current_user.id,
|
||||
"reference": [],
|
||||
}
|
||||
ConversationService.save(**conv)
|
||||
return get_json_result(data=conv)
|
||||
except Exception as e:
|
||||
@ -174,6 +177,21 @@ def completion():
continue
msg.append(m)
message_id = msg[-1].get("id")
chat_model_id = req.get("llm_id", "")
req.pop("llm_id", None)

chat_model_config = {}
for model_config in [
"temperature",
"top_p",
"frequency_penalty",
"presence_penalty",
"max_tokens",
]:
config = req.get(model_config)
if config:
chat_model_config[model_config] = config

try:
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
@ -190,13 +208,23 @@ def completion():
conv.reference = [r for r in conv.reference if r]
conv.reference.append({"chunks": [], "doc_aggs": []})

if chat_model_id:
if not TenantLLMService.get_api_key(tenant_id=dia.tenant_id, model_name=chat_model_id):
req.pop("chat_model_id", None)
req.pop("chat_model_config", None)
return get_data_error_result(message=f"Cannot use specified model {chat_model_id}.")
dia.llm_id = chat_model_id
dia.llm_setting = chat_model_config

is_embedded = bool(chat_model_id)
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, True, **req):
ans = structure_answer(conv, ans, message_id, conv.id)
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
if not is_embedded:
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
traceback.print_exc()
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n"
@ -214,7 +242,8 @@ def completion():
answer = None
for ans in chat(dia, msg, **req):
answer = structure_answer(conv, ans, message_id, conv.id)
ConversationService.update_by_id(conv.id, conv.to_dict())
if not is_embedded:
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_json_result(data=answer)
except Exception as e:
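Both completion branches above emit Server-Sent-Events-style `data:` lines when streaming. A hedged client sketch; the URL, payload keys, and the `data: true` end-of-stream sentinel are assumptions drawn from the handlers shown here, not a documented contract:

```python
import json

import requests  # assumed HTTP client; any streaming-capable client works

# Hypothetical client for the streaming completion endpoint sketched above.
resp = requests.post(
    "http://localhost:9380/v1/conversation/completion",  # assumed URL
    json={"conversation_id": "<conv-id>", "messages": [{"role": "user", "content": "hi"}], "stream": True},
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data:"):
        continue
    event = json.loads(line[len("data:"):])
    if event["data"] is True:  # assumed end-of-stream sentinel
        break
    print(event["data"].get("answer", ""))
```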
@ -310,10 +339,18 @@ def ask_about():
req = request.json
uid = current_user.id

search_id = req.get("search_id", "")
search_app = None
search_config = {}
if search_id:
search_app = SearchService.get_detail(search_id)
if search_app:
search_config = search_app.get("search_config", {})

def stream():
nonlocal req, uid
try:
for ans in ask(req["question"], req["kb_ids"], uid):
for ans in ask(req["question"], req["kb_ids"], uid, search_config=search_config):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n"
@ -332,18 +369,14 @@ def ask_about():
@validate_request("question", "kb_ids")
def mindmap():
req = request.json
kb_ids = req["kb_ids"]
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_data_error_result(message="Knowledgebase not found!")
search_id = req.get("search_id", "")
search_app = SearchService.get_detail(search_id) if search_id else {}
search_config = search_app.get("search_config", {}) if search_app else {}
kb_ids = search_config.get("kb_ids", [])
kb_ids.extend(req["kb_ids"])
kb_ids = list(set(kb_ids))

embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING, llm_name=kb.embd_id)
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT)
question = req["question"]
ranks = settings.retrievaler.retrieval(question, embd_mdl, kb.tenant_id, kb_ids, 1, 12, 0.3, 0.3, aggs=False, rank_feature=label_question(question, [kb]))
mindmap = MindMapExtractor(chat_mdl)
mind_map = trio.run(mindmap, [c["content_with_weight"] for c in ranks["chunks"]])
mind_map = mind_map.output
mind_map = gen_mindmap(req["question"], kb_ids, search_app.get("tenant_id", current_user.id), search_config)
if "error" in mind_map:
return server_error_response(Exception(mind_map["error"]))
return get_json_result(data=mind_map)
@ -354,41 +387,20 @@ def mindmap():
@validate_request("question")
def related_questions():
req = request.json

search_id = req.get("search_id", "")
search_config = {}
if search_id:
if search_app := SearchService.get_detail(search_id):
search_config = search_app.get("search_config", {})

question = req["question"]
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT)
prompt = """
Role: You are an AI language model assistant tasked with generating 5-10 related questions based on a user’s original query. These questions should help expand the search query scope and improve search relevance.

Instructions:
Input: You are provided with a user’s question.
Output: Generate 5-10 alternative questions that are related to the original user question. These alternatives should help retrieve a broader range of relevant documents from a vector database.
Context: Focus on rephrasing the original question in different ways, making sure the alternative questions are diverse but still connected to the topic of the original query. Do not create overly obscure, irrelevant, or unrelated questions.
Fallback: If you cannot generate any relevant alternatives, do not return any questions.
Guidance:
1. Each alternative should be unique but still relevant to the original query.
2. Keep the phrasing clear, concise, and easy to understand.
3. Avoid overly technical jargon or specialized terms unless directly relevant.
4. Ensure that each question contributes towards improving search results by broadening the search angle, not narrowing it.
chat_id = search_config.get("chat_id", "")
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT, chat_id)

Example:
Original Question: What are the benefits of electric vehicles?

Alternative Questions:
1. How do electric vehicles impact the environment?
2. What are the advantages of owning an electric car?
3. What is the cost-effectiveness of electric vehicles?
4. How do electric vehicles compare to traditional cars in terms of fuel efficiency?
5. What are the environmental benefits of switching to electric cars?
6. How do electric vehicles help reduce carbon emissions?
7. Why are electric vehicles becoming more popular?
8. What are the long-term savings of using electric vehicles?
9. How do electric vehicles contribute to sustainability?
10. What are the key benefits of electric vehicles for consumers?

Reason:
Rephrasing the original query into multiple alternative questions helps the user explore different aspects of their search topic, improving the quality of search results.
These questions guide the search engine to provide a more comprehensive set of relevant documents.
"""
gen_conf = search_config.get("llm_setting", {"temperature": 0.9})
prompt = load_prompt("related_question")
ans = chat_mdl.chat(
prompt,
[
@ -400,6 +412,6 @@ Related search terms:
""",
}
],
{"temperature": 0.9},
gen_conf,
)
return get_json_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])
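Both `related_questions` variants post-process the model output with the same regex pair, keeping only lines shaped like `N. question` and stripping the prefix. In isolation (note the single-digit class `[0-9]`: an item numbered `10.` would be dropped; `[0-9]+` would also accept multi-digit prefixes):

```python
import re

# Same extraction as the endpoint above, on stand-in model output.
ans = "1. How do EVs impact the environment?\nsome preamble\n2. Why are EVs popular?"
questions = [re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)]
print(questions)  # ['How do EVs impact the environment?', 'Why are EVs popular?']
```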
@ -16,6 +16,7 @@

from flask import request
from flask_login import login_required, current_user
from api.db.services import duplicate_name
from api.db.services.dialog_service import DialogService
from api.db import StatusEnum
from api.db.services.tenant_llm_service import TenantLLMService
@ -41,6 +42,15 @@ def set_dialog():
return get_data_error_result(message="Dialog name can't be empty.")
if len(name.encode("utf-8")) > 255:
return get_data_error_result(message=f"Dialog name length is {len(name)} which is larger than 255")

if is_create and DialogService.query(tenant_id=current_user.id, name=name.strip()):
name = name.strip()
name = duplicate_name(
DialogService.query,
name=name,
tenant_id=current_user.id,
status=StatusEnum.VALID.value)

description = req.get("description", "A helpful dialog")
icon = req.get("icon", "")
top_n = req.get("top_n", 6)

@ -99,7 +99,7 @@ def create(tenant_id):
Here is the knowledge base:
{knowledge}
The above is the knowledge base.""",
"prologue": "Hi! I'm your assistant, what can I do for you?",
"prologue": "Hi! I'm your assistant. What can I do for you?",
"parameters": [{"key": "knowledge", "optional": False}],
"empty_response": "Sorry! No relevant content was found in the knowledge base!",
"quote": True,
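`set_dialog` now routes name collisions through `duplicate_name` instead of rejecting them. A hypothetical sketch of that helper's contract; the `(N)` suffix scheme is an assumption:

```python
# Hypothetical sketch of duplicate_name: probe with the query function and
# append a counter until the candidate is free. The "(N)" suffix is an assumption.
def duplicate_name_sketch(query_func, name: str, **kwargs) -> str:
    candidate, n = name, 1
    while query_func(name=candidate, **kwargs):
        candidate = f"{name}({n})"
        n += 1
    return candidate

existing = {"My dialog", "My dialog(1)"}
print(duplicate_name_sketch(lambda name, **kw: name in existing, "My dialog"))  # My dialog(2)
```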
@ -19,6 +19,7 @@ import time
import tiktoken
from flask import Response, jsonify, request
from agent.canvas import Canvas
from api import settings
from api.db import LLMType, StatusEnum
from api.db.db_models import APIToken
from api.db.services.api_service import API4ConversationService
@ -26,12 +27,17 @@ from api.db.services.canvas_service import UserCanvasService, completionOpenAI
from api.db.services.canvas_service import completion as agent_completion
from api.db.services.conversation_service import ConversationService, iframe_completion
from api.db.services.conversation_service import completion as rag_completion
from api.db.services.dialog_service import DialogService, ask, chat
from api.db.services.dialog_service import DialogService, ask, chat, gen_mindmap
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api.db.services.search_service import SearchService
from api.db.services.user_service import UserTenantService
from api.utils import get_uuid
from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_result, token_required, validate_request
from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_json_result, get_result, server_error_response, token_required, validate_request
from rag.app.tag import label_question
from rag.prompts import chunks_format
from rag.prompts.prompt_template import load_prompt
from rag.prompts.prompts import cross_languages, keyword_extraction


@manager.route("/chats/<chat_id>/sessions", methods=["POST"]) # noqa: F821
@ -808,6 +814,29 @@ def chatbot_completions(dialog_id):
return get_result(data=answer)


@manager.route("/chatbots/<dialog_id>/info", methods=["GET"]) # noqa: F821
def chatbots_inputs(dialog_id):
token = request.headers.get("Authorization").split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Authentication error: API key is invalid!')

e, dialog = DialogService.get_by_id(dialog_id)
if not e:
return get_error_data_result(f"Can't find dialog by ID: {dialog_id}")

return get_result(
data={
"title": dialog.name,
"avatar": dialog.icon,
"prologue": dialog.prompt_config.get("prologue", ""),
}
)


@manager.route("/agentbots/<agent_id>/completions", methods=["POST"]) # noqa: F821
def agent_bot_completions(agent_id):
req = request.json
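Each embedded endpoint repeats the same two-part `Authorization` check. A small helper capturing that contract (names are illustrative; note the `or ""` guard, which the inline version lacks when the header is absent):

```python
# Hypothetical helper mirroring the repeated inline check: expect
# "Authorization: Bearer <beta-token>" and hand back the token or None.
def parse_beta_token(headers) -> str | None:
    parts = (headers.get("Authorization") or "").split()
    return parts[1] if len(parts) == 2 else None

print(parse_beta_token({"Authorization": "Bearer abc123"}))  # abc123
print(parse_beta_token({}))  # None (the inline version would raise on a missing header)
```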
@ -855,3 +884,225 @@ def begin_inputs(agent_id):
"prologue": canvas.get_prologue()
}
)


@manager.route("/searchbots/ask", methods=["POST"]) # noqa: F821
@validate_request("question", "kb_ids")
def ask_about_embedded():
token = request.headers.get("Authorization").split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Authentication error: API key is invalid!')

req = request.json
uid = objs[0].tenant_id

search_id = req.get("search_id", "")
search_config = {}
if search_id:
if search_app := SearchService.get_detail(search_id):
search_config = search_app.get("search_config", {})

def stream():
nonlocal req, uid
try:
for ans in ask(req["question"], req["kb_ids"], uid, search_config):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"

resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp


@manager.route("/searchbots/retrieval_test", methods=['POST']) # noqa: F821
@validate_request("kb_id", "question")
def retrieval_test_embedded():
token = request.headers.get("Authorization").split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Authentication error: API key is invalid!')

req = request.json
page = int(req.get("page", 1))
size = int(req.get("size", 30))
question = req["question"]
kb_ids = req["kb_id"]
if isinstance(kb_ids, str):
kb_ids = [kb_ids]
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.0))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
use_kg = req.get("use_kg", False)
top = int(req.get("top_k", 1024))
langs = req.get("cross_languages", [])
tenant_ids = []

tenant_id = objs[0].tenant_id
if not tenant_id:
return get_error_data_result(message="permission denied.")

try:
tenants = UserTenantService.query(user_id=tenant_id)
for kb_id in kb_ids:
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kb_id):
tenant_ids.append(tenant.tenant_id)
break
else:
return get_json_result(
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)

e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_error_data_result(message="Knowledgebase not found!")

if langs:
question = cross_languages(kb.tenant_id, None, question, langs)

embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)

rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = LLMBundle(kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])

if req.get("keyword", False):
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)

labels = label_question(question, [kb])
ranks = settings.retrievaler.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"),
rank_feature=labels
)
if use_kg:
ck = settings.kg_retrievaler.retrieval(question,
tenant_ids,
kb_ids,
embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)

for c in ranks["chunks"]:
c.pop("vector", None)
ranks["labels"] = labels

return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)


@manager.route("/searchbots/related_questions", methods=["POST"]) # noqa: F821
@validate_request("question")
def related_questions_embedded():
token = request.headers.get("Authorization").split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Authentication error: API key is invalid!')

req = request.json
tenant_id = objs[0].tenant_id
if not tenant_id:
return get_error_data_result(message="permission denied.")

search_id = req.get("search_id", "")
search_config = {}
if search_id:
if search_app := SearchService.get_detail(search_id):
search_config = search_app.get("search_config", {})

question = req["question"]

chat_id = search_config.get("chat_id", "")
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, chat_id)

gen_conf = search_config.get("llm_setting", {"temperature": 0.9})
prompt = load_prompt("related_question")
ans = chat_mdl.chat(
prompt,
[
{
"role": "user",
"content": f"""
Keywords: {question}
Related search terms:
""",
}
],
gen_conf,
)
return get_json_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])


@manager.route("/searchbots/detail", methods=["GET"]) # noqa: F821
def detail_share_embedded():
token = request.headers.get("Authorization").split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Authentication error: API key is invalid!')

search_id = request.args["search_id"]
tenant_id = objs[0].tenant_id
if not tenant_id:
return get_error_data_result(message="permission denied.")
try:
tenants = UserTenantService.query(user_id=tenant_id)
for tenant in tenants:
if SearchService.query(tenant_id=tenant.tenant_id, id=search_id):
break
else:
return get_json_result(data=False, message="Has no permission for this operation.", code=settings.RetCode.OPERATING_ERROR)

search = SearchService.get_detail(search_id)
if not search:
return get_error_data_result(message="Can't find this Search App!")
return get_json_result(data=search)
except Exception as e:
return server_error_response(e)


@manager.route("/searchbots/mindmap", methods=["POST"]) # noqa: F821
@validate_request("question", "kb_ids")
def mindmap():
token = request.headers.get("Authorization").split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Authentication error: API key is invalid!')

tenant_id = objs[0].tenant_id
req = request.json

search_id = req.get("search_id", "")
search_app = SearchService.get_detail(search_id) if search_id else {}

mind_map = gen_mindmap(req["question"], req["kb_ids"], tenant_id, search_app.get("search_config", {}))
if "error" in mind_map:
return server_error_response(Exception(mind_map["error"]))
return get_json_result(data=mind_map)
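A hedged example of exercising the embedded retrieval-test route with a beta token; the host, route prefix, and field values are assumptions, not a documented contract:

```python
import requests  # assumed HTTP client

# Hypothetical call to the embedded retrieval test sketched above.
resp = requests.post(
    "http://localhost:9380/v1/api/searchbots/retrieval_test",  # assumed prefix
    headers={"Authorization": "Bearer <beta-token>"},
    json={
        "kb_id": ["<kb-id>"],  # a bare string is also accepted by the handler above
        "question": "What is RAGFlow?",
        "page": 1,
        "size": 10,
        "similarity_threshold": 0.2,
        "vector_similarity_weight": 0.3,
    },
)
print(resp.json())
```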
@ -22,7 +22,6 @@ from api.constants import DATASET_NAME_LIMIT
from api.db import StatusEnum
from api.db.db_models import DB
from api.db.services import duplicate_name
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.search_service import SearchService
from api.db.services.user_service import TenantService, UserTenantService
from api.utils import get_uuid
@ -47,7 +46,7 @@ def create():
return get_data_error_result(message="Unauthorized identity.")

search_name = search_name.strip()
search_name = duplicate_name(KnowledgebaseService.query, name=search_name, tenant_id=current_user.id, status=StatusEnum.VALID.value)
search_name = duplicate_name(SearchService.query, name=search_name, tenant_id=current_user.id, status=StatusEnum.VALID.value)

req["id"] = get_uuid()
req["name"] = search_name

@ -18,12 +18,14 @@ from flask import request
from flask_login import login_required, current_user

from api import settings
from api.apps import smtp_mail_server
from api.db import UserTenantRole, StatusEnum
from api.db.db_models import UserTenant
from api.db.services.user_service import UserTenantService, UserService

from api.utils import get_uuid, delta_seconds
from api.utils.api_utils import get_json_result, validate_request, server_error_response, get_data_error_result
from api.utils.web_utils import send_invite_email


@manager.route("/<tenant_id>/user/list", methods=["GET"]) # noqa: F821
@ -78,6 +80,20 @@ def create(tenant_id):
role=UserTenantRole.INVITE,
status=StatusEnum.VALID.value)

if smtp_mail_server and settings.SMTP_CONF:
from threading import Thread

user_name = ""
_, user = UserService.get_by_id(current_user.id)
if user:
user_name = user.nickname

Thread(
target=send_invite_email,
args=(invite_user_email, settings.MAIL_FRONTEND_URL, tenant_id, user_name or current_user.email),
daemon=True
).start()

usr = invite_users[0].to_dict()
usr = {k: v for k, v in usr.items() if k in ["id", "avatar", "email", "nickname"]}
@ -28,7 +28,8 @@ from api.apps.auth import get_auth_client
from api.db import FileType, UserTenantRole
from api.db.db_models import TenantLLM
from api.db.services.file_service import FileService
from api.db.services.llm_service import TenantLLMService, get_init_tenant_llm
from api.db.services.llm_service import get_init_tenant_llm
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.user_service import TenantService, UserService, UserTenantService
from api.utils import (
current_timestamp,

@ -742,7 +742,7 @@ class Dialog(DataBaseModel):
prompt_type = CharField(max_length=16, null=False, default="simple", help_text="simple|advanced", index=True)
prompt_config = JSONField(
null=False,
default={"system": "", "prologue": "Hi! I'm your assistant, what can I do for you?", "parameters": [], "empty_response": "Sorry! No relevant content was found in the knowledge base!"},
default={"system": "", "prologue": "Hi! I'm your assistant. What can I do for you?", "parameters": [], "empty_response": "Sorry! No relevant content was found in the knowledge base!"},
)
meta_data_filter = JSONField(null=True, default={})

@ -872,7 +872,7 @@ class Search(DataBaseModel):
default={
"kb_ids": [],
"doc_ids": [],
"similarity_threshold": 0.0,
"similarity_threshold": 0.2,
"vector_similarity_weight": 0.3,
"use_kg": False,
# rerank settings
@ -881,11 +881,12 @@
# chat settings
"summary": False,
"chat_id": "",
# Leave it here for reference, don't need to set default values
"llm_setting": {
"temperature": 0.1,
"top_p": 0.3,
"frequency_penalty": 0.7,
"presence_penalty": 0.4,
# "temperature": 0.1,
# "top_p": 0.3,
# "frequency_penalty": 0.7,
# "presence_penalty": 0.4,
},
"cross_languages": [],
"highlight": False,
@ -1020,4 +1021,4 @@ def migrate_db():
migrate(migrator.add_column("dialog", "meta_data_filter", JSONField(null=True, default={})))
except Exception:
pass
logging.disable(logging.NOTSET)
logging.disable(logging.NOTSET)
@ -22,6 +22,7 @@ from datetime import datetime
from functools import partial
from timeit import default_timer as timer

import trio
from langfuse import Langfuse
from peewee import fn

@ -36,11 +37,12 @@ from api.db.services.langfuse_service import TenantLangfuseService
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from api.utils import current_timestamp, datetime_format
from graphrag.general.mind_map_extractor import MindMapExtractor
from rag.app.resume import forbidden_select_fields4resume
from rag.app.tag import label_question
from rag.nlp.search import index_name
from rag.prompts import chunks_format, citation_prompt, cross_languages, full_question, kb_prompt, keyword_extraction, message_fit_in
from rag.prompts.prompts import gen_meta_filter
from rag.prompts.prompts import gen_meta_filter, PROMPT_JINJA_ENV, ASK_SUMMARY
from rag.utils import num_tokens_from_string, rmSpace
from rag.utils.tavily_conn import Tavily

@ -99,7 +101,6 @@ class DialogService(CommonService):

return list(chats.dicts())


@classmethod
@DB.connection_context()
def get_by_tenant_ids(cls, joined_tenant_ids, user_id, page_number, items_per_page, orderby, desc, keywords, parser_id=None):
@ -256,9 +257,10 @@ def repair_bad_citation_formats(answer: str, kbinfos: dict, idx: set):

def meta_filter(metas: dict, filters: list[dict]):
doc_ids = []

def filter_out(v2docs, operator, value):
nonlocal doc_ids
for input,docids in v2docs.items():
for input, docids in v2docs.items():
try:
input = float(input)
value = float(value)
@ -389,7 +391,17 @@ def chat(dialog, messages, stream=True, **kwargs):
reasoner = DeepResearcher(
chat_mdl,
prompt_config,
partial(retriever.retrieval, embd_mdl=embd_mdl, tenant_ids=tenant_ids, kb_ids=dialog.kb_ids, page=1, page_size=dialog.top_n, similarity_threshold=0.2, vector_similarity_weight=0.3, doc_ids=attachments),
partial(
retriever.retrieval,
embd_mdl=embd_mdl,
tenant_ids=tenant_ids,
kb_ids=dialog.kb_ids,
page=1,
page_size=dialog.top_n,
similarity_threshold=0.2,
vector_similarity_weight=0.3,
doc_ids=attachments,
),
)

for think in reasoner.thinking(kbinfos, " ".join(questions)):
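The reformatted `DeepResearcher` wiring pre-binds every retrieval argument except the query with `functools.partial`, so the reasoner only supplies the question. The same pattern in miniature:

```python
from functools import partial

def retrieval(question, *, kb_ids, page_size, similarity_threshold):
    # Stand-in for retriever.retrieval above; returns a description instead of chunks.
    return f"search {kb_ids} for {question!r} (top {page_size}, thr {similarity_threshold})"

# Pre-bind everything but the query, as chat() does for DeepResearcher above.
bound = partial(retrieval, kb_ids=["kb1"], page_size=6, similarity_threshold=0.2)
print(bound("what is RAG?"))
```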
@ -677,7 +689,14 @@ def tts(tts_mdl, text):
return binascii.hexlify(bin).decode("utf-8")


def ask(question, kb_ids, tenant_id, chat_llm_name=None):
def ask(question, kb_ids, tenant_id, chat_llm_name=None, search_config={}):
doc_ids = search_config.get("doc_ids", [])
rerank_mdl = None
kb_ids = search_config.get("kb_ids", kb_ids)
chat_llm_name = search_config.get("chat_id", chat_llm_name)
rerank_id = search_config.get("rerank_id", "")
meta_data_filter = search_config.get("meta_data_filter")

kbs = KnowledgebaseService.get_by_ids(kb_ids)
embedding_list = list(set([kb.embd_id for kb in kbs]))

@ -686,30 +705,46 @@ def ask(question, kb_ids, tenant_id, chat_llm_name=None):

embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embedding_list[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, chat_llm_name)
if rerank_id:
rerank_mdl = LLMBundle(tenant_id, LLMType.RERANK, rerank_id)
max_tokens = chat_mdl.max_length
tenant_ids = list(set([kb.tenant_id for kb in kbs]))
kbinfos = retriever.retrieval(question, embd_mdl, tenant_ids, kb_ids, 1, 12, 0.1, 0.3, aggs=False, rank_feature=label_question(question, kbs))

if meta_data_filter:
metas = DocumentService.get_meta_by_kbs(kb_ids)
if meta_data_filter.get("method") == "auto":
filters = gen_meta_filter(chat_mdl, metas, question)
doc_ids.extend(meta_filter(metas, filters))
if not doc_ids:
doc_ids = None
elif meta_data_filter.get("method") == "manual":
doc_ids.extend(meta_filter(metas, meta_data_filter["manual"]))
if not doc_ids:
doc_ids = None

kbinfos = retriever.retrieval(
question=question,
embd_mdl=embd_mdl,
tenant_ids=tenant_ids,
kb_ids=kb_ids,
page=1,
page_size=12,
similarity_threshold=search_config.get("similarity_threshold", 0.1),
vector_similarity_weight=search_config.get("vector_similarity_weight", 0.3),
top=search_config.get("top_k", 1024),
doc_ids=doc_ids,
aggs=False,
rerank_mdl=rerank_mdl,
rank_feature=label_question(question, kbs)
)

knowledges = kb_prompt(kbinfos, max_tokens)
prompt = """
Role: You're a smart assistant. Your name is Miss R.
Task: Summarize the information from knowledge bases and answer user's question.
Requirements and restriction:
- DO NOT make things up, especially for numbers.
- If the information from knowledge is irrelevant with user's question, JUST SAY: Sorry, no relevant information provided.
- Answer with markdown format text.
- Answer in language of user's question.
- DO NOT make things up, especially for numbers.
sys_prompt = PROMPT_JINJA_ENV.from_string(ASK_SUMMARY).render(knowledge="\n".join(knowledges))

### Information from knowledge bases
%s

The above is information from knowledge bases.

""" % "\n".join(knowledges)
msg = [{"role": "user", "content": question}]

def decorate_answer(answer):
nonlocal knowledges, kbinfos, prompt
nonlocal knowledges, kbinfos, sys_prompt
answer, idx = retriever.insert_citations(answer, [ck["content_ltks"] for ck in kbinfos["chunks"]], [ck["vector"] for ck in kbinfos["chunks"]], embd_mdl, tkweight=0.7, vtweight=0.3)
idx = set([kbinfos["chunks"][int(i)]["doc_id"] for i in idx])
recall_docs = [d for d in kbinfos["doc_aggs"] if d["doc_id"] in idx]
@ -727,7 +762,55 @@ def ask(question, kb_ids, tenant_id, chat_llm_name=None):
return {"answer": answer, "reference": refs}

answer = ""
for ans in chat_mdl.chat_streamly(prompt, msg, {"temperature": 0.1}):
for ans in chat_mdl.chat_streamly(sys_prompt, msg, {"temperature": 0.1}):
answer = ans
yield {"answer": answer, "reference": {}}
yield decorate_answer(answer)


def gen_mindmap(question, kb_ids, tenant_id, search_config={}):
meta_data_filter = search_config.get("meta_data_filter", {})
doc_ids = search_config.get("doc_ids", [])
rerank_id = search_config.get("rerank_id", "")
rerank_mdl = None
kbs = KnowledgebaseService.get_by_ids(kb_ids)
if not kbs:
return {"error": "No KB selected"}
embedding_list = list(set([kb.embd_id for kb in kbs]))
tenant_ids = list(set([kb.tenant_id for kb in kbs]))

embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, llm_name=embedding_list[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, llm_name=search_config.get("chat_id", ""))
if rerank_id:
rerank_mdl = LLMBundle(tenant_id, LLMType.RERANK, rerank_id)

if meta_data_filter:
metas = DocumentService.get_meta_by_kbs(kb_ids)
if meta_data_filter.get("method") == "auto":
filters = gen_meta_filter(chat_mdl, metas, question)
doc_ids.extend(meta_filter(metas, filters))
if not doc_ids:
doc_ids = None
elif meta_data_filter.get("method") == "manual":
doc_ids.extend(meta_filter(metas, meta_data_filter["manual"]))
if not doc_ids:
doc_ids = None

ranks = settings.retrievaler.retrieval(
question=question,
embd_mdl=embd_mdl,
tenant_ids=tenant_ids,
kb_ids=kb_ids,
page=1,
page_size=12,
similarity_threshold=search_config.get("similarity_threshold", 0.2),
vector_similarity_weight=search_config.get("vector_similarity_weight", 0.3),
top=search_config.get("top_k", 1024),
doc_ids=doc_ids,
aggs=False,
rerank_mdl=rerank_mdl,
rank_feature=label_question(question, kbs),
)
mindmap = MindMapExtractor(chat_mdl)
mind_map = trio.run(mindmap, [c["content_with_weight"] for c in ranks["chunks"]])
return mind_map.output
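`gen_mindmap` drives the async extractor with `trio.run(extractor, chunks)`: trio runs the async callable to completion and hands back its return value. The same pattern with a stand-in async callable (the mind-map shape here is invented for illustration):

```python
import trio  # third-party: pip install trio

# Stand-in for MindMapExtractor above; trio.run(fn, *args) awaits it and
# returns its result synchronously.
async def fake_extractor(chunks):
    await trio.sleep(0)  # stand-in for the LLM round trips
    return {"root": {"children": [{"name": c[:12]} for c in chunks]}}

print(trio.run(fake_extractor, ["chunk one text", "chunk two text"]))
```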
@ -227,10 +227,13 @@ class FileService(CommonService):
# tenant_id: Tenant ID
# Returns:
# Knowledge base folder dictionary
for root in cls.model.select().where((cls.model.tenant_id == tenant_id), (cls.model.parent_id == cls.model.id)):
for folder in cls.model.select().where((cls.model.tenant_id == tenant_id), (cls.model.parent_id == root.id), (cls.model.name == KNOWLEDGEBASE_FOLDER_NAME)):
return folder.to_dict()
assert False, "Can't find the KB folder. Database init error."
root_folder = cls.get_root_folder(tenant_id)
root_id = root_folder["id"]
kb_folder = cls.model.select().where((cls.model.tenant_id == tenant_id), (cls.model.parent_id == root_id), (cls.model.name == KNOWLEDGEBASE_FOLDER_NAME)).first()
if not kb_folder:
kb_folder = cls.new_a_file_from_kb(tenant_id, KNOWLEDGEBASE_FOLDER_NAME, root_id)
return kb_folder
return kb_folder.to_dict()

@classmethod
@DB.connection_context()
@ -499,10 +502,9 @@ class FileService(CommonService):
@staticmethod
def get_blob(user_id, location):
bname = f"{user_id}-downloads"
return STORAGE_IMPL.get(bname, location)
return STORAGE_IMPL.get(bname, location)

@staticmethod
def put_blob(user_id, location, blob):
bname = f"{user_id}-downloads"
return STORAGE_IMPL.put(bname, location, blob)

return STORAGE_IMPL.put(bname, location, blob)

@ -71,6 +71,8 @@ class SearchService(CommonService):
.first()
.to_dict()
)
if not search:
return {}
return search

@classmethod
@ -33,7 +33,7 @@ import uuid

from werkzeug.serving import run_simple
from api import settings
from api.apps import app
from api.apps import app, smtp_mail_server
from api.db.runtime_config import RuntimeConfig
from api.db.services.document_service import DocumentService
from api import utils
@ -59,11 +59,14 @@ def update_progress():
if redis_lock.acquire():
DocumentService.update_progress()
redis_lock.release()
stop_event.wait(6)
except Exception:
logging.exception("update_progress exception")
finally:
redis_lock.release()
try:
redis_lock.release()
except Exception:
logging.exception("update_progress exception")
stop_event.wait(6)

def signal_handler(sig, frame):
logging.info("Received interrupt signal, shutting down...")
@ -74,11 +77,11 @@ def signal_handler(sig, frame):

if __name__ == '__main__':
logging.info(r"""
____ ___ ______ ______ __
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/

""")
logging.info(
@ -137,6 +140,18 @@ if __name__ == '__main__':
else:
threading.Timer(1.0, delayed_start_update_progress).start()

# init smtp server
if settings.SMTP_CONF:
app.config["MAIL_SERVER"] = settings.MAIL_SERVER
app.config["MAIL_PORT"] = settings.MAIL_PORT
app.config["MAIL_USE_SSL"] = settings.MAIL_USE_SSL
app.config["MAIL_USE_TLS"] = settings.MAIL_USE_TLS
app.config["MAIL_USERNAME"] = settings.MAIL_USERNAME
app.config["MAIL_PASSWORD"] = settings.MAIL_PASSWORD
app.config["MAIL_DEFAULT_SENDER"] = settings.MAIL_DEFAULT_SENDER
smtp_mail_server.init_app(app)


# start http server
try:
logging.info("RAGFlow HTTP server start...")
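The server now wires Flask-Mail only when an `smtp` section is configured. A minimal, self-contained sketch of that conditional setup; the config values are stand-ins, not the project's defaults:

```python
from flask import Flask
from flask_mail import Mail  # third-party: pip install Flask-Mail

app = Flask(__name__)
smtp_conf = {"mail_server": "smtp.example.com", "mail_port": 465}  # stand-in config

if smtp_conf:  # mirrors `if settings.SMTP_CONF:` above
    app.config["MAIL_SERVER"] = smtp_conf["mail_server"]
    app.config["MAIL_PORT"] = smtp_conf["mail_port"]
    app.config["MAIL_USE_SSL"] = True
    smtp_mail_server = Mail()
    smtp_mail_server.init_app(app)
```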
@ -79,6 +79,16 @@ STRONG_TEST_COUNT = int(os.environ.get("STRONG_TEST_COUNT", "8"))
|
||||
|
||||
BUILTIN_EMBEDDING_MODELS = ["BAAI/bge-large-zh-v1.5@BAAI", "maidalun1020/bce-embedding-base_v1@Youdao"]
|
||||
|
||||
SMTP_CONF = None
|
||||
MAIL_SERVER = ""
|
||||
MAIL_PORT = 000
|
||||
MAIL_USE_SSL= True
|
||||
MAIL_USE_TLS = False
|
||||
MAIL_USERNAME = ""
|
||||
MAIL_PASSWORD = ""
|
||||
MAIL_DEFAULT_SENDER = ()
|
||||
MAIL_FRONTEND_URL = ""
|
||||
|
||||
|
||||
def get_or_create_secret_key():
|
||||
secret_key = os.environ.get("RAGFLOW_SECRET_KEY")
|
||||
@ -186,6 +196,21 @@ def init_settings():
|
||||
global SANDBOX_HOST
|
||||
SANDBOX_HOST = os.environ.get("SANDBOX_HOST", "sandbox-executor-manager")
|
||||
|
||||
global SMTP_CONF, MAIL_SERVER, MAIL_PORT, MAIL_USE_SSL, MAIL_USE_TLS
|
||||
global MAIL_USERNAME, MAIL_PASSWORD, MAIL_DEFAULT_SENDER, MAIL_FRONTEND_URL
|
||||
SMTP_CONF = get_base_config("smtp", {})
|
||||
|
||||
MAIL_SERVER = SMTP_CONF.get("mail_server", "")
|
||||
MAIL_PORT = SMTP_CONF.get("mail_port", 000)
|
||||
MAIL_USE_SSL = SMTP_CONF.get("mail_use_ssl", True)
|
||||
MAIL_USE_TLS = SMTP_CONF.get("mail_use_tls", False)
|
||||
MAIL_USERNAME = SMTP_CONF.get("mail_username", "")
|
||||
MAIL_PASSWORD = SMTP_CONF.get("mail_password", "")
|
||||
mail_default_sender = SMTP_CONF.get("mail_default_sender", [])
|
||||
if mail_default_sender and len(mail_default_sender) >= 2:
|
||||
MAIL_DEFAULT_SENDER = (mail_default_sender[0], mail_default_sender[1])
|
||||
MAIL_FRONTEND_URL = SMTP_CONF.get("mail_frontend_url", "")
|
||||
|
||||
|
||||
class CustomEnum(Enum):
|
||||
@classmethod
|
||||
|
||||
@ -21,6 +21,9 @@ import re
import socket
from urllib.parse import urlparse

from api.apps import smtp_mail_server
from flask_mail import Message
from flask import render_template_string
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.chrome.options import Options
@ -31,6 +34,7 @@ from selenium.webdriver.support.ui import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager


CONTENT_TYPE_MAP = {
# Office
"docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
@ -172,3 +176,26 @@ def get_float(req: dict, key: str, default: float | int = 10.0) -> float:
return parsed if parsed > 0 else default
except (TypeError, ValueError):
return default


INVITE_EMAIL_TMPL = """
<p>Hi {{email}},</p>
<p>{{inviter}} has invited you to join their team (ID: {{tenant_id}}).</p>
<p>Click the link below to complete your registration:<br>
<a href="{{invite_url}}">{{invite_url}}</a></p>
<p>If you did not request this, please ignore this email.</p>
"""

def send_invite_email(to_email, invite_url, tenant_id, inviter):
from api.apps import app
with app.app_context():
msg = Message(subject="RAGFlow Invitation",
recipients=[to_email])
msg.html = render_template_string(
INVITE_EMAIL_TMPL,
email=to_email,
invite_url=invite_url,
tenant_id=tenant_id,
inviter=inviter,
)
smtp_mail_server.send(msg)
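As in the tenant-invite hunk earlier, the actual send runs on a daemon thread so the request returns immediately. Usage in miniature, with a stub standing in for `send_invite_email`:

```python
from threading import Thread

def send_invite_email_stub(to_email, invite_url, tenant_id, inviter):
    # Stand-in for send_invite_email above; a real call would do the SMTP round trip.
    print(f"invite {to_email} to {tenant_id} from {inviter} via {invite_url}")

# Fire-and-forget, mirroring the tenant-invite hunk: the request thread
# returns while the daemon thread does the slow work.
Thread(
    target=send_invite_email_stub,
    args=("new.user@example.com", "https://ragflow.example.com", "tenant-1", "Alice"),
    daemon=True,
).start()
```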
@ -505,6 +505,24 @@
"tags": "RE-RANK,4k",
"max_tokens": 4000,
"model_type": "rerank"
},
{
"llm_name": "qwen-audio-asr",
"tags": "SPEECH2TEXT,8k",
"max_tokens": 8000,
"model_type": "speech2text"
},
{
"llm_name": "qwen-audio-asr-latest",
"tags": "SPEECH2TEXT,8k",
"max_tokens": 8000,
"model_type": "speech2text"
},
{
"llm_name": "qwen-audio-asr-1204",
"tags": "SPEECH2TEXT,8k",
"max_tokens": 8000,
"model_type": "speech2text"
}
]
},
@ -1146,60 +1164,35 @@
"llm_name": "gemini-2.5-flash",
"tags": "LLM,CHAT,1024K,IMAGE2TEXT",
"max_tokens": 1048576,
"model_type": "image2text",
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gemini-2.5-pro",
"tags": "LLM,CHAT,IMAGE2TEXT,1024K",
"max_tokens": 1048576,
"model_type": "image2text",
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gemini-2.5-flash-preview-05-20",
"llm_name": "gemini-2.5-flash-lite",
"tags": "LLM,CHAT,1024K,IMAGE2TEXT",
"max_tokens": 1048576,
"model_type": "image2text",
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gemini-2.0-flash-001",
"tags": "LLM,CHAT,1024K",
"max_tokens": 1048576,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.0-flash-thinking-exp-01-21",
"llm_name": "gemini-2.0-flash",
"tags": "LLM,CHAT,1024K",
"max_tokens": 1048576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gemini-1.5-flash",
"tags": "LLM,IMAGE2TEXT,1024K",
"llm_name": "gemini-2.0-flash-lite",
"tags": "LLM,CHAT,1024K",
"max_tokens": 1048576,
"model_type": "image2text"
},
{
"llm_name": "gemini-2.5-pro-preview-05-06",
"tags": "LLM,IMAGE2TEXT,1024K",
"max_tokens": 1048576,
"model_type": "image2text"
},
{
"llm_name": "gemini-1.5-pro",
"tags": "LLM,IMAGE2TEXT,2048K",
"max_tokens": 2097152,
"model_type": "image2text"
},
{
"llm_name": "gemini-1.5-flash-8b",
"tags": "LLM,IMAGE2TEXT,1024K",
"max_tokens": 1048576,
"model_type": "image2text",
"model_type": "chat",
"is_tools": true
},
{
@ -113,3 +113,14 @@ redis:
# switch: false
# component: false
# dataset: false
# smtp:
#   mail_server: ""
#   mail_port: 465
#   mail_use_ssl: true
#   mail_use_tls: false
#   mail_username: ""
#   mail_password: ""
#   mail_default_sender:
#     - "RAGFlow" # display name
#     - "" # sender email address
#   mail_frontend_url: "https://your-frontend.example.com"

@ -90,9 +90,17 @@ class RAGFlowExcelParser:
return wb

def html(self, fnm, chunk_rows=256):
from html import escape

file_like_object = BytesIO(fnm) if not isinstance(fnm, str) else fnm
wb = RAGFlowExcelParser._load_excel_to_workbook(file_like_object)
tb_chunks = []

def _fmt(v):
if v is None:
return ""
return str(v).strip()

for sheetname in wb.sheetnames:
ws = wb[sheetname]
rows = list(ws.rows)
@ -101,7 +109,7 @@ class RAGFlowExcelParser:

tb_rows_0 = "<tr>"
for t in list(rows[0]):
tb_rows_0 += f"<th>{t.value}</th>"
tb_rows_0 += f"<th>{escape(_fmt(t.value))}</th>"
tb_rows_0 += "</tr>"

for chunk_i in range((len(rows) - 1) // chunk_rows + 1):
@ -109,7 +117,7 @@ class RAGFlowExcelParser:
tb += f"<table><caption>{sheetname}</caption>"
tb += tb_rows_0
for r in list(
rows[1 + chunk_i * chunk_rows: 1 + (chunk_i + 1) * chunk_rows]
rows[1 + chunk_i * chunk_rows: min(1 + (chunk_i + 1) * chunk_rows, len(rows))]
):
tb += "<tr>"
for i, c in enumerate(r):
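The Excel parser splits each sheet's data rows into HTML tables of at most `chunk_rows` rows. The chunk arithmetic on its own (note that when the data-row count divides evenly by `chunk_rows`, the formula yields one empty trailing chunk):

```python
from html import escape

rows = [["h1", "h2"]] + [[f"r{i}a", f"r{i}b"] for i in range(5)]  # header + 5 data rows
chunk_rows = 2

# Same arithmetic as the hunk above: header row excluded, remainder rounded up.
n_chunks = (len(rows) - 1) // chunk_rows + 1
for chunk_i in range(n_chunks):
    body = rows[1 + chunk_i * chunk_rows: 1 + (chunk_i + 1) * chunk_rows]
    cells = "".join(f"<td>{escape(str(c))}</td>" for r in body for c in r)
    print(f"chunk {chunk_i}: {len(body)} rows, {len(cells)} chars of <td> markup")
```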
@ -94,7 +94,7 @@ SVR_HTTP_PORT=9380

# The RAGFlow Docker image to download.
# Defaults to the v0.20.1-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1-slim
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1

@ -79,8 +79,8 @@ The [.env](./.env) file contains important environment variables for Docker.
- `RAGFLOW-IMAGE`
  The Docker image edition. Available editions:

  - `infiniflow/ragflow:v0.20.1-slim` (default): The RAGFlow Docker image without embedding models.
  - `infiniflow/ragflow:v0.20.1`: The RAGFlow Docker image with embedding models including:
  - `infiniflow/ragflow:v0.20.3-slim` (default): The RAGFlow Docker image without embedding models.
  - `infiniflow/ragflow:v0.20.3`: The RAGFlow Docker image with embedding models including:
    - Built-in embedding models:
      - `BAAI/bge-large-zh-v1.5`
      - `maidalun1020/bce-embedding-base_v1`

@ -6,3 +6,7 @@ proxy_set_header Connection "";
proxy_buffering off;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
proxy_buffer_size 1024k;
proxy_buffers 16 1024k;
proxy_busy_buffers_size 2048k;
proxy_temp_file_write_size 2048k;
@ -99,8 +99,8 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
- `RAGFLOW-IMAGE`
  The Docker image edition. Available editions:

  - `infiniflow/ragflow:v0.20.1-slim` (default): The RAGFlow Docker image without embedding models.
  - `infiniflow/ragflow:v0.20.1`: The RAGFlow Docker image with embedding models including:
  - `infiniflow/ragflow:v0.20.3-slim` (default): The RAGFlow Docker image without embedding models.
  - `infiniflow/ragflow:v0.20.3`: The RAGFlow Docker image with embedding models including:
    - Built-in embedding models:
      - `BAAI/bge-large-zh-v1.5`
      - `maidalun1020/bce-embedding-base_v1`

@ -77,7 +77,7 @@ After building the infiniflow/ragflow:nightly-slim image, you are ready to launc

1. Edit Docker Compose Configuration

Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.20.1-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.20.3-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.


2. Launch the Service

docs/faq.mdx
@ -30,17 +30,17 @@ The "garbage in garbage out" status quo remains unchanged despite the fact that
|
||||
|
||||
Each RAGFlow release is available in two editions:
|
||||
|
||||
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.20.1-slim`
|
||||
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.20.1`
|
||||
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.20.3-slim`
|
||||
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.20.3`
|
||||
|
||||
---
|
||||
|
||||
### Which embedding models can be deployed locally?
|
||||
|
||||
RAGFlow offers two Docker image editions, `v0.20.1-slim` and `v0.20.1`:
|
||||
RAGFlow offers two Docker image editions, `v0.20.3-slim` and `v0.20.3`:
|
||||
|
||||
- `infiniflow/ragflow:v0.20.1-slim` (default): The RAGFlow Docker image without embedding models.
|
||||
- `infiniflow/ragflow:v0.20.1`: The RAGFlow Docker image with embedding models including:
|
||||
- `infiniflow/ragflow:v0.20.3-slim` (default): The RAGFlow Docker image without embedding models.
|
||||
- `infiniflow/ragflow:v0.20.3`: The RAGFlow Docker image with embedding models including:
|
||||
- Built-in embedding models:
|
||||
- `BAAI/bge-large-zh-v1.5`
|
||||
- `maidalun1020/bce-embedding-base_v1`
|
||||
|
||||
@ -9,7 +9,7 @@ The component equipped with reasoning, tool usage, and multi-agent collaboration

---

An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.1 onwards, an **Agent** component is able to work independently and with the following capabilities:
An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.3 onwards, an **Agent** component is able to work independently and with the following capabilities:

- Autonomous reasoning with reflection and adjustment based on environmental feedback.
- Use of tools or subagents to complete tasks.

@ -9,7 +9,7 @@ A component that retrieves information from specified datasets.

## Scenarios

A **Retrieval** component is essential in most RAG scenarios, where information is extracted from designated knowledge bases before being sent to the LLM for content generation. As of v0.20.1, a **Retrieval** component can operate either as a workflow component or as a tool of an **Agent**, enabling the Agent to control its invocation and search queries.
A **Retrieval** component is essential in most RAG scenarios, where information is extracted from designated knowledge bases before being sent to the LLM for content generation. As of v0.20.3, a **Retrieval** component can operate either as a workflow component or as a tool of an **Agent**, enabling the Agent to control its invocation and search queries.

## Configurations


@ -48,7 +48,7 @@ You start an AI conversation by creating an assistant.
- If no target language is selected, the system will search only in the language of your query, which may cause relevant information in other languages to be missed.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
  - If you are uncertain about the logic behind **Variable**, leave it *as-is*.
  - As of v0.20.1, if you add custom variables here, the only way you can pass in their values is to call:
  - As of v0.20.3, if you add custom variables here, the only way you can pass in their values is to call:
    - HTTP method [Converse with chat assistant](../../references/http_api_reference.md#converse-with-chat-assistant), or
    - Python method [Converse with chat assistant](../../references/python_api_reference.md#converse-with-chat-assistant).


@ -128,7 +128,7 @@ See [Run retrieval test](./run_retrieval_test.md) for details.

## Search for knowledge base

As of RAGFlow v0.20.1, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
As of RAGFlow v0.20.3, the search feature is still in a rudimentary form, supporting only knowledge base search by name.




@ -87,4 +87,4 @@ RAGFlow's file management allows you to download an uploaded file:



> As of RAGFlow v0.20.1, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.20.3, bulk download is not supported, nor can you download an entire folder.
@ -18,7 +18,7 @@ RAGFlow ships with a built-in [Langfuse](https://langfuse.com) integration so th
Langfuse stores traces, spans and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.

:::info NOTE
• RAGFlow **≥ 0.20.1** (contains the Langfuse connector)
• RAGFlow **≥ 0.20.3** (contains the Langfuse connector)
• A Langfuse workspace (cloud or self-hosted) with a _Project Public Key_ and _Secret Key_
:::


@ -66,10 +66,10 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
git clone https://github.com/infiniflow/ragflow.git
```

2. Switch to the latest, officially published release, e.g., `v0.20.1`:
2. Switch to the latest, officially published release, e.g., `v0.20.3`:

```bash
git checkout -f v0.20.1
git checkout -f v0.20.3
```

3. Update **ragflow/docker/.env**:
@ -83,14 +83,14 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
<TabItem value="slim">

```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1-slim
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3-slim
```

</TabItem>
<TabItem value="full">

```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3
```

</TabItem>
@ -114,10 +114,10 @@ No, you do not need to. Upgrading RAGFlow in itself will *not* remove your uploa
1. From an environment with Internet access, pull the required Docker image.
2. Save the Docker image to a **.tar** file.
```bash
docker save -o ragflow.v0.20.1.tar infiniflow/ragflow:v0.20.1
docker save -o ragflow.v0.20.3.tar infiniflow/ragflow:v0.20.3
```
3. Copy the **.tar** file to the target server.
4. Load the **.tar** file into Docker:
```bash
docker load -i ragflow.v0.20.1.tar
docker load -i ragflow.v0.20.3.tar
```
@ -44,7 +44,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If

`vm.max_map_count`. This value sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limitation.

RAGFlow v0.20.1 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
RAGFlow v0.20.3 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.

<Tabs
defaultValue="linux"
@ -184,13 +184,13 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/docker
$ git checkout -f v0.20.1
$ git checkout -f v0.20.3
```

3. Use the pre-built Docker images and start up the server:

:::tip NOTE
The command below downloads the `v0.20.1-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` for the full edition `v0.20.1`.
The command below downloads the `v0.20.3-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.3-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.3` for the full edition `v0.20.3`.
:::

```bash
@ -207,8 +207,8 @@ This section provides instructions on setting up the RAGFlow server on Linux. If

| RAGFlow image tag | Image size (GB) | Has embedding models and Python packages? | Stable? |
| ------------------- | --------------- | ----------------------------------------- | ------------------------ |
| `v0.20.1` | ≈9 | :heavy_check_mark: | Stable release |
| `v0.20.1-slim` | ≈2 | ❌ | Stable release |
| `v0.20.3` | ≈9 | :heavy_check_mark: | Stable release |
| `v0.20.3-slim` | ≈2 | ❌ | Stable release |
| `nightly` | ≈9 | :heavy_check_mark: | *Unstable* nightly build |
| `nightly-slim` | ≈2 | ❌ | *Unstable* nightly build |

@ -217,7 +217,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```

:::danger IMPORTANT
The embedding models included in `v0.20.1` and `nightly` are:
The embedding models included in `v0.20.3` and `nightly` are:

- BAAI/bge-large-zh-v1.5
- maidalun1020/bce-embedding-base_v1
@ -19,7 +19,7 @@ import TOCInline from '@theme/TOCInline';
|
||||
|
||||
### Cross-language search
|
||||
|
||||
Cross-language search (also known as cross-lingual retrieval) is a feature introduced in version 0.20.1. It enables users to submit queries in one language (for example, English) and retrieve relevant documents written in other languages such as Chinese or Spanish. This feature is enabled by the system’s default chat model, which translates queries to ensure accurate matching of semantic meaning across languages.
|
||||
Cross-language search (also known as cross-lingual retrieval) is a feature introduced in version 0.20.3. It enables users to submit queries in one language (for example, English) and retrieve relevant documents written in other languages such as Chinese or Spanish. This feature is enabled by the system’s default chat model, which translates queries to ensure accurate matching of semantic meaning across languages.
|
||||
|
||||
By enabling cross-language search, users can effortlessly access a broader range of information regardless of language barriers, significantly enhancing the system’s usability and inclusiveness.
|
||||
|
||||
|
||||
@ -5,7 +5,7 @@ slug: /http_api_reference
|
||||
|
||||
# HTTP API
|
||||
|
||||
A complete reference for RAGFlow's RESTful API. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](../guides/models/llm_api_key_setup.md).
|
||||
A complete reference for RAGFlow's RESTful API. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](../develop/acquire_ragflow_api_key.md).
|
||||
|
||||
---
|
||||
|
||||
@ -79,91 +79,71 @@ curl --request POST \
|
||||
Stream:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "chatcmpl-3a9c3572f29311efa69751e139332ced",
|
||||
data:{
|
||||
"id": "chatcmpl-3b0397f277f511f0b47f729e3aa55728",
|
||||
"choices": [
|
||||
{
|
||||
"delta": {
|
||||
"content": "This is a test. If you have any specific questions or need information, feel",
|
||||
"content": "Hello! It seems like you're just greeting me. If you have a specific",
|
||||
"role": "assistant",
|
||||
"function_call": null,
|
||||
"tool_calls": null
|
||||
"tool_calls": null,
|
||||
"reasoning_content": null
|
||||
},
|
||||
"finish_reason": null,
|
||||
"index": 0,
|
||||
"logprobs": null
|
||||
}
|
||||
],
|
||||
"created": 1740543996,
|
||||
"created": 1755084508,
|
||||
"model": "model",
|
||||
"object": "chat.completion.chunk",
|
||||
"system_fingerprint": "",
|
||||
"usage": null
|
||||
}
|
||||
// omit duplicated information
|
||||
{"choices":[{"delta":{"content":" free to ask, and I will do my best to provide an answer based on","role":"assistant"}}]}
|
||||
{"choices":[{"delta":{"content":" the knowledge I have. If your question is unrelated to the provided knowledge base,","role":"assistant"}}]}
|
||||
{"choices":[{"delta":{"content":" I will let you know.","role":"assistant"}}]}
|
||||
// the last chunk
|
||||
{
|
||||
"id": "chatcmpl-3a9c3572f29311efa69751e139332ced",
|
||||
"choices": [
|
||||
{
|
||||
"delta": {
|
||||
"content": null,
|
||||
"role": "assistant",
|
||||
"function_call": null,
|
||||
"tool_calls": null
|
||||
},
|
||||
"finish_reason": "stop",
|
||||
"index": 0,
|
||||
"logprobs": null
|
||||
}
|
||||
],
|
||||
"created": 1740543996,
|
||||
"model": "model",
|
||||
"object": "chat.completion.chunk",
|
||||
"system_fingerprint": "",
|
||||
"usage": {
|
||||
"prompt_tokens": 18,
|
||||
"completion_tokens": 225,
|
||||
"total_tokens": 243
|
||||
}
|
||||
}
|
||||
|
||||
data:{"id": "chatcmpl-3b0397f277f511f0b47f729e3aa55728", "choices": [{"delta": {"content": " question or need information, feel free to ask, and I'll do my best", "role": "assistant", "function_call": null, "tool_calls": null, "reasoning_content": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1755084508, "model": "model", "object": "chat.completion.chunk", "system_fingerprint": "", "usage": null}
|
||||
|
||||
data:{"id": "chatcmpl-3b0397f277f511f0b47f729e3aa55728", "choices": [{"delta": {"content": " to assist you based on the knowledge base provided.", "role": "assistant", "function_call": null, "tool_calls": null, "reasoning_content": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1755084508, "model": "model", "object": "chat.completion.chunk", "system_fingerprint": "", "usage": null}
|
||||
|
||||
data:{"id": "chatcmpl-3b0397f277f511f0b47f729e3aa55728", "choices": [{"delta": {"content": null, "role": "assistant", "function_call": null, "tool_calls": null, "reasoning_content": null}, "finish_reason": "stop", "index": 0, "logprobs": null}], "created": 1755084508, "model": "model", "object": "chat.completion.chunk", "system_fingerprint": "", "usage": {"prompt_tokens": 5, "completion_tokens": 188, "total_tokens": 193}}
|
||||
|
||||
data:[DONE]
|
||||
```
|
||||
|
||||
Non-stream:
|
||||
|
||||
```json
|
||||
{
|
||||
"choices":[
|
||||
"choices": [
|
||||
{
|
||||
"finish_reason":"stop",
|
||||
"index":0,
|
||||
"logprobs":null,
|
||||
"message":{
|
||||
"content":"This is a test. If you have any specific questions or need information, feel free to ask, and I will do my best to provide an answer based on the knowledge I have. If your question is unrelated to the provided knowledge base, I will let you know.",
|
||||
"role":"assistant"
|
||||
"finish_reason": "stop",
|
||||
"index": 0,
|
||||
"logprobs": null,
|
||||
"message": {
|
||||
"content": "Hello! I'm your smart assistant. What can I do for you?",
|
||||
"role": "assistant"
|
||||
}
|
||||
}
|
||||
],
|
||||
"created":1740543499,
|
||||
"id":"chatcmpl-3a9c3572f29311efa69751e139332ced",
|
||||
"model":"model",
|
||||
"object":"chat.completion",
|
||||
"usage":{
|
||||
"completion_tokens":246,
|
||||
"completion_tokens_details":{
|
||||
"accepted_prediction_tokens":246,
|
||||
"reasoning_tokens":18,
|
||||
"rejected_prediction_tokens":0
|
||||
"created": 1755084403,
|
||||
"id": "chatcmpl-3b0397f277f511f0b47f729e3aa55728",
|
||||
"model": "model",
|
||||
"object": "chat.completion",
|
||||
"usage": {
|
||||
"completion_tokens": 55,
|
||||
"completion_tokens_details": {
|
||||
"accepted_prediction_tokens": 55,
|
||||
"reasoning_tokens": 5,
|
||||
"rejected_prediction_tokens": 0
|
||||
},
|
||||
"prompt_tokens":18,
|
||||
"total_tokens":264
|
||||
"prompt_tokens": 5,
|
||||
"total_tokens": 60
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
Failure:
|
||||
|
||||
```json
|
||||
@ -211,13 +191,13 @@ curl --request POST \
|
||||
|
||||
##### Request Parameters
|
||||
|
||||
- `model` (*Body parameter*) `string`, *Required*
|
||||
- `model` (*Body parameter*) `string`, *Required*
|
||||
The model used to generate the response. The server will parse this automatically, so you can set it to any value for now.
|
||||
|
||||
- `messages` (*Body parameter*) `list[object]`, *Required*
|
||||
- `messages` (*Body parameter*) `list[object]`, *Required*
|
||||
A list of historical chat messages used to generate the response. This must contain at least one message with the `user` role.
|
||||
|
||||
- `stream` (*Body parameter*) `boolean`
|
||||
- `stream` (*Body parameter*) `boolean`
|
||||
Whether to receive the response as a stream. Set this to `false` explicitly if you prefer to receive the entire response in one go instead of as a stream.
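
Putting these parameters together, a minimal non-streaming request to this OpenAI-compatible endpoint might look like the sketch below. The URL path and the `chat_id` placeholder are assumptions based on the request examples earlier in this reference:

```bash
# Hypothetical address, chat ID, and key; replace with your own values.
curl --request POST \
  --url http://{address}/api/v1/chats_openai/{chat_id}/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --data '{
    "model": "model",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```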

#### Response

@@ -225,91 +205,74 @@ curl --request POST \
Stream:

```json
{
"id": "chatcmpl-3a9c3572f29311efa69751e139332ced",
data: {
"id": "5fa65c94-e316-4954-800a-06dfd5827052",
"object": "chat.completion.chunk",
"model": "99ee29d6783511f09c921a6272e682d8",
"choices": [
{
"delta": {
"content": "This is a test. If you have any specific questions or need information, feel",
"role": "assistant",
"function_call": null,
"tool_calls": null
"content": "Hello"
},
"finish_reason": null,
"index": 0,
"logprobs": null
"index": 0
}
],
"created": 1740543996,
"model": "model",
"object": "chat.completion.chunk",
"system_fingerprint": "",
"usage": null
}
// omit duplicated information
{"choices":[{"delta":{"content":" free to ask, and I will do my best to provide an answer based on","role":"assistant"}}]}
{"choices":[{"delta":{"content":" the knowledge I have. If your question is unrelated to the provided knowledge base,","role":"assistant"}}]}
{"choices":[{"delta":{"content":" I will let you know.","role":"assistant"}}]}
// the last chunk
{
"id": "chatcmpl-3a9c3572f29311efa69751e139332ced",
"choices": [
{
"delta": {
"content": null,
"role": "assistant",
"function_call": null,
"tool_calls": null
},
"finish_reason": "stop",
"index": 0,
"logprobs": null
}
],
"created": 1740543996,
"model": "model",
"object": "chat.completion.chunk",
"system_fingerprint": "",
"usage": {
"prompt_tokens": 18,
"completion_tokens": 225,
"total_tokens": 243
}
]
}

data: {"id": "518022d9-545b-4100-89ed-ecd9e46fa753", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": "!"}, "finish_reason": null, "index": 0}]}

data: {"id": "f37c4af0-8187-4c86-8186-048c3c6ffe4e", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": " How"}, "finish_reason": null, "index": 0}]}

data: {"id": "3ebc0fcb-0f85-4024-b4a5-3b03234a16df", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": " can"}, "finish_reason": null, "index": 0}]}

data: {"id": "efa1f3cf-7bc4-47a4-8e53-cd696f290587", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": " I"}, "finish_reason": null, "index": 0}]}

data: {"id": "2eb6f741-50a3-4d3d-8418-88be27895611", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": " assist"}, "finish_reason": null, "index": 0}]}

data: {"id": "f1227e4f-bf8b-462c-8632-8f5269492ce9", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": " you"}, "finish_reason": null, "index": 0}]}

data: {"id": "35b669d0-b2be-4c0c-88d8-17ff98592b21", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": " today"}, "finish_reason": null, "index": 0}]}

data: {"id": "f00d8a39-af60-4f32-924f-d64106a7fdf1", "object": "chat.completion.chunk", "model": "99ee29d6783511f09c921a6272e682d8", "choices": [{"delta": {"content": "?"}, "finish_reason": null, "index": 0}]}

data: [DONE]
```

Non-stream:

```json
{
"choices":[
"choices": [
{
"finish_reason":"stop",
"index":0,
"logprobs":null,
"message":{
"content":"This is a test. If you have any specific questions or need information, feel free to ask, and I will do my best to provide an answer based on the knowledge I have. If your question is unrelated to the provided knowledge base, I will let you know.",
"role":"assistant"
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "Hello! How can I assist you today?",
"role": "assistant"
}
}
],
"created":1740543499,
"id":"chatcmpl-3a9c3572f29311efa69751e139332ced",
"model":"model",
"object":"chat.completion",
"usage":{
"completion_tokens":246,
"completion_tokens_details":{
"accepted_prediction_tokens":246,
"reasoning_tokens":18,
"rejected_prediction_tokens":0
"created": null,
"id": "17aa4ec5-6d36-40c6-9a96-1b069c216d59",
"model": "99ee29d6783511f09c921a6272e682d8",
"object": "chat.completion",
"param": null,
"usage": {
"completion_tokens": 9,
"completion_tokens_details": {
"accepted_prediction_tokens": 0,
"reasoning_tokens": 0,
"rejected_prediction_tokens": 0
},
"prompt_tokens":18,
"total_tokens":264
"prompt_tokens": 1,
"total_tokens": 10
}
}
```

Failure:

```json
@@ -999,7 +962,7 @@ Updates configurations for a specified document.

```bash
curl --request PUT \
--url http://{address}/api/v1/datasets/{dataset_id}/info/{document_id} \
--url http://{address}/api/v1/datasets/{dataset_id}/documents/{document_id} \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '
@@ -1931,7 +1894,7 @@ Success:
"prompt": {
"empty_response": "Sorry! No relevant content was found in the knowledge base!",
"keywords_similarity_weight": 0.3,
"opener": "Hi! I'm your assistant, what can I do for you?",
"opener": "Hi! I'm your assistant. What can I do for you?",
"prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n ",
"rerank_model": "",
"similarity_threshold": 0.2,
@@ -2176,7 +2139,7 @@ Success:
"prompt": {
"empty_response": "Sorry! No relevant content was found in the knowledge base!",
"keywords_similarity_weight": 0.3,
"opener": "Hi! I'm your assistant, what can I do for you?",
"opener": "Hi! I'm your assistant. What can I do for you?",
"prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n",
"rerank_model": "",
"similarity_threshold": 0.2,
@@ -2571,7 +2534,7 @@ data:{
"code": 0,
"message": "",
"data": {
"answer": "Hi! I'm your assistant, what can I do for you?",
"answer": "Hi! I'm your assistant. What can I do for you?",
"reference": {},
"audio_binary": null,
"id": null,
@@ -2675,6 +2638,10 @@ Failure:

### Create session with agent

:::danger DEPRECATED
This method is deprecated and not recommended. You can still call it, but be mindful that calling `Converse with agent` will automatically generate a session ID for the associated agent.
:::

**POST** `/api/v1/agents/{agent_id}/sessions`

Creates a session with an agent.
@@ -2689,7 +2656,7 @@ Creates a session with an agent.
- Body:
  - the required parameters: `str`
  - other parameters:
    The parameters specified in the **Begin** component.
    The variables specified in the **Begin** component.

##### Request example

@@ -2708,7 +2675,7 @@ curl --request POST \

- `agent_id`: (*Path parameter*)
  The ID of the associated agent.
- `user_id`: (*Filter parameter*)
  The optional user-defined ID for parsing docs (especially images) when creating a session while uploading files.

#### Response
@@ -2787,8 +2754,8 @@ Success:
"message_history_window_size": 22,
"mode": "conversational",
"outputs": {},
"prologue": "Hi! I'm your assistant, what can I do for you?",
"tips": "Please fill up the form"
"prologue": "Hi! I'm your assistant. What can I do for you?",
"tips": "Please fill in the form"
}
},
"upstream": []
@@ -2840,7 +2807,7 @@ Success:
}
},
"mode": "conversational",
"prologue": "Hi! I'm your assistant, what can I do for you?"
"prologue": "Hi! I'm your assistant. What can I do for you?"
},
"label": "Begin",
"name": "begin"
@@ -2897,7 +2864,7 @@ Success:
"id": "0b02fe80780e11f084adcfdc3ed1d902",
"message": [
{
"content": "Hi! I'm your assistant, what can I do for you?",
"content": "Hi! I'm your assistant. What can I do for you?",
"role": "assistant"
}
],
@@ -2929,12 +2896,8 @@ Asks a specified agent a question to start an AI-powered conversation.
- In streaming mode, not all responses include a reference, as this depends on the system's judgement.
- In streaming mode, the last message is an empty message:

```json
data:
{
"code": 0,
"data": true
}
```
[DONE]
```

:::
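
Because the stream now ends with a literal `data:[DONE]` event, a client can consume it with plain SSE tooling. A minimal sketch (the completions URL follows the request examples below and is an assumption here):

```bash
# -N disables curl's buffering so chunks print as they arrive.
curl -N --request POST \
  --url http://{address}/api/v1/agents/{agent_id}/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --data-binary '{"question": "Hello", "stream": true}'
```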

@@ -2949,18 +2912,17 @@ Asks a specified agent a question to start an AI-powered conversation.
- Body:
  - `"question"`: `string`
  - `"stream"`: `boolean`
  - `"session_id"`: `string`
  - `"user_id"`: `string` (optional)
  - `"sync_dsl"`: `boolean` (optional)
  - other parameters: `string`
  - `"session_id"`: `string` (optional)
  - `"inputs"`: `object` (optional)
  - `"user_id"`: `string` (optional)

:::info IMPORTANT
You can include custom parameters in the request body, but first ensure they are defined in the [Begin](../guides/agent/agent_component_reference/begin.mdx) agent component.
You can include custom parameters in the request body, but first ensure they are defined in the [Begin](../guides/agent/agent_component_reference/begin.mdx) component.
:::

##### Request example

- If the **Begin** component does not take parameters, the following code will create a session.
- If the **Begin** component does not take parameters:

```bash
curl --request POST \
@@ -2969,10 +2931,12 @@ curl --request POST \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--data-binary '
{
"question": "Hello",
"stream": false
}'
```

- If the **Begin** component takes parameters, the following code will create a session.
- If the **Begin** component takes parameters, include their values in the body of `"inputs"` as follows:

```bash
curl --request POST \
@@ -2980,10 +2944,32 @@ curl --request POST \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--data-binary '
{
"lang":"English",
"file":"How is the weather tomorrow?"
}'
{
"question": "Hello",
"stream": false,
"inputs": {
"line_var": {
"type": "line",
"value": "I am line_var"
},
"int_var": {
"type": "integer",
"value": 1
},
"paragraph_var": {
"type": "paragraph",
"value": "a\nb\nc"
},
"option_var": {
"type": "options",
"value": "option 2"
},
"boolean_var": {
"type": "boolean",
"value": true
}
}
}'
```

The following code will execute the completion process:
@@ -3013,150 +2999,242 @@ curl --request POST \
  - `false`: Disable streaming.
- `"session_id"`: (*Body Parameter*)
  The ID of the session. If it is not provided, a new session will be generated.
- `"inputs"`: (*Body Parameter*)
  Variables specified in the **Begin** component.
- `"user_id"`: (*Body parameter*), `string`
  The optional user-defined ID. Valid *only* when no `session_id` is provided.
- `"sync_dsl"`: (*Body parameter*), `boolean`
  Whether to synchronize the changes to existing sessions when an agent is modified, defaults to `false`.
- Other parameters: (*Body Parameter*)
  Parameters specified in the **Begin** component.

:::tip NOTE
For now, this method does *not* support a file type input/variable. As a workaround, use the following to upload a file to an agent:
`http://{address}/v1/canvas/upload/{agent_id}`
*You will get a corresponding file ID from its response body.*
:::
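
A sketch of that workaround call is shown below; the multipart field name `file` is an assumption, as the reference does not spell it out:

```bash
# Upload a local file to the agent; the response body carries the file ID.
curl --request POST \
  --url http://{address}/v1/canvas/upload/{agent_id} \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --form 'file=@./report.pdf'
```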

#### Response

Success without `session_id` provided and with no parameters specified in the **Begin** component:
Success without `session_id` provided and with no variables specified in the **Begin** component:

Stream:

```json
data:{
"code": 0,
"message": "",
"event": "message",
"message_id": "eb0c0a5e783511f0b9b61a6272e682d8",
"created_at": 1755083342,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"answer": "Hi! I'm your smart assistant. What can I do for you?",
"reference": {},
"id": "31e6091d-88d4-441b-ac65-eae1c055be7b",
"session_id": "2987ad3eb85f11efb2a70242ac120005"
}
"content": "Hello"
},
"session_id": "eaf19a8e783511f0b9b61a6272e682d8"
}

data:{
"code": 0,
"message": "",
"data": true
"event": "message",
"message_id": "eb0c0a5e783511f0b9b61a6272e682d8",
"created_at": 1755083342,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"content": "!"
},
"session_id": "eaf19a8e783511f0b9b61a6272e682d8"
}

data:{
"event": "message",
"message_id": "eb0c0a5e783511f0b9b61a6272e682d8",
"created_at": 1755083342,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"content": " How"
},
"session_id": "eaf19a8e783511f0b9b61a6272e682d8"
}

...

data:[DONE]
```

Success without `session_id` provided and with parameters specified in the **Begin** component:
Non-stream:

```json
data:{
{
"code": 0,
"message": "",
"data": {
"session_id": "eacb36a0bdff11ef97120242ac120006",
"answer": "",
"reference": [],
"param": [
{
"key": "lang",
"name": "Target Language",
"optional": false,
"type": "line",
"value": "English"
},
{
"key": "file",
"name": "Files",
"optional": false,
"type": "file",
"value": "How is the weather tomorrow?"
},
{
"key": "hhyt",
"name": "hhty",
"optional": true,
"type": "line"
"created_at": 1755083440,
"data": {
"created_at": 547061.147866385,
"elapsed_time": 2.595433341921307,
"inputs": {},
"outputs": {
"_created_time": 547061.149137775,
"_elapsed_time": 8.720310870558023e-05,
"content": "Hello! How can I assist you today?"
}
]
},
"event": "workflow_finished",
"message_id": "25807f94783611f095171a6272e682d8",
"session_id": "25663198783611f095171a6272e682d8",
"task_id": "99ee29d6783511f09c921a6272e682d8"
}
}
data:
```

Success with parameters specified in the **Begin** component:
Success without `session_id` provided and with variables specified in the **Begin** component:

Stream:

```json
data:{
"code": 0,
"message": "",
"event": "message",
"message_id": "0e273472783711f0806e1a6272e682d8",
"created_at": 1755083830,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"answer": "How",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
"content": "Hello"
},
"session_id": "0e0d1542783711f0806e1a6272e682d8"
}

data:{
"event": "message",
"message_id": "0e273472783711f0806e1a6272e682d8",
"created_at": 1755083830,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"content": "!"
},
"session_id": "0e0d1542783711f0806e1a6272e682d8"
}

data:{
"event": "message",
"message_id": "0e273472783711f0806e1a6272e682d8",
"created_at": 1755083830,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"content": " How"
},
"session_id": "0e0d1542783711f0806e1a6272e682d8"
}

...

data:[DONE]
```

Non-stream:

```json
{
"code": 0,
"data": {
"created_at": 1755083779,
"data": {
"created_at": 547400.868004651,
"elapsed_time": 3.5037803899031132,
"inputs": {
"boolean_var": {
"type": "boolean",
"value": true
},
"int_var": {
"type": "integer",
"value": 1
},
"line_var": {
"type": "line",
"value": "I am line_var"
},
"option_var": {
"type": "options",
"value": "option 2"
},
"paragraph_var": {
"type": "paragraph",
"value": "a\nb\nc"
}
},
"outputs": {
"_created_time": 547400.869271305,
"_elapsed_time": 0.0001251999055966735,
"content": "Hello there! How can I assist you today?"
}
},
"event": "workflow_finished",
"message_id": "effdad8c783611f089261a6272e682d8",
"session_id": "efe523b6783611f089261a6272e682d8",
"task_id": "99ee29d6783511f09c921a6272e682d8"
}
}
```

Success with variables specified in the **Begin** component:

Stream:

```json
data:{
"code": 0,
"message": "",
"event": "message",
"message_id": "5b62e790783711f0bc531a6272e682d8",
"created_at": 1755083960,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"answer": "How is",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
}
"content": "Hello"
},
"session_id": "979e450c781d11f095cb729e3aa55728"
}

data:{
"code": 0,
"message": "",
"event": "message",
"message_id": "5b62e790783711f0bc531a6272e682d8",
"created_at": 1755083960,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"answer": "How is the",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
}
"content": "!"
},
"session_id": "979e450c781d11f095cb729e3aa55728"
}

data:{
"code": 0,
"message": "",
"event": "message",
"message_id": "5b62e790783711f0bc531a6272e682d8",
"created_at": 1755083960,
"task_id": "99ee29d6783511f09c921a6272e682d8",
"data": {
"answer": "How is the weather",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
}
"content": " You"
},
"session_id": "979e450c781d11f095cb729e3aa55728"
}
data:{

...

data:[DONE]
```

Non-stream:

```json
{
"code": 0,
"message": "",
"data": {
"answer": "How is the weather tomorrow",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
"created_at": 1755084029,
"data": {
"created_at": 547650.750818867,
"elapsed_time": 1.6227330720284954,
"inputs": {},
"outputs": {
"_created_time": 547650.752800839,
"_elapsed_time": 9.628792759031057e-05,
"content": "Hello! It appears you've sent another \"Hello\" without additional context. I'm here and ready to respond to any requests or questions you may have. Is there something specific you'd like to discuss or learn about?"
}
},
"event": "workflow_finished",
"message_id": "84eec534783711f08db41a6272e682d8",
"session_id": "979e450c781d11f095cb729e3aa55728",
"task_id": "99ee29d6783511f09c921a6272e682d8"
}
}
data:{
"code": 0,
"message": "",
"data": {
"answer": "How is the weather tomorrow?",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
}
}
data:{
"code": 0,
"message": "",
"data": {
"answer": "How is the weather tomorrow?",
"reference": {},
"id": "0379ac4c-b26b-4a44-8b77-99cebf313fdf",
"session_id": "4399c7d0b86311efac5b0242ac120005"
}
}
data:{
"code": 0,
"message": "",
"data": true
}
```

Failure:

@@ -5,7 +5,7 @@ slug: /python_api_reference

# Python API

A complete reference for RAGFlow's Python APIs. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](../guides/models/llm_api_key_setup.md).
A complete reference for RAGFlow's Python APIs. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](../develop/acquire_ragflow_api_key.md).

:::tip NOTE
Run the following command to download the Python SDK:

@@ -22,6 +22,38 @@ The embedding models included in a full edition are:
These two embedding models are optimized specifically for English and Chinese, so performance may be compromised if you use them to embed documents in other languages.
:::

## v0.20.3

Released on August 20, 2025.

### Improvements

- Revamps the user interface for the **Datasets**, **Chat**, and **Search** pages.
- Search and Chat: Introduces document-level metadata filtering, allowing automatic or manual filtering during chats or searches.
- Search: Supports creating search apps tailored to various business scenarios.
- Chat: Supports comparing answer performance of up to three chat model settings on a single **Chat** page.
- Agent:
  - Implements a toggle in the **Agent** component to enable or disable citation.
  - Introduces a drag-and-drop method for creating components.
- Documentation: Corrects inaccuracies in the API reference.

### New Agent templates

- Report Agent: A template for generating summary reports in internal question-answering scenarios, supporting the display of tables and formulae. [#9427](https://github.com/infiniflow/ragflow/pull/9427)

### Fixed issues

- The timeout mechanism introduced in v0.20.0 caused tasks like GraphRAG to halt.
- Predefined opening greeting in the **Agent** component was missing during conversations.
- An automatic line break issue in the prompt editor.
- A memory leak issue caused by PyPDF. [#9469](https://github.com/infiniflow/ragflow/pull/9469)

### API changes

#### Deprecated

[Create session with agent](./references/http_api_reference.md#create-session-with-agent)

## v0.20.1

Released on August 8, 2025.
@@ -182,7 +214,7 @@ From this release onwards, if you still see RAGFlow's responses being cut short

- Unable to add models via Ollama/Xinference, an issue introduced in v0.17.1.

### Related APIs
### API changes

#### HTTP APIs

@@ -243,7 +275,7 @@ The following is a screenshot of a conversation that integrates Deep Research:



### Related APIs
### API changes

#### HTTP APIs

@@ -318,7 +350,7 @@ This release fixes the following issues:
- Using the **Table** parsing method results in information loss.
- Miscellaneous API issues.

### Related APIs
### API changes

#### HTTP APIs

@@ -354,7 +386,7 @@ Released on December 18, 2024.
- Upgrades the Document Layout Analysis model in DeepDoc.
- Significantly enhances the retrieval performance when using [Infinity](https://github.com/infiniflow/infinity) as document engine.

### Related APIs
### API changes

#### HTTP APIs

@@ -411,7 +443,7 @@ This approach eliminates the need to manually update **service_config.yaml** aft
Ensure that you [upgrade **both** your code **and** Docker image to this release](https://ragflow.io/docs/dev/upgrade_ragflow#upgrade-ragflow-to-the-most-recent-officially-published-release) before trying this new approach.
:::

### Related APIs
### API changes

#### HTTP APIs

@@ -570,7 +602,7 @@ While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker
If you are on an ARM platform, follow [this guide](./develop/build_docker_image.mdx) to build a RAGFlow Docker image.
:::

### Related APIs
### API changes

#### HTTP API

@@ -591,7 +623,7 @@ Released on May 21, 2024.
- Supports monitoring of system components, including Elasticsearch, MySQL, Redis, and MinIO.
- Supports disabling **Layout Recognition** in the GENERAL chunking method to reduce file chunking time.

### Related APIs
### API changes

#### HTTP API

@@ -106,7 +106,7 @@ class EntityResolution(Extractor):
        nonlocal remain_candidates_to_resolve, callback
        async with semaphore:
            try:
                with trio.move_on_after(180) as cancel_scope:
                with trio.move_on_after(280) as cancel_scope:
                    await self._resolve_candidate(candidate_batch, result_set, result_lock)
                    remain_candidates_to_resolve = remain_candidates_to_resolve - len(candidate_batch[1])
                    callback(msg=f"Resolved {len(candidate_batch[1])} pairs, {remain_candidates_to_resolve} remaining to resolve.")
@@ -169,7 +169,7 @@ class EntityResolution(Extractor):
        logging.info(f"Created resolution prompt {len(text)} bytes for {len(candidate_resolution_i[1])} entity pairs of type {candidate_resolution_i[0]}")
        async with chat_limiter:
            try:
                with trio.move_on_after(120) as cancel_scope:
                with trio.move_on_after(240) as cancel_scope:
                    response = await trio.to_thread.run_sync(self._chat, text, [{"role": "user", "content": "Output:"}], {})
                if cancel_scope.cancelled_caught:
                    logging.warning("_resolve_candidate._chat timeout, skipping...")

@@ -92,7 +92,7 @@ class CommunityReportsExtractor(Extractor):
        text = perform_variable_replacements(self._extraction_prompt, variables=prompt_variables)
        async with chat_limiter:
            try:
                with trio.move_on_after(80) as cancel_scope:
                with trio.move_on_after(180) as cancel_scope:
                    response = await trio.to_thread.run_sync( self._chat, text, [{"role": "user", "content": "Output:"}], {})
                if cancel_scope.cancelled_caught:
                    logging.warning("extract_community_report._chat timeout, skipping...")

@@ -57,20 +57,22 @@ async def run_graphrag(
    ):
        chunks.append(d["content_with_weight"])

    subgraph = await generate_subgraph(
        LightKGExt
        if "method" not in row["kb_parser_config"].get("graphrag", {}) or row["kb_parser_config"]["graphrag"]["method"] != "general"
        else GeneralKGExt,
        tenant_id,
        kb_id,
        doc_id,
        chunks,
        language,
        row["kb_parser_config"]["graphrag"].get("entity_types", []),
        chat_model,
        embedding_model,
        callback,
    )
    with trio.fail_after(max(120, len(chunks)*120)):
        subgraph = await generate_subgraph(
            LightKGExt
            if "method" not in row["kb_parser_config"].get("graphrag", {}) or row["kb_parser_config"]["graphrag"]["method"] != "general"
            else GeneralKGExt,
            tenant_id,
            kb_id,
            doc_id,
            chunks,
            language,
            row["kb_parser_config"]["graphrag"].get("entity_types", []),
            chat_model,
            embedding_model,
            callback,
        )

    if not subgraph:
        return

@@ -125,7 +127,6 @@ async def run_graphrag(
        return


@timeout(60*60, 1)
async def generate_subgraph(
    extractor: Extractor,
    tenant_id: str,

@@ -44,9 +44,21 @@ spec:
checksum/config-es: {{ include (print $.Template.BasePath "/elasticsearch-config.yaml") . | sha256sum }}
checksum/config-env: {{ include (print $.Template.BasePath "/env.yaml") . | sha256sum }}
spec:
{{- if or .Values.imagePullSecrets .Values.elasticsearch.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.elasticsearch.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
initContainers:
- name: fix-data-volume-permissions
image: alpine
image: {{ .Values.elasticsearch.initContainers.alpine.repository }}:{{ .Values.elasticsearch.initContainers.alpine.tag }}
{{- with .Values.elasticsearch.initContainers.alpine.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
command:
- sh
- -c
@@ -55,14 +67,20 @@ spec:
- mountPath: /usr/share/elasticsearch/data
name: es-data
- name: sysctl
image: busybox
image: {{ .Values.elasticsearch.initContainers.busybox.repository }}:{{ .Values.elasticsearch.initContainers.busybox.tag }}
{{- with .Values.elasticsearch.initContainers.busybox.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
securityContext:
privileged: true
runAsUser: 0
command: ["sysctl", "-w", "vm.max_map_count=262144"]
containers:
- name: elasticsearch
image: elasticsearch:{{ .Values.env.STACK_VERSION }}
image: {{ .Values.elasticsearch.image.repository }}:{{ .Values.elasticsearch.image.tag }}
{{- with .Values.elasticsearch.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
envFrom:
- secretRef:
name: {{ include "ragflow.fullname" . }}-env-config

@@ -43,9 +43,21 @@ spec:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/env.yaml") . | sha256sum }}
spec:
{{- if or .Values.imagePullSecrets .Values.infinity.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.infinity.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
containers:
- name: infinity
image: {{ .Values.infinity.image.repository }}:{{ .Values.infinity.image.tag }}
{{- with .Values.infinity.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
envFrom:
- secretRef:
name: {{ include "ragflow.fullname" . }}-env-config

@@ -43,9 +43,21 @@ spec:
{{- include "ragflow.labels" . | nindent 8 }}
app.kubernetes.io/component: minio
spec:
{{- if or .Values.imagePullSecrets .Values.minio.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.minio.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
containers:
- name: minio
image: {{ .Values.minio.image.repository }}:{{ .Values.minio.image.tag }}
{{- with .Values.minio.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
envFrom:
- secretRef:
name: {{ include "ragflow.fullname" . }}-env-config

@@ -44,9 +44,21 @@ spec:
checksum/config-mysql: {{ include (print $.Template.BasePath "/mysql-config.yaml") . | sha256sum }}
checksum/config-env: {{ include (print $.Template.BasePath "/env.yaml") . | sha256sum }}
spec:
{{- if or .Values.imagePullSecrets .Values.mysql.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.mysql.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
containers:
- name: mysql
image: {{ .Values.mysql.image.repository }}:{{ .Values.mysql.image.tag }}
{{- with .Values.mysql.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
envFrom:
- secretRef:
name: {{ include "ragflow.fullname" . }}-env-config

@@ -44,9 +44,21 @@ spec:
checksum/config-opensearch: {{ include (print $.Template.BasePath "/opensearch-config.yaml") . | sha256sum }}
checksum/config-env: {{ include (print $.Template.BasePath "/env.yaml") . | sha256sum }}
spec:
{{- if or .Values.imagePullSecrets .Values.opensearch.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.opensearch.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
initContainers:
- name: fix-data-volume-permissions
image: alpine
image: {{ .Values.opensearch.initContainers.alpine.repository }}:{{ .Values.opensearch.initContainers.alpine.tag }}
{{- with .Values.opensearch.initContainers.alpine.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
command:
- sh
- -c
@@ -55,7 +67,10 @@ spec:
- mountPath: /usr/share/opensearch/data
name: opensearch-data
- name: sysctl
image: busybox
image: {{ .Values.opensearch.initContainers.busybox.repository }}:{{ .Values.opensearch.initContainers.busybox.tag }}
{{- with .Values.opensearch.initContainers.busybox.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
securityContext:
privileged: true
runAsUser: 0
@@ -63,6 +78,9 @@ spec:
containers:
- name: opensearch
image: {{ .Values.opensearch.image.repository }}:{{ .Values.opensearch.image.tag }}
{{- with .Values.opensearch.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
envFrom:
- secretRef:
name: {{ include "ragflow.fullname" . }}-env-config

@@ -25,9 +25,21 @@ spec:
checksum/config-env: {{ include (print $.Template.BasePath "/env.yaml") . | sha256sum }}
checksum/config-ragflow: {{ include (print $.Template.BasePath "/ragflow_config.yaml") . | sha256sum }}
spec:
{{- if or .Values.imagePullSecrets .Values.ragflow.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.ragflow.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
containers:
- name: ragflow
image: {{ .Values.env.RAGFLOW_IMAGE }}
image: {{ .Values.ragflow.image.repository }}:{{ .Values.ragflow.image.tag }}
{{- with .Values.ragflow.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
ports:
- containerPort: 80
name: http

@@ -40,10 +40,22 @@ spec:
annotations:
checksum/config-env: {{ include (print $.Template.BasePath "/env.yaml") . | sha256sum }}
spec:
{{- if or .Values.imagePullSecrets .Values.redis.image.pullSecrets }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.redis.image.pullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
terminationGracePeriodSeconds: 60
containers:
- name: redis
image: {{ .Values.redis.image.repository }}:{{ .Values.redis.image.tag }}
{{- with .Values.redis.image.pullPolicy }}
imagePullPolicy: {{ . }}
{{- end }}
command:
- "sh"
- "-c"

@@ -1,4 +1,8 @@
# Based on docker compose .env file

# Global image pull secrets configuration
imagePullSecrets: []

env:
  # The type of doc engine to use.
  # Available options:
@@ -32,31 +36,6 @@ env:
  # The password for Redis
  REDIS_PASSWORD: infini_rag_flow_helm

  # The RAGFlow Docker image to download.
  # Defaults to the v0.20.1-slim edition, which is the RAGFlow Docker image without embedding models.
  RAGFLOW_IMAGE: infiniflow/ragflow:v0.20.1-slim
  #
  # To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
  # RAGFLOW_IMAGE: infiniflow/ragflow:v0.20.1
  #
  # The Docker image of the v0.20.1 edition includes:
  # - Built-in embedding models:
  #   - BAAI/bge-large-zh-v1.5
  #   - BAAI/bge-reranker-v2-m3
  #   - maidalun1020/bce-embedding-base_v1
  #   - maidalun1020/bce-reranker-base_v1
  # - Embedding models that will be downloaded once you select them in the RAGFlow UI:
  #   - BAAI/bge-base-en-v1.5
  #   - BAAI/bge-large-en-v1.5
  #   - BAAI/bge-small-en-v1.5
  #   - BAAI/bge-small-zh-v1.5
  #   - jinaai/jina-embeddings-v2-base-en
  #   - jinaai/jina-embeddings-v2-small-en
  #   - nomic-ai/nomic-embed-text-v1.5
  #   - sentence-transformers/all-MiniLM-L6-v2
  #
  #

  # The local time zone.
  TIMEZONE: "Asia/Shanghai"

@@ -75,7 +54,11 @@ env:
  EMBEDDING_BATCH_SIZE: 16

ragflow:

  image:
    repository: infiniflow/ragflow
    tag: v0.20.3-slim
    pullPolicy: IfNotPresent
    pullSecrets: []
  # Optional service configuration overrides
  # to be written to local.service_conf.yaml
  # inside the RAGFlow container
@@ -114,6 +97,8 @@ infinity:
  image:
    repository: infiniflow/infinity
    tag: v0.6.0-dev5
    pullPolicy: IfNotPresent
    pullSecrets: []
  storage:
    className:
    capacity: 5Gi
@@ -124,6 +109,20 @@ infinity:
    type: ClusterIP

elasticsearch:
  image:
    repository: elasticsearch
    tag: "8.11.3"
    pullPolicy: IfNotPresent
    pullSecrets: []
  initContainers:
    alpine:
      repository: alpine
      tag: latest
      pullPolicy: IfNotPresent
    busybox:
      repository: busybox
      tag: latest
      pullPolicy: IfNotPresent
  storage:
    className:
    capacity: 20Gi
@@ -140,6 +139,17 @@ opensearch:
  image:
    repository: opensearchproject/opensearch
    tag: 2.19.1
    pullPolicy: IfNotPresent
    pullSecrets: []
  initContainers:
    alpine:
      repository: alpine
      tag: latest
      pullPolicy: IfNotPresent
    busybox:
      repository: busybox
      tag: latest
      pullPolicy: IfNotPresent
  storage:
    className:
    capacity: 20Gi
@@ -156,6 +166,8 @@ minio:
  image:
    repository: quay.io/minio/minio
    tag: RELEASE.2023-12-20T01-00-02Z
    pullPolicy: IfNotPresent
    pullSecrets: []
  storage:
    className:
    capacity: 5Gi
@@ -169,6 +181,8 @@ mysql:
  image:
    repository: mysql
    tag: 8.0.39
    pullPolicy: IfNotPresent
    pullSecrets: []
  storage:
    className:
    capacity: 5Gi
@@ -182,6 +196,8 @@ redis:
  image:
    repository: valkey/valkey
    tag: 8
    pullPolicy: IfNotPresent
    pullSecrets: []
  storage:
    className:
    capacity: 5Gi
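
With per-component images and pull secrets now exposed in **values.yaml**, rolling an existing deployment onto the new tag can be a one-liner. A minimal sketch; the release name, chart path, and namespace are assumptions about your setup:

```bash
# Hypothetical release and chart path; adjust to your deployment.
helm upgrade --install ragflow ./helm \
  --namespace ragflow \
  --set ragflow.image.tag=v0.20.3-slim
```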
|
||||
|
||||
@ -1,6 +1,6 @@
|
||||
[project]
|
||||
name = "ragflow"
|
||||
version = "0.20.1"
|
||||
version = "0.20.3"
|
||||
description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data."
|
||||
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
|
||||
license-files = ["LICENSE"]
|
||||
@ -43,7 +43,7 @@ dependencies = [
|
||||
"groq==0.9.0",
|
||||
"hanziconv==0.3.2",
|
||||
"html-text==0.6.2",
|
||||
"httpx==0.27.2",
|
||||
"httpx[socks]==0.27.2",
|
||||
"huggingface-hub>=0.25.0,<0.26.0",
|
||||
"infinity-sdk==0.6.0-dev4",
|
||||
"infinity-emb>=0.0.66,<0.0.67",
|
||||
@ -73,7 +73,7 @@ dependencies = [
|
||||
"pyclipper==1.3.0.post5",
|
||||
"pycryptodomex==3.20.0",
|
||||
"pymysql>=1.1.1,<2.0.0",
|
||||
"pypdf>=5.0.0,<6.0.0",
|
||||
"pypdf==6.0.0",
|
||||
"python-dotenv==1.0.1",
|
||||
"python-dateutil==2.8.2",
|
||||
"python-pptx>=1.0.2,<2.0.0",
|
||||
@ -130,6 +130,7 @@ dependencies = [
|
||||
"click>=8.1.8",
|
||||
"python-calamine>=0.4.0",
|
||||
"litellm>=1.74.15.post1",
|
||||
"flask-mail>=0.10.0",
|
||||
]
|
||||
|
||||
[project.optional-dependencies]
|
||||
|
||||
@ -14,31 +14,48 @@
|
||||
# limitations under the License.
|
||||
#
|
||||
|
||||
import os
|
||||
import re
|
||||
import tempfile
|
||||
|
||||
from api.db import LLMType
|
||||
from rag.nlp import rag_tokenizer
|
||||
from api.db.services.llm_service import LLMBundle
|
||||
from rag.nlp import tokenize
|
||||
from rag.nlp import rag_tokenizer, tokenize
|
||||
|
||||
|
||||
def chunk(filename, binary, tenant_id, lang, callback=None, **kwargs):
|
||||
doc = {
|
||||
"docnm_kwd": filename,
|
||||
"title_tks": rag_tokenizer.tokenize(re.sub(r"\.[a-zA-Z]+$", "", filename))
|
||||
}
|
||||
doc = {"docnm_kwd": filename, "title_tks": rag_tokenizer.tokenize(re.sub(r"\.[a-zA-Z]+$", "", filename))}
|
||||
doc["title_sm_tks"] = rag_tokenizer.fine_grained_tokenize(doc["title_tks"])
|
||||
|
||||
# is it English
|
||||
eng = lang.lower() == "english" # is_english(sections)
|
||||
try:
|
||||
_, ext = os.path.splitext(filename)
|
||||
if not ext:
|
||||
raise RuntimeError("No extension detected.")
|
||||
|
||||
if ext not in [".da", ".wave", ".wav", ".mp3", ".wav", ".aac", ".flac", ".ogg", ".aiff", ".au", ".midi", ".wma", ".realaudio", ".vqf", ".oggvorbis", ".aac", ".ape"]:
|
||||
raise RuntimeError(f"Extension {ext} is not supported yet.")
|
||||
|
||||
tmp_path = ""
|
||||
with tempfile.NamedTemporaryFile(suffix=ext, delete=False) as tmpf:
|
||||
tmpf.write(binary)
|
||||
tmpf.flush()
|
||||
tmp_path = os.path.abspath(tmpf.name)
|
||||
|
||||
callback(0.1, "USE Sequence2Txt LLM to transcription the audio")
|
||||
seq2txt_mdl = LLMBundle(tenant_id, LLMType.SPEECH2TEXT, lang=lang)
|
||||
ans = seq2txt_mdl.transcription(binary)
|
||||
ans = seq2txt_mdl.transcription(tmp_path)
|
||||
callback(0.8, "Sequence2Txt LLM respond: %s ..." % ans[:32])
|
||||
|
||||
tokenize(doc, ans, eng)
|
||||
return [doc]
|
||||
except Exception as e:
|
||||
callback(prog=-1, msg=str(e))
|
||||
|
||||
finally:
|
||||
if tmp_path and os.path.exists(tmp_path):
|
||||
try:
|
||||
os.unlink(tmp_path)
|
||||
except Exception:
|
||||
pass
|
||||
return []
|
||||
|
||||
@@ -22,6 +22,8 @@ from timeit import default_timer as timer

 from docx import Document
 from docx.image.exceptions import InvalidImageStreamError, UnexpectedEndOfFileError, UnrecognizedImageError
+from docx.opc.pkgreader import _SerializedRelationships, _SerializedRelationship
+from docx.opc.oxml import parse_xml
 from markdown import markdown
 from PIL import Image
 from tika import parser

@@ -47,8 +49,8 @@ class Docx(DocxParser):
         if not embed:
             return None
         embed = embed[0]
-        related_part = document.part.related_parts[embed]
         try:
+            related_part = document.part.related_parts[embed]
             image_blob = related_part.image.blob
         except UnrecognizedImageError:
             logging.info("Unrecognized image format. Skipping image.")

@@ -62,6 +64,9 @@ class Docx(DocxParser):
         except UnicodeDecodeError:
             logging.info("The recognized image stream appears to be corrupted. Skipping image.")
             return None
+        except Exception:
+            logging.info("The recognized image stream appears to be corrupted. Skipping image.")
+            return None
         try:
             image = Image.open(BytesIO(image_blob)).convert('RGB')
             return image

@@ -360,6 +365,20 @@ class Markdown(MarkdownParser):
             tbls.append(((None, markdown(table, extensions=['markdown.extensions.tables'])), ""))
         return sections, tbls

+
+def load_from_xml_v2(baseURI, rels_item_xml):
+    """
+    Return |_SerializedRelationships| instance loaded with the
+    relationships contained in *rels_item_xml*. Returns an empty
+    collection if *rels_item_xml* is |None|.
+    """
+    srels = _SerializedRelationships()
+    if rels_item_xml is not None:
+        rels_elm = parse_xml(rels_item_xml)
+        for rel_elm in rels_elm.Relationship_lst:
+            if rel_elm.target_ref in ('../NULL', 'NULL'):
+                continue
+            srels._srels.append(_SerializedRelationship(baseURI, rel_elm))
+    return srels
+

 def chunk(filename, binary=None, from_page=0, to_page=100000,
           lang="Chinese", callback=None, **kwargs):

@@ -391,6 +410,8 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
         except Exception:
             vision_model = None

+        # fix "There is no item named 'word/NULL' in the archive", referring to https://github.com/python-openxml/python-docx/issues/1105#issuecomment-1298075246
+        _SerializedRelationships.load_from_xml = load_from_xml_v2
         sections, tables = Docx()(filename, binary)

         if vision_model:

@@ -469,6 +490,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
                 sections = [(_, "") for _ in excel_parser.html(binary, 12) if _]
             else:
                 sections = [(_, "") for _ in excel_parser(binary) if _]
+            parser_config["chunk_token_num"] = 12800

     elif re.search(r"\.(txt|py|js|java|c|cpp|h|php|go|ts|sh|cs|kt|sql)$", filename, re.IGNORECASE):
         callback(0.1, "Start to parse.")
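The `load_from_xml_v2` patch above works by replacing python-docx's relationship loader before any `Document` is constructed. A self-contained sketch of wiring it up; `broken.docx` is a hypothetical file of the kind that used to trigger the `word/NULL` error:

```python
from docx import Document
from docx.opc.oxml import parse_xml
from docx.opc.pkgreader import _SerializedRelationship, _SerializedRelationships

def load_from_xml_v2(baseURI, rels_item_xml):
    # Same as the stock loader, but skip relationships whose target is 'NULL'.
    srels = _SerializedRelationships()
    if rels_item_xml is not None:
        rels_elm = parse_xml(rels_item_xml)
        for rel_elm in rels_elm.Relationship_lst:
            if rel_elm.target_ref in ('../NULL', 'NULL'):
                continue
            srels._srels.append(_SerializedRelationship(baseURI, rel_elm))
    return srels

# The patch must be applied before the document is opened.
_SerializedRelationships.load_from_xml = load_from_xml_v2
doc = Document("broken.docx")  # hypothetical file containing a 'word/NULL' relationship
print(len(doc.paragraphs))
```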
@@ -68,7 +68,7 @@ class Base(ABC):
             pmpt.append({
                 "type": "image_url",
                 "image_url": {
-                    "url": f"data:image/jpeg;base64,{img}" if img[:4] != "data" else img
+                    "url": img if isinstance(img, str) and img.startswith("data:") else f"data:image/png;base64,{img}"
                 }
             })
         return pmpt

@@ -109,16 +109,33 @@ class Base(ABC):

     @staticmethod
     def image2base64(image):
+        # Return a data URL with the correct MIME to avoid provider mismatches
         if isinstance(image, bytes):
-            return base64.b64encode(image).decode("utf-8")
+            # Best-effort magic number sniffing
+            mime = "image/png"
+            if len(image) >= 2 and image[0] == 0xFF and image[1] == 0xD8:
+                mime = "image/jpeg"
+            b64 = base64.b64encode(image).decode("utf-8")
+            return f"data:{mime};base64,{b64}"
         if isinstance(image, BytesIO):
-            return base64.b64encode(image.getvalue()).decode("utf-8")
+            data = image.getvalue()
+            mime = "image/png"
+            if len(data) >= 2 and data[0] == 0xFF and data[1] == 0xD8:
+                mime = "image/jpeg"
+            b64 = base64.b64encode(data).decode("utf-8")
+            return f"data:{mime};base64,{b64}"
         buffered = BytesIO()
+        fmt = "JPEG"
         try:
             image.save(buffered, format="JPEG")
         except Exception:
+            buffered = BytesIO()  # reset buffer before saving PNG
             image.save(buffered, format="PNG")
-            return base64.b64encode(buffered.getvalue()).decode("utf-8")
+            fmt = "PNG"
+        data = buffered.getvalue()
+        b64 = base64.b64encode(data).decode("utf-8")
+        mime = f"image/{fmt.lower()}"
+        return f"data:{mime};base64,{b64}"

     def prompt(self, b64):
         return [

@@ -372,6 +389,16 @@ class OllamaCV(Base):
         self.keep_alive = kwargs.get("ollama_keep_alive", int(os.environ.get("OLLAMA_KEEP_ALIVE", -1)))
         Base.__init__(self, **kwargs)

+    def _clean_img(self, img):
+        if not isinstance(img, str):
+            return img
+
+        # remove a header like "data:image/*;base64,"
+        if img.startswith("data:") and ";base64," in img:
+            img = img.split(";base64,")[1]
+        return img
+
     def _clean_conf(self, gen_conf):
         options = {}
         if "temperature" in gen_conf:

@@ -390,9 +417,12 @@ class OllamaCV(Base):
             hist.insert(0, {"role": "system", "content": system})
         if not images:
             return hist
+        temp_images = []
+        for img in images:
+            temp_images.append(self._clean_img(img))
         for his in hist:
             if his["role"] == "user":
-                his["images"] = images
+                his["images"] = temp_images
                 break
         return hist

@@ -509,24 +539,24 @@ class GeminiCV(Base):
         return res.text, res.usage_metadata.total_token_count

     def chat(self, system, history, gen_conf, images=[]):
-        from transformers import GenerationConfig
+        generation_config = dict(temperature=gen_conf.get("temperature", 0.3), top_p=gen_conf.get("top_p", 0.7))
         try:
             response = self.model.generate_content(
                 self._form_history(system, history, images),
-                generation_config=GenerationConfig(temperature=gen_conf.get("temperature", 0.3), top_p=gen_conf.get("top_p", 0.7)))
+                generation_config=generation_config)
             ans = response.text
             return ans, response.usage_metadata.total_token_count
         except Exception as e:
             return "**ERROR**: " + str(e), 0

     def chat_streamly(self, system, history, gen_conf, images=[]):
-        from transformers import GenerationConfig
         ans = ""
         response = None
         try:
+            generation_config = dict(temperature=gen_conf.get("temperature", 0.3), top_p=gen_conf.get("top_p", 0.7))
             response = self.model.generate_content(
                 self._form_history(system, history, images),
-                generation_config=GenerationConfig(temperature=gen_conf.get("temperature", 0.3), top_p=gen_conf.get("top_p", 0.7)),
+                generation_config=generation_config,
                 stream=True,
             )

@@ -542,7 +572,7 @@ class GeminiCV(Base):
                 yield response.usage_metadata.total_token_count
             else:
                 yield 0


 class NvidiaCV(Base):
     _FACTORY_NAME = "NVIDIA"

@@ -661,8 +691,8 @@ class AnthropicCV(Base):
                     "type": "image",
                     "source": {
                         "type": "base64",
-                        "media_type": "image/jpeg" if img[:4] != "data" else img.split(":")[1].split(";")[0],
-                        "data": img if img[:4] != "data" else img.split(",")[1]
+                        "media_type": (img.split(":")[1].split(";")[0] if isinstance(img, str) and img[:4] == "data" else "image/png"),
+                        "data": (img.split(",")[1] if isinstance(img, str) and img[:4] == "data" else img)
                     },
                 }
             )
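The `image2base64` rewrite above keys the MIME type off the payload's magic number rather than assuming JPEG. A small sketch of that heuristic in isolation (JPEG starts with `0xFF 0xD8`; everything else falls back to PNG, matching the deliberately loose check in the hunk):

```python
import base64

def bytes_to_data_url(data: bytes) -> str:
    # JPEG magic number is 0xFF 0xD8; anything else is treated as PNG here.
    is_jpeg = len(data) >= 2 and data[0] == 0xFF and data[1] == 0xD8
    mime = "image/jpeg" if is_jpeg else "image/png"
    return f"data:{mime};base64,{base64.b64encode(data).decode('utf-8')}"

print(bytes_to_data_url(b"\xff\xd8\xff\xe0demo")[:24])  # -> data:image/jpeg;base64,
```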
@@ -100,7 +100,7 @@ class DefaultRerank(Base):
         old_dynamic_batch_size = self._dynamic_batch_size
         if max_batch_size is not None:
             self._dynamic_batch_size = max_batch_size
-        res = np.array([], dtype=float)
+        res = np.zeros(len(pairs), dtype=float)
         i = 0
         while i < len(pairs):
             cur_i = i

@@ -111,7 +111,7 @@ class DefaultRerank(Base):
                 try:
                     # call subclass implemented batch processing calculation
                     batch_scores = self._compute_batch_scores(pairs[i : i + current_batch])
-                    res = np.append(res, batch_scores)
+                    res[i : i + current_batch] = batch_scores
                     i += current_batch
                     self._dynamic_batch_size = min(self._dynamic_batch_size * 2, 8)
                     break

@@ -125,8 +125,8 @@ class DefaultRerank(Base):
                         raise
                     if retry_count >= max_retries:
                         raise RuntimeError("max retry times, still cannot process batch, please check your GPU memory")
                     self.torch_empty_cache()

         self.torch_empty_cache()
         self._dynamic_batch_size = old_dynamic_batch_size
         return np.array(res)

@@ -482,9 +482,10 @@ class VoyageRerank(Base):
         self.model_name = model_name

     def similarity(self, query: str, texts: list):
-        rank = np.zeros(len(texts), dtype=float)
         if not texts:
-            return rank, 0
+            return np.array([]), 0
+        rank = np.zeros(len(texts), dtype=float)

         res = self.client.rerank(query=query, documents=texts, model=self.model_name, top_k=len(texts))
+        try:
             for r in res.results:
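The `DefaultRerank` change swaps `np.append` for a preallocated array so each batch's scores land at their original indices even when the dynamic batch size shrinks after an out-of-memory retry. Note the preallocation must be `np.zeros(len(pairs), dtype=float)`; `np.array(len(pairs), dtype=float)` would build a 0-d scalar array and break slice assignment, which is corrected in the hunk above. A hedged sketch of the pattern, with `score_batch` standing in for the model call:

```python
import numpy as np

def score_all(pairs, score_batch, batch_size=8, max_retries=5):
    # Preallocate so each batch lands at its original indices; np.append
    # would both copy on every batch and misalign results after retries.
    res = np.zeros(len(pairs), dtype=float)
    i, retries = 0, 0
    while i < len(pairs):
        try:
            res[i:i + batch_size] = score_batch(pairs[i:i + batch_size])
            i += batch_size
        except MemoryError:
            retries += 1
            if batch_size == 1 or retries >= max_retries:
                raise RuntimeError("still cannot process batch; check GPU memory")
            batch_size = max(1, batch_size // 2)  # back off, retry the same window
    return res

print(score_all(list(range(10)), lambda b: np.ones(len(b)), batch_size=4))
```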
@@ -35,8 +35,9 @@ class Base(ABC):
         """
         pass

-    def transcription(self, audio, **kwargs):
-        transcription = self.client.audio.transcriptions.create(model=self.model_name, file=audio, response_format="text")
+    def transcription(self, audio_path, **kwargs):
+        audio_file = open(audio_path, "rb")
+        transcription = self.client.audio.transcriptions.create(model=self.model_name, file=audio_file)
         return transcription.text.strip(), num_tokens_from_string(transcription.text.strip())

     def audio2base64(self, audio):

@@ -50,7 +51,7 @@ class Base(ABC):
 class GPTSeq2txt(Base):
     _FACTORY_NAME = "OpenAI"

-    def __init__(self, key, model_name="whisper-1", base_url="https://api.openai.com/v1"):
+    def __init__(self, key, model_name="whisper-1", base_url="https://api.openai.com/v1", **kwargs):
         if not base_url:
             base_url = "https://api.openai.com/v1"
         self.client = OpenAI(api_key=key, base_url=base_url)

@@ -60,27 +61,38 @@ class GPTSeq2txt(Base):
 class QWenSeq2txt(Base):
     _FACTORY_NAME = "Tongyi-Qianwen"

-    def __init__(self, key, model_name="paraformer-realtime-8k-v1", **kwargs):
+    def __init__(self, key, model_name="qwen-audio-asr", **kwargs):
         import dashscope

         dashscope.api_key = key
         self.model_name = model_name

-    def transcription(self, audio, format):
-        from http import HTTPStatus
+    def transcription(self, audio_path):
+        if "paraformer" in self.model_name or "sensevoice" in self.model_name:
+            return f"**ERROR**: model {self.model_name} is not supported yet.", 0

-        from dashscope.audio.asr import Recognition
+        from dashscope import MultiModalConversation

-        recognition = Recognition(model=self.model_name, format=format, sample_rate=16000, callback=None)
-        result = recognition.call(audio)
+        audio_path = f"file://{audio_path}"
+        messages = [
+            {
+                "role": "user",
+                "content": [{"audio": audio_path}],
+            }
+        ]

-        ans = ""
-        if result.status_code == HTTPStatus.OK:
-            for sentence in result.get_sentence():
-                ans += sentence.text.decode("utf-8") + "\n"
-            return ans, num_tokens_from_string(ans)
-
-        return "**ERROR**: " + result.message, 0
+        response = None
+        full_content = ""
+        try:
+            response = MultiModalConversation.call(model="qwen-audio-asr", messages=messages, result_format="message", stream=True)
+            for response in response:
+                try:
+                    full_content += response["output"]["choices"][0]["message"].content[0]["text"]
+                except Exception:
+                    pass
+            return full_content, num_tokens_from_string(full_content)
+        except Exception as e:
+            return "**ERROR**: " + str(e), 0


 class AzureSeq2txt(Base):

@@ -212,6 +224,7 @@ class GiteeSeq2txt(Base):
         self.client = OpenAI(api_key=key, base_url=base_url)
         self.model_name = model_name

+
 class DeepInfraSeq2txt(Base):
     _FACTORY_NAME = "DeepInfra"
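The new `QWenSeq2txt.transcription` consumes a streamed response and concatenates whatever text fragments arrive, silently skipping frames that carry no text. A sketch of that accumulation loop, with a hypothetical `_Msg` class standing in for dashscope's message object:

```python
from dataclasses import dataclass, field

@dataclass
class _Msg:  # hypothetical stand-in for a dashscope message object
    content: list = field(default_factory=list)

def collect_transcript(stream):
    full = ""
    for chunk in stream:
        try:
            full += chunk["output"]["choices"][0]["message"].content[0]["text"]
        except (KeyError, IndexError, TypeError):
            continue  # tolerate keep-alive or partial frames, as the hunk does
    return full

demo = [
    {"output": {"choices": [{"message": _Msg([{"text": "hello "}])}]}},
    {"output": {"choices": [{"message": _Msg([{"text": "world"}])}]}},
]
print(collect_transcript(demo))  # -> hello world
```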
@@ -611,10 +611,6 @@ def naive_merge_with_images(texts, images, chunk_token_num=128, delimiter="\n。
             if re.match(f"^{dels}$", sub_sec):
                 continue
             add_chunk(sub_sec, image)

-    for img in images:
-        if isinstance(img, Image.Image):
-            img.close()
-
     return cks, result_images
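The removed `img.close()` loop matters because chunks can share a single `PIL.Image` object: closing it in one place invalidates every other reference. A sketch of the failure mode (on recent Pillow versions operating on a closed image raises; the exact exception varies by version):

```python
from io import BytesIO
from PIL import Image

img = Image.new("RGB", (4, 4))
shared = img          # another chunk still holds this same object
img.close()           # what the removed loop used to do

buf = BytesIO()
try:
    shared.save(buf, format="JPEG")
except Exception as e:  # recent Pillow: "Operation on closed image"
    print("save failed:", e)
```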
rag/prompts/ask_summary.md (new file)
@@ -0,0 +1,14 @@
Role: You're a smart assistant. Your name is Miss R.
Task: Summarize the information from the knowledge bases and answer the user's question.
Requirements and restrictions:
- DO NOT make things up, especially numbers.
- If the information from the knowledge bases is irrelevant to the user's question, JUST SAY: Sorry, no relevant information provided.
- Answer in markdown format.
- Answer in the language of the user's question.

### Information from knowledge bases

{{ knowledge }}

The above is information from the knowledge bases.
@@ -150,6 +150,7 @@ REFLECT = load_prompt("reflect")
 SUMMARY4MEMORY = load_prompt("summary4memory")
 RANK_MEMORY = load_prompt("rank_memory")
 META_FILTER = load_prompt("meta_filter")
+ASK_SUMMARY = load_prompt("ask_summary")

 PROMPT_JINJA_ENV = jinja2.Environment(autoescape=False, trim_blocks=True, lstrip_blocks=True)
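For context, prompts such as `ask_summary.md` are loaded as raw markdown and rendered through the Jinja environment shown above. A minimal sketch, with a literal string standing in for `load_prompt("ask_summary")`:

```python
import jinja2

PROMPT_JINJA_ENV = jinja2.Environment(autoescape=False, trim_blocks=True, lstrip_blocks=True)

# Stand-in for load_prompt("ask_summary"); the real helper reads the .md file.
ASK_SUMMARY = "### Information from knowledge bases\n\n{{ knowledge }}\n"

rendered = PROMPT_JINJA_ENV.from_string(ASK_SUMMARY).render(knowledge="chunk-1\nchunk-2")
print(rendered)
```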
rag/prompts/related_question.md (new file)
@@ -0,0 +1,55 @@
# Role
You are an AI language model assistant tasked with generating **5-10 related questions** based on a user’s original query.
These questions should help **expand the search query scope** and **improve search relevance**.

---

## Instructions

**Input:**
You are provided with a **user’s question**.

**Output:**
Generate **5-10 alternative questions** that are **related** to the original user question.
These alternatives should help retrieve a **broader range of relevant documents** from a vector database.

**Context:**
Focus on **rephrasing** the original question in different ways, ensuring the alternative questions are **diverse but still connected** to the topic of the original query.
Do **not** create overly obscure, irrelevant, or unrelated questions.

**Fallback:**
If you cannot generate any relevant alternatives, do **not** return any questions.

---

## Guidance

1. Each alternative should be **unique** but still **relevant** to the original query.
2. Keep the phrasing **clear, concise, and easy to understand**.
3. Avoid overly technical jargon or specialized terms **unless directly relevant**.
4. Ensure that each question **broadens** the search angle, **not narrows** it.

---

## Example

**Original Question:**
> What are the benefits of electric vehicles?

**Alternative Questions:**
1. How do electric vehicles impact the environment?
2. What are the advantages of owning an electric car?
3. What is the cost-effectiveness of electric vehicles?
4. How do electric vehicles compare to traditional cars in terms of fuel efficiency?
5. What are the environmental benefits of switching to electric cars?
6. How do electric vehicles help reduce carbon emissions?
7. Why are electric vehicles becoming more popular?
8. What are the long-term savings of using electric vehicles?
9. How do electric vehicles contribute to sustainability?
10. What are the key benefits of electric vehicles for consumers?

---

## Reason
Rephrasing the original query into multiple alternative questions helps the user explore **different aspects** of their search topic, improving the **quality of search results**.
These questions guide the search engine to provide a **more comprehensive set** of relevant documents.
@@ -302,7 +302,7 @@ async def build_chunks(task, progress_callback):
                 # If the image is in RGBA mode, convert it to RGB mode before saving it in JPEG format.
                 if d["image"].mode in ("RGBA", "P"):
                     converted_image = d["image"].convert("RGB")
-                    d["image"].close()  # Close original image
+                    # d["image"].close()  # Close original image
                     d["image"] = converted_image
                 try:
                     d["image"].save(output_buffer, format='JPEG')

@@ -520,7 +520,7 @@ async def run_raptor(row, chat_mdl, embd_mdl, vector_size, callback=None):
     return res, tk_count


-@timeout(60*60, 1)
+@timeout(60*60*2, 1)
 async def do_handle_task(task):
     task_id = task["id"]
     task_from_page = task["from_page"]
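The first hunk above converts RGBA/P images before the JPEG save because JPEG has no alpha channel; Pillow raises an OSError when asked to write RGBA as JPEG. A minimal sketch of the conversion, rebinding the variable rather than closing the original:

```python
from io import BytesIO
from PIL import Image

img = Image.new("RGBA", (4, 4), (255, 0, 0, 128))
buf = BytesIO()
if img.mode in ("RGBA", "P"):
    # Rebind instead of closing the original, which may still be shared
    # by another chunk (see the commented-out close() in the hunk above).
    img = img.convert("RGB")
img.save(buf, format="JPEG")
print(len(buf.getvalue()), "bytes of JPEG")
```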
@@ -1,6 +1,6 @@
 [project]
 name = "ragflow-sdk"
-version = "0.20.1"
+version = "0.20.3"
 description = "Python client sdk of [RAGFlow](https://github.com/infiniflow/ragflow). RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding."
 authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
 license = { text = "Apache License, Version 2.0" }
@@ -47,7 +47,7 @@ class Chat(Base):
         self.variables = [{"key": "knowledge", "optional": True}]
         self.rerank_model = ""
         self.empty_response = None
-        self.opener = "Hi! I'm your assistant, what can I do for you?"
+        self.opener = "Hi! I'm your assistant. What can I do for you?"
         self.show_quote = True
         self.prompt = (
             "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. "

@@ -143,7 +143,7 @@ class RAGFlow:
             },
         )
         if prompt.opener is None:
-            prompt.opener = "Hi! I'm your assistant, what can I do for you?"
+            prompt.opener = "Hi! I'm your assistant. What can I do for you?"
         if prompt.prompt is None:
             prompt.prompt = (
                 "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. "
@@ -221,7 +221,7 @@ class TestChatAssistantCreate:
         assert res["data"]["prompt"]["variables"] == [{"key": "knowledge", "optional": False}]
         assert res["data"]["prompt"]["rerank_model"] == ""
         assert res["data"]["prompt"]["empty_response"] == "Sorry! No relevant content was found in the knowledge base!"
-        assert res["data"]["prompt"]["opener"] == "Hi! I'm your assistant, what can I do for you?"
+        assert res["data"]["prompt"]["opener"] == "Hi! I'm your assistant. What can I do for you?"
         assert res["data"]["prompt"]["show_quote"] is True
         assert (
             res["data"]["prompt"]["prompt"]

@@ -218,7 +218,7 @@ class TestChatAssistantUpdate:
         assert res["data"]["prompt"][0]["variables"] == [{"key": "knowledge", "optional": False}]
         assert res["data"]["prompt"][0]["rerank_model"] == ""
         assert res["data"]["prompt"][0]["empty_response"] == "Sorry! No relevant content was found in the knowledge base!"
-        assert res["data"]["prompt"][0]["opener"] == "Hi! I'm your assistant, what can I do for you?"
+        assert res["data"]["prompt"][0]["opener"] == "Hi! I'm your assistant. What can I do for you?"
         assert res["data"]["prompt"][0]["show_quote"] is True
         assert (
             res["data"]["prompt"][0]["prompt"]
sdk/python/uv.lock (generated)
@@ -342,7 +342,7 @@ wheels = [

 [[package]]
 name = "ragflow-sdk"
-version = "0.20.1"
+version = "0.20.3"
 source = { virtual = "." }
 dependencies = [
     { name = "beartype" },
@@ -222,7 +222,7 @@ class TestChatAssistantCreate:
         assert res["data"]["prompt"]["variables"] == [{"key": "knowledge", "optional": False}]
         assert res["data"]["prompt"]["rerank_model"] == ""
         assert res["data"]["prompt"]["empty_response"] == "Sorry! No relevant content was found in the knowledge base!"
-        assert res["data"]["prompt"]["opener"] == "Hi! I'm your assistant, what can I do for you?"
+        assert res["data"]["prompt"]["opener"] == "Hi! I'm your assistant. What can I do for you?"
         assert res["data"]["prompt"]["show_quote"] is True
         assert (
             res["data"]["prompt"]["prompt"]

@@ -219,7 +219,7 @@ class TestChatAssistantUpdate:
         assert res["data"]["prompt"][0]["variables"] == [{"key": "knowledge", "optional": False}]
         assert res["data"]["prompt"][0]["rerank_model"] == ""
         assert res["data"]["prompt"][0]["empty_response"] == "Sorry! No relevant content was found in the knowledge base!"
-        assert res["data"]["prompt"][0]["opener"] == "Hi! I'm your assistant, what can I do for you?"
+        assert res["data"]["prompt"][0]["opener"] == "Hi! I'm your assistant. What can I do for you?"
         assert res["data"]["prompt"][0]["show_quote"] is True
         assert (
             res["data"]["prompt"][0]["prompt"]
@@ -245,4 +245,4 @@ class TestUpdatedChunk:
         delete_documents(HttpApiAuth, dataset_id, {"ids": [document_id]})
         res = update_chunk(HttpApiAuth, dataset_id, document_id, chunk_ids[0])
         assert res["code"] == 102
-        assert res["message"] == f"You don't own the document {document_id}."
+        assert res["message"] in [f"You don't own the document {document_id}.", f"Can't find this chunk {chunk_ids[0]}"]
@@ -207,7 +207,7 @@ class TestChatAssistantCreate:
         assert attrgetter("variables")(chat_assistant.prompt) == [{"key": "knowledge", "optional": False}]
         assert attrgetter("rerank_model")(chat_assistant.prompt) == ""
         assert attrgetter("empty_response")(chat_assistant.prompt) == "Sorry! No relevant content was found in the knowledge base!"
-        assert attrgetter("opener")(chat_assistant.prompt) == "Hi! I'm your assistant, what can I do for you?"
+        assert attrgetter("opener")(chat_assistant.prompt) == "Hi! I'm your assistant. What can I do for you?"
         assert attrgetter("show_quote")(chat_assistant.prompt) is True
         assert (
             attrgetter("prompt")(chat_assistant.prompt)

@@ -200,7 +200,7 @@ class TestChatAssistantUpdate:
             "variables": [{"key": "knowledge", "optional": False}],
             "rerank_model": "",
             "empty_response": "Sorry! No relevant content was found in the knowledge base!",
-            "opener": "Hi! I'm your assistant, what can I do for you?",
+            "opener": "Hi! I'm your assistant. What can I do for you?",
             "show_quote": True,
             "prompt": 'You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.\n Here is the knowledge base:\n {knowledge}\n The above is the knowledge base.',
         },
@@ -151,4 +151,4 @@ class TestUpdatedChunk:

         with pytest.raises(Exception) as excinfo:
             chunks[0].update({})
-        assert f"You don't own the document {chunks[0].document_id}" in str(excinfo.value), str(excinfo.value)
+        assert str(excinfo.value) in [f"You don't own the document {chunks[0].document_id}", f"Can't find this chunk {chunks[0].id}"], str(excinfo.value)
uv.lock (generated)
@@ -1651,6 +1651,19 @@ wheels = [
     { url = "https://mirrors.aliyun.com/pypi/packages/59/f5/67e9cc5c2036f58115f9fe0f00d203cf6780c3ff8ae0e705e7a9d9e8ff9e/Flask_Login-0.6.3-py3-none-any.whl", hash = "sha256:849b25b82a436bf830a054e74214074af59097171562ab10bfa999e6b78aae5d" },
 ]

+[[package]]
+name = "flask-mail"
+version = "0.10.0"
+source = { registry = "https://mirrors.aliyun.com/pypi/simple" }
+dependencies = [
+    { name = "blinker" },
+    { name = "flask" },
+]
+sdist = { url = "https://mirrors.aliyun.com/pypi/packages/ba/29/e92dc84c675d1e8d260d5768eb3fb65c70cbd33addecf424187587bee862/flask_mail-0.10.0.tar.gz", hash = "sha256:44083e7b02bbcce792209c06252f8569dd5a325a7aaa76afe7330422bd97881d" }
+wheels = [
+    { url = "https://mirrors.aliyun.com/pypi/packages/e4/c0/a81083da779f482494d49195d8b6c9fde21072558253e4a9fb2ec969c3c1/flask_mail-0.10.0-py3-none-any.whl", hash = "sha256:a451e490931bb3441d9b11ebab6812a16bfa81855792ae1bf9c1e1e22c4e51e7" },
+]
+
 [[package]]
 name = "flask-session"
 version = "0.8.0"
@@ -2162,37 +2175,37 @@ wheels = [

 [[package]]
 name = "greenlet"
-version = "3.2.3"
+version = "3.2.4"
 source = { registry = "https://mirrors.aliyun.com/pypi/simple" }
-sdist = { url = "https://mirrors.aliyun.com/pypi/packages/c9/92/bb85bd6e80148a4d2e0c59f7c0c2891029f8fd510183afc7d8d2feeed9b6/greenlet-3.2.3.tar.gz", hash = "sha256:8b0dd8ae4c0d6f5e54ee55ba935eeb3d735a9b58a8a1e5b5cbab64e01a39f365" }
+sdist = { url = "https://mirrors.aliyun.com/pypi/packages/03/b8/704d753a5a45507a7aab61f18db9509302ed3d0a27ac7e0359ec2905b1a6/greenlet-3.2.4.tar.gz", hash = "sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d" }
 wheels = [
-    { url = "https://mirrors.aliyun.com/pypi/packages/92/db/b4c12cff13ebac2786f4f217f06588bccd8b53d260453404ef22b121fc3a/greenlet-3.2.3-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:1afd685acd5597349ee6d7a88a8bec83ce13c106ac78c196ee9dde7c04fe87be" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/52/61/75b4abd8147f13f70986df2801bf93735c1bd87ea780d70e3b3ecda8c165/greenlet-3.2.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:761917cac215c61e9dc7324b2606107b3b292a8349bdebb31503ab4de3f559ac" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/35/aa/6894ae299d059d26254779a5088632874b80ee8cf89a88bca00b0709d22f/greenlet-3.2.3-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:a433dbc54e4a37e4fff90ef34f25a8c00aed99b06856f0119dcf09fbafa16392" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/30/64/e01a8261d13c47f3c082519a5e9dbf9e143cc0498ed20c911d04e54d526c/greenlet-3.2.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:72e77ed69312bab0434d7292316d5afd6896192ac4327d44f3d613ecb85b037c" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/47/48/ff9ca8ba9772d083a4f5221f7b4f0ebe8978131a9ae0909cf202f94cd879/greenlet-3.2.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:68671180e3849b963649254a882cd544a3c75bfcd2c527346ad8bb53494444db" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/e9/45/626e974948713bc15775b696adb3eb0bd708bec267d6d2d5c47bb47a6119/greenlet-3.2.3-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:49c8cfb18fb419b3d08e011228ef8a25882397f3a859b9fe1436946140b6756b" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/b1/8e/8b6f42c67d5df7db35b8c55c9a850ea045219741bb14416255616808c690/greenlet-3.2.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:efc6dc8a792243c31f2f5674b670b3a95d46fa1c6a912b8e310d6f542e7b0712" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/05/46/ab58828217349500a7ebb81159d52ca357da747ff1797c29c6023d79d798/greenlet-3.2.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:731e154aba8e757aedd0781d4b240f1225b075b4409f1bb83b05ff410582cf00" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/68/7f/d1b537be5080721c0f0089a8447d4ef72839039cdb743bdd8ffd23046e9a/greenlet-3.2.3-cp310-cp310-win_amd64.whl", hash = "sha256:96c20252c2f792defe9a115d3287e14811036d51e78b3aaddbee23b69b216302" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/fc/2e/d4fcb2978f826358b673f779f78fa8a32ee37df11920dc2bb5589cbeecef/greenlet-3.2.3-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/16/24/929f853e0202130e4fe163bc1d05a671ce8dcd604f790e14896adac43a52/greenlet-3.2.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/d1/b2/0320715eb61ae70c25ceca2f1d5ae620477d246692d9cc284c13242ec31c/greenlet-3.2.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/bd/49/445fd1a210f4747fedf77615d941444349c6a3a4a1135bba9701337cd966/greenlet-3.2.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/7e/c8/ca19760cf6eae75fa8dc32b487e963d863b3ee04a7637da77b616703bc37/greenlet-3.2.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/65/89/77acf9e3da38e9bcfca881e43b02ed467c1dedc387021fc4d9bd9928afb8/greenlet-3.2.3-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/97/c6/ae244d7c95b23b7130136e07a9cc5aadd60d59b5951180dc7dc7e8edaba7/greenlet-3.2.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/89/5f/b16dec0cbfd3070658e0d744487919740c6d45eb90946f6787689a7efbce/greenlet-3.2.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:751261fc5ad7b6705f5f76726567375bb2104a059454e0226e1eef6c756748ba" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/66/77/d48fb441b5a71125bcac042fc5b1494c806ccb9a1432ecaa421e72157f77/greenlet-3.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:83a8761c75312361aa2b5b903b79da97f13f556164a7dd2d5448655425bd4c34" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/f3/94/ad0d435f7c48debe960c53b8f60fb41c2026b1d0fa4a99a1cb17c3461e09/greenlet-3.2.3-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:25ad29caed5783d4bd7a85c9251c651696164622494c00802a139c00d639242d" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/93/5d/7c27cf4d003d6e77749d299c7c8f5fd50b4f251647b5c2e97e1f20da0ab5/greenlet-3.2.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:88cd97bf37fe24a6710ec6a3a7799f3f81d9cd33317dcf565ff9950c83f55e0b" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/c6/7e/807e1e9be07a125bb4c169144937910bf59b9d2f6d931578e57f0bce0ae2/greenlet-3.2.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:baeedccca94880d2f5666b4fa16fc20ef50ba1ee353ee2d7092b383a243b0b0d" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/9d/ab/158c1a4ea1068bdbc78dba5a3de57e4c7aeb4e7fa034320ea94c688bfb61/greenlet-3.2.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:be52af4b6292baecfa0f397f3edb3c6092ce071b499dd6fe292c9ac9f2c8f264" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/cc/0d/93729068259b550d6a0288da4ff72b86ed05626eaf1eb7c0d3466a2571de/greenlet-3.2.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0cc73378150b8b78b0c9fe2ce56e166695e67478550769536a6742dca3651688" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/f6/f6/c82ac1851c60851302d8581680573245c8fc300253fc1ff741ae74a6c24d/greenlet-3.2.3-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:706d016a03e78df129f68c4c9b4c4f963f7d73534e48a24f5f5a7101ed13dbbb" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/98/82/d022cf25ca39cf1200650fc58c52af32c90f80479c25d1cbf57980ec3065/greenlet-3.2.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:419e60f80709510c343c57b4bb5a339d8767bf9aef9b8ce43f4f143240f88b7c" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/f5/e1/25297f70717abe8104c20ecf7af0a5b82d2f5a980eb1ac79f65654799f9f/greenlet-3.2.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:93d48533fade144203816783373f27a97e4193177ebaaf0fc396db19e5d61163" },
-    { url = "https://mirrors.aliyun.com/pypi/packages/1f/8f/8f9e56c5e82eb2c26e8cde787962e66494312dc8cb261c460e1f3a9c88bc/greenlet-3.2.3-cp312-cp312-win_amd64.whl", hash = "sha256:7454d37c740bb27bdeddfc3f358f26956a07d5220818ceb467a483197d84f849" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/7d/ed/6bfa4109fcb23a58819600392564fea69cdc6551ffd5e69ccf1d52a40cbc/greenlet-3.2.4-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:8c68325b0d0acf8d91dde4e6f930967dd52a5302cd4062932a6b2e7c2969f47c" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/2a/fc/102ec1a2fc015b3a7652abab7acf3541d58c04d3d17a8d3d6a44adae1eb1/greenlet-3.2.4-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:94385f101946790ae13da500603491f04a76b6e4c059dab271b3ce2e283b2590" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/c5/26/80383131d55a4ac0fb08d71660fd77e7660b9db6bdb4e8884f46d9f2cc04/greenlet-3.2.4-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f10fd42b5ee276335863712fa3da6608e93f70629c631bf77145021600abc23c" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/9f/7c/e7833dbcd8f376f3326bd728c845d31dcde4c84268d3921afcae77d90d08/greenlet-3.2.4-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c8c9e331e58180d0d83c5b7999255721b725913ff6bc6cf39fa2a45841a4fd4b" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/e9/49/547b93b7c0428ede7b3f309bc965986874759f7d89e4e04aeddbc9699acb/greenlet-3.2.4-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:58b97143c9cc7b86fc458f215bd0932f1757ce649e05b640fea2e79b54cedb31" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/7f/91/ae2eb6b7979e2f9b035a9f612cf70f1bf54aad4e1d125129bef1eae96f19/greenlet-3.2.4-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c2ca18a03a8cfb5b25bc1cbe20f3d9a4c80d8c3b13ba3df49ac3961af0b1018d" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/f7/85/433de0c9c0252b22b16d413c9407e6cb3b41df7389afc366ca204dbc1393/greenlet-3.2.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fe0a28a7b952a21e2c062cd5756d34354117796c6d9215a87f55e38d15402c5" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/a1/8d/88f3ebd2bc96bf7747093696f4335a0a8a4c5acfcf1b757717c0d2474ba3/greenlet-3.2.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8854167e06950ca75b898b104b63cc646573aa5fef1353d4508ecdd1ee76254f" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/d6/6f/b60b0291d9623c496638c582297ead61f43c4b72eef5e9c926ef4565ec13/greenlet-3.2.4-cp310-cp310-win_amd64.whl", hash = "sha256:73f49b5368b5359d04e18d15828eecc1806033db5233397748f4ca813ff1056c" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/a4/de/f28ced0a67749cac23fecb02b694f6473f47686dff6afaa211d186e2ef9c/greenlet-3.2.4-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/09/16/2c3792cba130000bf2a31c5272999113f4764fd9d874fb257ff588ac779a/greenlet-3.2.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/ae/8f/95d48d7e3d433e6dae5b1682e4292242a53f22df82e6d3dda81b1701a960/greenlet-3.2.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:94abf90142c2a18151632371140b3dba4dee031633fe614cb592dbb6c9e17bc3" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/d5/5e/405965351aef8c76b8ef7ad370e5da58d57ef6068df197548b015464001a/greenlet-3.2.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:4d1378601b85e2e5171b99be8d2dc85f594c79967599328f95c1dc1a40f1c633" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/25/5d/382753b52006ce0218297ec1b628e048c4e64b155379331f25a7316eb749/greenlet-3.2.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0db5594dce18db94f7d1650d7489909b57afde4c580806b8d9203b6e79cdc079" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/1f/8e/abdd3f14d735b2929290a018ecf133c901be4874b858dd1c604b9319f064/greenlet-3.2.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/5d/65/deb2a69c3e5996439b0176f6651e0052542bb6c8f8ec2e3fba97c9768805/greenlet-3.2.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/3f/cc/b07000438a29ac5cfb2194bfc128151d52f333cee74dd7dfe3fb733fc16c/greenlet-3.2.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/d8/0f/30aef242fcab550b0b3520b8e3561156857c94288f0332a79928c31a52cf/greenlet-3.2.4-cp311-cp311-win_amd64.whl", hash = "sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/44/69/9b804adb5fd0671f367781560eb5eb586c4d495277c93bde4307b9e28068/greenlet-3.2.4-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:3b67ca49f54cede0186854a008109d6ee71f66bd57bb36abd6d0a0267b540cdd" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/46/e9/d2a80c99f19a153eff70bc451ab78615583b8dac0754cfb942223d2c1a0d/greenlet-3.2.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ddf9164e7a5b08e9d22511526865780a576f19ddd00d62f8a665949327fde8bb" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/3b/16/035dcfcc48715ccd345f3a93183267167cdd162ad123cd93067d86f27ce4/greenlet-3.2.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f28588772bb5fb869a8eb331374ec06f24a83a9c25bfa1f38b6993afe9c1e968" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/31/da/0386695eef69ffae1ad726881571dfe28b41970173947e7c558d9998de0f/greenlet-3.2.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:5c9320971821a7cb77cfab8d956fa8e39cd07ca44b6070db358ceb7f8797c8c9" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/68/88/69bf19fd4dc19981928ceacbc5fd4bb6bc2215d53199e367832e98d1d8fe/greenlet-3.2.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c60a6d84229b271d44b70fb6e5fa23781abb5d742af7b808ae3f6efd7c9c60f6" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/19/0d/6660d55f7373b2ff8152401a83e02084956da23ae58cddbfb0b330978fe9/greenlet-3.2.4-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b3812d8d0c9579967815af437d96623f45c0f2ae5f04e366de62a12d83a8fb0" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/8e/1a/c953fdedd22d81ee4629afbb38d2f9d71e37d23caace44775a3a969147d4/greenlet-3.2.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:abbf57b5a870d30c4675928c37278493044d7c14378350b3aa5d484fa65575f0" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/3f/c7/12381b18e21aef2c6bd3a636da1088b888b97b7a0362fac2e4de92405f97/greenlet-3.2.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:20fb936b4652b6e307b8f347665e2c615540d4b42b3b4c8a321d8286da7e520f" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/e9/08/b0814846b79399e585f974bbeebf5580fbe59e258ea7be64d9dfb253c84f/greenlet-3.2.4-cp312-cp312-win_amd64.whl", hash = "sha256:a7d4e128405eea3814a12cc2605e0e6aedb4035bf32697f72deca74de4105e02" },
 ]

 [[package]]
@@ -2409,6 +2422,11 @@ wheels = [
     { url = "https://mirrors.aliyun.com/pypi/packages/56/95/9377bcb415797e44274b51d46e3249eba641711cf3348050f76ee7b15ffc/httpx-0.27.2-py3-none-any.whl", hash = "sha256:7bb2708e112d8fdd7829cd4243970f0c223274051cb35ee80c03301ee29a3df0" },
 ]

+[package.optional-dependencies]
+socks = [
+    { name = "socksio" },
+]
+
 [[package]]
 name = "httpx-sse"
 version = "0.4.1"
@@ -2877,7 +2895,7 @@ wheels = [

 [[package]]
 name = "litellm"
-version = "1.75.0"
+version = "1.75.5.post1"
 source = { registry = "https://mirrors.aliyun.com/pypi/simple" }
 dependencies = [
     { name = "aiohttp" },

@@ -2892,9 +2910,9 @@ dependencies = [
     { name = "tiktoken" },
     { name = "tokenizers" },
 ]
-sdist = { url = "https://mirrors.aliyun.com/pypi/packages/1b/28/50837cb0246c42a8caac45610572883de7f478543cf4d143e84f099c0234/litellm-1.75.0.tar.gz", hash = "sha256:ec7fbfe79e1b9cd4a2b36ca9e71e71959d8fc43305b222e5f257aced1a0d1d63" }
+sdist = { url = "https://mirrors.aliyun.com/pypi/packages/10/97/6091a020895102a20f1da204ebe68c1293123555476b38e749f95ba5981c/litellm-1.75.5.post1.tar.gz", hash = "sha256:e40a0e4b25032755dc5df7f02742abe9e3b8836236363f605f3bdd363cb5a0d0" }
 wheels = [
-    { url = "https://mirrors.aliyun.com/pypi/packages/db/43/e10905870d42e927de3b095a9248f2764156c7eb45ec172d72be35cd2bb4/litellm-1.75.0-py3-none-any.whl", hash = "sha256:1657472f37d291b366050dd2035e3640eebd96142d6fa0f935ceb290a0e1d5ad" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/8f/76/780f68a3b26227136a5147c76860aacedcae9bf1b7afc1c991ec9cad11bc/litellm-1.75.5.post1-py3-none-any.whl", hash = "sha256:1c72809a9c8f6e132ad06eb7e628f674c775b0ce6bfb58cbd37e8b585d929cb7" },
 ]

 [[package]]
@@ -3803,7 +3821,7 @@ wheels = [

 [[package]]
 name = "openai"
-version = "1.99.1"
+version = "1.99.9"
 source = { registry = "https://mirrors.aliyun.com/pypi/simple" }
 dependencies = [
     { name = "anyio" },

@@ -3815,9 +3833,9 @@ dependencies = [
     { name = "tqdm" },
     { name = "typing-extensions" },
 ]
-sdist = { url = "https://mirrors.aliyun.com/pypi/packages/03/30/f0fb7907a77e733bb801c7bdcde903500b31215141cdb261f04421e6fbec/openai-1.99.1.tar.gz", hash = "sha256:2c9d8e498c298f51bb94bcac724257a3a6cac6139ccdfc1186c6708f7a93120f" }
+sdist = { url = "https://mirrors.aliyun.com/pypi/packages/8a/d2/ef89c6f3f36b13b06e271d3cc984ddd2f62508a0972c1cbcc8485a6644ff/openai-1.99.9.tar.gz", hash = "sha256:f2082d155b1ad22e83247c3de3958eb4255b20ccf4a1de2e6681b6957b554e92" }
 wheels = [
-    { url = "https://mirrors.aliyun.com/pypi/packages/54/15/9c85154ffd283abfc43309ff3aaa63c3fd02f7767ee684e73670f6c5ade2/openai-1.99.1-py3-none-any.whl", hash = "sha256:8eeccc69e0ece1357b51ca0d9fb21324afee09b20c3e5b547d02445ca18a4e03" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/e8/fb/df274ca10698ee77b07bff952f302ea627cc12dac6b85289485dd77db6de/openai-1.99.9-py3-none-any.whl", hash = "sha256:9dbcdb425553bae1ac5d947147bebbd630d91bbfc7788394d4c4f3a35682ab3a" },
 ]

 [[package]]
@@ -4920,14 +4938,14 @@ wheels = [

 [[package]]
 name = "pypdf"
-version = "5.9.0"
+version = "6.0.0"
 source = { registry = "https://mirrors.aliyun.com/pypi/simple" }
 dependencies = [
     { name = "typing-extensions", marker = "python_full_version < '3.11'" },
 ]
-sdist = { url = "https://mirrors.aliyun.com/pypi/packages/89/3a/584b97a228950ed85aec97c811c68473d9b8d149e6a8c155668287cf1a28/pypdf-5.9.0.tar.gz", hash = "sha256:30f67a614d558e495e1fbb157ba58c1de91ffc1718f5e0dfeb82a029233890a1" }
+sdist = { url = "https://mirrors.aliyun.com/pypi/packages/20/ac/a300a03c3b34967c050677ccb16e7a4b65607ee5df9d51e8b6d713de4098/pypdf-6.0.0.tar.gz", hash = "sha256:282a99d2cc94a84a3a3159f0d9358c0af53f85b4d28d76ea38b96e9e5ac2a08d" }
 wheels = [
-    { url = "https://mirrors.aliyun.com/pypi/packages/48/d9/6cff57c80a6963e7dd183bf09e9f21604a77716644b1e580e97b259f7612/pypdf-5.9.0-py3-none-any.whl", hash = "sha256:be10a4c54202f46d9daceaa8788be07aa8cd5ea8c25c529c50dd509206382c35" },
+    { url = "https://mirrors.aliyun.com/pypi/packages/2c/83/2cacc506eb322bb31b747bc06ccb82cc9aa03e19ee9c1245e538e49d52be/pypdf-6.0.0-py3-none-any.whl", hash = "sha256:56ea60100ce9f11fc3eec4f359da15e9aec3821b036c1f06d2b660d35683abb8" },
 ]

 [[package]]
@@ -5250,7 +5268,7 @@ wheels = [

 [[package]]
 name = "ragflow"
-version = "0.20.1"
+version = "0.20.3"
 source = { virtual = "." }
 dependencies = [
     { name = "akshare" },

@@ -5287,6 +5305,7 @@ dependencies = [
     { name = "flask" },
     { name = "flask-cors" },
     { name = "flask-login" },
+    { name = "flask-mail" },
     { name = "flask-session" },
     { name = "google-generativeai" },
     { name = "google-search-results" },

@@ -5294,7 +5313,7 @@ dependencies = [
     { name = "groq" },
     { name = "hanziconv" },
     { name = "html-text" },
-    { name = "httpx" },
+    { name = "httpx", extra = ["socks"] },
     { name = "huggingface-hub" },
     { name = "infinity-emb" },
     { name = "infinity-sdk" },

@@ -5441,6 +5460,7 @@ requires-dist = [
     { name = "flask", specifier = "==3.0.3" },
     { name = "flask-cors", specifier = "==5.0.0" },
     { name = "flask-login", specifier = "==0.6.3" },
+    { name = "flask-mail", specifier = ">=0.10.0" },
     { name = "flask-session", specifier = "==0.8.0" },
     { name = "google-generativeai", specifier = ">=0.8.1,<0.9.0" },
     { name = "google-search-results", specifier = "==2.4.2" },

@@ -5448,7 +5468,7 @@ requires-dist = [
     { name = "groq", specifier = "==0.9.0" },
     { name = "hanziconv", specifier = "==0.3.2" },
     { name = "html-text", specifier = "==0.6.2" },
-    { name = "httpx", specifier = "==0.27.2" },
+    { name = "httpx", extras = ["socks"], specifier = "==0.27.2" },
     { name = "huggingface-hub", specifier = ">=0.25.0,<0.26.0" },
     { name = "infinity-emb", specifier = ">=0.0.66,<0.0.67" },
     { name = "infinity-sdk", specifier = "==0.6.0.dev4" },

@@ -5486,7 +5506,7 @@ requires-dist = [
     { name = "pyicu", specifier = ">=2.13.1,<3.0.0" },
     { name = "pymysql", specifier = ">=1.1.1,<2.0.0" },
     { name = "pyodbc", specifier = ">=5.2.0,<6.0.0" },
-    { name = "pypdf", specifier = ">=5.0.0,<6.0.0" },
+    { name = "pypdf", specifier = "==6.0.0" },
     { name = "pypdf2", specifier = ">=3.0.1,<4.0.0" },
     { name = "python-calamine", specifier = ">=0.4.0" },
     { name = "python-dateutil", specifier = "==2.8.2" },

@@ -6201,6 +6221,15 @@ wheels = [
     { url = "https://mirrors.aliyun.com/pypi/packages/ed/dc/c02e01294f7265e63a7315fe086dd1df7dacb9f840a804da846b96d01b96/snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a" },
 ]

+[[package]]
+name = "socksio"
+version = "1.0.0"
+source = { registry = "https://mirrors.aliyun.com/pypi/simple" }
+sdist = { url = "https://mirrors.aliyun.com/pypi/packages/f8/5c/48a7d9495be3d1c651198fd99dbb6ce190e2274d0f28b9051307bdec6b85/socksio-1.0.0.tar.gz", hash = "sha256:f88beb3da5b5c38b9890469de67d0cb0f9d494b78b106ca1845f96c10b91c4ac" }
+wheels = [
+    { url = "https://mirrors.aliyun.com/pypi/packages/37/c3/6eeb6034408dac0fa653d126c9204ade96b819c936e136c5e8a6897eee9c/socksio-1.0.0-py3-none-any.whl", hash = "sha256:95dc1f15f9b34e8d7b16f06d74b8ccf48f609af32ab33c608d08761c5dcbb1f3" },
+]
+
 [[package]]
 name = "sortedcontainers"
 version = "2.4.0"
@@ -317,7 +317,11 @@ export function ChunkMethodDialog({
           </FormContainer>
         )}
         {showGraphRagItems(selectedTag as DocumentParserType) &&
-          useGraphRag && <UseGraphRagFormField></UseGraphRagFormField>}
+          useGraphRag && (
+            <FormContainer>
+              <UseGraphRagFormField></UseGraphRagFormField>
+            </FormContainer>
+          )}
         {showEntityTypes && <EntityTypesFormField></EntityTypesFormField>}
       </form>
     </Form>
@@ -50,10 +50,10 @@ export function DelimiterFormField() {
   }
   return (
     <FormItem className=" items-center space-y-0 ">
-      <div className="flex items-center">
+      <div className="flex items-center gap-1">
         <FormLabel
           tooltip={t('knowledgeDetails.delimiterTip')}
-          className="text-sm text-muted-foreground whitespace-nowrap w-1/4"
+          className="text-sm text-muted-foreground whitespace-break-spaces w-1/4"
         >
           {t('knowledgeDetails.delimiter')}
         </FormLabel>
web/src/components/embed-container.tsx (new file)
@@ -0,0 +1,48 @@
import { useFetchAppConf } from '@/hooks/logic-hooks';
import { RefreshCcw } from 'lucide-react';
import { PropsWithChildren } from 'react';
import { RAGFlowAvatar } from './ragflow-avatar';
import { Button } from './ui/button';

type EmbedContainerProps = {
  title: string;
  avatar?: string;
  handleReset?(): void;
} & PropsWithChildren;

export function EmbedContainer({
  title,
  avatar,
  children,
  handleReset,
}: EmbedContainerProps) {
  const appConf = useFetchAppConf();

  return (
    <section className="h-[100vh] flex justify-center items-center">
      <div className="w-40 flex gap-2 absolute left-3 top-12 items-center">
        <img src="/logo.svg" alt="" />
        <span className="text-2xl font-bold">{appConf.appName}</span>
      </div>
      <div className=" w-[80vw] border rounded-lg">
        <div className="flex justify-between items-center border-b p-3">
          <div className="flex gap-2 items-center">
            <RAGFlowAvatar avatar={avatar} name={title} isPerson />
            <div className="text-xl text-foreground">{title}</div>
          </div>
          <Button
            variant={'secondary'}
            className="text-sm text-foreground cursor-pointer"
            onClick={handleReset}
          >
            <div className="flex gap-1 items-center">
              <RefreshCcw size={14} />
              <span className="text-lg ">Reset</span>
            </div>
          </Button>
        </div>
        {children}
      </div>
    </section>
  );
}
@@ -23,6 +23,7 @@ import {
 } from '@/constants/common';
 import { useTranslate } from '@/hooks/common-hooks';
 import { IModalProps } from '@/interfaces/common';
+import { Routes } from '@/routes';
 import { zodResolver } from '@hookform/resolvers/zod';
 import { memo, useCallback, useMemo } from 'react';
 import { useForm, useWatch } from 'react-hook-form';

@@ -68,7 +69,7 @@ function EmbedDialog({

   const generateIframeSrc = useCallback(() => {
     const { visibleAvatar, locale } = values;
-    let src = `${location.origin}/next-chat/share?shared_id=${token}&from=${from}&auth=${beta}`;
+    let src = `${location.origin}${from === SharedFrom.Agent ? Routes.AgentShare : Routes.ChatShare}?shared_id=${token}&from=${from}&auth=${beta}`;
     if (visibleAvatar) {
       src += '&visible_avatar=1';
     }
web/src/components/embed-dialog/use-show-embed-dialog.ts (new file)
@@ -0,0 +1,87 @@
import { useSetModalState, useTranslate } from '@/hooks/common-hooks';
import { useFetchManualSystemTokenList } from '@/hooks/user-setting-hooks';
import { useCallback } from 'react';
import message from '../ui/message';

export const useShowTokenEmptyError = () => {
  const { t } = useTranslate('chat');

  const showTokenEmptyError = useCallback(() => {
    message.error(t('tokenError'));
  }, [t]);
  return { showTokenEmptyError };
};

export const useShowBetaEmptyError = () => {
  const { t } = useTranslate('chat');

  const showBetaEmptyError = useCallback(() => {
    message.error(t('betaError'));
  }, [t]);
  return { showBetaEmptyError };
};

export const useFetchTokenListBeforeOtherStep = () => {
  const { showTokenEmptyError } = useShowTokenEmptyError();
  const { showBetaEmptyError } = useShowBetaEmptyError();

  const { data: tokenList, fetchSystemTokenList } =
    useFetchManualSystemTokenList();

  let token = '',
    beta = '';

  if (Array.isArray(tokenList) && tokenList.length > 0) {
    token = tokenList[0].token;
    beta = tokenList[0].beta;
  }

  token =
    Array.isArray(tokenList) && tokenList.length > 0 ? tokenList[0].token : '';

  const handleOperate = useCallback(async () => {
    const ret = await fetchSystemTokenList();
    const list = ret;
    if (Array.isArray(list) && list.length > 0) {
      if (!list[0].beta) {
        showBetaEmptyError();
        return false;
      }
      return list[0]?.token;
    } else {
      showTokenEmptyError();
      return false;
    }
  }, [fetchSystemTokenList, showBetaEmptyError, showTokenEmptyError]);

  return {
    token,
    beta,
    handleOperate,
  };
};

export const useShowEmbedModal = () => {
  const {
    visible: embedVisible,
    hideModal: hideEmbedModal,
    showModal: showEmbedModal,
  } = useSetModalState();

  const { handleOperate, token, beta } = useFetchTokenListBeforeOtherStep();

  const handleShowEmbedModal = useCallback(async () => {
    const succeed = await handleOperate();
    if (succeed) {
      showEmbedModal();
    }
  }, [handleOperate, showEmbedModal]);

  return {
    showEmbedModal: handleShowEmbedModal,
    hideEmbedModal,
    embedVisible,
    embedToken: token,
    beta,
  };
};
@@ -25,10 +25,10 @@ export function ExcelToHtmlFormField() {

   return (
     <FormItem defaultChecked={false} className=" items-center space-y-0 ">
-      <div className="flex items-center">
+      <div className="flex items-center gap-1">
         <FormLabel
           tooltip={t('html4excelTip')}
-          className="text-sm text-muted-foreground whitespace-nowrap w-1/4"
+          className="text-sm text-muted-foreground whitespace-break-spaces w-1/4"
         >
           {t('html4excel')}
         </FormLabel>
web/src/components/home-card.tsx (new file)
@@ -0,0 +1,54 @@
import { RAGFlowAvatar } from '@/components/ragflow-avatar';
import { Card, CardContent } from '@/components/ui/card';
import { formatDate } from '@/utils/date';

interface IProps {
  data: {
    name: string;
    description?: string;
    avatar?: string;
    update_time?: string | number;
  };
  onClick?: () => void;
  moreDropdown: React.ReactNode;
}
export function HomeCard({ data, onClick, moreDropdown }: IProps) {
  return (
    <Card
      className="bg-bg-card border-colors-outline-neutral-standard"
      onClick={() => {
        // navigateToSearch(data?.id);
        onClick?.();
      }}
    >
      <CardContent className="p-4 flex gap-2 items-start group h-full">
        <div className="flex justify-between mb-4">
          <RAGFlowAvatar
            className="w-[32px] h-[32px]"
            avatar={data.avatar}
            name={data.name}
          />
        </div>
        <div className="flex flex-col justify-between gap-1 flex-1 h-full w-[calc(100%-50px)]">
          <section className="flex justify-between">
            <div className="text-[20px] font-bold w-80% leading-5">
              {data.name}
            </div>
            {moreDropdown}
          </section>

          <section className="flex flex-col gap-1 mt-1">
            <div className="whitespace-nowrap overflow-hidden text-ellipsis">
              {data.description}
            </div>
            <div>
              <p className="text-sm opacity-80">
                {formatDate(data.update_time)}
              </p>
            </div>
          </section>
        </div>
      </CardContent>
    </Card>
  );
}
@@ -70,6 +70,10 @@ const KnowledgeBaseItem = ({

 export default KnowledgeBaseItem;

+function buildQueryVariableOptionsByShowVariable(showVariable?: boolean) {
+  return showVariable ? useBuildQueryVariableOptions : () => [];
+}
+
 export function KnowledgeBaseFormField({
   showVariable = false,
 }: {

@@ -84,7 +88,7 @@ export function KnowledgeBaseFormField({
     (x) => x.parser_id !== DocumentParserType.Tag,
   );

-  const nextOptions = useBuildQueryVariableOptions();
+  const nextOptions = buildQueryVariableOptionsByShowVariable(showVariable)();

   const knowledgeOptions = filteredKnowledgeList.map((x) => ({
     label: x.name,
@@ -16,7 +16,7 @@ import { Funnel } from 'lucide-react';
 import { useFormContext, useWatch } from 'react-hook-form';
 import { useTranslation } from 'react-i18next';
 import { z } from 'zod';
-import { NextLLMSelect } from './llm-select/next';
+import { NextInnerLLMSelectProps, NextLLMSelect } from './llm-select/next';
 import { Button } from './ui/button';

 const ModelTypes = [

@@ -38,7 +38,13 @@ export const LargeModelFilterFormSchema = {
   llm_filter: z.string().optional(),
 };

-export function LargeModelFormField() {
+type LargeModelFormFieldProps = Pick<
+  NextInnerLLMSelectProps,
+  'showSpeech2TextModel'
+>;
+export function LargeModelFormField({
+  showSpeech2TextModel: showTTSModel,
+}: LargeModelFormFieldProps) {
   const form = useFormContext();
   const { t } = useTranslation();
   const filter = useWatch({ control: form.control, name: 'llm_filter' });

@@ -85,7 +91,11 @@ export function LargeModelFormField() {
           />

           <FormControl>
-            <NextLLMSelect {...field} filter={filter} />
+            <NextLLMSelect
+              {...field}
+              filter={filter}
+              showSpeech2TextModel={showTTSModel}
+            />
           </FormControl>
         </section>

@@ -96,3 +106,22 @@ export function LargeModelFormField() {
     </>
   );
 }
+
+export function LargeModelFormFieldWithoutFilter() {
+  const form = useFormContext();
+
+  return (
+    <FormField
+      control={form.control}
+      name="llm_id"
+      render={({ field }) => (
+        <FormItem>
+          <FormControl>
+            <NextLLMSelect {...field} />
+          </FormControl>
+          <FormMessage />
+        </FormItem>
+      )}
+    />
+  );
+}
Some files were not shown because too many files have changed in this diff.