Compare commits

..

34 Commits

Author SHA1 Message Date
675d18d359 Add docs category file (#12359)
### What problem does this PR solve?

As title.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-31 13:56:11 +08:00
750335978c Fix: Batch parsing problem (#12358)
### What problem does this PR solve?

Fix: Batch parsing problem

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 13:52:50 +08:00
ae7c623a35 fix(rag/prompts): Restructure metadata extraction rules for precision (#12360)
### What problem does this PR solve?

- Simplified and consolidated extraction rules
- Emphasized strict evidence-based extraction only
- Strengthened enum handling and hallucination prevention
- Clarified output requirements for empty results

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 13:52:33 +08:00
f24bdc0f83 Remove doc of health check (#12363)
### What problem does this PR solve?

The System web page has been disabled since v0.22.0, and the health check API is
already documented in the API reference, so this document is obsolete.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-31 13:49:38 +08:00
07ef35b7e6 Docs: Update version references to v0.23.1 in READMEs and docs (#12349)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.23.0 to v0.23.1
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-12-31 12:49:42 +08:00
7c9823a1ff Update release notes (#12356)
### What problem does this PR solve?

As title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-31 12:49:09 +08:00
a0c3bcf798 [Bug] Don't display not used component status in admin status dashboard (#12355)
### What problem does this PR solve?

Currently, all components in the configs are displayed on the admin status
dashboard even when they are not used. This PR fixes that.
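
A minimal sketch of the approach (simplified from the admin-service diff further down; `configs` here is a plain list of dicts purely for illustration):

```python
import os

# Only report the retrieval engine actually selected via DOC_ENGINE; skip
# retrieval components that are configured but unused.
def visible_services(configs: list[dict]) -> list[dict]:
    doc_engine = os.getenv("DOC_ENGINE", "elasticsearch")
    visible = []
    for config in configs:
        if config["service_type"] == "retrieval" and config["extra"]["retrieval_type"] != doc_engine:
            continue  # configured but unused retrieval backend
        visible.append(config)
    return visible
```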

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-31 12:40:52 +08:00
1a4a7d1705 Fix: apply kb configured llm issue. (#12354)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 12:40:28 +08:00
f141947085 [Feature] Admin: sort user list by email (#12350)
### What problem does this PR solve?

Sort the user list by user email address.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-31 11:55:18 +08:00
a07e947644 Feat: Fixed the issue where the newly created agent begin node displayed "undefined". #10427 (#12348)
### What problem does this PR solve?
Fixed the issue where the begin node of a newly created agent displayed
"undefined". #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-12-31 11:19:33 +08:00
ae4692a845 Fix: Bug fixed (#12345)
### What problem does this PR solve?

Fixes several issues:
- Memory type multilingual display
- Name modification is now prohibited in Data source
- Jump directly to Metadata settings

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 10:43:57 +08:00
7dac269429 fix: correct session reference initialization to prevent dialogue misalignment (#12343)
## Summary

Fixes #12311

Changes the `reference` field initialization from `[{}]` to `[]` in
session creation.

### Problem

When creating a session via the SDK API, the `reference` field was
incorrectly initialized as `[{}]`. This caused:
- First dialogue round: Empty reference
- Second dialogue round: Reference pointing to first round's data
- Overall misalignment between dialogue rounds and their references

### Solution

Changed the initialization to `[]` (empty list), which:
- Matches the `Conversation` model's expected default
- Ensures references grow correctly one-to-one with assistant responses
- Aligns with the service layer's expectations
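
The sketch below is an illustrative toy (not RAGFlow's session code) showing why the placeholder `[{}]` shifts references by one round while `[]` keeps them aligned with assistant responses:

```python
# Toy model of the misalignment: references[i] should hold the citations
# for the i-th assistant answer. Names here are hypothetical.
def add_round(references: list, refs_for_answer: dict) -> None:
    references.append(refs_for_answer)

buggy = [{}]            # placeholder dict already occupies index 0
add_round(buggy, {"chunks": ["c1"]})
print(buggy[0])         # {} -> first round looks like it has no reference
print(buggy[1])         # first round's data shows up under the second round

fixed = []              # empty list, matching the Conversation model default
add_round(fixed, {"chunks": ["c1"]})
print(fixed[0])         # {'chunks': ['c1']} -> aligned one-to-one
```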

### Testing

After applying this fix:
1. Create a session via `POST /api/v1/chats/{conversation_id}/sessions`
2. Send multiple questions via `POST
/api/v1/chats/{conversation_id}/completions`
3. View the conversation on web - references should now align correctly
with each dialogue round
2025-12-31 10:25:49 +08:00
ec5575dce2 fix(admin-ui): pagination auto reset to first page when after refetching data (#12339)
### What problem does this PR solve?

When an admin enables or disables a user, the user list resets to the first page.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 09:39:13 +08:00
6fee60e110 Docs: What is RAG & What is Agent context engine (#12341)
### Type of change

- [x] Documentation Update
2025-12-30 21:29:21 +08:00
52f91c2388 Refine: image/table context. (#12336)
### What problem does this PR solve?

#12303

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-30 20:24:27 +08:00
348265afc1 Feat: Display mode at the begin node #10427 (#12326)
### What problem does this PR solve?

Feat: Display mode at the begin node #10427
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-12-30 19:53:24 +08:00
a7e466142d Fix: Dataset parse logic (#12330)
### What problem does this PR solve?

Fix: dataset parser logic.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-30 19:53:00 +08:00
2fccf3924d Feat: Adapt the theme of the documentation page. #10427 (#12337)
### What problem does this PR solve?

Feat: Adapt the theme of the documentation page. #10427
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-12-30 19:35:44 +08:00
4705d07e11 fix: malformed dynamic translation key chunk.docType.${chunkType} (#12329)
### What problem does this PR solve?

The back end may return an empty array for the `"doc_type_kwd"` property, which
produces a malformed translation key.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-30 19:33:20 +08:00
68be3b9a3d Update release workflow (#12335)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-30 19:06:27 +08:00
e2d17d808b Potential fix for code scanning alert no. 62: Workflow does not contain permissions (#12334)
Potential fix for
[https://github.com/infiniflow/ragflow/security/code-scanning/62](https://github.com/infiniflow/ragflow/security/code-scanning/62)

In general, the fix is to explicitly declare a `permissions:` block so
the GITHUB_TOKEN used by this workflow only has the scopes required:
read access to repository contents and write access to
contents/releases. Since this workflow creates or moves tags and
creates/overwrites releases via `softprops/action-gh-release`, it needs
`contents: write`. There is no evidence that it needs other elevated
scopes (issues, pull-requests, actions, etc.), so these should remain at
their default of `none` by omission.

The best minimal fix without changing existing functionality is to add a
workflow-level `permissions:` block near the top of
`.github/workflows/release.yml`, after `name:` and before `on:` (or
anywhere at the root level, but this is conventional). This will apply
to all jobs (there is only `jobs.release`) and ensure that the
GITHUB_TOKEN has only `contents: write`. No additional imports or
methods are needed because this is a YAML configuration change only.

Concretely:
- Edit `.github/workflows/release.yml`.
- Insert:

```yaml
permissions:
  contents: write
```

between line 2 (empty line after `name: release`) and line 3 (`on:`). No
other lines need to be changed.


_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-12-30 18:59:51 +08:00
95edbd43ba Update model providers (#12333)
### What problem does this PR solve?

As title

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-30 18:51:35 +08:00
b96d553cd8 Update release workflow (#12327)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-30 17:25:27 +08:00
bffdb5fb11 Feat: add IMAP data source integration with configuration and sync capabilities (#12316)
### What problem does this PR solve?
Issues: #12217, [#12313](https://github.com/infiniflow/ragflow/issues/12313)

Change: add IMAP data source integration with configuration and sync
capabilities.
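
For orientation only, a minimal sketch (standard-library `imaplib`/`email` only, not RAGFlow's actual `ImapConnector`) of the kind of mailbox sync such a data source performs:

```python
import email
import imaplib
from email.header import decode_header


def fetch_recent_messages(host: str, user: str, password: str, since: str = "01-Jan-2025"):
    """Yield (subject, body_text) for messages received since the given IMAP date string."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        # IMAP SEARCH with a SINCE criterion returns matching message sequence numbers.
        _, data = imap.search(None, f'(SINCE "{since}")')
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            raw_subject, enc = decode_header(msg.get("Subject", ""))[0]
            subject = raw_subject.decode(enc or "utf-8") if isinstance(raw_subject, bytes) else raw_subject
            # Prefer the first text/plain part; leave the body empty otherwise.
            body = ""
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True).decode(errors="replace")
                    break
            yield subject, body
```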

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-12-30 17:09:13 +08:00
109e782493 Feat: On the agent page and chat page, you can only select knowledge bases that use the same embedding model. #12320 (#12321)
### What problem does this PR solve?

Feat: On the agent page and chat page, you can only select knowledge
bases that use the same embedding model. #12320

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-12-30 17:08:30 +08:00
ff2c70608d Fix: judge index exist before delete memory message. (#12318)
### What problem does this PR solve?

Check that the message index exists before deleting memory messages.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-30 15:54:07 +08:00
5903d1c8f1 Feat: GitHub connector (#12314)
### What problem does this PR solve?

Feat: GitHub connector

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-12-30 15:09:52 +08:00
f0392e7501 Fix IDE warnings (#12315)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-30 15:04:09 +08:00
4037788e0c Fix: Dataset parse error (#12310)
### What problem does this PR solve?

Fix: Dataset parse error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-30 13:08:20 +08:00
59884ab0fb Fix TypeError in meta_filter when using numeric metadata (#12286)
The filter_out function in metadata_utils.py was using a list of tuples
to evaluate conditions. Python eagerly evaluates all tuple elements when
constructing the list, so "input in value" was evaluated even when the
operator is "=". When input and value are floats (after numeric
conversion), this raises TypeError: "argument of type 'float' is not
iterable".

This change replaces the tuple list with an if-elif chain, ensuring only
the matching condition is evaluated.

### What problem does this PR solve?

Fixes #12285

When using comparison operators like `=`, `>`, `<` with numeric
metadata, the `filter_out` function throws `TypeError("argument of type
'float' is not iterable")`. This is because Python eagerly evaluates all
tuple elements when constructing a list, causing `input in value` to be
evaluated even when the operator is `=`.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-12-30 11:56:48 +08:00
4a6d37f0e8 Fix: use async task to save memory (#12308)
### What problem does this PR solve?

Use async task to save memory.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2025-12-30 11:41:38 +08:00
731e2d5f26 api key delete bug - Bug #3045 (#12299)
Description:
Fixed an issue where deleting an API token would fail because it was
incorrectly using current_user.id as the tenant_id instead of querying
the actual tenant ID from UserTenantService.

Changes:
- Updated the rm() endpoint to fetch the correct tenant_id from UserTenantService before deleting the API token
- Added proper error handling with a try/except block
- Code style cleanup: consistent quote usage and formatting

Related Issue: #3045

https://github.com/infiniflow/ragflow/issues/3045

Co-authored-by: Mardani, Ramin <ramin.mardani@sscinc.com>
2025-12-30 11:27:04 +08:00
df3cbb9b9e Refactor code (#12305)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-30 11:09:18 +08:00
5402666b19 docs: fix typos (#12301)
### What problem does this PR solve?

fix typos

### Type of change

- [x] Documentation Update
2025-12-30 09:39:28 +08:00
103 changed files with 3815 additions and 892 deletions

View File

@ -10,6 +10,12 @@ on:
    tags:
      - "v*.*.*" # normal release

permissions:
  contents: write
  actions: read
  checks: read
  statuses: read

# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@ -76,6 +82,14 @@ jobs:
          # The body field does not support environment variable substitution directly.
          body_path: release_body.md
      - name: Build and push image
        run: |
          sudo docker login --username infiniflow --password-stdin <<< ${{ secrets.DOCKERHUB_TOKEN }}
          sudo docker build --build-arg NEED_MIRROR=1 --build-arg HTTPS_PROXY=${HTTPS_PROXY} --build-arg HTTP_PROXY=${HTTP_PROXY} -t infiniflow/ragflow:${RELEASE_TAG} -f Dockerfile .
          sudo docker tag infiniflow/ragflow:${RELEASE_TAG} infiniflow/ragflow:latest
          sudo docker push infiniflow/ragflow:${RELEASE_TAG}
          sudo docker push infiniflow/ragflow:latest
      - name: Build and push ragflow-sdk
        if: startsWith(github.ref, 'refs/tags/v')
        run: |
@ -85,11 +99,3 @@ jobs:
        if: startsWith(github.ref, 'refs/tags/v')
        run: |
          cd admin/client && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }}
      - name: Build and push image
        run: |
          sudo docker login --username infiniflow --password-stdin <<< ${{ secrets.DOCKERHUB_TOKEN }}
          sudo docker build --build-arg NEED_MIRROR=1 --build-arg HTTPS_PROXY=${HTTPS_PROXY} --build-arg HTTP_PROXY=${HTTP_PROXY} -t infiniflow/ragflow:${RELEASE_TAG} -f Dockerfile .
          sudo docker tag infiniflow/ragflow:${RELEASE_TAG} infiniflow/ragflow:latest
          sudo docker push infiniflow/ragflow:${RELEASE_TAG}
          sudo docker push infiniflow/ragflow:latest

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -188,12 +188,12 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.23.0` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.23.0`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
> The command below downloads the `v0.23.1` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.23.1`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
# This step ensures the **entrypoint.sh** file in the code matches the Docker image version.
@ -233,7 +233,7 @@ releases! 🌟
* Running on all addresses (0.0.0.0)
```
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anormal`
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal`
> error because, at that moment, your RAGFlow may not be fully initialized.
>
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
@ -396,7 +396,7 @@ docker build --platform linux/amd64 \
## 📜 Roadmap
See the [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214)
See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241)
## 🏄 Community

View File

@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Dokumentasi</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Peta Jalan</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Peta Jalan</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -188,12 +188,12 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.23.0 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.23.0, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server.
> Perintah di bawah ini mengunduh edisi v0.23.1 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.23.1, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server.
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# Opsional: gunakan tag stabil (lihat releases: https://github.com/infiniflow/ragflow/releases)
# This steps ensures the **entrypoint.sh** file in the code matches the Docker image version.
@ -233,7 +233,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
* Running on all addresses (0.0.0.0)
```
> Jika Anda melewatkan langkah ini dan langsung login ke RAGFlow, browser Anda mungkin menampilkan error `network anormal`
> Jika Anda melewatkan langkah ini dan langsung login ke RAGFlow, browser Anda mungkin menampilkan error `network abnormal`
> karena RAGFlow mungkin belum sepenuhnya siap.
>
2. Buka browser web Anda, masukkan alamat IP server Anda, dan login ke RAGFlow.
@ -368,7 +368,7 @@ docker build --platform linux/amd64 \
## 📜 Roadmap
Lihat [Roadmap RAGFlow 2025](https://github.com/infiniflow/ragflow/issues/4214)
Lihat [Roadmap RAGFlow 2026](https://github.com/infiniflow/ragflow/issues/12241)
## 🏄 Komunitas

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -168,12 +168,12 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.23.0 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.23.0 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.23.1 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.23.1 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# 任意: 安定版タグを利用 (一覧: https://github.com/infiniflow/ragflow/releases)
# この手順は、コード内の entrypoint.sh ファイルが Docker イメージのバージョンと一致していることを確認します。
@ -368,7 +368,7 @@ docker build --platform linux/amd64 \
## 📜 ロードマップ
[RAGFlow ロードマップ 2025](https://github.com/infiniflow/ragflow/issues/4214) を参照
[RAGFlow ロードマップ 2026](https://github.com/infiniflow/ragflow/issues/12241) を参照
## 🏄 コミュニティ

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -170,12 +170,12 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.23.0 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.23.0과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오.
> 아래 명령어는 RAGFlow Docker 이미지의 v0.23.1 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.23.1과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오.
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
# 이 단계는 코드의 entrypoint.sh 파일이 Docker 이미지 버전과 일치하도록 보장합니다.
@ -214,7 +214,7 @@
* Running on all addresses (0.0.0.0)
```
> 만약 확인 단계를 건너뛰고 바로 RAGFlow에 로그인하면, RAGFlow가 완전히 초기화되지 않았기 때문에 브라우저에서 `network anormal` 오류가 발생할 수 있습니다.
> 만약 확인 단계를 건너뛰고 바로 RAGFlow에 로그인하면, RAGFlow가 완전히 초기화되지 않았기 때문에 브라우저에서 `network abnormal` 오류가 발생할 수 있습니다.
2. 웹 브라우저에 서버의 IP 주소를 입력하고 RAGFlow에 로그인하세요.
> 기본 설정을 사용할 경우, `http://IP_OF_YOUR_MACHINE`만 입력하면 됩니다 (포트 번호는 제외). 기본 HTTP 서비스 포트 `80`은 기본 구성으로 사용할 때 생략할 수 있습니다.
@ -372,7 +372,7 @@ docker build --platform linux/amd64 \
## 📜 로드맵
[RAGFlow 로드맵 2025](https://github.com/infiniflow/ragflow/issues/4214)을 확인하세요.
[RAGFlow 로드맵 2026](https://github.com/infiniflow/ragflow/issues/12241)을 확인하세요.
## 🏄 커뮤니티

View File

@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Documentação</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -188,12 +188,12 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição`v0.23.0` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.23.0`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor.
> O comando abaixo baixa a edição`v0.23.1` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.23.1`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor.
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# Opcional: use uma tag estável (veja releases: https://github.com/infiniflow/ragflow/releases)
# Esta etapa garante que o arquivo entrypoint.sh no código corresponda à versão da imagem do Docker.
@ -232,7 +232,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
* Rodando em todos os endereços (0.0.0.0)
```
> Se você pular essa etapa de confirmação e acessar diretamente o RAGFlow, seu navegador pode exibir um erro `network anormal`, pois, nesse momento, seu RAGFlow pode não estar totalmente inicializado.
> Se você pular essa etapa de confirmação e acessar diretamente o RAGFlow, seu navegador pode exibir um erro `network abnormal`, pois, nesse momento, seu RAGFlow pode não estar totalmente inicializado.
>
5. No seu navegador, insira o endereço IP do seu servidor e faça login no RAGFlow.
@ -385,7 +385,7 @@ docker build --platform linux/amd64 \
## 📜 Roadmap
Veja o [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214)
Veja o [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241)
## 🏄 Comunidade

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -125,7 +125,7 @@
### 🍔 **相容各類異質資料來源**
- 支援豐富的文件類型,包括 Word 文件、PPT、excel 表格、txt 檔案、圖片、PDF、影印件、印件、結構化資料、網頁等。
- 支援豐富的文件類型,包括 Word 文件、PPT、excel 表格、txt 檔案、圖片、PDF、影印件、印件、結構化資料、網頁等。
### 🛀 **全程無憂、自動化的 RAG 工作流程**
@ -187,12 +187,12 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow Docker 映像 `v0.23.0`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.23.0` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。
> 執行以下指令會自動下載 RAGFlow Docker 映像 `v0.23.1`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.23.1` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# 可選使用穩定版標籤查看發佈https://github.com/infiniflow/ragflow/releases
# 此步驟確保程式碼中的 entrypoint.sh 檔案與 Docker 映像版本一致。
@ -237,7 +237,7 @@
* Running on all addresses (0.0.0.0)
```
> 如果您跳過這一步驟系統確認步驟就登入 RAGFlow你的瀏覽器有可能會提示 `network anormal` 或 `網路異常`,因為 RAGFlow 可能並未完全啟動成功。
> 如果您跳過這一步驟系統確認步驟就登入 RAGFlow你的瀏覽器有可能會提示 `network abnormal` 或 `網路異常`,因為 RAGFlow 可能並未完全啟動成功。
>
5. 在你的瀏覽器中輸入你的伺服器對應的 IP 位址並登入 RAGFlow。
@ -399,7 +399,7 @@ docker build --platform linux/amd64 \
## 📜 路線圖
詳見 [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214) 。
詳見 [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241) 。
## 🏄 開源社群

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -37,7 +37,7 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
@ -188,12 +188,12 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow Docker 镜像 `v0.23.0`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.23.0` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
> 运行以下命令会自动下载 RAGFlow Docker 镜像 `v0.23.1`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.23.1` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
```bash
$ cd ragflow/docker
# git checkout v0.23.0
# git checkout v0.23.1
# 可选使用稳定版本标签查看发布https://github.com/infiniflow/ragflow/releases
# 这一步确保代码中的 entrypoint.sh 文件与 Docker 镜像的版本保持一致。
@ -238,7 +238,7 @@
* Running on all addresses (0.0.0.0)
```
> 如果您在没有看到上面的提示信息出来之前,就尝试登录 RAGFlow你的浏览器有可能会提示 `network anormal` 或 `网络异常`。
> 如果您在没有看到上面的提示信息出来之前,就尝试登录 RAGFlow你的浏览器有可能会提示 `network abnormal` 或 `网络异常`。
5. 在你的浏览器中输入你的服务器对应的 IP 地址并登录 RAGFlow。
> 上面这个例子中,您只需输入 http://IP_OF_YOUR_MACHINE 即可:未改动过配置则无需输入端口(默认的 HTTP 服务端口 80
@ -402,7 +402,7 @@ docker build --platform linux/amd64 \
## 📜 路线图
详见 [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214) 。
详见 [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241) 。
## 🏄 开源社区

View File

@ -48,7 +48,7 @@ It consists of a server-side Service and a command-line client (CLI), both imple
1. Ensure the Admin Service is running.
2. Install ragflow-cli.
```bash
pip install ragflow-cli==0.23.0
pip install ragflow-cli==0.23.1
```
3. Launch the CLI client:
```bash

View File

@ -370,7 +370,7 @@ class AdminCLI(Cmd):
            res_json = response.json()
            error_code = res_json.get("code", -1)
            if error_code == 0:
                self.session.headers.update({"Content-Type": "application/json", "Authorization": response.headers["Authorization"], "User-Agent": "RAGFlow-CLI/0.23.0"})
                self.session.headers.update({"Content-Type": "application/json", "Authorization": response.headers["Authorization"], "User-Agent": "RAGFlow-CLI/0.23.1"})
                print("Authentication successful.")
                return True
            else:

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow-cli"
version = "0.23.0"
version = "0.23.1"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }

admin/client/uv.lock (generated)
View File

@ -196,7 +196,7 @@ wheels = [
[[package]]
name = "ragflow-cli"
version = "0.23.0"
version = "0.23.1"
source = { virtual = "." }
dependencies = [
{ name = "beartype" },

View File

@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import logging
import re
from werkzeug.security import check_password_hash
@ -179,10 +181,14 @@ class ServiceMgr:
    @staticmethod
    def get_all_services():
        doc_engine = os.getenv('DOC_ENGINE', 'elasticsearch')
        result = []
        configs = SERVICE_CONFIGS.configs
        for service_id, config in enumerate(configs):
            config_dict = config.to_dict()
            if config_dict['service_type'] == 'retrieval':
                if config_dict['extra']['retrieval_type'] != doc_engine:
                    continue
            try:
                service_detail = ServiceMgr.get_service_details(service_id)
                if "status" in service_detail:

View File

@ -33,7 +33,7 @@ from common.connection_utils import timeout
from common.misc_utils import get_uuid
from common import settings
from api.db.joint_services.memory_message_service import save_to_memory
from api.db.joint_services.memory_message_service import queue_save_to_memory_task
class MessageParam(ComponentParamBase):
@ -437,17 +437,4 @@ class Message(ComponentBase):
"user_input": self._canvas.get_sys_query(),
"agent_response": content
}
res = []
for memory_id in self._param.memory_ids:
success, msg = await save_to_memory(memory_id, message_dict)
res.append({
"memory_id": memory_id,
"success": success,
"msg": msg
})
if all([r["success"] for r in res]):
return True, "Successfully added to memories."
error_text = "Some messages failed to add. " + " ".join([f"Add to memory {r['memory_id']} failed, detail: {r['msg']}" for r in res if not r["success"]])
logging.error(error_text)
return False, error_text
return await queue_save_to_memory_task(self._param.memory_ids, message_dict)

View File

@ -159,7 +159,8 @@ async def delete_memory(memory_id):
        return get_json_result(message=True, code=RetCode.NOT_FOUND)
    try:
        MemoryService.delete_memory(memory_id)
        MessageService.delete_message({"memory_id": memory_id}, memory.tenant_id, memory_id)
        if MessageService.has_index(memory.tenant_id, memory_id):
            MessageService.delete_message({"memory_id": memory_id}, memory.tenant_id, memory_id)
        return get_json_result(message=True)
    except Exception as e:
        logging.error(e)

View File

@ -60,7 +60,7 @@ async def create(tenant_id, chat_id):
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": dia[0].prompt_config.get("prologue")}],
"user_id": req.get("user_id", ""),
"reference": [{}],
"reference": [],
}
if not conv.get("name"):
return get_error_data_result(message="`name` can not be empty.")

View File

@ -177,7 +177,7 @@ def healthz():
    return jsonify(result), (200 if all_ok else 500)


@manager.route("/ping", methods=["GET"])  # noqa: F821
@manager.route("/ping", methods=["GET"])  # noqa: F821
def ping():
    return "pong", 200
@ -213,7 +213,7 @@ def new_token():
        if not tenants:
            return get_data_error_result(message="Tenant not found!")
        tenant_id = [tenant for tenant in tenants if tenant.role == 'owner'][0].tenant_id
        tenant_id = [tenant for tenant in tenants if tenant.role == "owner"][0].tenant_id
        obj = {
            "tenant_id": tenant_id,
            "token": generate_confirmation_token(),
@ -268,13 +268,12 @@ def token_list():
        if not tenants:
            return get_data_error_result(message="Tenant not found!")
        tenant_id = [tenant for tenant in tenants if tenant.role == 'owner'][0].tenant_id
        tenant_id = [tenant for tenant in tenants if tenant.role == "owner"][0].tenant_id
        objs = APITokenService.query(tenant_id=tenant_id)
        objs = [o.to_dict() for o in objs]
        for o in objs:
            if not o["beta"]:
                o["beta"] = generate_confirmation_token().replace(
                    "ragflow-", "")[:32]
                o["beta"] = generate_confirmation_token().replace("ragflow-", "")[:32]
            APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o)
        return get_json_result(data=objs)
    except Exception as e:
@ -307,13 +306,19 @@ def rm(token):
            type: boolean
          description: Deletion status.
    """
    APITokenService.filter_delete(
        [APIToken.tenant_id == current_user.id, APIToken.token == token]
    )
    return get_json_result(data=True)
    try:
        tenants = UserTenantService.query(user_id=current_user.id)
        if not tenants:
            return get_data_error_result(message="Tenant not found!")
        tenant_id = tenants[0].tenant_id
        APITokenService.filter_delete([APIToken.tenant_id == tenant_id, APIToken.token == token])
        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)


@manager.route('/config', methods=['GET'])  # noqa: F821
@manager.route("/config", methods=["GET"])  # noqa: F821
def get_config():
    """
    Get system configuration.
@ -330,6 +335,4 @@ def get_config():
            type: integer 0 means disabled, 1 means enabled
          description: Whether user registration is enabled
    """
    return get_json_result(data={
        "registerEnabled": settings.REGISTER_ENABLED
    })
    return get_json_result(data={"registerEnabled": settings.REGISTER_ENABLED})

View File

@ -16,9 +16,14 @@
import logging
from typing import List
from api.db.services.task_service import TaskService
from common import settings
from common.time_utils import current_timestamp, timestamp_to_date, format_iso_8601_to_ymd_hms
from common.constants import MemoryType, LLMType
from common.doc_store.doc_store_base import FusionExpr
from common.misc_utils import get_uuid
from api.db.db_utils import bulk_insert_into_db
from api.db.db_models import Task
from api.db.services.memory_service import MemoryService
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.llm_service import LLMBundle
@ -82,32 +87,44 @@ async def save_to_memory(memory_id: str, message_dict: dict):
"forget_at": None,
"status": True
} for content in extracted_content]]
embedding_model = LLMBundle(tenant_id, llm_type=LLMType.EMBEDDING, llm_name=memory.embd_id)
vector_list, _ = embedding_model.encode([msg["content"] for msg in message_list])
for idx, msg in enumerate(message_list):
msg["content_embed"] = vector_list[idx]
vector_dimension = len(vector_list[0])
if not MessageService.has_index(tenant_id, memory_id):
created = MessageService.create_index(tenant_id, memory_id, vector_size=vector_dimension)
if not created:
return False, "Failed to create message index."
return await embed_and_save(memory, message_list)
new_msg_size = sum([MessageService.calculate_message_size(m) for m in message_list])
current_memory_size = get_memory_size_cache(memory_id, tenant_id)
if new_msg_size + current_memory_size > memory.memory_size:
size_to_delete = current_memory_size + new_msg_size - memory.memory_size
if memory.forgetting_policy == "FIFO":
message_ids_to_delete, delete_size = MessageService.pick_messages_to_delete_by_fifo(memory_id, tenant_id, size_to_delete)
MessageService.delete_message({"message_id": message_ids_to_delete}, tenant_id, memory_id)
decrease_memory_size_cache(memory_id, delete_size)
else:
return False, "Failed to insert message into memory. Memory size reached limit and cannot decide which to delete."
fail_cases = MessageService.insert_message(message_list, tenant_id, memory_id)
if fail_cases:
return False, "Failed to insert message into memory. Details: " + "; ".join(fail_cases)
increase_memory_size_cache(memory_id, new_msg_size)
return True, "Message saved successfully."
async def save_extracted_to_memory_only(memory_id: str, message_dict, source_message_id: int):
memory = MemoryService.get_by_memory_id(memory_id)
if not memory:
return False, f"Memory '{memory_id}' not found."
if memory.memory_type == MemoryType.RAW.value:
return True, f"Memory '{memory_id}' don't need to extract."
tenant_id = memory.tenant_id
extracted_content = await extract_by_llm(
tenant_id,
memory.llm_id,
{"temperature": memory.temperature},
get_memory_type_human(memory.memory_type),
message_dict.get("user_input", ""),
message_dict.get("agent_response", "")
)
message_list = [{
"message_id": REDIS_CONN.generate_auto_increment_id(namespace="memory"),
"message_type": content["message_type"],
"source_id": source_message_id,
"memory_id": memory_id,
"user_id": "",
"agent_id": message_dict["agent_id"],
"session_id": message_dict["session_id"],
"content": content["content"],
"valid_at": content["valid_at"],
"invalid_at": content["invalid_at"] if content["invalid_at"] else None,
"forget_at": None,
"status": True
} for content in extracted_content]
if not message_list:
return True, "No memory extracted from raw message."
return await embed_and_save(memory, message_list)
async def extract_by_llm(tenant_id: str, llm_id: str, extract_conf: dict, memory_type: List[str], user_input: str,
@ -136,6 +153,36 @@ async def extract_by_llm(tenant_id: str, llm_id: str, extract_conf: dict, memory
} for message_type, extracted_content_list in res_json.items() for extracted_content in extracted_content_list]
async def embed_and_save(memory, message_list: list[dict]):
embedding_model = LLMBundle(memory.tenant_id, llm_type=LLMType.EMBEDDING, llm_name=memory.embd_id)
vector_list, _ = embedding_model.encode([msg["content"] for msg in message_list])
for idx, msg in enumerate(message_list):
msg["content_embed"] = vector_list[idx]
vector_dimension = len(vector_list[0])
if not MessageService.has_index(memory.tenant_id, memory.id):
created = MessageService.create_index(memory.tenant_id, memory.id, vector_size=vector_dimension)
if not created:
return False, "Failed to create message index."
new_msg_size = sum([MessageService.calculate_message_size(m) for m in message_list])
current_memory_size = get_memory_size_cache(memory.tenant_id, memory.id)
if new_msg_size + current_memory_size > memory.memory_size:
size_to_delete = current_memory_size + new_msg_size - memory.memory_size
if memory.forgetting_policy == "FIFO":
message_ids_to_delete, delete_size = MessageService.pick_messages_to_delete_by_fifo(memory.id, memory.tenant_id,
size_to_delete)
MessageService.delete_message({"message_id": message_ids_to_delete}, memory.tenant_id, memory.id)
decrease_memory_size_cache(memory.id, delete_size)
else:
return False, "Failed to insert message into memory. Memory size reached limit and cannot decide which to delete."
fail_cases = MessageService.insert_message(message_list, memory.tenant_id, memory.id)
if fail_cases:
return False, "Failed to insert message into memory. Details: " + "; ".join(fail_cases)
increase_memory_size_cache(memory.id, new_msg_size)
return True, "Message saved successfully."
def query_message(filter_dict: dict, params: dict):
"""
:param filter_dict: {
@ -231,3 +278,112 @@ def init_memory_size_cache():
def judge_system_prompt_is_default(system_prompt: str, memory_type: int|list[str]):
memory_type_list = memory_type if isinstance(memory_type, list) else get_memory_type_human(memory_type)
return system_prompt == PromptAssembler.assemble_system_prompt({"memory_type": memory_type_list})
async def queue_save_to_memory_task(memory_ids: list[str], message_dict: dict):
"""
:param memory_ids:
:param message_dict: {
"user_id": str,
"agent_id": str,
"session_id": str,
"user_input": str,
"agent_response": str
}
"""
def new_task(_memory_id: str, _source_id: int):
return {
"id": get_uuid(),
"doc_id": _memory_id,
"task_type": "memory",
"progress": 0.0,
"digest": str(_source_id)
}
not_found_memory = []
failed_memory = []
for memory_id in memory_ids:
memory = MemoryService.get_by_memory_id(memory_id)
if not memory:
not_found_memory.append(memory_id)
continue
raw_message_id = REDIS_CONN.generate_auto_increment_id(namespace="memory")
raw_message = {
"message_id": raw_message_id,
"message_type": MemoryType.RAW.name.lower(),
"source_id": 0,
"memory_id": memory_id,
"user_id": "",
"agent_id": message_dict["agent_id"],
"session_id": message_dict["session_id"],
"content": f"User Input: {message_dict.get('user_input')}\nAgent Response: {message_dict.get('agent_response')}",
"valid_at": timestamp_to_date(current_timestamp()),
"invalid_at": None,
"forget_at": None,
"status": True
}
res, msg = await embed_and_save(memory, [raw_message])
if not res:
failed_memory.append({"memory_id": memory_id, "fail_msg": msg})
continue
task = new_task(memory_id, raw_message_id)
bulk_insert_into_db(Task, [task], replace_on_conflict=True)
task_message = {
"id": task["id"],
"task_id": task["id"],
"task_type": task["task_type"],
"memory_id": memory_id,
"source_id": raw_message_id,
"message_dict": message_dict
}
if not REDIS_CONN.queue_product(settings.get_svr_queue_name(priority=0), message=task_message):
failed_memory.append({"memory_id": memory_id, "fail_msg": "Can't access Redis."})
error_msg = ""
if not_found_memory:
error_msg = f"Memory {not_found_memory} not found."
if failed_memory:
error_msg += "".join([f"Memory {fm['memory_id']} failed. Detail: {fm['fail_msg']}" for fm in failed_memory])
if error_msg:
return False, error_msg
return True, "All add to task."
async def handle_save_to_memory_task(task_param: dict):
"""
:param task_param: {
"id": task_id
"memory_id": id
"source_id": id
"message_dict": {
"user_id": str,
"agent_id": str,
"session_id": str,
"user_input": str,
"agent_response": str
}
}
"""
_, task = TaskService.get_by_id(task_param["id"])
if not task:
return False, f"Task {task_param['id']} is not found."
if task.progress == -1:
return False, f"Task {task_param['id']} is already failed."
now_time = current_timestamp()
TaskService.update_by_id(task_param["id"], {"begin_at": timestamp_to_date(now_time)})
memory_id = task_param["memory_id"]
source_id = task_param["source_id"]
message_dict = task_param["message_dict"]
success, msg = await save_extracted_to_memory_only(memory_id, message_dict, source_id)
if success:
TaskService.update_progress(task.id, {"progress": 1.0, "progress_msg": msg})
return True, msg
logging.error(msg)
TaskService.update_progress(task.id, {"progress": -1, "progress_msg": None})
return False, msg
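
As a rough illustration of the flow introduced above (the queueing helper and task handler are the ones shown in this diff; the caller and worker dispatch below are hypothetical, not RAGFlow's actual task executor):

```python
# Producer side: an agent component enqueues lightweight "memory" tasks
# instead of extracting and embedding inline.
async def on_agent_reply(memory_ids: list[str], user_input: str, agent_response: str):
    message_dict = {
        "user_id": "",
        "agent_id": "agent-1",       # illustrative IDs
        "session_id": "session-1",
        "user_input": user_input,
        "agent_response": agent_response,
    }
    return await queue_save_to_memory_task(memory_ids, message_dict)


# Consumer side: a background worker picks a queued payload and runs the
# LLM extraction plus embedding asynchronously.
async def process_one(task_param: dict):
    if task_param.get("task_type") == "memory":
        return await handle_save_to_memory_task(task_param)
```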

View File

@ -164,7 +164,7 @@ class UserService(CommonService):
    @classmethod
    @DB.connection_context()
    def get_all_users(cls):
        users = cls.model.select()
        users = cls.model.select().order_by(cls.model.email)
        return list(users)

View File

@ -130,14 +130,17 @@ class FileSource(StrEnum):
    GOOGLE_CLOUD_STORAGE = "google_cloud_storage"
    AIRTABLE = "airtable"
    ASANA = "asana"
    GITHUB = "github"
    GITLAB = "gitlab"
    IMAP = "imap"


class PipelineTaskType(StrEnum):
    PARSE = "Parse"
    DOWNLOAD = "Download"
    RAPTOR = "RAPTOR"
    GRAPH_RAG = "GraphRAG"
    MINDMAP = "Mindmap"
    MEMORY = "Memory"


VALID_PIPELINE_TASK_TYPES = {PipelineTaskType.PARSE, PipelineTaskType.DOWNLOAD, PipelineTaskType.RAPTOR,

View File

@ -38,6 +38,7 @@ from .webdav_connector import WebDAVConnector
from .moodle_connector import MoodleConnector
from .airtable_connector import AirtableConnector
from .asana_connector import AsanaConnector
from .imap_connector import ImapConnector
from .config import BlobType, DocumentSource
from .models import Document, TextSection, ImageSection, BasicExpertInfo
from .exceptions import (
@ -75,4 +76,5 @@ __all__ = [
"UnexpectedValidationError",
"AirtableConnector",
"AsanaConnector",
"ImapConnector"
]

View File

@ -1,6 +1,6 @@
from datetime import datetime, timezone
import logging
from typing import Any
from typing import Any, Generator
import requests
@ -8,8 +8,8 @@ from pyairtable import Api as AirtableApi
from common.data_source.config import AIRTABLE_CONNECTOR_SIZE_THRESHOLD, INDEX_BATCH_SIZE, DocumentSource
from common.data_source.exceptions import ConnectorMissingCredentialError
from common.data_source.interfaces import LoadConnector
from common.data_source.models import Document, GenerateDocumentsOutput
from common.data_source.interfaces import LoadConnector, PollConnector
from common.data_source.models import Document, GenerateDocumentsOutput, SecondsSinceUnixEpoch
from common.data_source.utils import extract_size_bytes, get_file_ext
class AirtableClientNotSetUpError(PermissionError):
@ -19,7 +19,7 @@ class AirtableClientNotSetUpError(PermissionError):
)
class AirtableConnector(LoadConnector):
class AirtableConnector(LoadConnector, PollConnector):
"""
Lightweight Airtable connector.
@ -132,6 +132,26 @@ class AirtableConnector(LoadConnector):
        if batch:
            yield batch

    def poll_source(self, start: SecondsSinceUnixEpoch, end: SecondsSinceUnixEpoch) -> Generator[list[Document], None, None]:
        """Poll source to get documents"""
        start_dt = datetime.fromtimestamp(start, tz=timezone.utc)
        end_dt = datetime.fromtimestamp(end, tz=timezone.utc)
        for batch in self.load_from_state():
            filtered: list[Document] = []
            for doc in batch:
                if not doc.doc_updated_at:
                    continue
                doc_dt = doc.doc_updated_at.astimezone(timezone.utc)
                if start_dt <= doc_dt < end_dt:
                    filtered.append(doc)
            if filtered:
                yield filtered


if __name__ == "__main__":
    import os

View File

@ -57,7 +57,9 @@ class DocumentSource(str, Enum):
    ASANA = "asana"
    GITHUB = "github"
    GITLAB = "gitlab"
    IMAP = "imap"


class FileOrigin(str, Enum):
    """File origins"""
    CONNECTOR = "connector"
@ -234,6 +236,8 @@ _REPLACEMENT_EXPANSIONS = "body.view.value"
BOX_WEB_OAUTH_REDIRECT_URI = os.environ.get("BOX_WEB_OAUTH_REDIRECT_URI", "http://localhost:9380/v1/connector/box/oauth/web/callback")

GITHUB_CONNECTOR_BASE_URL = os.environ.get("GITHUB_CONNECTOR_BASE_URL") or None


class HtmlBasedConnectorTransformLinksStrategy(str, Enum):
    # remove links entirely
    STRIP = "strip"
@ -263,6 +267,10 @@ ASANA_CONNECTOR_SIZE_THRESHOLD = int(
    os.environ.get("ASANA_CONNECTOR_SIZE_THRESHOLD", 10 * 1024 * 1024)
)

IMAP_CONNECTOR_SIZE_THRESHOLD = int(
    os.environ.get("IMAP_CONNECTOR_SIZE_THRESHOLD", 10 * 1024 * 1024)
)

_USER_NOT_FOUND = "Unknown Confluence User"

_COMMENT_EXPANSION_FIELDS = ["body.storage.value"]

View File

@ -0,0 +1,217 @@
import sys
import time
import logging
from collections.abc import Generator
from datetime import datetime
from typing import Generic
from typing import TypeVar
from common.data_source.interfaces import (
BaseConnector,
CheckpointedConnector,
CheckpointedConnectorWithPermSync,
CheckpointOutput,
LoadConnector,
PollConnector,
)
from common.data_source.models import ConnectorCheckpoint, ConnectorFailure, Document
TimeRange = tuple[datetime, datetime]
CT = TypeVar("CT", bound=ConnectorCheckpoint)
def batched_doc_ids(
checkpoint_connector_generator: CheckpointOutput[CT],
batch_size: int,
) -> Generator[set[str], None, None]:
batch: set[str] = set()
for document, failure, next_checkpoint in CheckpointOutputWrapper[CT]()(
checkpoint_connector_generator
):
if document is not None:
batch.add(document.id)
elif (
failure and failure.failed_document and failure.failed_document.document_id
):
batch.add(failure.failed_document.document_id)
if len(batch) >= batch_size:
yield batch
batch = set()
if len(batch) > 0:
yield batch
class CheckpointOutputWrapper(Generic[CT]):
"""
Wraps a CheckpointOutput generator to give things back in a more digestible format,
specifically for Document outputs.
The connector format is easier for the connector implementor (e.g. it enforces exactly
one new checkpoint is returned AND that the checkpoint is at the end), thus the different
formats.
"""
def __init__(self) -> None:
self.next_checkpoint: CT | None = None
def __call__(
self,
checkpoint_connector_generator: CheckpointOutput[CT],
) -> Generator[
tuple[Document | None, ConnectorFailure | None, CT | None],
None,
None,
]:
# grabs the final return value and stores it in the `next_checkpoint` variable
def _inner_wrapper(
checkpoint_connector_generator: CheckpointOutput[CT],
) -> CheckpointOutput[CT]:
self.next_checkpoint = yield from checkpoint_connector_generator
return self.next_checkpoint # not used
for document_or_failure in _inner_wrapper(checkpoint_connector_generator):
if isinstance(document_or_failure, Document):
yield document_or_failure, None, None
elif isinstance(document_or_failure, ConnectorFailure):
yield None, document_or_failure, None
else:
raise ValueError(
f"Invalid document_or_failure type: {type(document_or_failure)}"
)
if self.next_checkpoint is None:
raise RuntimeError(
"Checkpoint is None. This should never happen - the connector should always return a checkpoint."
)
yield None, None, self.next_checkpoint
class ConnectorRunner(Generic[CT]):
"""
Handles:
- Batching
- Additional exception logging
- Combining different connector types to a single interface
"""
def __init__(
self,
connector: BaseConnector,
batch_size: int,
# cannot be True for non-checkpointed connectors
include_permissions: bool,
time_range: TimeRange | None = None,
):
if not isinstance(connector, CheckpointedConnector) and include_permissions:
raise ValueError(
"include_permissions cannot be True for non-checkpointed connectors"
)
self.connector = connector
self.time_range = time_range
self.batch_size = batch_size
self.include_permissions = include_permissions
self.doc_batch: list[Document] = []
def run(self, checkpoint: CT) -> Generator[
tuple[list[Document] | None, ConnectorFailure | None, CT | None],
None,
None,
]:
"""Adds additional exception logging to the connector."""
try:
if isinstance(self.connector, CheckpointedConnector):
if self.time_range is None:
raise ValueError("time_range is required for CheckpointedConnector")
start = time.monotonic()
if self.include_permissions:
if not isinstance(
self.connector, CheckpointedConnectorWithPermSync
):
raise ValueError(
"Connector does not support permission syncing"
)
load_from_checkpoint = (
self.connector.load_from_checkpoint_with_perm_sync
)
else:
load_from_checkpoint = self.connector.load_from_checkpoint
checkpoint_connector_generator = load_from_checkpoint(
start=self.time_range[0].timestamp(),
end=self.time_range[1].timestamp(),
checkpoint=checkpoint,
)
next_checkpoint: CT | None = None
# this is guaranteed to always run at least once with next_checkpoint being non-None
for document, failure, next_checkpoint in CheckpointOutputWrapper[CT]()(
checkpoint_connector_generator
):
if document is not None and isinstance(document, Document):
self.doc_batch.append(document)
if failure is not None:
yield None, failure, None
if len(self.doc_batch) >= self.batch_size:
yield self.doc_batch, None, None
self.doc_batch = []
# yield remaining documents
if len(self.doc_batch) > 0:
yield self.doc_batch, None, None
self.doc_batch = []
yield None, None, next_checkpoint
logging.debug(
f"Connector took {time.monotonic() - start} seconds to get to the next checkpoint."
)
else:
finished_checkpoint = self.connector.build_dummy_checkpoint()
finished_checkpoint.has_more = False
if isinstance(self.connector, PollConnector):
if self.time_range is None:
raise ValueError("time_range is required for PollConnector")
for document_batch in self.connector.poll_source(
start=self.time_range[0].timestamp(),
end=self.time_range[1].timestamp(),
):
yield document_batch, None, None
yield None, None, finished_checkpoint
elif isinstance(self.connector, LoadConnector):
for document_batch in self.connector.load_from_state():
yield document_batch, None, None
yield None, None, finished_checkpoint
else:
raise ValueError(f"Invalid connector. type: {type(self.connector)}")
except Exception:
exc_type, _, exc_traceback = sys.exc_info()
# Traverse the traceback to find the last frame where the exception was raised
tb = exc_traceback
if tb is None:
logging.error("No traceback found for exception")
raise
while tb.tb_next:
tb = tb.tb_next # Move to the next frame in the traceback
# Get the local variables from the frame where the exception occurred
local_vars = tb.tb_frame.f_locals
local_vars_str = "\n".join(
f"{key}: {value}" for key, value in local_vars.items()
)
logging.error(
f"Error in connector. type: {exc_type};\n"
f"local_vars below -> \n{local_vars_str[:1024]}"
)
raise


@ -0,0 +1,973 @@
import copy
import logging
from collections.abc import Callable
from collections.abc import Generator
from datetime import datetime
from datetime import timedelta
from datetime import timezone
from enum import Enum
from typing import Any
from typing import cast
from github import Github, Auth
from github import RateLimitExceededException
from github import Repository
from github.GithubException import GithubException
from github.Issue import Issue
from github.NamedUser import NamedUser
from github.PaginatedList import PaginatedList
from github.PullRequest import PullRequest
from pydantic import BaseModel
from typing_extensions import override
from common.data_source.google_util.util import sanitize_filename
from common.data_source.config import DocumentSource, GITHUB_CONNECTOR_BASE_URL
from common.data_source.exceptions import (
ConnectorMissingCredentialError,
ConnectorValidationError,
CredentialExpiredError,
InsufficientPermissionsError,
UnexpectedValidationError,
)
from common.data_source.interfaces import CheckpointedConnectorWithPermSyncGH, CheckpointOutput
from common.data_source.models import (
ConnectorCheckpoint,
ConnectorFailure,
Document,
DocumentFailure,
ExternalAccess,
SecondsSinceUnixEpoch,
)
from common.data_source.connector_runner import ConnectorRunner
from .models import SerializedRepository
from .rate_limit_utils import sleep_after_rate_limit_exception
from .utils import deserialize_repository
from .utils import get_external_access_permission
ITEMS_PER_PAGE = 100
CURSOR_LOG_FREQUENCY = 50
_MAX_NUM_RATE_LIMIT_RETRIES = 5
ONE_DAY = timedelta(days=1)
SLIM_BATCH_SIZE = 100
# Cases
# X (from start) standard run, no fallback to cursor-based pagination
# X (from start) standard run errors, fallback to cursor-based pagination
# X error in the middle of a page
# X no errors: run to completion
# X (from checkpoint) standard run, no fallback to cursor-based pagination
# X (from checkpoint) continue from cursor-based pagination
# - retrying
# - no retrying
# things to check:
# checkpoint state on return
# checkpoint progress (no infinite loop)
class DocMetadata(BaseModel):
repo: str
def get_nextUrl_key(pag_list: PaginatedList[PullRequest | Issue]) -> str:
if "_PaginatedList__nextUrl" in pag_list.__dict__:
return "_PaginatedList__nextUrl"
for key in pag_list.__dict__:
if "__nextUrl" in key:
return key
for key in pag_list.__dict__:
if "nextUrl" in key:
return key
return ""
def get_nextUrl(
pag_list: PaginatedList[PullRequest | Issue], nextUrl_key: str
) -> str | None:
return getattr(pag_list, nextUrl_key) if nextUrl_key else None
def set_nextUrl(
pag_list: PaginatedList[PullRequest | Issue], nextUrl_key: str, nextUrl: str
) -> None:
if nextUrl_key:
setattr(pag_list, nextUrl_key, nextUrl)
elif nextUrl:
raise ValueError("Next URL key not found: " + str(pag_list.__dict__))
def _paginate_until_error(
git_objs: Callable[[], PaginatedList[PullRequest | Issue]],
cursor_url: str | None,
prev_num_objs: int,
cursor_url_callback: Callable[[str | None, int], None],
retrying: bool = False,
) -> Generator[PullRequest | Issue, None, None]:
num_objs = prev_num_objs
pag_list = git_objs()
nextUrl_key = get_nextUrl_key(pag_list)
if cursor_url:
set_nextUrl(pag_list, nextUrl_key, cursor_url)
elif retrying:
# if we are retrying, we want to skip the objects retrieved
# over previous calls. Unfortunately, this WILL retrieve all
# pages before the one we are resuming from, so we really
# don't want this case to be hit often
logging.warning(
"Retrying from a previous cursor-based pagination call. "
"This will retrieve all pages before the one we are resuming from, "
"which may take a while and consume many API calls."
)
pag_list = cast(PaginatedList[PullRequest | Issue], pag_list[prev_num_objs:])
num_objs = 0
try:
# this for loop handles cursor-based pagination
for issue_or_pr in pag_list:
num_objs += 1
yield issue_or_pr
# used to store the current cursor url in the checkpoint. This value
# is updated during iteration over pag_list.
cursor_url_callback(get_nextUrl(pag_list, nextUrl_key), num_objs)
if num_objs % CURSOR_LOG_FREQUENCY == 0:
logging.info(
f"Retrieved {num_objs} objects with current cursor url: {get_nextUrl(pag_list, nextUrl_key)}"
)
except Exception as e:
logging.exception(f"Error during cursor-based pagination: {e}")
if num_objs - prev_num_objs > 0:
raise
if get_nextUrl(pag_list, nextUrl_key) is not None and not retrying:
logging.info(
"Assuming that this error is due to cursor "
"expiration because no objects were retrieved. "
"Retrying from the first page."
)
yield from _paginate_until_error(
git_objs, None, prev_num_objs, cursor_url_callback, retrying=True
)
return
# for no cursor url or if we reach this point after a retry, raise the error
raise
def _get_batch_rate_limited(
# We pass in a callable because we want git_objs to produce a fresh
# PaginatedList each time it's called to avoid using the same object for cursor-based pagination
# from a partial offset-based pagination call.
git_objs: Callable[[], PaginatedList],
page_num: int,
cursor_url: str | None,
prev_num_objs: int,
cursor_url_callback: Callable[[str | None, int], None],
github_client: Github,
attempt_num: int = 0,
) -> Generator[PullRequest | Issue, None, None]:
if attempt_num > _MAX_NUM_RATE_LIMIT_RETRIES:
raise RuntimeError(
"Re-tried fetching batch too many times. Something is going wrong with fetching objects from Github"
)
try:
if cursor_url:
# when this is set, we are resuming from an earlier
# cursor-based pagination call.
yield from _paginate_until_error(
git_objs, cursor_url, prev_num_objs, cursor_url_callback
)
return
objs = list(git_objs().get_page(page_num))
# fetch all data here to disable lazy loading later
# this is needed to capture the rate limit exception here (if one occurs)
for obj in objs:
if hasattr(obj, "raw_data"):
getattr(obj, "raw_data")
yield from objs
except RateLimitExceededException:
sleep_after_rate_limit_exception(github_client)
yield from _get_batch_rate_limited(
git_objs,
page_num,
cursor_url,
prev_num_objs,
cursor_url_callback,
github_client,
attempt_num + 1,
)
except GithubException as e:
if not (
e.status == 422
and (
"cursor" in (e.message or "")
or "cursor" in (e.data or {}).get("message", "")
)
):
raise
# Fallback to a cursor-based pagination strategy
# This can happen for "large datasets", but as far as we can tell
# the error is not documented anywhere on the web.
# Error message:
# "Pagination with the page parameter is not supported for large datasets,
# please use cursor based pagination (after/before)"
yield from _paginate_until_error(
git_objs, cursor_url, prev_num_objs, cursor_url_callback
)
def _get_userinfo(user: NamedUser) -> dict[str, str]:
def _safe_get(attr_name: str) -> str | None:
try:
return cast(str | None, getattr(user, attr_name))
except GithubException:
logging.debug(f"Error getting {attr_name} for user")
return None
return {
k: v
for k, v in {
"login": _safe_get("login"),
"name": _safe_get("name"),
"email": _safe_get("email"),
}.items()
if v is not None
}
def _convert_pr_to_document(
pull_request: PullRequest, repo_external_access: ExternalAccess | None
) -> Document:
repo_name = pull_request.base.repo.full_name if pull_request.base else ""
doc_metadata = DocMetadata(repo=repo_name)
file_content_byte = pull_request.body.encode('utf-8') if pull_request.body else b""
name = sanitize_filename(pull_request.title, "md")
return Document(
id=pull_request.html_url,
blob= file_content_byte,
source=DocumentSource.GITHUB,
external_access=repo_external_access,
semantic_identifier=f"{pull_request.number}:{name}",
# updated_at is UTC time but is timezone unaware, explicitly add UTC
# as there is logic in indexing to prevent wrong timestamped docs
# due to local time discrepancies with UTC
doc_updated_at=(
pull_request.updated_at.replace(tzinfo=timezone.utc)
if pull_request.updated_at
else None
),
extension=".md",
# this metadata is used in perm sync
size_bytes=len(file_content_byte) if file_content_byte else 0,
primary_owners=[],
doc_metadata=doc_metadata.model_dump(),
metadata={
k: [str(vi) for vi in v] if isinstance(v, list) else str(v)
for k, v in {
"object_type": "PullRequest",
"id": pull_request.number,
"merged": pull_request.merged,
"state": pull_request.state,
"user": _get_userinfo(pull_request.user) if pull_request.user else None,
"assignees": [
_get_userinfo(assignee) for assignee in pull_request.assignees
],
"repo": (
pull_request.base.repo.full_name if pull_request.base else None
),
"num_commits": str(pull_request.commits),
"num_files_changed": str(pull_request.changed_files),
"labels": [label.name for label in pull_request.labels],
"created_at": (
pull_request.created_at.replace(tzinfo=timezone.utc)
if pull_request.created_at
else None
),
"updated_at": (
pull_request.updated_at.replace(tzinfo=timezone.utc)
if pull_request.updated_at
else None
),
"closed_at": (
pull_request.closed_at.replace(tzinfo=timezone.utc)
if pull_request.closed_at
else None
),
"merged_at": (
pull_request.merged_at.replace(tzinfo=timezone.utc)
if pull_request.merged_at
else None
),
"merged_by": (
_get_userinfo(pull_request.merged_by)
if pull_request.merged_by
else None
),
}.items()
if v is not None
},
)
def _fetch_issue_comments(issue: Issue) -> str:
comments = issue.get_comments()
return "\nComment: ".join(comment.body for comment in comments)
def _convert_issue_to_document(
issue: Issue, repo_external_access: ExternalAccess | None
) -> Document:
repo_name = issue.repository.full_name if issue.repository else ""
doc_metadata = DocMetadata(repo=repo_name)
file_content_byte = issue.body.encode('utf-8') if issue.body else b""
name = sanitize_filename(issue.title, "md")
return Document(
id=issue.html_url,
blob=file_content_byte,
source=DocumentSource.GITHUB,
extension=".md",
external_access=repo_external_access,
semantic_identifier=f"{issue.number}:{name}",
# updated_at is UTC time but is timezone unaware
doc_updated_at=issue.updated_at.replace(tzinfo=timezone.utc),
# this metadata is used in perm sync
doc_metadata=doc_metadata.model_dump(),
size_bytes=len(file_content_byte) if file_content_byte else 0,
primary_owners=[_get_userinfo(issue.user) if issue.user else None],
metadata={
k: [str(vi) for vi in v] if isinstance(v, list) else str(v)
for k, v in {
"object_type": "Issue",
"id": issue.number,
"state": issue.state,
"user": _get_userinfo(issue.user) if issue.user else None,
"assignees": [_get_userinfo(assignee) for assignee in issue.assignees],
"repo": issue.repository.full_name if issue.repository else None,
"labels": [label.name for label in issue.labels],
"created_at": (
issue.created_at.replace(tzinfo=timezone.utc)
if issue.created_at
else None
),
"updated_at": (
issue.updated_at.replace(tzinfo=timezone.utc)
if issue.updated_at
else None
),
"closed_at": (
issue.closed_at.replace(tzinfo=timezone.utc)
if issue.closed_at
else None
),
"closed_by": (
_get_userinfo(issue.closed_by) if issue.closed_by else None
),
}.items()
if v is not None
},
)
class GithubConnectorStage(Enum):
START = "start"
PRS = "prs"
ISSUES = "issues"
class GithubConnectorCheckpoint(ConnectorCheckpoint):
stage: GithubConnectorStage
curr_page: int
cached_repo_ids: list[int] | None = None
cached_repo: SerializedRepository | None = None
# Used for the fallback cursor-based pagination strategy
num_retrieved: int
cursor_url: str | None = None
def reset(self) -> None:
"""
Resets curr_page, num_retrieved, and cursor_url to their initial values (0, 0, None)
"""
self.curr_page = 0
self.num_retrieved = 0
self.cursor_url = None
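Checkpoints are persisted as JSON between runs (see validate_checkpoint_json and build_dummy_checkpoint further down); a small round-trip sketch using the same field values as the dummy checkpoint:

cp = GithubConnectorCheckpoint(
    stage=GithubConnectorStage.PRS, curr_page=0, has_more=True, num_retrieved=0
)
payload = cp.model_dump_json()                       # what gets stored between runs
restored = GithubConnectorCheckpoint.model_validate_json(payload)
assert restored.stage == GithubConnectorStage.PRS and restored.cursor_url is None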
def make_cursor_url_callback(
checkpoint: GithubConnectorCheckpoint,
) -> Callable[[str | None, int], None]:
def cursor_url_callback(cursor_url: str | None, num_objs: int) -> None:
# we want to maintain the old cursor url so code after retrieval
# can determine that we are using the fallback cursor-based pagination strategy
if cursor_url:
checkpoint.cursor_url = cursor_url
checkpoint.num_retrieved = num_objs
return cursor_url_callback
class GithubConnector(CheckpointedConnectorWithPermSyncGH[GithubConnectorCheckpoint]):
def __init__(
self,
repo_owner: str,
repositories: str | None = None,
state_filter: str = "all",
include_prs: bool = True,
include_issues: bool = False,
) -> None:
self.repo_owner = repo_owner
self.repositories = repositories
self.state_filter = state_filter
self.include_prs = include_prs
self.include_issues = include_issues
self.github_client: Github | None = None
def load_credentials(self, credentials: dict[str, Any]) -> dict[str, Any] | None:
# defaults to 30 items per page, can be set to as high as 100
token = credentials["github_access_token"]
auth = Auth.Token(token)
if GITHUB_CONNECTOR_BASE_URL:
self.github_client = Github(
auth=auth,
base_url=GITHUB_CONNECTOR_BASE_URL,
per_page=ITEMS_PER_PAGE,
)
else:
self.github_client = Github(
auth=auth,
per_page=ITEMS_PER_PAGE,
)
return None
def get_github_repo(
self, github_client: Github, attempt_num: int = 0
) -> Repository.Repository:
if attempt_num > _MAX_NUM_RATE_LIMIT_RETRIES:
raise RuntimeError(
"Re-tried fetching repo too many times. Something is going wrong with fetching objects from Github"
)
try:
return github_client.get_repo(f"{self.repo_owner}/{self.repositories}")
except RateLimitExceededException:
sleep_after_rate_limit_exception(github_client)
return self.get_github_repo(github_client, attempt_num + 1)
def get_github_repos(
self, github_client: Github, attempt_num: int = 0
) -> list[Repository.Repository]:
"""Get specific repositories based on comma-separated repo_name string."""
if attempt_num > _MAX_NUM_RATE_LIMIT_RETRIES:
raise RuntimeError(
"Re-tried fetching repos too many times. Something is going wrong with fetching objects from Github"
)
try:
repos = []
# Split repo_name by comma and strip whitespace
repo_names = [
name.strip() for name in (cast(str, self.repositories)).split(",")
]
for repo_name in repo_names:
if repo_name: # Skip empty strings
try:
repo = github_client.get_repo(f"{self.repo_owner}/{repo_name}")
repos.append(repo)
except GithubException as e:
logging.warning(
f"Could not fetch repo {self.repo_owner}/{repo_name}: {e}"
)
return repos
except RateLimitExceededException:
sleep_after_rate_limit_exception(github_client)
return self.get_github_repos(github_client, attempt_num + 1)
def get_all_repos(
self, github_client: Github, attempt_num: int = 0
) -> list[Repository.Repository]:
if attempt_num > _MAX_NUM_RATE_LIMIT_RETRIES:
raise RuntimeError(
"Re-tried fetching repos too many times. Something is going wrong with fetching objects from Github"
)
try:
# Try to get organization first
try:
org = github_client.get_organization(self.repo_owner)
return list(org.get_repos())
except GithubException:
# If not an org, try as a user
user = github_client.get_user(self.repo_owner)
return list(user.get_repos())
except RateLimitExceededException:
sleep_after_rate_limit_exception(github_client)
return self.get_all_repos(github_client, attempt_num + 1)
def _pull_requests_func(
self, repo: Repository.Repository
) -> Callable[[], PaginatedList[PullRequest]]:
return lambda: repo.get_pulls(
state=self.state_filter, sort="updated", direction="desc"
)
def _issues_func(
self, repo: Repository.Repository
) -> Callable[[], PaginatedList[Issue]]:
return lambda: repo.get_issues(
state=self.state_filter, sort="updated", direction="desc"
)
def _fetch_from_github(
self,
checkpoint: GithubConnectorCheckpoint,
start: datetime | None = None,
end: datetime | None = None,
include_permissions: bool = False,
) -> Generator[Document | ConnectorFailure, None, GithubConnectorCheckpoint]:
if self.github_client is None:
raise ConnectorMissingCredentialError("GitHub")
checkpoint = copy.deepcopy(checkpoint)
# First run of the connector, fetch all repos and store in checkpoint
if checkpoint.cached_repo_ids is None:
repos = []
if self.repositories:
if "," in self.repositories:
# Multiple repositories specified
repos = self.get_github_repos(self.github_client)
else:
# Single repository (backward compatibility)
repos = [self.get_github_repo(self.github_client)]
else:
# All repositories
repos = self.get_all_repos(self.github_client)
if not repos:
checkpoint.has_more = False
return checkpoint
curr_repo = repos.pop()
checkpoint.cached_repo_ids = [repo.id for repo in repos]
checkpoint.cached_repo = SerializedRepository(
id=curr_repo.id,
headers=curr_repo.raw_headers,
raw_data=curr_repo.raw_data,
)
checkpoint.stage = GithubConnectorStage.PRS
checkpoint.curr_page = 0
# save checkpoint with repo ids retrieved
return checkpoint
if checkpoint.cached_repo is None:
raise ValueError("No repo saved in checkpoint")
# Deserialize the repository from the checkpoint
repo = deserialize_repository(checkpoint.cached_repo, self.github_client)
cursor_url_callback = make_cursor_url_callback(checkpoint)
repo_external_access: ExternalAccess | None = None
if include_permissions:
repo_external_access = get_external_access_permission(
repo, self.github_client
)
if self.include_prs and checkpoint.stage == GithubConnectorStage.PRS:
logging.info(f"Fetching PRs for repo: {repo.name}")
pr_batch = _get_batch_rate_limited(
self._pull_requests_func(repo),
checkpoint.curr_page,
checkpoint.cursor_url,
checkpoint.num_retrieved,
cursor_url_callback,
self.github_client,
)
checkpoint.curr_page += 1 # NOTE: not used for cursor-based fallback
done_with_prs = False
num_prs = 0
pr = None
print("start: ", start)
for pr in pr_batch:
num_prs += 1
print("-"*40)
print("PR name", pr.title)
print("updated at", pr.updated_at)
print("-"*40)
print("\n")
# we iterate backwards in time, so at this point we stop processing prs
if (
start is not None
and pr.updated_at
and pr.updated_at.replace(tzinfo=timezone.utc) <= start
):
done_with_prs = True
break
# Skip PRs updated after the end date
if (
end is not None
and pr.updated_at
and pr.updated_at.replace(tzinfo=timezone.utc) > end
):
continue
try:
yield _convert_pr_to_document(
cast(PullRequest, pr), repo_external_access
)
except Exception as e:
error_msg = f"Error converting PR to document: {e}"
logging.exception(error_msg)
yield ConnectorFailure(
failed_document=DocumentFailure(
document_id=str(pr.id), document_link=pr.html_url
),
failure_message=error_msg,
exception=e,
)
continue
# If we reach this point with a cursor url in the checkpoint, we were using
# the fallback cursor-based pagination strategy. That strategy tries to get all
# PRs, so having cursor_url set means we are done with PRs. However, we need to
# return AFTER the checkpoint reset to avoid infinite loops.
# if we found any PRs on the page and there are more PRs to get, return the checkpoint.
# In offset mode, while indexing without time constraints, the pr batch
# will be empty when we're done.
used_cursor = checkpoint.cursor_url is not None
if num_prs > 0 and not done_with_prs and not used_cursor:
return checkpoint
# if we went past the start date during the loop or there are no more
# prs to get, we move on to issues
checkpoint.stage = GithubConnectorStage.ISSUES
checkpoint.reset()
if used_cursor:
# save the checkpoint after changing stage; next run will continue from issues
return checkpoint
if self.include_issues and checkpoint.stage == GithubConnectorStage.ISSUES:
logging.info(f"Fetching issues for repo: {repo.name}")
issue_batch = list(
_get_batch_rate_limited(
self._issues_func(repo),
checkpoint.curr_page,
checkpoint.cursor_url,
checkpoint.num_retrieved,
cursor_url_callback,
self.github_client,
)
)
checkpoint.curr_page += 1
done_with_issues = False
num_issues = 0
for issue in issue_batch:
num_issues += 1
issue = cast(Issue, issue)
# we iterate backwards in time, so at this point we stop processing issues
if (
start is not None
and issue.updated_at.replace(tzinfo=timezone.utc) <= start
):
done_with_issues = True
break
# Skip issues updated after the end date
if (
end is not None
and issue.updated_at.replace(tzinfo=timezone.utc) > end
):
continue
if issue.pull_request is not None:
# PRs are handled separately
continue
try:
yield _convert_issue_to_document(issue, repo_external_access)
except Exception as e:
error_msg = f"Error converting issue to document: {e}"
logging.exception(error_msg)
yield ConnectorFailure(
failed_document=DocumentFailure(
document_id=str(issue.id),
document_link=issue.html_url,
),
failure_message=error_msg,
exception=e,
)
continue
# if we found any issues on the page, and we're not done, return the checkpoint.
# don't return if we're using cursor-based pagination to avoid infinite loops
if num_issues > 0 and not done_with_issues and not checkpoint.cursor_url:
return checkpoint
# if we went past the start date during the loop or there are no more
# issues to get, we move on to the next repo
checkpoint.stage = GithubConnectorStage.PRS
checkpoint.reset()
checkpoint.has_more = len(checkpoint.cached_repo_ids) > 0
if checkpoint.cached_repo_ids:
next_id = checkpoint.cached_repo_ids.pop()
next_repo = self.github_client.get_repo(next_id)
checkpoint.cached_repo = SerializedRepository(
id=next_id,
headers=next_repo.raw_headers,
raw_data=next_repo.raw_data,
)
checkpoint.stage = GithubConnectorStage.PRS
checkpoint.reset()
if checkpoint.cached_repo_ids:
logging.info(
f"{len(checkpoint.cached_repo_ids)} repos remaining (IDs: {checkpoint.cached_repo_ids})"
)
else:
logging.info("No more repos remaining")
return checkpoint
def _load_from_checkpoint(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: GithubConnectorCheckpoint,
include_permissions: bool = False,
) -> CheckpointOutput[GithubConnectorCheckpoint]:
start_datetime = datetime.fromtimestamp(start, tz=timezone.utc)
# add a day for timezone safety
end_datetime = datetime.fromtimestamp(end, tz=timezone.utc) + ONE_DAY
# Move start time back by 3 hours, since some Issues/PRs are getting dropped
# Could be due to delayed processing on GitHub side
# The non-updated issues since last poll will be shortcut-ed and not embedded
# adjusted_start_datetime = start_datetime - timedelta(hours=3)
adjusted_start_datetime = start_datetime
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
if adjusted_start_datetime < epoch:
adjusted_start_datetime = epoch
return self._fetch_from_github(
checkpoint,
start=adjusted_start_datetime,
end=end_datetime,
include_permissions=include_permissions,
)
@override
def load_from_checkpoint(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: GithubConnectorCheckpoint,
) -> CheckpointOutput[GithubConnectorCheckpoint]:
return self._load_from_checkpoint(
start, end, checkpoint, include_permissions=False
)
@override
def load_from_checkpoint_with_perm_sync(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: GithubConnectorCheckpoint,
) -> CheckpointOutput[GithubConnectorCheckpoint]:
return self._load_from_checkpoint(
start, end, checkpoint, include_permissions=True
)
def validate_connector_settings(self) -> None:
if self.github_client is None:
raise ConnectorMissingCredentialError("GitHub credentials not loaded.")
if not self.repo_owner:
raise ConnectorValidationError(
"Invalid connector settings: 'repo_owner' must be provided."
)
try:
if self.repositories:
if "," in self.repositories:
# Multiple repositories specified
repo_names = [name.strip() for name in self.repositories.split(",")]
if not repo_names:
raise ConnectorValidationError(
"Invalid connector settings: No valid repository names provided."
)
# Validate at least one repository exists and is accessible
valid_repos = False
validation_errors = []
for repo_name in repo_names:
if not repo_name:
continue
try:
test_repo = self.github_client.get_repo(
f"{self.repo_owner}/{repo_name}"
)
logging.info(
f"Successfully accessed repository: {self.repo_owner}/{repo_name}"
)
test_repo.get_contents("")
valid_repos = True
# If at least one repo is valid, we can proceed
break
except GithubException as e:
validation_errors.append(
f"Repository '{repo_name}': {e.data.get('message', str(e))}"
)
if not valid_repos:
error_msg = (
"None of the specified repositories could be accessed: "
)
error_msg += ", ".join(validation_errors)
raise ConnectorValidationError(error_msg)
else:
# Single repository (backward compatibility)
test_repo = self.github_client.get_repo(
f"{self.repo_owner}/{self.repositories}"
)
test_repo.get_contents("")
else:
# Try to get organization first
try:
org = self.github_client.get_organization(self.repo_owner)
total_count = org.get_repos().totalCount
if total_count == 0:
raise ConnectorValidationError(
f"Found no repos for organization: {self.repo_owner}. "
"Does the credential have the right scopes?"
)
except GithubException as e:
# Check for missing SSO
MISSING_SSO_ERROR_MESSAGE = "You must grant your Personal Access token access to this organization".lower()
if MISSING_SSO_ERROR_MESSAGE in str(e).lower():
SSO_GUIDE_LINK = (
"https://docs.github.com/en/enterprise-cloud@latest/authentication/"
"authenticating-with-saml-single-sign-on/"
"authorizing-a-personal-access-token-for-use-with-saml-single-sign-on"
)
raise ConnectorValidationError(
f"Your GitHub token is missing authorization to access the "
f"`{self.repo_owner}` organization. Please follow the guide to "
f"authorize your token: {SSO_GUIDE_LINK}"
)
# If not an org, try as a user
user = self.github_client.get_user(self.repo_owner)
# Check if we can access any repos
total_count = user.get_repos().totalCount
if total_count == 0:
raise ConnectorValidationError(
f"Found no repos for user: {self.repo_owner}. "
"Does the credential have the right scopes?"
)
except RateLimitExceededException:
raise UnexpectedValidationError(
"Validation failed due to GitHub rate-limits being exceeded. Please try again later."
)
except GithubException as e:
if e.status == 401:
raise CredentialExpiredError(
"GitHub credential appears to be invalid or expired (HTTP 401)."
)
elif e.status == 403:
raise InsufficientPermissionsError(
"Your GitHub token does not have sufficient permissions for this repository (HTTP 403)."
)
elif e.status == 404:
if self.repositories:
if "," in self.repositories:
raise ConnectorValidationError(
f"None of the specified GitHub repositories could be found for owner: {self.repo_owner}"
)
else:
raise ConnectorValidationError(
f"GitHub repository not found with name: {self.repo_owner}/{self.repositories}"
)
else:
raise ConnectorValidationError(
f"GitHub user or organization not found: {self.repo_owner}"
)
else:
raise ConnectorValidationError(
f"Unexpected GitHub error (status={e.status}): {e.data}"
)
except Exception as exc:
raise Exception(
f"Unexpected error during GitHub settings validation: {exc}"
)
def validate_checkpoint_json(
self, checkpoint_json: str
) -> GithubConnectorCheckpoint:
return GithubConnectorCheckpoint.model_validate_json(checkpoint_json)
def build_dummy_checkpoint(self) -> GithubConnectorCheckpoint:
return GithubConnectorCheckpoint(
stage=GithubConnectorStage.PRS, curr_page=0, has_more=True, num_retrieved=0
)
if __name__ == "__main__":
# Initialize the connector
connector = GithubConnector(
repo_owner="EvoAgentX",
repositories="EvoAgentX",
include_issues=True,
include_prs=False,
)
connector.load_credentials(
{"github_access_token": "<Your_GitHub_Access_Token>"}
)
if connector.github_client:
get_external_access_permission(
connector.get_github_repos(connector.github_client).pop(),
connector.github_client,
)
# Create a time range from epoch to now
end_time = datetime.now(timezone.utc)
start_time = datetime.fromtimestamp(0, tz=timezone.utc)
time_range = (start_time, end_time)
# Initialize the runner with a batch size of 10
runner: ConnectorRunner[GithubConnectorCheckpoint] = ConnectorRunner(
connector, batch_size=10, include_permissions=False, time_range=time_range
)
# Get initial checkpoint
checkpoint = connector.build_dummy_checkpoint()
# Run the connector
while checkpoint.has_more:
for doc_batch, failure, next_checkpoint in runner.run(checkpoint):
if doc_batch:
print(f"Retrieved batch of {len(doc_batch)} documents")
for doc in doc_batch:
print(f"Document: {doc.semantic_identifier}")
if failure:
print(f"Failure: {failure.failure_message}")
if next_checkpoint:
checkpoint = next_checkpoint


@ -0,0 +1,17 @@
from typing import Any
from github import Repository
from github.Requester import Requester
from pydantic import BaseModel
class SerializedRepository(BaseModel):
# id is part of the raw_data as well, just pulled out for convenience
id: int
headers: dict[str, str | int]
raw_data: dict[str, Any]
def to_Repository(self, requester: Requester) -> Repository.Repository:
return Repository.Repository(
requester, self.headers, self.raw_data, completed=True
)


@ -0,0 +1,24 @@
import time
import logging
from datetime import datetime
from datetime import timedelta
from datetime import timezone
from github import Github
def sleep_after_rate_limit_exception(github_client: Github) -> None:
"""
Sleep until the GitHub rate limit resets.
Args:
github_client: The GitHub client that hit the rate limit
"""
sleep_time = github_client.get_rate_limit().core.reset.replace(
tzinfo=timezone.utc
) - datetime.now(tz=timezone.utc)
sleep_time += timedelta(minutes=1) # add an extra minute just to be safe
logging.info(
"Ran into Github rate-limit. Sleeping %s seconds.", sleep_time.seconds
)
time.sleep(sleep_time.total_seconds())


@ -0,0 +1,44 @@
import logging
from github import Github
from github.Repository import Repository
from common.data_source.models import ExternalAccess
from .models import SerializedRepository
def get_external_access_permission(
repo: Repository, github_client: Github
) -> ExternalAccess:
"""
Get the external access permission for a repository.
This functionality requires Enterprise Edition.
"""
# RAGFlow doesn't implement the Onyx EE external-permissions system.
# Default to private/unknown permissions.
return ExternalAccess.empty()
def deserialize_repository(
cached_repo: SerializedRepository, github_client: Github
) -> Repository:
"""
Deserialize a SerializedRepository back into a Repository object.
"""
# Try to access the requester - different PyGithub versions may use different attribute names
try:
# Try to get the requester using getattr to avoid linter errors
requester = getattr(github_client, "_requester", None)
if requester is None:
requester = getattr(github_client, "_Github__requester", None)
if requester is None:
# If we can't find the requester attribute, we need to fall back to recreating the repo
raise AttributeError("Could not find requester attribute")
return cached_repo.to_Repository(requester)
except Exception as e:
# If all else fails, re-fetch the repo directly
logging.warning("Failed to deserialize repository: %s. Attempting to re-fetch.", e)
repo_id = cached_repo.id
return github_client.get_repo(repo_id)


@ -191,7 +191,7 @@ def get_credentials_from_env(email: str, oauth: bool = False, source="drive") ->
DB_CREDENTIALS_AUTHENTICATION_METHOD: "uploaded",
}
def sanitize_filename(name: str) -> str:
def sanitize_filename(name: str, extension: str = "txt") -> str:
"""
Soft sanitize for MinIO/S3:
- Replace only prohibited characters with a space.
@ -199,7 +199,7 @@ def sanitize_filename(name: str) -> str:
- Collapse multiple spaces.
"""
if name is None:
return "file.txt"
return f"file.{extension}"
name = str(name).strip()
@ -222,9 +222,8 @@ def sanitize_filename(name: str) -> str:
base, ext = os.path.splitext(name)
name = base[:180].rstrip() + ext
# Ensure there is an extension
if not os.path.splitext(name)[1]:
name += ".txt"
name += f".{extension}"
return name
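The new `extension` parameter is what lets the GitHub connector request Markdown filenames (`sanitize_filename(title, "md")`). A hedged usage sketch based only on the lines shown here (the character-replacement details live in the unchanged part of the function):

assert sanitize_filename(None, "md") == "file.md"        # None falls back to f"file.{extension}"
assert sanitize_filename("Fix rate limit handling", "md").endswith(".md")  # extension appended when missing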


@ -0,0 +1,724 @@
import copy
import email
from email.header import decode_header
import imaplib
import logging
import os
import re
import uuid
from datetime import datetime, timedelta
from datetime import timezone
from email.message import Message
from email.utils import collapse_rfc2231_value, parseaddr
from enum import Enum
from typing import Any
from typing import cast
import bs4
from pydantic import BaseModel
from common.data_source.config import IMAP_CONNECTOR_SIZE_THRESHOLD, DocumentSource
from common.data_source.interfaces import CheckpointOutput, CheckpointedConnectorWithPermSync, CredentialsConnector, CredentialsProviderInterface
from common.data_source.models import BasicExpertInfo, ConnectorCheckpoint, Document, ExternalAccess, SecondsSinceUnixEpoch
_DEFAULT_IMAP_PORT_NUMBER = int(os.environ.get("IMAP_PORT", 993))
_IMAP_OKAY_STATUS = "OK"
_PAGE_SIZE = 100
_USERNAME_KEY = "imap_username"
_PASSWORD_KEY = "imap_password"
class Header(str, Enum):
SUBJECT_HEADER = "subject"
FROM_HEADER = "from"
TO_HEADER = "to"
CC_HEADER = "cc"
DELIVERED_TO_HEADER = (
"Delivered-To" # Used in mailing lists instead of the "to" header.
)
DATE_HEADER = "date"
MESSAGE_ID_HEADER = "Message-ID"
class EmailHeaders(BaseModel):
"""
Model for email headers extracted from IMAP messages.
"""
id: str
subject: str
sender: str
recipients: str | None
cc: str | None
date: datetime
@classmethod
def from_email_msg(cls, email_msg: Message) -> "EmailHeaders":
def _decode(header: str, default: str | None = None) -> str | None:
value = email_msg.get(header, default)
if not value:
return None
decoded_fragments = decode_header(value)
decoded_strings: list[str] = []
for decoded_value, encoding in decoded_fragments:
if isinstance(decoded_value, bytes):
try:
decoded_strings.append(
decoded_value.decode(encoding or "utf-8", errors="replace")
)
except LookupError:
decoded_strings.append(
decoded_value.decode("utf-8", errors="replace")
)
elif isinstance(decoded_value, str):
decoded_strings.append(decoded_value)
else:
decoded_strings.append(str(decoded_value))
return "".join(decoded_strings)
def _parse_date(date_str: str | None) -> datetime | None:
if not date_str:
return None
try:
return email.utils.parsedate_to_datetime(date_str)
except (TypeError, ValueError):
return None
message_id = _decode(header=Header.MESSAGE_ID_HEADER)
if not message_id:
message_id = f"<generated-{uuid.uuid4()}@imap.local>"
# It's possible for the subject line to not exist or be an empty string.
subject = _decode(header=Header.SUBJECT_HEADER) or "Unknown Subject"
from_ = _decode(header=Header.FROM_HEADER)
to = _decode(header=Header.TO_HEADER)
if not to:
to = _decode(header=Header.DELIVERED_TO_HEADER)
cc = _decode(header=Header.CC_HEADER)
date_str = _decode(header=Header.DATE_HEADER)
date = _parse_date(date_str=date_str)
if not date:
date = datetime.now(tz=timezone.utc)
# If any of the above are `None`, model validation will fail.
# Therefore, no guards (i.e.: `if <header> is None: raise RuntimeError(..)`) were written.
return cls.model_validate(
{
"id": message_id,
"subject": subject,
"sender": from_,
"recipients": to,
"cc": cc,
"date": date,
}
)
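For reference, `_decode` builds on the stdlib `email.header.decode_header`, which splits a header into (bytes-or-str, charset) fragments:

from email.header import decode_header

fragments = decode_header("=?utf-8?B?SGVsbG8gV29ybGQ=?=")
print(fragments)  # [(b'Hello World', 'utf-8')]
print("".join(
    frag.decode(enc or "utf-8", errors="replace") if isinstance(frag, bytes) else frag
    for frag, enc in fragments
))  # Hello World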
class CurrentMailbox(BaseModel):
mailbox: str
todo_email_ids: list[str]
# An email has a list of mailboxes.
# Each mailbox has a list of email-ids inside of it.
#
# Usage:
# To use this checkpointer, first fetch all the mailboxes.
# Then, pop a mailbox and fetch all of its email-ids.
# Then, pop each email-id and fetch its content (and parse it, etc..).
# When you have popped all email-ids for this mailbox, pop the next mailbox and repeat the above process until you're done.
#
# For initial checkpointing, set both fields to `None`.
class ImapCheckpoint(ConnectorCheckpoint):
todo_mailboxes: list[str] | None = None
current_mailbox: CurrentMailbox | None = None
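A short sketch of the life cycle described above (illustrative values only): the dummy checkpoint starts with both fields unset, the first run fills `todo_mailboxes`, and later runs pop mailboxes and email ids while the state round-trips through JSON:

cp = ImapCheckpoint(has_more=True)                  # dummy checkpoint: both fields default to None
cp.todo_mailboxes = ['"INBOX"', '"Archive"']        # filled on the first run (already quoted/sanitized)
cp.current_mailbox = CurrentMailbox(mailbox='"INBOX"', todo_email_ids=["1", "2", "3"])
restored = ImapCheckpoint.model_validate_json(cp.model_dump_json())
assert restored.current_mailbox is not None
assert restored.current_mailbox.todo_email_ids == ["1", "2", "3"]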
class LoginState(str, Enum):
LoggedIn = "logged_in"
LoggedOut = "logged_out"
class ImapConnector(
CredentialsConnector,
CheckpointedConnectorWithPermSync,
):
def __init__(
self,
host: str,
port: int = _DEFAULT_IMAP_PORT_NUMBER,
mailboxes: list[str] | None = None,
) -> None:
self._host = host
self._port = port
self._mailboxes = mailboxes
self._credentials: dict[str, Any] | None = None
@property
def credentials(self) -> dict[str, Any]:
if not self._credentials:
raise RuntimeError(
"Credentials have not been initialized; call `set_credentials_provider` first"
)
return self._credentials
def _get_mail_client(self) -> imaplib.IMAP4_SSL:
"""
Returns a new `imaplib.IMAP4_SSL` instance.
The `imaplib.IMAP4_SSL` object is supposed to be an "ephemeral" object; it's not something that you can login,
logout, then log back into again. I.e., the following will fail:
```py
mail_client.login(..)
mail_client.logout();
mail_client.login(..)
```
Therefore, you need a fresh, new instance in order to operate with IMAP. This function gives one to you.
# Notes
This function will throw an error if the credentials have not yet been set.
"""
def get_or_raise(name: str) -> str:
value = self.credentials.get(name)
if not value:
raise RuntimeError(f"Credential item {name=} was not found")
if not isinstance(value, str):
raise RuntimeError(
f"Credential item {name=} must be of type str, instead received {type(name)=}"
)
return value
username = get_or_raise(_USERNAME_KEY)
password = get_or_raise(_PASSWORD_KEY)
mail_client = imaplib.IMAP4_SSL(host=self._host, port=self._port)
status, _data = mail_client.login(user=username, password=password)
if status != _IMAP_OKAY_STATUS:
raise RuntimeError(f"Failed to log into imap server; {status=}")
return mail_client
def _load_from_checkpoint(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: ImapCheckpoint,
include_perm_sync: bool,
) -> CheckpointOutput[ImapCheckpoint]:
checkpoint = cast(ImapCheckpoint, copy.deepcopy(checkpoint))
checkpoint.has_more = True
mail_client = self._get_mail_client()
if checkpoint.todo_mailboxes is None:
# This is the dummy checkpoint.
# Fill it with mailboxes first.
if self._mailboxes:
checkpoint.todo_mailboxes = _sanitize_mailbox_names(self._mailboxes)
else:
fetched_mailboxes = _fetch_all_mailboxes_for_email_account(
mail_client=mail_client
)
if not fetched_mailboxes:
raise RuntimeError(
"Failed to find any mailboxes for this email account"
)
checkpoint.todo_mailboxes = _sanitize_mailbox_names(fetched_mailboxes)
return checkpoint
if (
not checkpoint.current_mailbox
or not checkpoint.current_mailbox.todo_email_ids
):
if not checkpoint.todo_mailboxes:
checkpoint.has_more = False
return checkpoint
mailbox = checkpoint.todo_mailboxes.pop()
email_ids = _fetch_email_ids_in_mailbox(
mail_client=mail_client,
mailbox=mailbox,
start=start,
end=end,
)
checkpoint.current_mailbox = CurrentMailbox(
mailbox=mailbox,
todo_email_ids=email_ids,
)
_select_mailbox(
mail_client=mail_client, mailbox=checkpoint.current_mailbox.mailbox
)
current_todos = cast(
list, copy.deepcopy(checkpoint.current_mailbox.todo_email_ids[:_PAGE_SIZE])
)
checkpoint.current_mailbox.todo_email_ids = (
checkpoint.current_mailbox.todo_email_ids[_PAGE_SIZE:]
)
for email_id in current_todos:
email_msg = _fetch_email(mail_client=mail_client, email_id=email_id)
if not email_msg:
logging.warning(f"Failed to fetch message {email_id=}; skipping")
continue
email_headers = EmailHeaders.from_email_msg(email_msg=email_msg)
msg_dt = email_headers.date
if msg_dt.tzinfo is None:
msg_dt = msg_dt.replace(tzinfo=timezone.utc)
else:
msg_dt = msg_dt.astimezone(timezone.utc)
start_dt = datetime.fromtimestamp(start, tz=timezone.utc)
end_dt = datetime.fromtimestamp(end, tz=timezone.utc)
if not (start_dt < msg_dt <= end_dt):
continue
email_doc = _convert_email_headers_and_body_into_document(
email_msg=email_msg,
email_headers=email_headers,
include_perm_sync=include_perm_sync,
)
yield email_doc
attachments = extract_attachments(email_msg)
for att in attachments:
yield attachment_to_document(email_doc, att, email_headers)
return checkpoint
# impls for BaseConnector
def load_credentials(self, credentials: dict[str, Any]) -> dict[str, Any] | None:
self._credentials = credentials
return None
def validate_connector_settings(self) -> None:
self._get_mail_client()
# impls for CredentialsConnector
def set_credentials_provider(
self, credentials_provider: CredentialsProviderInterface
) -> None:
self._credentials = credentials_provider.get_credentials()
# impls for CheckpointedConnector
def load_from_checkpoint(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: ImapCheckpoint,
) -> CheckpointOutput[ImapCheckpoint]:
return self._load_from_checkpoint(
start=start, end=end, checkpoint=checkpoint, include_perm_sync=False
)
def build_dummy_checkpoint(self) -> ImapCheckpoint:
return ImapCheckpoint(has_more=True)
def validate_checkpoint_json(self, checkpoint_json: str) -> ImapCheckpoint:
return ImapCheckpoint.model_validate_json(json_data=checkpoint_json)
# impls for CheckpointedConnectorWithPermSync
def load_from_checkpoint_with_perm_sync(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: ImapCheckpoint,
) -> CheckpointOutput[ImapCheckpoint]:
return self._load_from_checkpoint(
start=start, end=end, checkpoint=checkpoint, include_perm_sync=True
)
def _fetch_all_mailboxes_for_email_account(mail_client: imaplib.IMAP4_SSL) -> list[str]:
status, mailboxes_data = mail_client.list('""', "*")
if status != _IMAP_OKAY_STATUS:
raise RuntimeError(f"Failed to fetch mailboxes; {status=}")
mailboxes = []
for mailboxes_raw in mailboxes_data:
if isinstance(mailboxes_raw, bytes):
mailboxes_str = mailboxes_raw.decode()
elif isinstance(mailboxes_raw, str):
mailboxes_str = mailboxes_raw
else:
logging.warning(
f"Expected the mailbox data to be of type str, instead got {type(mailboxes_raw)=} {mailboxes_raw}; skipping"
)
continue
# The mailbox LIST response output can be found here:
# https://www.rfc-editor.org/rfc/rfc3501.html#section-7.2.2
#
# The general format is:
# `(<name-attributes>) <hierarchy-delimiter> <mailbox-name>`
#
# The below regex matches on that pattern; from there, we select the 3rd match (index 2), which is the mailbox-name.
match = re.match(r'\([^)]*\)\s+"([^"]+)"\s+"?(.+?)"?$', mailboxes_str)
if not match:
logging.warning(
f"Invalid mailbox-data formatting structure: {mailboxes_str=}; skipping"
)
continue
mailbox = match.group(2)
mailboxes.append(mailbox)
if not mailboxes:
logging.warning(
"No mailboxes parsed from LIST response; falling back to INBOX"
)
return ["INBOX"]
return mailboxes
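For example, a typical LIST response line and what the regex above pulls out of it (sample line only):

import re

line = r'(\HasNoChildren) "/" "INBOX"'
match = re.match(r'\([^)]*\)\s+"([^"]+)"\s+"?(.+?)"?$', line)
assert match is not None
print(match.group(1))  # '/'     -> hierarchy delimiter
print(match.group(2))  # 'INBOX' -> mailbox name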
def _select_mailbox(mail_client: imaplib.IMAP4_SSL, mailbox: str) -> bool:
try:
status, _ = mail_client.select(mailbox=mailbox, readonly=True)
if status != _IMAP_OKAY_STATUS:
return False
return True
except Exception:
return False
def _fetch_email_ids_in_mailbox(
mail_client: imaplib.IMAP4_SSL,
mailbox: str,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
) -> list[str]:
if not _select_mailbox(mail_client, mailbox):
logging.warning(f"Skip mailbox: {mailbox}")
return []
start_dt = datetime.fromtimestamp(start, tz=timezone.utc)
end_dt = datetime.fromtimestamp(end, tz=timezone.utc) + timedelta(days=1)
start_str = start_dt.strftime("%d-%b-%Y")
end_str = end_dt.strftime("%d-%b-%Y")
search_criteria = f'(SINCE "{start_str}" BEFORE "{end_str}")'
status, email_ids_byte_array = mail_client.search(None, search_criteria)
if status != _IMAP_OKAY_STATUS or not email_ids_byte_array:
raise RuntimeError(f"Failed to fetch email ids; {status=}")
email_ids: bytes = email_ids_byte_array[0]
return [email_id.decode() for email_id in email_ids.split()]
def _fetch_email(mail_client: imaplib.IMAP4_SSL, email_id: str) -> Message | None:
status, msg_data = mail_client.fetch(message_set=email_id, message_parts="(RFC822)")
if status != _IMAP_OKAY_STATUS or not msg_data:
return None
data = msg_data[0]
if not isinstance(data, tuple):
raise RuntimeError(
f"Message data should be a tuple; instead got a {type(data)=} {data=}"
)
_, raw_email = data
return email.message_from_bytes(raw_email)
def _convert_email_headers_and_body_into_document(
email_msg: Message,
email_headers: EmailHeaders,
include_perm_sync: bool,
) -> Document:
sender_name, sender_addr = _parse_singular_addr(raw_header=email_headers.sender)
to_addrs = (
_parse_addrs(email_headers.recipients)
if email_headers.recipients
else []
)
cc_addrs = (
_parse_addrs(email_headers.cc)
if email_headers.cc
else []
)
all_participants = to_addrs + cc_addrs
expert_info_map = {
recipient_addr: BasicExpertInfo(
display_name=recipient_name, email=recipient_addr
)
for recipient_name, recipient_addr in all_participants
}
if sender_addr not in expert_info_map:
expert_info_map[sender_addr] = BasicExpertInfo(
display_name=sender_name, email=sender_addr
)
email_body = _parse_email_body(email_msg=email_msg, email_headers=email_headers)
primary_owners = list(expert_info_map.values())
external_access = (
ExternalAccess(
external_user_emails=set(expert_info_map.keys()),
external_user_group_ids=set(),
is_public=False,
)
if include_perm_sync
else None
)
return Document(
id=email_headers.id,
title=email_headers.subject,
blob=email_body,
size_bytes=len(email_body),
semantic_identifier=email_headers.subject,
metadata={},
extension='.txt',
doc_updated_at=email_headers.date,
source=DocumentSource.IMAP,
primary_owners=primary_owners,
external_access=external_access,
)
def extract_attachments(email_msg: Message, max_bytes: int = IMAP_CONNECTOR_SIZE_THRESHOLD):
attachments = []
if not email_msg.is_multipart():
return attachments
for part in email_msg.walk():
if part.get_content_maintype() == "multipart":
continue
disposition = (part.get("Content-Disposition") or "").lower()
filename = part.get_filename()
if not (
disposition.startswith("attachment")
or (disposition.startswith("inline") and filename)
):
continue
payload = part.get_payload(decode=True)
if not payload:
continue
if len(payload) > max_bytes:
continue
attachments.append({
"filename": filename or "attachment.bin",
"content_type": part.get_content_type(),
"content_bytes": payload,
"size_bytes": len(payload),
})
return attachments
def decode_mime_filename(raw: str | None) -> str | None:
if not raw:
return None
try:
raw = collapse_rfc2231_value(raw)
except Exception:
pass
parts = decode_header(raw)
decoded = []
for value, encoding in parts:
if isinstance(value, bytes):
decoded.append(value.decode(encoding or "utf-8", errors="replace"))
else:
decoded.append(value)
return "".join(decoded)
def attachment_to_document(
parent_doc: Document,
att: dict,
email_headers: EmailHeaders,
):
raw_filename = att["filename"]
filename = decode_mime_filename(raw_filename) or "attachment.bin"
ext = "." + filename.split(".")[-1] if "." in filename else ""
return Document(
id=f"{parent_doc.id}#att:{filename}",
source=DocumentSource.IMAP,
semantic_identifier=filename,
extension=ext,
blob=att["content_bytes"],
size_bytes=att["size_bytes"],
doc_updated_at=email_headers.date,
primary_owners=parent_doc.primary_owners,
metadata={
"parent_email_id": parent_doc.id,
"parent_subject": email_headers.subject,
"attachment_filename": filename,
"attachment_content_type": att["content_type"],
},
)
def _parse_email_body(
email_msg: Message,
email_headers: EmailHeaders,
) -> str:
body = None
for part in email_msg.walk():
if part.is_multipart():
# Multipart parts are *containers* for other parts, not the actual content itself.
# Therefore, we skip until we find the individual parts instead.
continue
charset = part.get_content_charset() or "utf-8"
try:
raw_payload = part.get_payload(decode=True)
if not isinstance(raw_payload, bytes):
logging.warning(
"Payload section from email was expected to be an array of bytes, instead got "
f"{type(raw_payload)=}, {raw_payload=}"
)
continue
body = raw_payload.decode(charset)
break
except (UnicodeDecodeError, LookupError) as e:
logging.warning(f"Could not decode part with charset {charset}. Error: {e}")
continue
if not body:
logging.warning(
f"Email with {email_headers.id=} has an empty body; returning an empty string"
)
return ""
soup = bs4.BeautifulSoup(markup=body, features="html.parser")
return " ".join(str_section for str_section in soup.stripped_strings)
def _sanitize_mailbox_names(mailboxes: list[str]) -> list[str]:
"""
Mailboxes with special characters in them must be enclosed by double-quotes, as per the IMAP protocol.
Just to be safe, we wrap *all* mailboxes with double-quotes.
"""
return [f'"{mailbox}"' for mailbox in mailboxes if mailbox]
def _parse_addrs(raw_header: str) -> list[tuple[str, str]]:
addrs = raw_header.split(",")
name_addr_pairs = [parseaddr(addr=addr) for addr in addrs if addr]
return [(name, addr) for name, addr in name_addr_pairs if addr]
def _parse_singular_addr(raw_header: str) -> tuple[str, str]:
addrs = _parse_addrs(raw_header=raw_header)
if not addrs:
return ("Unknown", "unknown@example.com")
elif len(addrs) >= 2:
raise RuntimeError(
f"Expected a singular address, but instead got multiple; {raw_header=} {addrs=}"
)
return addrs[0]
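Both helpers lean on the stdlib `email.utils.parseaddr`, which splits a header entry into a (display name, address) pair:

from email.utils import parseaddr

print(parseaddr("Alice Example <alice@example.com>"))  # ('Alice Example', 'alice@example.com')
print(parseaddr("bob@example.com"))                    # ('', 'bob@example.com')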
if __name__ == "__main__":
import time
import uuid
from types import TracebackType
from common.data_source.utils import load_all_docs_from_checkpoint_connector
class OnyxStaticCredentialsProvider(
CredentialsProviderInterface["OnyxStaticCredentialsProvider"]
):
"""Implementation (a very simple one!) to handle static credentials."""
def __init__(
self,
tenant_id: str | None,
connector_name: str,
credential_json: dict[str, Any],
):
self._tenant_id = tenant_id
self._connector_name = connector_name
self._credential_json = credential_json
self._provider_key = str(uuid.uuid4())
def __enter__(self) -> "OnyxStaticCredentialsProvider":
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_value: BaseException | None,
traceback: TracebackType | None,
) -> None:
pass
def get_tenant_id(self) -> str | None:
return self._tenant_id
def get_provider_key(self) -> str:
return self._provider_key
def get_credentials(self) -> dict[str, Any]:
return self._credential_json
def set_credentials(self, credential_json: dict[str, Any]) -> None:
self._credential_json = credential_json
def is_dynamic(self) -> bool:
return False
# from tests.daily.connectors.utils import load_all_docs_from_checkpoint_connector
# from onyx.connectors.credentials_provider import OnyxStaticCredentialsProvider
host = os.environ.get("IMAP_HOST")
mailboxes_str = os.environ.get("IMAP_MAILBOXES","INBOX")
username = os.environ.get("IMAP_USERNAME")
password = os.environ.get("IMAP_PASSWORD")
mailboxes = (
[mailbox.strip() for mailbox in mailboxes_str.split(",")]
if mailboxes_str
else []
)
if not host:
raise RuntimeError("`IMAP_HOST` must be set")
imap_connector = ImapConnector(
host=host,
mailboxes=mailboxes,
)
imap_connector.set_credentials_provider(
OnyxStaticCredentialsProvider(
tenant_id=None,
connector_name=DocumentSource.IMAP,
credential_json={
_USERNAME_KEY: username,
_PASSWORD_KEY: password,
},
)
)
END = time.time()
START = END - 1 * 24 * 60 * 60
for doc in load_all_docs_from_checkpoint_connector(
connector=imap_connector,
start=START,
end=END,
):
print(doc.id,doc.extension)


@ -237,16 +237,13 @@ class BaseConnector(abc.ABC, Generic[CT]):
def validate_perm_sync(self) -> None:
"""
Don't override this; add a function to perm_sync_valid.py in the ee package
to do permission sync validation
Permission-sync validation hook.
RAGFlow doesn't ship the Onyx EE permission-sync validation package.
Connectors that support permission sync should override
`validate_connector_settings()` as needed.
"""
"""
validate_connector_settings_fn = fetch_ee_implementation_or_noop(
"onyx.connectors.perm_sync_valid",
"validate_perm_sync",
noop_return_value=None,
)
validate_connector_settings_fn(self)"""
return None
def set_allow_images(self, value: bool) -> None:
"""Implement if the underlying connector wants to skip/allow image downloading
@ -345,6 +342,17 @@ class CheckpointOutputWrapper(Generic[CT]):
yield None, None, self.next_checkpoint
class CheckpointedConnectorWithPermSyncGH(CheckpointedConnector[CT]):
@abc.abstractmethod
def load_from_checkpoint_with_perm_sync(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: CT,
) -> CheckpointOutput[CT]:
raise NotImplementedError
# Slim connectors retrieve just the ids of documents
class SlimConnector(BaseConnector):
@abc.abstractmethod


@ -94,8 +94,10 @@ class Document(BaseModel):
blob: bytes
doc_updated_at: datetime
size_bytes: int
external_access: Optional[ExternalAccess] = None
primary_owners: Optional[list] = None
metadata: Optional[dict[str, Any]] = None
doc_metadata: Optional[dict[str, Any]] = None
class BasicExpertInfo(BaseModel):


@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import ast
import logging
from typing import Any, Callable, Dict
@ -49,8 +50,8 @@ def meta_filter(metas: dict, filters: list[dict], logic: str = "and"):
try:
if isinstance(input, list):
input = input[0]
input = float(input)
value = float(value)
input = ast.literal_eval(input)
value = ast.literal_eval(value)
except Exception:
pass
if isinstance(input, str):
@ -58,28 +59,41 @@ def meta_filter(metas: dict, filters: list[dict], logic: str = "and"):
if isinstance(value, str):
value = value.lower()
for conds in [
(operator == "contains", input in value if not isinstance(input, list) else all([i in value for i in input])),
(operator == "not contains", input not in value if not isinstance(input, list) else all([i not in value for i in input])),
(operator == "in", input in value if not isinstance(input, list) else all([i in value for i in input])),
(operator == "not in", input not in value if not isinstance(input, list) else all([i not in value for i in input])),
(operator == "start with", str(input).lower().startswith(str(value).lower()) if not isinstance(input, list) else "".join([str(i).lower() for i in input]).startswith(str(value).lower())),
(operator == "end with", str(input).lower().endswith(str(value).lower()) if not isinstance(input, list) else "".join([str(i).lower() for i in input]).endswith(str(value).lower())),
(operator == "empty", not input),
(operator == "not empty", input),
(operator == "=", input == value),
(operator == "", input != value),
(operator == ">", input > value),
(operator == "<", input < value),
(operator == "", input >= value),
(operator == "", input <= value),
]:
try:
if all(conds):
ids.extend(docids)
break
except Exception:
pass
matched = False
try:
if operator == "contains":
matched = input in value if not isinstance(input, list) else all(i in value for i in input)
elif operator == "not contains":
matched = input not in value if not isinstance(input, list) else all(i not in value for i in input)
elif operator == "in":
matched = input in value if not isinstance(input, list) else all(i in value for i in input)
elif operator == "not in":
matched = input not in value if not isinstance(input, list) else all(i not in value for i in input)
elif operator == "start with":
matched = str(input).lower().startswith(str(value).lower()) if not isinstance(input, list) else "".join([str(i).lower() for i in input]).startswith(str(value).lower())
elif operator == "end with":
matched = str(input).lower().endswith(str(value).lower()) if not isinstance(input, list) else "".join([str(i).lower() for i in input]).endswith(str(value).lower())
elif operator == "empty":
matched = not input
elif operator == "not empty":
matched = bool(input)
elif operator == "=":
matched = input == value
elif operator == "":
matched = input != value
elif operator == ">":
matched = input > value
elif operator == "<":
matched = input < value
elif operator == "":
matched = input >= value
elif operator == "":
matched = input <= value
except Exception:
pass
if matched:
ids.extend(docids)
return ids
for k, v2docs in metas.items():
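
The refactor above replaces the tuple-scanning loop with an explicit operator dispatch, which makes the comparison semantics easier to audit. As a minimal illustration of the same pattern (not the repository's actual helper; the `match_op` name and the standalone function are assumptions for this sketch), the dispatch can be isolated like this:

```python
# Minimal sketch of the operator-dispatch pattern shown in the hunk above.
# `match_op` is a hypothetical helper, not part of the RAGFlow codebase.
def match_op(operator: str, input_value, value) -> bool:
    try:
        if operator == "contains":
            return input_value in value
        if operator == "empty":
            return not input_value
        if operator == "not empty":
            return bool(input_value)
        if operator == "=":
            return input_value == value
        if operator == "≠":
            return input_value != value
        if operator == ">":
            return input_value > value
        if operator == "<":
            return input_value < value
        if operator == "≥":
            return input_value >= value
        if operator == "≤":
            return input_value <= value
    except Exception:
        # Mirror the defensive behavior above: incomparable types never match.
        return False
    return False

assert match_op("≥", 3.0, 2.5)
assert not match_op(">", "abc", 5)  # type mismatch is treated as "no match"
```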

View File

@ -170,7 +170,7 @@ def init_settings():
global DATABASE_TYPE, DATABASE
DATABASE_TYPE = os.getenv("DB_TYPE", "mysql")
DATABASE = decrypt_database_config(name=DATABASE_TYPE)
global ALLOWED_LLM_FACTORIES, LLM_FACTORY, LLM_BASE_URL
llm_settings = get_base_config("user_default_llm", {}) or {}
llm_default_models = llm_settings.get("default_models", {}) or {}
@ -334,6 +334,9 @@ def init_settings():
DOC_BULK_SIZE = int(os.environ.get("DOC_BULK_SIZE", 4))
EMBEDDING_BATCH_SIZE = int(os.environ.get("EMBEDDING_BATCH_SIZE", 16))
os.environ["DOTNET_SYSTEM_GLOBALIZATION_INVARIANT"] = "1"
def check_and_install_torch():
global PARALLEL_DEVICES
try:

View File

@ -476,11 +476,13 @@ class RAGFlowPdfParser:
self.boxes = bxs
def _naive_vertical_merge(self, zoomin=3):
bxs = self._assign_column(self.boxes, zoomin)
#bxs = self._assign_column(self.boxes, zoomin)
bxs = self.boxes
grouped = defaultdict(list)
for b in bxs:
grouped[(b["page_number"], b.get("col_id", 0))].append(b)
# grouped[(b["page_number"], b.get("col_id", 0))].append(b)
grouped[(b["page_number"], "x")].append(b)
merged_boxes = []
for (pg, col), bxs in grouped.items():
@ -551,7 +553,7 @@ class RAGFlowPdfParser:
merged_boxes.extend(bxs)
self.boxes = sorted(merged_boxes, key=lambda x: (x["page_number"], x.get("col_id", 0), x["top"]))
#self.boxes = sorted(merged_boxes, key=lambda x: (x["page_number"], x.get("col_id", 0), x["top"]))
def _final_reading_order_merge(self, zoomin=3):
if not self.boxes:
@ -1061,8 +1063,8 @@ class RAGFlowPdfParser:
self.total_page = len(self.pdf.pages)
except Exception:
logging.exception("RAGFlowPdfParser __images__")
except Exception as e:
logging.exception(f"RAGFlowPdfParser __images__, exception: {e}")
logging.info(f"__images__ dedupe_chars cost {timer() - start}s")
self.outlines = []
@ -1206,7 +1208,7 @@ class RAGFlowPdfParser:
start = timer()
self._text_merge()
self._concat_downward()
#self._naive_vertical_merge(zoomin)
self._naive_vertical_merge(zoomin)
if callback:
callback(0.92, "Text merged ({:.2f}s)".format(timer() - start))

View File

@ -137,11 +137,11 @@ ADMIN_SVR_HTTP_PORT=9381
SVR_MCP_PORT=9382
# The RAGFlow Docker image to download. v0.22+ doesn't include embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.23.0
RAGFLOW_IMAGE=infiniflow/ragflow:v0.23.1
# If you cannot download the RAGFlow Docker image:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:v0.23.0
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:v0.23.0
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:v0.23.1
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:v0.23.1
#
# - For the `nightly` edition, uncomment either of the following:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly

View File

@ -77,7 +77,7 @@ The [.env](./.env) file contains important environment variables for Docker.
- `SVR_HTTP_PORT`
The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
- `RAGFLOW-IMAGE`
The Docker image edition. Defaults to `infiniflow/ragflow:v0.23.0`. The RAGFlow Docker image does not include embedding models.
The Docker image edition. Defaults to `infiniflow/ragflow:v0.23.1`. The RAGFlow Docker image does not include embedding models.
> [!TIP]

View File

@ -0,0 +1,8 @@
{
"label": "Basics",
"position": 2,
"link": {
"type": "generated-index",
"description": "Basic concepts."
}
}

View File

@ -0,0 +1,61 @@
---
sidebar_position: 2
slug: /what_is_agent_context_engine
---
# What is an Agent context engine?
Starting in 2025, a silent revolution began beneath the dazzling surface of AI Agents. While the world marveled at agents that could write code, analyze data, and automate workflows, a fundamental bottleneck emerged: why do even the most advanced agents still stumble on simple questions, forget previous conversations, or misuse available tools?
The answer lies not in the intelligence of the Large Language Model (LLM) itself, but in the quality of the Context it receives. An LLM, no matter how powerful, is only as good as the information we feed it. Today's cutting-edge agents are often crippled by a cumbersome, manual, and error-prone process of context assembly, a process known as Context Engineering.
This is where the Agent Context Engine comes in. It is not merely an incremental improvement but a foundational shift, representing the evolution of RAG from a singular technique into the core data and intelligence substrate for the entire Agent ecosystem.
## Beyond the hype: The reality of today's "intelligent" Agents
Today, the “intelligence” behind most AI Agents hides a mountain of human labor. Developers must:
- Hand-craft elaborate prompt templates
- Hard-code document-retrieval logic for every task
- Juggle tool descriptions, conversation history, and knowledge snippets inside a tiny context window
- Repeat the whole process for each new scenario
This pattern is called Context Engineering. It is deeply tied to expert know-how, almost impossible to scale, and prohibitively expensive to maintain. When an enterprise needs to keep dozens of distinct agents alive, the artisanal workshop model collapses under its own weight.
The mission of an Agent Context Engine is to turn Context Engineering from an “art” into an industrial-grade science.
## Deconstructing the Agent Context Engine
So, what exactly is an Agent Context Engine? It is a unified, intelligent, and automated platform responsible for the end-to-end process of assembling the optimal context for an LLM or Agent at the moment of inference. It moves from artisanal crafting to industrialized production.
At its core, an Agent Context Engine is built on a triumvirate of next-generation retrieval capabilities, seamlessly integrated into a single service layer:
1. The Knowledge Core (Advanced RAG): This is the evolution of traditional RAG. It moves beyond simple chunk-and-embed to intelligently process static, private enterprise knowledge. Techniques like TreeRAG (building LLM-generated document outlines for "locate-then-expand" retrieval) and GraphRAG (extracting entity networks to find semantically distant connections) work to close the "semantic gap." The engine's Ingestion Pipeline acts as the ETL for unstructured data, parsing multi-format documents and using LLMs to enrich content with summaries, metadata, and structure before indexing.
2. The Memory Layer: An Agent's intelligence is defined by its ability to learn from interaction. The Memory Layer is a specialized retrieval system for dynamic, episodic data: conversation history, user preferences, and the agent's own internal state (e.g., "waiting for human input"). It manages the lifecycle of this data: storing raw dialogue, triggering summarization into semantic memory, and retrieving relevant past interactions to provide continuity and personalization. Technologically, it is a close sibling to RAG, but focused on a temporal stream of data.
3. The Tool Orchestrator: As MCP (Model Context Protocol) enables the connection of hundreds of internal services as tools, a new problem arises: tool selection. The Context Engine solves this with Tool Retrieval. Instead of dumping all tool descriptions into the prompt, it maintains an index of tools and, critically, an index of Playbooks or Guidelines (best practices on when and how to use tools). For a given task, it retrieves only the most relevant tools and instructions, transforming the LLM's job from "searching a haystack" to "following a recipe" (see the sketch below).
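To make the tool-retrieval idea concrete, here is a minimal sketch. The in-memory tool index, the bag-of-words scoring, and the `retrieve_tools` function are illustrative assumptions for this post, not part of RAGFlow's API; a real engine would score tool descriptions with embeddings and playbook metadata.

```python
# Minimal sketch of tool retrieval: pick only the K most relevant tools for a task.
# The scoring here is a toy bag-of-words overlap; a real engine would use embeddings.
from collections import Counter

TOOLS = {
    "create_ticket": "Open a ticket in the issue tracker for a bug or feature request.",
    "query_sales_db": "Run a read-only SQL query against the sales database.",
    "send_email": "Send an email notification to a user or mailing list.",
}

def score(query: str, description: str) -> float:
    q, d = Counter(query.lower().split()), Counter(description.lower().split())
    return sum((q & d).values()) / (len(query.split()) or 1)

def retrieve_tools(task: str, k: int = 2) -> list[str]:
    ranked = sorted(TOOLS, key=lambda name: score(task, TOOLS[name]), reverse=True)
    return ranked[:k]

# Only the selected tool descriptions are placed into the prompt.
print(retrieve_tools("email the weekly sales report to the team", k=2))
```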
## Why do we need a dedicated engine? The case for a unified substrate
The necessity of an Agent Context Engine becomes clear when we examine the alternative: siloed, manually wired components.
- The Data Silo Problem: Knowledge, memory, and tools reside in separate systems, requiring complex integration for each new agent.
- The Assembly Line Bottleneck: Developers spend more time on context plumbing than on agent logic, slowing innovation to a crawl.
- The "Context Ownership" Dilemma: In manually engineered systems, context logic is buried in code, owned by developers, and opaque to business users. An Engine makes context a configurable, observable, and customer-owned asset.
The shift from Context Engineering to a Context Platform/Engine marks the maturation of enterprise AI, as summarized in the table below:
| Dimension | Context engineering (present) | Context platform/engine (future) |
| ------------------- | -------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| Context creation | Manual, artisanal work by developers and prompt engineers. | Automated, driven by intelligent ingestion pipelines and configurable rules. |
| Context delivery | Hard-coded prompts and static retrieval logic embedded in agent workflows. | Dynamic, real-time retrieval and assembly based on the agent's live state and intent. |
| Context maintenance | A development and operational burden, logic locked in code. | A manageable platform function, with visibility and control returned to the business. |
## RAGFlow: A resolute march toward the Agent context engine
This is the future RAGFlow is forging.
We left behind the label of “yet another RAG system” long ago. From DeepDoc—our deeply-optimized, multimodal document parser—to the bleeding-edge architectures that bridge semantic chasms in complex RAG scenarios, all the way to a full-blown, enterprise-grade ingestion pipeline, every evolutionary step RAGFlow takes is a deliberate stride toward the ultimate form: an Agentic Context Engine.
We believe tomorrow's enterprise AI advantage will hinge not on who owns the largest model, but on who can feed that model the highest-quality, most real-time, and most relevant context. An Agentic Context Engine is the critical infrastructure that turns this vision into reality.
In the paradigm shift from “hand-crafted prompts” to “intelligent context,” RAGFlow is determined to be the most steadfast driver and enabler. We invite every developer, enterprise, and researcher who cares about the future of AI agents to follow RAGFlow's journey, so that together we can witness and build the cornerstone of the next-generation AI stack.

docs/basics/rag.md Normal file
View File

@ -0,0 +1,107 @@
---
sidebar_position: 1
slug: /what_is_rag
---
# What is Retrieval-Augmented Generation (RAG)?
Since large language models (LLMs) became the focus of technology, their ability to handle general knowledge has been astonishing. However, when questions shift to internal corporate documents, proprietary knowledge bases, or real-time data, the limitations of LLMs become glaringly apparent: they cannot access private information outside their training data. Retrieval-Augmented Generation (RAG) was born precisely to address this core need. Before the LLM generates an answer, RAG first retrieves the most relevant context from an external knowledge base and feeds it to the LLM as "reference material," thereby guiding it to produce accurate answers. In short, RAG elevates LLMs from "relying on memory" to "having evidence to rely on," significantly improving their accuracy and trustworthiness in specialized fields and real-time information queries.
## Why is RAG important?
Although LLMs excel in language understanding and generation, they have inherent limitations:
- Static Knowledge: The model's knowledge is based on a data snapshot from its training time and cannot be automatically updated, making it difficult to perceive the latest information.
- Blind Spot to External Data: They cannot directly access corporate private documents, real-time information streams, or domain-specific content.
- Hallucination Risk: When lacking accurate evidence, they may still fabricate plausible-sounding but false answers to maintain conversational fluency.
The introduction of RAG provides LLMs with real-time, credible "factual grounding." Its core mechanism is divided into two stages:
- Retrieval Stage: Based on the user's question, quickly retrieve the most relevant documents or data fragments from an external knowledge base.
- Generation Stage: The LLM organizes and generates the final answer by incorporating the retrieved information as context, combined with its own linguistic capabilities.
This upgrades LLMs from "speaking from memory" to "speaking with documentation," significantly enhancing reliability in professional and enterprise-level applications.
## How does RAG work?
Retrieval-Augmented Generation enables LLMs to generate higher-quality responses by leveraging real-time, external, or private data sources through the introduction of an information retrieval mechanism. Its workflow can be divided into the following key steps:
### Data processing and vectorization
The knowledge required by RAG comes from unstructured data in various formats, such as documents, database records, or API return content. This data typically needs to be chunked, then transformed into vectors via an embedding model, and stored in a vector database.
Why is Chunking Needed? Indexing entire documents directly faces the following problems:
- Decreased Retrieval Precision: Vectorizing long documents leads to semantic "averaging," losing details.
- Context Length Limitation: LLMs have a finite context window, requiring filtering of the most relevant parts for input.
- Cost and Efficiency: Embedding computation and retrieval costs are higher for long texts.
Therefore, an intelligent chunking strategy is key to balancing information integrity, retrieval granularity, and computational efficiency.
### Retrieve relevant information
The user's query is also converted into a vector to perform semantic relevance searches (e.g., calculating cosine similarity) in the vector database, matching and recalling the most relevant text fragments.
### Context construction and answer generation
The retrieved relevant content is added to the LLM's context as factual grounding, and the LLM finally generates the answer. Therefore, RAG can be seen as Context Engineering 1.0 for automated context construction.
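The steps above can be condensed into a short end-to-end sketch. This is illustrative only: the character-frequency "embedding," the in-memory index, and the final prompt assembly are placeholders standing in for a real embedding model, vector database, and LLM call; none of the names are RAGFlow APIs.

```python
# Illustrative RAG loop: chunk -> embed -> retrieve -> assemble context -> generate.
# embed() is a toy character-frequency vector; a real system uses an embedding model.
import math
from collections import Counter

def chunk(text: str, size: int = 200) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    return Counter(text.lower())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[tuple[Counter, str]], k: int = 3) -> list[str]:
    ranked = sorted(index, key=lambda item: cosine(embed(query), item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

docs = ["Our refund policy allows returns within 30 days.", "Shipping takes 3-5 business days."]
index = [(embed(c), c) for d in docs for c in chunk(d)]
question = "How long do I have to return a product?"
context = "\n".join(retrieve(question, index))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would now be sent to an LLM for the generation stage.
print(prompt)
```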
## Deep dive into existing RAG architecture: beyond vector retrieval
An industrial-grade RAG system is far from being as simple as "vector search + LLM"; its complexity and challenges are primarily embedded in the retrieval process.
### Data complexity: multimodal document processing
Core Challenge: Corporate knowledge mostly exists in the form of multimodal documents containing text, charts, tables, and formulas. Simple OCR extraction loses a large amount of semantic information.
Advanced Practice: Leading solutions, such as RAGFlow, tend to use Visual Language Models (VLM) or specialized parsing models like DeepDoc to "translate" multimodal documents into unimodal text rich in structural and semantic information. Converting multimodal information into high-quality unimodal text has become standard practice for advanced RAG.
### The complexity of chunking: the trade-off between precision and context
A simple "chunk-embed-retrieve" pipeline has an inherent contradiction:
- Semantic Matching requires small text chunks to ensure clear semantic focus.
- Context Understanding requires large text chunks to ensure complete and coherent information.
This forces system design into a difficult trade-off between "precise but fragmented" and "complete but vague."
Advanced Practice: Leading solutions, such as RAGFlow, employ semantic enhancement techniques like constructing semantic tables of contents and knowledge graphs. These not only address semantic fragmentation caused by physical chunking but also enable the discovery of relevant content across documents based on entity-relationship networks.
### Why is a vector database insufficient for serving RAG?
Vector databases excel at semantic similarity search, but RAG requires precise and reliable answers, demanding more capabilities from the retrieval system:
- Hybrid Search: Relying solely on vector retrieval may miss exact keyword matches (e.g., product codes, regulation numbers). Hybrid search, combining vector retrieval with keyword retrieval (BM25), ensures both semantic breadth and keyword precision.
- Tensor or Multi-Vector Representation: To support cross-modal data, employing tensor or multi-vector representation has become an important trend.
- Metadata Filtering: Filtering based on attributes like date, department, and type is a rigid requirement in business scenarios.
Therefore, the retrieval layer of RAG is a composite system: it is built on vector search but must also integrate capabilities such as full-text search, re-ranking, and metadata filtering.
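As a rough illustration of why hybrid retrieval helps, the sketch below applies a metadata filter first and then blends a keyword score with a cheap semantic proxy before ranking. The weighting, field names, and scoring functions are assumptions for this example, not RAGFlow defaults; a production system would use BM25 and embedding similarity.

```python
# Hybrid scoring sketch: metadata filter first, then blend keyword and semantic scores.
# keyword_score/semantic_score are stand-ins for BM25 and embedding similarity.
def keyword_score(query: str, text: str) -> float:
    terms = query.lower().split()
    return sum(text.lower().count(t) for t in terms) / (len(terms) or 1)

def semantic_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q | t) or 1)  # Jaccard as a cheap proxy

def hybrid_search(query, chunks, filters=None, alpha=0.5, k=3):
    candidates = [c for c in chunks if all(c["meta"].get(f) == v for f, v in (filters or {}).items())]
    scored = [(alpha * keyword_score(query, c["text"]) + (1 - alpha) * semantic_score(query, c["text"]), c)
              for c in candidates]
    return [c for _, c in sorted(scored, key=lambda x: x[0], reverse=True)[:k]]

chunks = [
    {"text": "Regulation GDPR-2016/679 applies to personal data.", "meta": {"dept": "legal"}},
    {"text": "Quarterly revenue grew 12% year over year.", "meta": {"dept": "finance"}},
]
print(hybrid_search("GDPR-2016/679 personal data", chunks, filters={"dept": "legal"}))
```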
## RAG and memory: Retrieval from the same source but different streams
Within the agent framework, the essence of the memory mechanism is the same as RAG: both retrieve relevant information from storage based on current needs. The key difference lies in the data source:
- RAG: Targets pre-existing static or dynamic private data provided by the user in advance (e.g., documents, databases).
- Memory: Targets dynamic data generated or perceived by the agent in real-time during interaction (e.g., conversation history, environmental state, tool execution results).
They are highly consistent at the technical base (e.g., vector retrieval, keyword matching) and can be seen as the same retrieval capability applied in different scenarios ("existing knowledge" vs. "interaction memory"). A complete agent system often includes both an RAG module for inherent knowledge and a Memory module for interaction history.
## RAG applications
RAG has demonstrated clear value in several typical scenarios:
1. Enterprise Knowledge Q&A and Internal Search
By vectorizing corporate private data and combining it with an LLM, RAG can directly return natural language answers based on authoritative sources, rather than document lists. While meeting intelligent Q&A needs, it inherently aligns with corporate requirements for data security, access control, and compliance.
2. Complex Document Understanding and Professional Q&A
For structurally complex documents like contracts and regulations, the value of RAG lies in its ability to generate accurate, verifiable answers while maintaining context integrity. Its system accuracy largely depends on text chunking and semantic understanding strategies.
3. Dynamic Knowledge Fusion and Decision Support
In business scenarios requiring the synthesis of information from multiple sources, RAG evolves into a knowledge orchestration and reasoning support system for business decisions. Through a multi-path recall mechanism, it fuses knowledge from different systems and formats, maintaining factual consistency and logical controllability during the generation phase.
## The future of RAG
The evolution of RAG is unfolding along several clear paths:
1. RAG as the data foundation for Agents
RAG and agents stand in an architecture-to-scenario relationship: RAG supplies the underlying data architecture, while agents are the application scenarios built on top of it. For agents to achieve autonomous and reliable decision-making and execution, they must rely on accurate and timely knowledge. RAG provides them with a standardized capability to access private domain knowledge and is an inevitable choice for building knowledge-aware agents.
2. Advanced RAG: Using LLMs to optimize retrieval itself
The core feature of next-generation RAG is fully utilizing the reasoning capabilities of LLMs to optimize the retrieval process, such as rewriting queries, summarizing or fusing results, or implementing intelligent routing. Empowering every aspect of retrieval with LLMs is key to breaking through current performance bottlenecks.
3. Towards context engineering 2.0
Current RAG can be viewed as Context Engineering 1.0, whose core is assembling static knowledge context for single Q&A tasks. The forthcoming Context Engineering 2.0 will extend with RAG technology at its core, becoming a system that automatically and dynamically assembles comprehensive context for agents. The context fused by this system will come not only from documents but also include interaction memory, available tools/skills, and real-time environmental information. This marks the transition of agent development from a "handicraft workshop" model to the industrial starting point of automated context engineering.
The essence of RAG is to build a dedicated, efficient, and trustworthy external data interface for large language models; its core is Retrieval, not Generation. Starting from the practical need to solve private data access, its technical depth is reflected in the optimization of retrieval for complex unstructured data. With its deep integration into agent architectures and its development towards automated context engineering, RAG is evolving from a technology that improves Q&A quality into the core infrastructure for building the next generation of trustworthy, controllable, and scalable intelligent applications.

View File

@ -99,7 +99,7 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
- `SVR_HTTP_PORT`
The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
- `RAGFLOW-IMAGE`
The Docker image edition. Defaults to `infiniflow/ragflow:v0.23.0` (the RAGFlow Docker image without embedding models).
The Docker image edition. Defaults to `infiniflow/ragflow:v0.23.1` (the RAGFlow Docker image without embedding models).
:::tip NOTE
If you cannot download the RAGFlow Docker image, try the following mirrors.

View File

@ -47,7 +47,7 @@ After building the infiniflow/ragflow:nightly image, you are ready to launch a f
1. Edit Docker Compose Configuration
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.23.0` to `infiniflow/ragflow:nightly` to use the pre-built image.
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.23.1` to `infiniflow/ragflow:nightly` to use the pre-built image.
2. Launch the Service

View File

@ -133,7 +133,7 @@ See [Run retrieval test](./run_retrieval_test.md) for details.
## Search for dataset
As of RAGFlow v0.23.0, the search feature is still in a rudimentary form, supporting only dataset search by name.
As of RAGFlow v0.23.1, the search feature is still in a rudimentary form, supporting only dataset search by name.
![search dataset](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/search_datasets.jpg)

View File

@ -87,4 +87,4 @@ RAGFlow's file management allows you to download an uploaded file:
![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)
> As of RAGFlow v0.23.0, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.23.1, bulk download is not supported, nor can you download an entire folder.

View File

@ -46,7 +46,7 @@ The Admin CLI and Admin Service form a client-server architectural suite for RAG
2. Install ragflow-cli.
```bash
pip install ragflow-cli==0.23.0
pip install ragflow-cli==0.23.1
```
3. Launch the CLI client:

View File

@ -0,0 +1,8 @@
{
"label": "Migration",
"position": 5,
"link": {
"type": "generated-index",
"description": "RAGFlow migration guide"
}
}

View File

@ -1,109 +0,0 @@
---
sidebar_position: 8
slug: /run_health_check
---
# Monitoring
Double-check the health status of RAGFlow's dependencies.
---
The operation of RAGFlow depends on four services:
- **Elasticsearch** (default) or [Infinity](https://github.com/infiniflow/infinity) as the document engine
- **MySQL**
- **Redis**
- **MinIO** for object storage
If an exception or error occurs related to any of the above services, such as `Exception: Can't connect to ES cluster`, refer to this document to check their health status.
You can also click your avatar in the top right corner of the page **>** System to view the visualized health status of RAGFlow's core services. The following screenshot shows that all services are 'green' (running healthily). The task executor displays the *cumulative* number of completed and failed document parsing tasks from the past 30 minutes:
![system_status_page](https://github.com/user-attachments/assets/b0c1a11e-93e3-4947-b17a-1bfb4cdab6e4)
Services with a yellow or red light are not running properly. The following is a screenshot of the system page after running `docker stop ragflow-es-10`:
![es_failed](https://github.com/user-attachments/assets/06056540-49f5-48bf-9cc9-a7086bc75790)
You can click on a specific 30-second time interval to view the details of completed and failed tasks:
![done_tasks](https://github.com/user-attachments/assets/49b25ec4-03af-48cf-b2e5-c892f6eaa261)
![done_vs_failed](https://github.com/user-attachments/assets/eaa928d0-a31c-4072-adea-046091e04599)
## API Health Check
In addition to checking the system dependencies from the **avatar > System** page in the UI, you can directly query the backend health check endpoint:
```bash
http://IP_OF_YOUR_MACHINE:<port>/v1/system/healthz
```
Here `<port>` refers to the actual port of your backend service (e.g., `7897`, `9222`, etc.).
Key points:
- **No login required** (no `@login_required` decorator)
- Returns results in JSON format
- If all dependencies are healthy → HTTP **200 OK**
- If any dependency fails → HTTP **500 Internal Server Error**
### Example 1: All services healthy (HTTP 200)
```bash
http://127.0.0.1/v1/system/healthz
```
Response:
```http
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 120
```
Explanation:
- Database (MySQL/Postgres), Redis, document engine (Elasticsearch/Infinity), and object storage (MinIO) are all healthy.
- The `status` field returns `"ok"`.
### Example 2: One service unhealthy (HTTP 500)
For example, if Redis is down:
Response:
```http
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Type: application/json
Content-Length: 300

{
  "redis": "nok",
  "doc_engine": "ok",
  "storage": "ok",
  "status": "nok",
  "_meta": {
    "redis": {
      "elapsed": "5.2",
      "error": "Lost connection!"
    }
  }
}
```
Explanation:
- `redis` is marked as `"nok"`, with detailed error info under `_meta.redis.error`.
- The overall `status` is `"nok"`, so the endpoint returns 500.
---
This endpoint allows you to monitor RAGFlow's core dependencies programmatically in scripts or external monitoring systems, without relying on the frontend UI.

View File

@ -60,16 +60,16 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
git pull
```
3. Switch to the latest, officially published release, e.g., `v0.23.0`:
3. Switch to the latest, officially published release, e.g., `v0.23.1`:
```bash
git checkout -f v0.23.0
git checkout -f v0.23.1
```
4. Update **ragflow/docker/.env**:
```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.23.0
RAGFLOW_IMAGE=infiniflow/ragflow:v0.23.1
```
5. Update the RAGFlow image and restart RAGFlow:
@ -90,10 +90,10 @@ No, you do not need to. Upgrading RAGFlow in itself will *not* remove your uploa
1. From an environment with Internet access, pull the required Docker image.
2. Save the Docker image to a **.tar** file.
```bash
docker save -o ragflow.v0.23.0.tar infiniflow/ragflow:v0.23.0
docker save -o ragflow.v0.23.1.tar infiniflow/ragflow:v0.23.1
```
3. Copy the **.tar** file to the target server.
4. Load the **.tar** file into Docker:
```bash
docker load -i ragflow.v0.23.0.tar
docker load -i ragflow.v0.23.1.tar
```

View File

@ -46,7 +46,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
`vm.max_map_count`. This value sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limitation.
RAGFlow v0.23.0 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
RAGFlow v0.23.1 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
<Tabs
defaultValue="linux"
@ -186,7 +186,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/docker
$ git checkout -f v0.23.0
$ git checkout -f v0.23.1
```
3. Use the pre-built Docker images and start up the server:
@ -202,7 +202,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
| RAGFlow image tag | Image size (GB) | Stable? |
| ------------------- | --------------- | ------------------------ |
| v0.23.0 | &approx;2 | Stable release |
| v0.23.1 | &approx;2 | Stable release |
| nightly | &approx;2 | _Unstable_ nightly build |
```mdx-code-block

View File

@ -13,61 +13,58 @@ A complete list of models supported by RAGFlow, which will continue to expand.
<APITable>
```
| Provider | Chat | Embedding | Rerank | Img2txt | Speech2txt | TTS |
| --------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Anthropic | :heavy_check_mark: | | | | | |
| Azure-OpenAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | |
| BAAI | | :heavy_check_mark: | :heavy_check_mark: | | | |
| BaiChuan | :heavy_check_mark: | :heavy_check_mark: | | | | |
| BaiduYiyan | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Bedrock | :heavy_check_mark: | :heavy_check_mark: | | | | |
| Cohere | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| DeepSeek | :heavy_check_mark: | | | | | |
| FastEmbed | | :heavy_check_mark: | | | | |
| Fish Audio | | | | | | :heavy_check_mark: |
| Gemini | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| Google Cloud | :heavy_check_mark: | | | | | |
| GPUStack | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| Groq | :heavy_check_mark: | | | | | |
| HuggingFace | :heavy_check_mark: | :heavy_check_mark: | | | | |
| Jina | | :heavy_check_mark: | :heavy_check_mark: | | | |
| LeptonAI | :heavy_check_mark: | | | | | |
| LocalAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| LM-Studio | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| MiniMax | :heavy_check_mark: | | | | | |
| Mistral | :heavy_check_mark: | :heavy_check_mark: | | | | |
| ModelScope | :heavy_check_mark: | | | | | |
| Moonshot | :heavy_check_mark: | | | :heavy_check_mark: | | |
| Novita AI | :heavy_check_mark: | :heavy_check_mark: | | | | |
| NVIDIA | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Ollama | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| OpenAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| OpenAI-API-Compatible | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| OpenRouter | :heavy_check_mark: | | | :heavy_check_mark: | | |
| PerfXCloud | :heavy_check_mark: | :heavy_check_mark: | | | | |
| Replicate | :heavy_check_mark: | :heavy_check_mark: | | | | |
| PPIO | :heavy_check_mark: | | | | | |
| SILICONFLOW | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| StepFun | :heavy_check_mark: | | | | | |
| Tencent Hunyuan | :heavy_check_mark: | | | | | |
| Tencent Cloud | | | | | :heavy_check_mark: | |
| TogetherAI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Tongyi-Qianwen | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Upstage | :heavy_check_mark: | :heavy_check_mark: | | | | |
| VLLM | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| VolcEngine | :heavy_check_mark: | | | | | |
| Voyage AI | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Xinference | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| XunFei Spark | :heavy_check_mark: | | | | | :heavy_check_mark: |
| xAI | :heavy_check_mark: | | | :heavy_check_mark: | | |
| Youdao | | :heavy_check_mark: | :heavy_check_mark: | | | |
| ZHIPU-AI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| 01.AI | :heavy_check_mark: | | | | | |
| DeepInfra | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: |
| 302.AI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| CometAPI | :heavy_check_mark: | :heavy_check_mark: | | | | |
| DeerAPI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | :heavy_check_mark: |
| Jiekou.AI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | |
| Provider | LLM | Image2Text | Speech2text | TTS | Embedding | Rerank | OCR |
| --------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Anthropic | :heavy_check_mark: | | | | | | |
| Azure-OpenAI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| BaiChuan | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| BaiduYiyan | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| Bedrock | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| Cohere | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| DeepSeek | :heavy_check_mark: | | | | | | |
| Fish Audio | | | | :heavy_check_mark: | | | |
| Gemini | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | |
| Google Cloud | :heavy_check_mark: | | | | | | |
| GPUStack | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| Groq | :heavy_check_mark: | | | | | | |
| HuggingFace | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| Jina | | | | | :heavy_check_mark: | :heavy_check_mark: | |
| LocalAI | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | |
| LongCat | :heavy_check_mark: | | | | | | |
| LM-Studio | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | |
| MiniMax | :heavy_check_mark: | | | | | | |
| MinerU | | | | | | | :heavy_check_mark: |
| Mistral | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| ModelScope | :heavy_check_mark: | | | | | | |
| Moonshot | :heavy_check_mark: | :heavy_check_mark: | | | | | |
| NovitaAI | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| NVIDIA | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| Ollama | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | |
| OpenAI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| OpenAI-API-Compatible | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| OpenRouter | :heavy_check_mark: | :heavy_check_mark: | | | | | |
| Replicate | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| PPIO | :heavy_check_mark: | | | | | | |
| SILICONFLOW | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| StepFun | :heavy_check_mark: | | | | | | |
| Tencent Hunyuan | :heavy_check_mark: | | | | | | |
| Tencent Cloud | | | :heavy_check_mark: | | | | |
| TogetherAI | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| TokenPony | :heavy_check_mark: | | | | | | |
| Tongyi-Qianwen | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| Upstage | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| VLLM | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| VolcEngine | :heavy_check_mark: | | | | | | |
| Voyage AI | | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| Xinference | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| XunFei Spark | :heavy_check_mark: | | | :heavy_check_mark: | | | |
| xAI | :heavy_check_mark: | :heavy_check_mark: | | | | | |
| ZHIPU-AI | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | |
| DeepInfra | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| 302.AI | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | |
| CometAPI | :heavy_check_mark: | | | | :heavy_check_mark: | | |
| DeerAPI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | |
| Jiekou.AI | :heavy_check_mark: | | | | :heavy_check_mark: | :heavy_check_mark: | |
```mdx-code-block
</APITable>

View File

@ -7,6 +7,24 @@ slug: /release_notes
Key features, improvements and bug fixes in the latest releases.
## v0.23.1
Released on December 31, 2025.
### Fixed issues
- Resolved an issue where the RAGFlow Server would fail to start if an empty memory object existed, and corrected the inability to delete a newly created empty Memory.
- Improved the stability of memory extraction across all memory types after selection.
- Fixed MDX file parsing support.
### Data sources
- GitHub
- Gitlab
- Asana
- IMAP
## v0.23.0
Released on December 27, 2025.

View File

@ -77,7 +77,7 @@ env:
ragflow:
image:
repository: infiniflow/ragflow
tag: v0.23.0
tag: v0.23.1
pullPolicy: IfNotPresent
pullSecrets: []
# Optional service configuration overrides

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow"
version = "0.23.0"
version = "0.23.1"
description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license-files = ["LICENSE"]
@ -149,6 +149,7 @@ dependencies = [
# "cryptography==46.0.3",
# "jinja2>=3.1.0",
"pyairtable>=3.3.0",
"pygithub>=2.8.1",
"asana>=5.2.2",
"python-gitlab>=7.0.0",
]

View File

@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import re
import tempfile
@ -28,7 +28,7 @@ def chunk(filename, binary, tenant_id, lang, callback=None, **kwargs):
doc["title_sm_tks"] = rag_tokenizer.fine_grained_tokenize(doc["title_tks"])
# is it English
eng = lang.lower() == "english" # is_english(sections)
is_english = lang.lower() == "english" # is_english(sections)
try:
_, ext = os.path.splitext(filename)
if not ext:
@ -49,7 +49,7 @@ def chunk(filename, binary, tenant_id, lang, callback=None, **kwargs):
ans = seq2txt_mdl.transcription(tmp_path)
callback(0.8, "Sequence2Txt LLM respond: %s ..." % ans[:32])
tokenize(doc, ans, eng)
tokenize(doc, ans, is_english)
return [doc]
except Exception as e:
callback(prog=-1, msg=str(e))
@ -57,6 +57,7 @@ def chunk(filename, binary, tenant_id, lang, callback=None, **kwargs):
if tmp_path and os.path.exists(tmp_path):
try:
os.unlink(tmp_path)
except Exception:
except Exception as e:
logging.exception(f"Failed to remove temporary file: {tmp_path}, exception: {e}")
pass
return []

View File

@ -40,7 +40,7 @@ from deepdoc.parser.docling_parser import DoclingParser
from deepdoc.parser.tcadp_parser import TCADPParser
from common.parser_config_utils import normalize_layout_recognizer
from rag.nlp import concat_img, find_codec, naive_merge, naive_merge_with_images, naive_merge_docx, rag_tokenizer, \
tokenize_chunks, tokenize_chunks_with_images, tokenize_table, attach_media_context
tokenize_chunks, tokenize_chunks_with_images, tokenize_table, attach_media_context, append_context2table_image4pdf
def by_deepdoc(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", callback=None, pdf_cls=None,
@ -210,8 +210,8 @@ class Docx(DocxParser):
except UnicodeDecodeError:
logging.info("The recognized image stream appears to be corrupted. Skipping image.")
continue
except Exception:
logging.info("The recognized image stream appears to be corrupted. Skipping image.")
except Exception as e:
logging.warning(f"The recognized image stream appears to be corrupted. Skipping image, exception: {e}")
continue
try:
image = Image.open(BytesIO(image_blob)).convert('RGB')
@ -219,7 +219,8 @@ class Docx(DocxParser):
res_img = image
else:
res_img = concat_img(res_img, image)
except Exception:
except Exception as e:
logging.warning(f"Fail to open or concat images, exception: {e}")
continue
return res_img
@ -486,7 +487,7 @@ class Pdf(PdfParser):
tbls = self._extract_table_figure(True, zoomin, True, True)
self._naive_vertical_merge()
self._concat_downward()
self._final_reading_order_merge()
# self._final_reading_order_merge()
# self._filter_forpages()
logging.info("layouts cost: {}s".format(timer() - first_start))
return [(b["text"], self._line_tag(b, zoomin)) for b in self.boxes], tbls
@ -553,7 +554,8 @@ class Markdown(MarkdownParser):
if (src, line_no) not in seen:
urls.append({"url": src, "line": line_no})
seen.add((src, line_no))
except Exception:
except Exception as e:
logging.error("Failed to extract image urls: {}".format(e))
pass
return urls
@ -698,8 +700,10 @@ def chunk(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", ca
**kwargs) or []
embed_res.extend(sub_res)
except Exception as e:
error_msg = f"Failed to chunk embed {embed_filename}: {e}"
logging.error(error_msg)
if callback:
callback(0.05, f"Failed to chunk embed {embed_filename}: {e}")
callback(0.05, error_msg)
continue
if re.search(r"\.docx$", filename, re.IGNORECASE):
@ -772,6 +776,9 @@ def chunk(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", ca
if not sections and not tables:
return []
if table_context_size or image_context_size:
tables = append_context2table_image4pdf(sections, tables, image_context_size)
if name in ["tcadp", "docling", "mineru"]:
parser_config["chunk_token_num"] = 0
@ -839,7 +846,8 @@ def chunk(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", ca
try:
vision_model = LLMBundle(kwargs["tenant_id"], LLMType.IMAGE2TEXT)
callback(0.2, "Visual model detected. Attempting to enhance figure extraction...")
except Exception:
except Exception as e:
logging.warning(f"Failed to detect figure extraction: {e}")
vision_model = None
if vision_model:
@ -905,8 +913,9 @@ def chunk(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", ca
sections = [(_, "") for _ in sections if _]
callback(0.8, "Finish parsing.")
else:
callback(0.8, f"tika.parser got empty content from {filename}.")
logging.warning(f"tika.parser got empty content from {filename}.")
error_msg = f"tika.parser got empty content from {filename}."
callback(0.8, error_msg)
logging.warning(error_msg)
return []
else:
raise NotImplementedError(
@ -1000,8 +1009,8 @@ def chunk(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", ca
res.extend(embed_res)
if url_res:
res.extend(url_res)
if table_context_size or image_context_size:
attach_media_context(res, table_context_size, image_context_size)
#if table_context_size or image_context_size:
# attach_media_context(res, table_context_size, image_context_size)
return res

View File

@ -42,16 +42,16 @@ class Excel(ExcelParser):
else:
wb = Excel._load_excel_to_workbook(BytesIO(binary))
total = 0
for sheetname in wb.sheetnames:
total += len(list(wb[sheetname].rows))
for sheet_name in wb.sheetnames:
total += len(list(wb[sheet_name].rows))
res, fails, done = [], [], 0
rn = 0
flow_images = []
pending_cell_images = []
tables = []
for sheetname in wb.sheetnames:
ws = wb[sheetname]
images = Excel._extract_images_from_worksheet(ws, sheetname=sheetname)
for sheet_name in wb.sheetnames:
ws = wb[sheet_name]
images = Excel._extract_images_from_worksheet(ws, sheetname=sheet_name)
if images:
image_descriptions = vision_figure_parser_figure_xlsx_wrapper(images=images, callback=callback,
**kwargs)
@ -59,7 +59,7 @@ class Excel(ExcelParser):
for i, bf in enumerate(image_descriptions):
images[i]["image_description"] = "\n".join(bf[0][1])
for img in images:
if (img["span_type"] == "single_cell" and img.get("image_description")):
if img["span_type"] == "single_cell" and img.get("image_description"):
pending_cell_images.append(img)
else:
flow_images.append(img)
@ -67,7 +67,7 @@ class Excel(ExcelParser):
try:
rows = list(ws.rows)
except Exception as e:
logging.warning(f"Skip sheet '{sheetname}' due to rows access error: {e}")
logging.warning(f"Skip sheet '{sheet_name}' due to rows access error: {e}")
continue
if not rows:
continue
@ -303,7 +303,8 @@ class Excel(ExcelParser):
def trans_datatime(s):
try:
return datetime_parse(s.strip()).strftime("%Y-%m-%d %H:%M:%S")
except Exception:
except Exception as e:
logging.warning(f"Failed to parse date from {s}, error: {e}")
pass
@ -312,6 +313,7 @@ def trans_bool(s):
return "yes"
if re.match(r"(false|no|否|⍻|×)$", str(s).strip(), flags=re.IGNORECASE):
return "no"
return None
def column_data_type(arr):
@ -346,8 +348,9 @@ def column_data_type(arr):
continue
try:
arr[i] = trans[ty](str(arr[i]))
except Exception:
except Exception as e:
arr[i] = None
logging.warning(f"Column {i}: {e}")
# if ty == "text":
# if len(arr) > 128 and uni / len(arr) < 0.1:
# ty = "keyword"

View File

@ -16,7 +16,7 @@
import logging
import random
from collections import Counter
from collections import Counter, defaultdict
from common.token_utils import num_tokens_from_string
import re
@ -667,6 +667,94 @@ def attach_media_context(chunks, table_context_size=0, image_context_size=0):
return chunks
def append_context2table_image4pdf(sections: list, tabls: list, table_context_size=0):
from deepdoc.parser import PdfParser
if table_context_size <=0:
return tabls
page_bucket = defaultdict(list)
for i, (txt, poss) in enumerate(sections):
poss = PdfParser.extract_positions(poss)
for page, left, right, top, bottom in poss:
page = page[0]
page_bucket[page].append(((left, top, right, bottom), txt))
def upper_context(page, i):
txt = ""
if page not in page_bucket:
i = -1
while num_tokens_from_string(txt) < table_context_size:
if i < 0:
page -= 1
if page < 0 or page not in page_bucket:
break
i = len(page_bucket[page]) -1
blks = page_bucket[page]
(_, _, _, _), cnt = blks[i]
txts = re.split(r"([。!?\n]|\. )", cnt, flags=re.DOTALL)[::-1]
for j in range(0, len(txts), 2):
txt = (txts[j+1] if j+1<len(txts) else "") + txts[j] + txt
if num_tokens_from_string(txt) > table_context_size:
break
i -= 1
return txt
def lower_context(page, i):
txt = ""
if page not in page_bucket:
return txt
while num_tokens_from_string(txt) < table_context_size:
if i >= len(page_bucket[page]):
page += 1
if page not in page_bucket:
break
i = 0
blks = page_bucket[page]
(_, _, _, _), cnt = blks[i]
txts = re.split(r"([。!?\n]|\. )", cnt, flags=re.DOTALL)
for j in range(0, len(txts), 2):
txt += txts[j] + (txts[j+1] if j+1<len(txts) else "")
if num_tokens_from_string(txt) > table_context_size:
break
i += 1
return txt
res = []
for (img, tb), poss in tabls:
page, left, top, right, bott = poss[0]
_page, _left, _top, _right, _bott = poss[-1]
if isinstance(tb, list):
tb = "\n".join(tb)
i = 0
blks = page_bucket.get(page, [])
_tb = tb
while i < len(blks):
if i + 1 >= len(blks):
if _page > page:
page += 1
i = 0
blks = page_bucket.get(page, [])
continue
tb = upper_context(page, i) + tb + lower_context(page+1, 0)
break
(_, t, r, b), txt = blks[i]
if b > top:
break
(_, _t, _r, _b), _txt = blks[i+1]
if _t < _bott:
i += 1
continue
tb = upper_context(page, i) + tb + lower_context(page, i)
break
if _tb == tb:
tb = upper_context(page, -1) + tb + lower_context(page+1, 0)
res.append(((img, tb), poss))
return res
def add_positions(d, poss):
if not poss:
return

View File

@ -729,6 +729,8 @@ TOC_FROM_TEXT_USER = load_prompt("toc_from_text_user")
# Generate TOC from text chunks with text llms
async def gen_toc_from_text(txt_info: dict, chat_mdl, callback=None):
if callback:
callback(msg="")
try:
ans = await gen_json(
PROMPT_JINJA_ENV.from_string(TOC_FROM_TEXT_SYSTEM).render(),
@ -738,8 +740,6 @@ async def gen_toc_from_text(txt_info: dict, chat_mdl, callback=None):
gen_conf={"temperature": 0.0, "top_p": 0.9}
)
txt_info["toc"] = ans if ans and not isinstance(ans, str) else []
if callback:
callback(msg="")
except Exception as e:
logging.exception(e)

View File

@ -1,14 +1,11 @@
## Role: Metadata extraction expert
## Constraints:
- Core Directive: Extract important structured information from the given content. Output ONLY a valid JSON string. No Markdown (e.g., ```json), no explanations, and no notes.
- Schema Parsing: In the `properties` object provided in Schema, the attribute name (e.g., 'author') is the target Key. Extract values based on the `description`; if no `description` is provided, refer to the key's literal meaning.
- Extraction Rules: Extract only when there is an explicit semantic correlation. If multiple values or data points match a field's definition, extract and include all of them. Strictly follow the Schema below and only output matched key-value pairs. If the content is irrelevant or no matching information is identified, you **MUST** output {}.
- Data Source: Extraction must be based solely on content below. Semantic mapping (synonyms) is allowed, but strictly prohibit hallucinations or fabricated facts.
## Enum Rules (Triggered ONLY if an enum list is present):
- Value Lock: All extracted values MUST strictly match the provided enum list.
- Normalization: Map synonyms or variants in the text back to the standard enum value (e.g., "Dec" to "December").
- Fallback: Output {} if no explicit match or synonym is identified.
## Role: Metadata extraction expert.
## Rules:
- Strict Evidence Only: Extract a value ONLY if it is explicitly mentioned in the Content.
- Enum Filter: For any field with an 'enum' list, the list acts as a strict filter. If no element from the list (or its direct synonym) is found in the Content, you MUST NOT extract that field.
- No Meta-Inference: Do not infer values based on the document's nature, format, or category. If the text does not literally state the information, treat it as missing.
- Zero-Hallucination: Never invent information or pick a "likely" value from the enum to fill a field.
- Empty Result: If no matches are found for any field, or if the content is irrelevant, output ONLY {}.
- Output: ONLY a valid JSON string. No Markdown, no notes.
## Schema for extraction:
{{ schema }}

View File

@ -39,22 +39,24 @@ from api.db.services.knowledgebase_service import KnowledgebaseService
from common import settings
from common.config_utils import show_configs
from common.data_source import (
BlobStorageConnector,
NotionConnector,
DiscordConnector,
GoogleDriveConnector,
MoodleConnector,
JiraConnector,
DropboxConnector,
WebDAVConnector,
AirtableConnector,
BlobStorageConnector,
NotionConnector,
DiscordConnector,
GoogleDriveConnector,
MoodleConnector,
JiraConnector,
DropboxConnector,
WebDAVConnector,
AirtableConnector,
AsanaConnector,
ImapConnector
)
from common.constants import FileSource, TaskStatus
from common.data_source.config import INDEX_BATCH_SIZE
from common.data_source.confluence_connector import ConfluenceConnector
from common.data_source.gmail_connector import GmailConnector
from common.data_source.box_connector import BoxConnector
from common.data_source.github.connector import GithubConnector
from common.data_source.gitlab_connector import GitlabConnector
from common.data_source.interfaces import CheckpointOutputWrapper
from common.log_utils import init_root_logger
@ -108,7 +110,7 @@ class SyncBase:
if task["poll_range_start"]:
next_update = task["poll_range_start"]
for document_batch in document_batch_generator:
async for document_batch in document_batch_generator:
if not document_batch:
continue
@ -706,20 +708,17 @@ class Moodle(SyncBase):
self.connector.load_credentials(self.conf["credentials"])
# Determine the time range for synchronization based on reindex or poll_range_start
if task["reindex"] == "1" or not task.get("poll_range_start"):
poll_start = task.get("poll_range_start")
if task["reindex"] == "1" or poll_start is None:
document_generator = self.connector.load_from_state()
begin_info = "totally"
else:
poll_start = task["poll_range_start"]
if poll_start is None:
document_generator = self.connector.load_from_state()
begin_info = "totally"
else:
document_generator = self.connector.poll_source(
poll_start.timestamp(),
datetime.now(timezone.utc).timestamp()
)
begin_info = "from {}".format(poll_start)
document_generator = self.connector.poll_source(
poll_start.timestamp(),
datetime.now(timezone.utc).timestamp(),
)
begin_info = f"from {poll_start}"
logging.info("Connect to Moodle: {} {}".format(self.conf["moodle_url"], begin_info))
return document_generator
@ -749,20 +748,17 @@ class BOX(SyncBase):
auth.token_storage.store(token)
self.connector.load_credentials(auth)
if task["reindex"] == "1" or not task["poll_range_start"]:
poll_start = task["poll_range_start"]
if task["reindex"] == "1" or poll_start is None:
document_generator = self.connector.load_from_state()
begin_info = "totally"
else:
poll_start = task["poll_range_start"]
if poll_start is None:
document_generator = self.connector.load_from_state()
begin_info = "totally"
else:
document_generator = self.connector.poll_source(
poll_start.timestamp(),
datetime.now(timezone.utc).timestamp()
)
begin_info = "from {}".format(poll_start)
document_generator = self.connector.poll_source(
poll_start.timestamp(),
datetime.now(timezone.utc).timestamp(),
)
begin_info = f"from {poll_start}"
logging.info("Connect to Box: folder_id({}) {}".format(self.conf["folder_id"], begin_info))
return document_generator
@ -788,20 +784,17 @@ class Airtable(SyncBase):
{"airtable_access_token": credentials["airtable_access_token"]}
)
if task.get("reindex") == "1" or not task.get("poll_range_start"):
poll_start = task.get("poll_range_start")
if task.get("reindex") == "1" or poll_start is None:
document_generator = self.connector.load_from_state()
begin_info = "totally"
else:
poll_start = task.get("poll_range_start")
if poll_start is None:
document_generator = self.connector.load_from_state()
begin_info = "totally"
else:
document_generator = self.connector.poll_source(
poll_start.timestamp(),
datetime.now(timezone.utc).timestamp(),
)
begin_info = f"from {poll_start}"
document_generator = self.connector.poll_source(
poll_start.timestamp(),
datetime.now(timezone.utc).timestamp(),
)
begin_info = f"from {poll_start}"
logging.info(
"Connect to Airtable: base_id(%s), table(%s) %s",
@ -854,6 +847,139 @@ class Asana(SyncBase):
return document_generator
class Github(SyncBase):
SOURCE_NAME: str = FileSource.GITHUB
async def _generate(self, task: dict):
"""
Sync files from Github repositories.
"""
from common.data_source.connector_runner import ConnectorRunner
self.connector = GithubConnector(
repo_owner=self.conf.get("repository_owner"),
repositories=self.conf.get("repository_name"),
include_prs=self.conf.get("include_pull_requests", False),
include_issues=self.conf.get("include_issues", False),
)
credentials = self.conf.get("credentials", {})
if "github_access_token" not in credentials:
raise ValueError("Missing github_access_token in credentials")
self.connector.load_credentials(
{"github_access_token": credentials["github_access_token"]}
)
if task.get("reindex") == "1" or not task.get("poll_range_start"):
start_time = datetime.fromtimestamp(0, tz=timezone.utc)
begin_info = "totally"
else:
start_time = task.get("poll_range_start")
begin_info = f"from {start_time}"
end_time = datetime.now(timezone.utc)
runner = ConnectorRunner(
connector=self.connector,
batch_size=self.conf.get("batch_size", INDEX_BATCH_SIZE),
include_permissions=False,
time_range=(start_time, end_time)
)
def document_batches():
checkpoint = self.connector.build_dummy_checkpoint()
while checkpoint.has_more:
for doc_batch, failure, next_checkpoint in runner.run(checkpoint):
if failure is not None:
logging.warning(
"Github connector failure: %s",
getattr(failure, "failure_message", failure),
)
continue
if doc_batch is not None:
yield doc_batch
if next_checkpoint is not None:
checkpoint = next_checkpoint
async def async_wrapper():
for batch in document_batches():
yield batch
logging.info(
"Connect to Github: org_name(%s), repo_names(%s) for %s",
self.conf.get("repository_owner"),
self.conf.get("repository_name"),
begin_info,
)
return async_wrapper()
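Github._generate resolves to an async generator of document batches rather than yielding them directly. A hypothetical caller, not part of this diff, would consume it roughly as follows (assuming sync is an instance of the Github class above and task carries the fields it reads):

import asyncio

async def count_github_documents(sync, task):
    # _generate is a coroutine that returns an async generator of batches.
    batches = await sync._generate(task)
    total = 0
    async for batch in batches:  # each batch is a list of connector documents
        total += len(batch)
    return total

# asyncio.run(count_github_documents(sync, task))  # illustrative usage only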
class IMAP(SyncBase):
SOURCE_NAME: str = FileSource.IMAP
async def _generate(self, task):
from common.data_source.config import DocumentSource
from common.data_source.interfaces import StaticCredentialsProvider
self.connector = ImapConnector(
host=self.conf.get("imap_host"),
port=self.conf.get("imap_port"),
mailboxes=self.conf.get("imap_mailbox"),
)
credentials_provider = StaticCredentialsProvider(tenant_id=task["tenant_id"], connector_name=DocumentSource.IMAP, credential_json=self.conf["credentials"])
self.connector.set_credentials_provider(credentials_provider)
end_time = datetime.now(timezone.utc).timestamp()
if task["reindex"] == "1" or not task["poll_range_start"]:
start_time = end_time - self.conf.get("poll_range",30) * 24 * 60 * 60
begin_info = "totally"
else:
start_time = task["poll_range_start"].timestamp()
begin_info = f"from {task['poll_range_start']}"
raw_batch_size = self.conf.get("sync_batch_size") or self.conf.get("batch_size") or INDEX_BATCH_SIZE
try:
batch_size = int(raw_batch_size)
except (TypeError, ValueError):
batch_size = INDEX_BATCH_SIZE
if batch_size <= 0:
batch_size = INDEX_BATCH_SIZE
def document_batches():
checkpoint = self.connector.build_dummy_checkpoint()
pending_docs = []
iterations = 0
iteration_limit = 100_000
while checkpoint.has_more:
wrapper = CheckpointOutputWrapper()
doc_generator = wrapper(self.connector.load_from_checkpoint(start_time, end_time, checkpoint))
for document, failure, next_checkpoint in doc_generator:
if failure is not None:
logging.warning("IMAP connector failure: %s", getattr(failure, "failure_message", failure))
continue
if document is not None:
pending_docs.append(document)
if len(pending_docs) >= batch_size:
yield pending_docs
pending_docs = []
if next_checkpoint is not None:
checkpoint = next_checkpoint
iterations += 1
if iterations > iteration_limit:
raise RuntimeError("Too many iterations while loading IMAP documents.")
if pending_docs:
yield pending_docs
logging.info(
"Connect to IMAP: host(%s) port(%s) user(%s) folder(%s) %s",
self.conf["imap_host"],
self.conf["imap_port"],
self.conf["credentials"]["imap_username"],
self.conf["imap_mailbox"],
begin_info
)
return document_batches()
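The batch-size handling in the IMAP hunk (config lookup, int coercion, positive check) is equivalent to a small helper; this restatement is a readability sketch only, and the helper does not exist in the change:

def sanitize_batch_size(raw, default):
    """Coerce a configured batch size to a positive int, else fall back to the default."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return default
    return value if value > 0 else default

# Equivalent to the inline logic above:
# batch_size = sanitize_batch_size(
#     self.conf.get("sync_batch_size") or self.conf.get("batch_size"),
#     INDEX_BATCH_SIZE,
# )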
class Gitlab(SyncBase):
@ -915,8 +1041,10 @@ func_factory = {
FileSource.WEBDAV: WebDAV,
FileSource.BOX: BOX,
FileSource.AIRTABLE: Airtable,
FileSource.GITLAB: Gitlab,
FileSource.ASANA: Asana,
FileSource.IMAP: IMAP,
FileSource.GITHUB: Github,
FileSource.GITLAB: Gitlab,
}

View File

@ -26,6 +26,7 @@ import time
from api.db import PIPELINE_SPECIAL_PROGRESS_FREEZE_TASK_TYPES
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.pipeline_operation_log_service import PipelineOperationLogService
from api.db.joint_services.memory_message_service import handle_save_to_memory_task
from common.connection_utils import timeout
from common.metadata_utils import update_metadata_to, metadata_schema
from rag.utils.base64_image import image2id
@ -96,6 +97,7 @@ TASK_TYPE_TO_PIPELINE_TASK_TYPE = {
"raptor": PipelineTaskType.RAPTOR,
"graphrag": PipelineTaskType.GRAPH_RAG,
"mindmap": PipelineTaskType.MINDMAP,
"memory": PipelineTaskType.MEMORY,
}
UNACKED_ITERATOR = None
@ -157,8 +159,8 @@ def set_progress(task_id, from_page=0, to_page=-1, prog=None, msg="Processing...
logging.info(f"set_progress({task_id}), progress: {prog}, progress_msg: {msg}")
except DoesNotExist:
logging.warning(f"set_progress({task_id}) got exception DoesNotExist")
except Exception:
logging.exception(f"set_progress({task_id}), progress: {prog}, progress_msg: {msg}, got exception")
except Exception as e:
logging.exception(f"set_progress({task_id}), progress: {prog}, progress_msg: {msg}, got exception: {e}")
async def collect():
@ -166,6 +168,7 @@ async def collect():
global UNACKED_ITERATOR
svr_queue_names = settings.get_svr_queue_names()
redis_msg = None
try:
if not UNACKED_ITERATOR:
UNACKED_ITERATOR = REDIS_CONN.get_unacked_iterator(svr_queue_names, SVR_CONSUMER_GROUP_NAME, CONSUMER_NAME)
@ -176,8 +179,8 @@ async def collect():
redis_msg = REDIS_CONN.queue_consumer(svr_queue_name, SVR_CONSUMER_GROUP_NAME, CONSUMER_NAME)
if redis_msg:
break
except Exception:
logging.exception("collect got exception")
except Exception as e:
logging.exception(f"collect got exception: {e}")
return None, None
if not redis_msg:
@ -196,6 +199,9 @@ async def collect():
if task:
task["doc_id"] = msg["doc_id"]
task["doc_ids"] = msg.get("doc_ids", []) or []
elif msg.get("task_type") == PipelineTaskType.MEMORY.lower():
_, task_obj = TaskService.get_by_id(msg["id"])
task = task_obj.to_dict()
else:
task = TaskService.get_task(msg["id"])
@ -214,6 +220,10 @@ async def collect():
task["tenant_id"] = msg["tenant_id"]
task["dataflow_id"] = msg["dataflow_id"]
task["kb_id"] = msg.get("kb_id", "")
if task_type[:6] == "memory":
task["memory_id"] = msg["memory_id"]
task["source_id"] = msg["source_id"]
task["message_dict"] = msg["message_dict"]
return redis_msg, task
@ -322,6 +332,9 @@ async def build_chunks(task, progress_callback):
async def doc_keyword_extraction(chat_mdl, d, topn):
cached = get_llm_cache(chat_mdl.llm_name, d["content_with_weight"], "keywords", {"topn": topn})
if not cached:
if has_canceled(task["id"]):
progress_callback(-1, msg="Task has been canceled.")
return
async with chat_limiter:
cached = await keyword_extraction(chat_mdl, d["content_with_weight"], topn)
set_llm_cache(chat_mdl.llm_name, d["content_with_weight"], cached, "keywords", {"topn": topn})
@ -352,6 +365,9 @@ async def build_chunks(task, progress_callback):
async def doc_question_proposal(chat_mdl, d, topn):
cached = get_llm_cache(chat_mdl.llm_name, d["content_with_weight"], "question", {"topn": topn})
if not cached:
if has_canceled(task["id"]):
progress_callback(-1, msg="Task has been canceled.")
return
async with chat_limiter:
cached = await question_proposal(chat_mdl, d["content_with_weight"], topn)
set_llm_cache(chat_mdl.llm_name, d["content_with_weight"], cached, "question", {"topn": topn})
@ -382,6 +398,9 @@ async def build_chunks(task, progress_callback):
cached = get_llm_cache(chat_mdl.llm_name, d["content_with_weight"], "metadata",
task["parser_config"]["metadata"])
if not cached:
if has_canceled(task["id"]):
progress_callback(-1, msg="Task has been canceled.")
return
async with chat_limiter:
cached = await gen_metadata(chat_mdl,
metadata_schema(task["parser_config"]["metadata"]),
@ -447,6 +466,9 @@ async def build_chunks(task, progress_callback):
async def doc_content_tagging(chat_mdl, d, topn_tags):
cached = get_llm_cache(chat_mdl.llm_name, d["content_with_weight"], all_tags, {"topn": topn_tags})
if not cached:
if has_canceled(task["id"]):
progress_callback(-1, msg="Task has been canceled.")
return
picked_examples = random.choices(examples, k=2) if len(examples) > 2 else examples
if not picked_examples:
picked_examples.append({"content": "This is an example", TAG_FLD: {'example': 1}})
@ -865,6 +887,10 @@ async def insert_es(task_id, task_tenant_id, task_dataset_id, chunks, progress_c
async def do_handle_task(task):
task_type = task.get("task_type", "")
if task_type == "memory":
await handle_save_to_memory_task(task)
return
if task_type == "dataflow" and task.get("doc_id", "") == CANVAS_DEBUG_DOC_ID:
await run_dataflow(task)
return
@ -876,6 +902,7 @@ async def do_handle_task(task):
task_embedding_id = task["embd_id"]
task_language = task["language"]
task_llm_id = task["parser_config"].get("llm_id") or task["llm_id"]
task["llm_id"] = task_llm_id
task_dataset_id = task["kb_id"]
task_doc_id = task["doc_id"]
task_document_name = task["name"]
@ -1050,8 +1077,8 @@ async def do_handle_task(task):
async def _maybe_insert_es(_chunks):
if has_canceled(task_id):
return True
e = await insert_es(task_id, task_tenant_id, task_dataset_id, _chunks, progress_callback)
return bool(e)
insert_result = await insert_es(task_id, task_tenant_id, task_dataset_id, _chunks, progress_callback)
return bool(insert_result)
try:
if not await _maybe_insert_es(chunks):
@ -1101,10 +1128,9 @@ async def do_handle_task(task):
search.index_name(task_tenant_id),
task_dataset_id,
)
except Exception:
except Exception as e:
logging.exception(
f"Remove doc({task_doc_id}) from docStore failed when task({task_id}) canceled."
)
f"Remove doc({task_doc_id}) from docStore failed when task({task_id}) canceled, exception: {e}")
async def handle_task():
@ -1117,24 +1143,25 @@ async def handle_task():
task_type = task["task_type"]
pipeline_task_type = TASK_TYPE_TO_PIPELINE_TASK_TYPE.get(task_type,
PipelineTaskType.PARSE) or PipelineTaskType.PARSE
task_id = task["id"]
try:
logging.info(f"handle_task begin for task {json.dumps(task)}")
CURRENT_TASKS[task["id"]] = copy.deepcopy(task)
await do_handle_task(task)
DONE_TASKS += 1
CURRENT_TASKS.pop(task["id"], None)
CURRENT_TASKS.pop(task_id, None)
logging.info(f"handle_task done for task {json.dumps(task)}")
except Exception as e:
FAILED_TASKS += 1
CURRENT_TASKS.pop(task["id"], None)
CURRENT_TASKS.pop(task_id, None)
try:
err_msg = str(e)
while isinstance(e, exceptiongroup.ExceptionGroup):
e = e.exceptions[0]
err_msg += ' -- ' + str(e)
set_progress(task["id"], prog=-1, msg=f"[Exception]: {err_msg}")
except Exception:
set_progress(task_id, prog=-1, msg=f"[Exception]: {err_msg}")
except Exception as e:
logging.exception(f"[Exception]: {str(e)}")
pass
logging.exception(f"handle_task got exception for task {json.dumps(task)}")
finally:
@ -1207,8 +1234,8 @@ async def report_status():
logging.info(f"{consumer_name} expired, removed")
REDIS_CONN.srem("TASKEXE", consumer_name)
REDIS_CONN.delete(consumer_name)
except Exception:
logging.exception("report_status got exception")
except Exception as e:
logging.exception(f"report_status got exception: {e}")
finally:
redis_lock.release()
await asyncio.sleep(30)

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow-sdk"
version = "0.23.0"
version = "0.23.1"
description = "Python client sdk of [RAGFlow](https://github.com/infiniflow/ragflow). RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license = { text = "Apache License, Version 2.0" }

sdk/python/uv.lock (generated, 2 changed lines)
View File

@ -353,7 +353,7 @@ wheels = [
[[package]]
name = "ragflow-sdk"
version = "0.23.0"
version = "0.23.1"
source = { virtual = "." }
dependencies = [
{ name = "beartype" },

uv.lock (generated, 57 changed lines)
View File

@ -5509,6 +5509,22 @@ dependencies = [
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/ba/8e/aedef81641c8dca6fd0fb7294de5bed9c45f3397d67fddf755c1042c2642/PyExecJS-1.5.1.tar.gz", hash = "sha256:34cc1d070976918183ff7bdc0ad71f8157a891c92708c00c5fbbff7a769f505c", size = 13344, upload-time = "2018-01-18T04:33:55.126Z" }
[[package]]
name = "pygithub"
version = "2.8.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "pyjwt", extra = ["crypto"] },
{ name = "pynacl" },
{ name = "requests" },
{ name = "typing-extensions" },
{ name = "urllib3" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/c1/74/e560bdeffea72ecb26cff27f0fad548bbff5ecc51d6a155311ea7f9e4c4c/pygithub-2.8.1.tar.gz", hash = "sha256:341b7c78521cb07324ff670afd1baa2bf5c286f8d9fd302c1798ba594a5400c9", size = 2246994, upload-time = "2025-09-02T17:41:54.674Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/07/ba/7049ce39f653f6140aac4beb53a5aaf08b4407b6a3019aae394c1c5244ff/pygithub-2.8.1-py3-none-any.whl", hash = "sha256:23a0a5bca93baef082e03411bf0ce27204c32be8bfa7abc92fe4a3e132936df0", size = 432709, upload-time = "2025-09-02T17:41:52.947Z" },
]
[[package]]
name = "pygments"
version = "2.19.2"
@ -5541,6 +5557,43 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7c/4c/ad33b92b9864cbde84f259d5df035a6447f91891f5be77788e2a3892bce3/pymysql-1.1.2-py3-none-any.whl", hash = "sha256:e6b1d89711dd51f8f74b1631fe08f039e7d76cf67a42a323d3178f0f25762ed9", size = 45300, upload-time = "2025-08-24T12:55:53.394Z" },
]
[[package]]
name = "pynacl"
version = "1.6.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "cffi", marker = "platform_python_implementation != 'PyPy'" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/b2/46/aeca065d227e2265125aea590c9c47fbf5786128c9400ee0eb7c88931f06/pynacl-1.6.1.tar.gz", hash = "sha256:8d361dac0309f2b6ad33b349a56cd163c98430d409fa503b10b70b3ad66eaa1d", size = 3506616, upload-time = "2025-11-10T16:02:13.195Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/75/d6/4b2dca33ed512de8f54e5c6074aa06eaeb225bfbcd9b16f33a414389d6bd/pynacl-1.6.1-cp314-cp314t-macosx_10_10_universal2.whl", hash = "sha256:7d7c09749450c385301a3c20dca967a525152ae4608c0a096fe8464bfc3df93d", size = 389109, upload-time = "2025-11-10T16:01:28.79Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3c/30/e8dbb8ff4fa2559bbbb2187ba0d0d7faf728d17cb8396ecf4a898b22d3da/pynacl-1.6.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fc734c1696ffd49b40f7c1779c89ba908157c57345cf626be2e0719488a076d3", size = 808254, upload-time = "2025-11-10T16:01:37.839Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/44/f9/f5449c652f31da00249638dbab065ad4969c635119094b79b17c3a4da2ab/pynacl-1.6.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3cd787ec1f5c155dc8ecf39b1333cfef41415dc96d392f1ce288b4fe970df489", size = 1407365, upload-time = "2025-11-10T16:01:40.454Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/eb/2f/9aa5605f473b712065c0a193ebf4ad4725d7a245533f0cd7e5dcdbc78f35/pynacl-1.6.1-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b35d93ab2df03ecb3aa506be0d3c73609a51449ae0855c2e89c7ed44abde40b", size = 843842, upload-time = "2025-11-10T16:01:30.524Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/32/8d/748f0f6956e207453da8f5f21a70885fbbb2e060d5c9d78e0a4a06781451/pynacl-1.6.1-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dece79aecbb8f4640a1adbb81e4aa3bfb0e98e99834884a80eb3f33c7c30e708", size = 1445559, upload-time = "2025-11-10T16:01:33.663Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/78/d0/2387f0dcb0e9816f38373999e48db4728ed724d31accdd4e737473319d35/pynacl-1.6.1-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:c2228054f04bf32d558fb89bb99f163a8197d5a9bf4efa13069a7fa8d4b93fc3", size = 825791, upload-time = "2025-11-10T16:01:34.823Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/18/3d/ef6fb7eb072aaf15f280bc66f26ab97e7fc9efa50fb1927683013ef47473/pynacl-1.6.1-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:2b12f1b97346f177affcdfdc78875ff42637cb40dcf79484a97dae3448083a78", size = 1410843, upload-time = "2025-11-10T16:01:36.401Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e3/fb/23824a017526850ee7d8a1cc4cd1e3e5082800522c10832edbbca8619537/pynacl-1.6.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e735c3a1bdfde3834503baf1a6d74d4a143920281cb724ba29fb84c9f49b9c48", size = 801140, upload-time = "2025-11-10T16:01:42.013Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/5d/d1/ebc6b182cb98603a35635b727d62f094bc201bf610f97a3bb6357fe688d2/pynacl-1.6.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3384a454adf5d716a9fadcb5eb2e3e72cd49302d1374a60edc531c9957a9b014", size = 1371966, upload-time = "2025-11-10T16:01:43.297Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/64/f4/c9d7b6f02924b1f31db546c7bd2a83a2421c6b4a8e6a2e53425c9f2802e0/pynacl-1.6.1-cp314-cp314t-win32.whl", hash = "sha256:d8615ee34d01c8e0ab3f302dcdd7b32e2bcf698ba5f4809e7cc407c8cdea7717", size = 230482, upload-time = "2025-11-10T16:01:47.688Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c4/2c/942477957fba22da7bf99131850e5ebdff66623418ab48964e78a7a8293e/pynacl-1.6.1-cp314-cp314t-win_amd64.whl", hash = "sha256:5f5b35c1a266f8a9ad22525049280a600b19edd1f785bccd01ae838437dcf935", size = 243232, upload-time = "2025-11-10T16:01:45.208Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7a/0c/bdbc0d04a53b96a765ab03aa2cf9a76ad8653d70bf1665459b9a0dedaa1c/pynacl-1.6.1-cp314-cp314t-win_arm64.whl", hash = "sha256:d984c91fe3494793b2a1fb1e91429539c6c28e9ec8209d26d25041ec599ccf63", size = 187907, upload-time = "2025-11-10T16:01:46.328Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/49/41/3cfb3b4f3519f6ff62bf71bf1722547644bcfb1b05b8fdbdc300249ba113/pynacl-1.6.1-cp38-abi3-macosx_10_10_universal2.whl", hash = "sha256:a6f9fd6d6639b1e81115c7f8ff16b8dedba1e8098d2756275d63d208b0e32021", size = 387591, upload-time = "2025-11-10T16:01:49.1Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/18/21/b8a6563637799f617a3960f659513eccb3fcc655d5fc2be6e9dc6416826f/pynacl-1.6.1-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e49a3f3d0da9f79c1bec2aa013261ab9fa651c7da045d376bd306cf7c1792993", size = 798866, upload-time = "2025-11-10T16:01:55.688Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e8/6c/dc38033bc3ea461e05ae8f15a81e0e67ab9a01861d352ae971c99de23e7c/pynacl-1.6.1-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7713f8977b5d25f54a811ec9efa2738ac592e846dd6e8a4d3f7578346a841078", size = 1398001, upload-time = "2025-11-10T16:01:57.101Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9f/05/3ec0796a9917100a62c5073b20c4bce7bf0fea49e99b7906d1699cc7b61b/pynacl-1.6.1-cp38-abi3-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5a3becafc1ee2e5ea7f9abc642f56b82dcf5be69b961e782a96ea52b55d8a9fc", size = 834024, upload-time = "2025-11-10T16:01:50.228Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f0/b7/ae9982be0f344f58d9c64a1c25d1f0125c79201634efe3c87305ac7cb3e3/pynacl-1.6.1-cp38-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4ce50d19f1566c391fedc8dc2f2f5be265ae214112ebe55315e41d1f36a7f0a9", size = 1436766, upload-time = "2025-11-10T16:01:51.886Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b4/51/b2ccbf89cf3025a02e044dd68a365cad593ebf70f532299f2c047d2b7714/pynacl-1.6.1-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:543f869140f67d42b9b8d47f922552d7a967e6c116aad028c9bfc5f3f3b3a7b7", size = 817275, upload-time = "2025-11-10T16:01:53.351Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a8/6c/dd9ee8214edf63ac563b08a9b30f98d116942b621d39a751ac3256694536/pynacl-1.6.1-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:a2bb472458c7ca959aeeff8401b8efef329b0fc44a89d3775cffe8fad3398ad8", size = 1401891, upload-time = "2025-11-10T16:01:54.587Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0f/c1/97d3e1c83772d78ee1db3053fd674bc6c524afbace2bfe8d419fd55d7ed1/pynacl-1.6.1-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:3206fa98737fdc66d59b8782cecc3d37d30aeec4593d1c8c145825a345bba0f0", size = 772291, upload-time = "2025-11-10T16:01:58.111Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/4d/ca/691ff2fe12f3bb3e43e8e8df4b806f6384593d427f635104d337b8e00291/pynacl-1.6.1-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:53543b4f3d8acb344f75fd4d49f75e6572fce139f4bfb4815a9282296ff9f4c0", size = 1370839, upload-time = "2025-11-10T16:01:59.252Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/30/27/06fe5389d30391fce006442246062cc35773c84fbcad0209fbbf5e173734/pynacl-1.6.1-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:319de653ef84c4f04e045eb250e6101d23132372b0a61a7acf91bac0fda8e58c", size = 791371, upload-time = "2025-11-10T16:02:01.075Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/2c/7a/e2bde8c9d39074a5aa046c7d7953401608d1f16f71e237f4bef3fb9d7e49/pynacl-1.6.1-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:262a8de6bba4aee8a66f5edf62c214b06647461c9b6b641f8cd0cb1e3b3196fe", size = 1363031, upload-time = "2025-11-10T16:02:02.656Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/dd/b6/63fd77264dae1087770a1bb414bc604470f58fbc21d83822fc9c76248076/pynacl-1.6.1-cp38-abi3-win32.whl", hash = "sha256:9fd1a4eb03caf8a2fe27b515a998d26923adb9ddb68db78e35ca2875a3830dde", size = 226585, upload-time = "2025-11-10T16:02:07.116Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/12/c8/b419180f3fdb72ab4d45e1d88580761c267c7ca6eda9a20dcbcba254efe6/pynacl-1.6.1-cp38-abi3-win_amd64.whl", hash = "sha256:a569a4069a7855f963940040f35e87d8bc084cb2d6347428d5ad20550a0a1a21", size = 238923, upload-time = "2025-11-10T16:02:04.401Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/35/76/c34426d532e4dce7ff36e4d92cb20f4cbbd94b619964b93d24e8f5b5510f/pynacl-1.6.1-cp38-abi3-win_arm64.whl", hash = "sha256:5953e8b8cfadb10889a6e7bd0f53041a745d1b3d30111386a1bb37af171e6daf", size = 183970, upload-time = "2025-11-10T16:02:05.786Z" },
]
[[package]]
name = "pynndescent"
version = "0.5.13"
@ -6110,7 +6163,7 @@ wheels = [
[[package]]
name = "ragflow"
version = "0.23.0"
version = "0.23.1"
source = { virtual = "." }
dependencies = [
{ name = "aiosmtplib" },
@ -6184,6 +6237,7 @@ dependencies = [
{ name = "pyairtable" },
{ name = "pyclipper" },
{ name = "pycryptodomex" },
{ name = "pygithub" },
{ name = "pyobvector" },
{ name = "pyodbc" },
{ name = "pypandoc" },
@ -6315,6 +6369,7 @@ requires-dist = [
{ name = "pyairtable", specifier = ">=3.3.0" },
{ name = "pyclipper", specifier = ">=1.4.0,<2.0.0" },
{ name = "pycryptodomex", specifier = "==3.20.0" },
{ name = "pygithub", specifier = ">=2.8.1" },
{ name = "pyobvector", specifier = "==0.2.18" },
{ name = "pyodbc", specifier = ">=5.2.0,<6.0.0" },
{ name = "pypandoc", specifier = ">=1.16" },

View File

@ -0,0 +1,3 @@
<svg width="1024" height="1024" viewBox="0 0 1024 1024" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M8 0C3.58 0 0 3.58 0 8C0 11.54 2.29 14.53 5.47 15.59C5.87 15.66 6.02 15.42 6.02 15.21C6.02 15.02 6.01 14.39 6.01 13.72C4 14.09 3.48 13.23 3.32 12.78C3.23 12.55 2.84 11.84 2.5 11.65C2.22 11.5 1.82 11.13 2.49 11.12C3.12 11.11 3.57 11.7 3.72 11.94C4.44 13.15 5.59 12.81 6.05 12.6C6.12 12.08 6.33 11.73 6.56 11.53C4.78 11.33 2.92 10.64 2.92 7.58C2.92 6.71 3.23 5.99 3.74 5.43C3.66 5.23 3.38 4.41 3.82 3.31C3.82 3.31 4.49 3.1 6.02 4.13C6.66 3.95 7.34 3.86 8.02 3.86C8.7 3.86 9.38 3.95 10.02 4.13C11.55 3.09 12.22 3.31 12.22 3.31C12.66 4.41 12.38 5.23 12.3 5.43C12.81 5.99 13.12 6.7 13.12 7.58C13.12 10.65 11.25 11.33 9.47 11.53C9.76 11.78 10.01 12.26 10.01 13.01C10.01 14.08 10 14.94 10 15.21C10 15.42 10.15 15.67 10.55 15.59C13.71 14.53 16 11.53 16 8C16 3.58 12.42 0 8 0Z" transform="scale(64)" fill="#1B1F23"/>
</svg>


View File

@ -0,0 +1,7 @@
<svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24"
stroke-linecap="round" stroke-linejoin="round"
class="text-text-04" height="32" width="32"
xmlns="http://www.w3.org/2000/svg">
<path d="M4 4h16c1.1 0 2 .9 2 2v12c0 1.1-.9 2-2 2H4c-1.1 0-2-.9-2-2V6c0-1.1.9-2 2-2z"></path>
<polyline points="22,6 12,13 2,6"></polyline>
</svg>


View File

@ -1,3 +1,4 @@
import { useIsDarkTheme } from '@/components/theme-provider';
import { useSetModalState, useTranslate } from '@/hooks/common-hooks';
import { LangfuseCard } from '@/pages/user-setting/setting-model/langfuse';
import apiDoc from '@parent/docs/references/http_api_reference.md';
@ -28,6 +29,8 @@ const ApiContent = ({
const { handlePreview } = usePreviewChat(idKey);
const isDarkTheme = useIsDarkTheme();
return (
<div className="pb-2">
<Flex vertical gap={'middle'}>
@ -47,7 +50,10 @@ const ApiContent = ({
<div style={{ position: 'relative' }}>
<MarkdownToc content={apiDoc} />
</div>
<MarkdownPreview source={apiDoc}></MarkdownPreview>
<MarkdownPreview
source={apiDoc}
wrapperElement={{ 'data-color-mode': isDarkTheme ? 'dark' : 'light' }}
></MarkdownPreview>
</Flex>
{apiKeyVisible && (
<ChatApiKeyModal

View File

@ -1,79 +1,72 @@
import { DocumentParserType } from '@/constants/knowledge';
import { useTranslate } from '@/hooks/common-hooks';
import { useFetchKnowledgeList } from '@/hooks/use-knowledge-request';
import { IKnowledge } from '@/interfaces/database/knowledge';
import { useBuildQueryVariableOptions } from '@/pages/agent/hooks/use-get-begin-query';
import { UserOutlined } from '@ant-design/icons';
import { Avatar as AntAvatar, Form, Select, Space } from 'antd';
import { toLower } from 'lodash';
import { useMemo } from 'react';
import { useEffect, useMemo, useState } from 'react';
import { useFormContext } from 'react-hook-form';
import { useTranslation } from 'react-i18next';
import { RAGFlowAvatar } from './ragflow-avatar';
import { FormControl, FormField, FormItem, FormLabel } from './ui/form';
import { MultiSelect } from './ui/multi-select';
interface KnowledgeBaseItemProps {
label?: string;
tooltipText?: string;
name?: string;
required?: boolean;
onChange?(): void;
}
const KnowledgeBaseItem = ({
label,
tooltipText,
name,
required = true,
onChange,
}: KnowledgeBaseItemProps) => {
const { t } = useTranslate('chat');
const { list: knowledgeList } = useFetchKnowledgeList(true);
const filteredKnowledgeList = knowledgeList.filter(
(x) => x.parser_id !== DocumentParserType.Tag,
);
const knowledgeOptions = filteredKnowledgeList.map((x) => ({
label: (
<Space>
<AntAvatar size={20} icon={<UserOutlined />} src={x.avatar} />
{x.name}
</Space>
),
value: x.id,
}));
return (
<Form.Item
label={label || t('knowledgeBases')}
name={name || 'kb_ids'}
tooltip={tooltipText || t('knowledgeBasesTip')}
rules={[
{
required,
message: t('knowledgeBasesMessage'),
type: 'array',
},
]}
>
<Select
mode="multiple"
options={knowledgeOptions}
placeholder={t('knowledgeBasesMessage')}
onChange={onChange}
></Select>
</Form.Item>
);
};
export default KnowledgeBaseItem;
import { MultiSelect, MultiSelectOptionType } from './ui/multi-select';
function buildQueryVariableOptionsByShowVariable(showVariable?: boolean) {
return showVariable ? useBuildQueryVariableOptions : () => [];
}
export function useDisableDifferenceEmbeddingDataset() {
const [datasetOptions, setDatasetOptions] = useState<MultiSelectOptionType[]>(
[],
);
const [datasetSelectEmbedId, setDatasetSelectEmbedId] = useState('');
const { list: datasetListOrigin } = useFetchKnowledgeList(true);
useEffect(() => {
const datasetListMap = datasetListOrigin
.filter((x) => x.parser_id !== DocumentParserType.Tag)
.map((item: IKnowledge) => {
return {
label: item.name,
icon: () => (
<RAGFlowAvatar
className="size-4"
avatar={item.avatar}
name={item.name}
/>
),
suffix: (
<div className="text-xs px-4 p-1 bg-bg-card text-text-secondary rounded-lg border border-bg-card">
{item.embd_id}
</div>
),
value: item.id,
disabled:
item.embd_id !== datasetSelectEmbedId &&
datasetSelectEmbedId !== '',
};
});
setDatasetOptions(datasetListMap);
}, [datasetListOrigin, datasetSelectEmbedId]);
const handleDatasetSelectChange = (
value: string[],
onChange: (value: string[]) => void,
) => {
if (value.length) {
const data = datasetListOrigin?.find((item) => item.id === value[0]);
setDatasetSelectEmbedId(data?.embd_id ?? '');
} else {
setDatasetSelectEmbedId('');
}
onChange?.(value);
};
return {
datasetOptions,
handleDatasetSelectChange,
};
}
export function KnowledgeBaseFormField({
showVariable = false,
}: {
@ -82,22 +75,12 @@ export function KnowledgeBaseFormField({
const form = useFormContext();
const { t } = useTranslation();
const { list: knowledgeList } = useFetchKnowledgeList(true);
const filteredKnowledgeList = knowledgeList.filter(
(x) => x.parser_id !== DocumentParserType.Tag,
);
const { datasetOptions, handleDatasetSelectChange } =
useDisableDifferenceEmbeddingDataset();
const nextOptions = buildQueryVariableOptionsByShowVariable(showVariable)();
const knowledgeOptions = filteredKnowledgeList.map((x) => ({
label: x.name,
value: x.id,
icon: () => (
<RAGFlowAvatar className="size-4 mr-2" avatar={x.avatar} name={x.name} />
),
}));
const knowledgeOptions = datasetOptions;
const options = useMemo(() => {
if (showVariable) {
return [
@ -140,11 +123,14 @@ export function KnowledgeBaseFormField({
<FormControl>
<MultiSelect
options={options}
onValueChange={field.onChange}
onValueChange={(value) => {
handleDatasetSelectChange(value, field.onChange);
}}
placeholder={t('chat.knowledgeBasesMessage')}
variant="inverted"
maxCount={100}
defaultValue={field.value}
showSelectAll={false}
{...field}
/>
</FormControl>

View File

@ -109,6 +109,19 @@ export const SelectWithSearch = forwardRef<
}
}, [options, value]);
const showSearch = useMemo(() => {
if (Array.isArray(options) && options.length > 5) {
return true;
}
if (Array.isArray(options)) {
const optionsNum = options.reduce((acc, option) => {
return acc + (option?.options?.length || 0);
}, 0);
return optionsNum > 5;
}
return false;
}, [options]);
const handleSelect = useCallback(
(val: string) => {
setValue(val);
@ -179,7 +192,7 @@ export const SelectWithSearch = forwardRef<
align="start"
>
<Command className="p-5">
{options && options.length > 5 && (
{showSearch && (
<CommandInput
placeholder={t('common.search') + '...'}
className=" placeholder:text-text-disabled"

View File

@ -1,35 +1,19 @@
import { toast } from 'sonner';
import { ExternalToast, toast } from 'sonner';
const duration = { duration: 2500 };
const configuration: ExternalToast = { duration: 2500, position: 'top-center' };
const message = {
success: (msg: string) => {
toast.success(msg, {
position: 'top-center',
closeButton: false,
...duration,
});
toast.success(msg, configuration);
},
error: (msg: string) => {
toast.error(msg, {
position: 'top-center',
closeButton: false,
...duration,
});
toast.error(msg, configuration);
},
warning: (msg: string) => {
toast.warning(msg, {
position: 'top-center',
closeButton: false,
...duration,
});
toast.warning(msg, configuration);
},
info: (msg: string) => {
toast.info(msg, {
position: 'top-center',
closeButton: false,
...duration,
});
toast.info(msg, configuration);
},
};
export default message;

View File

@ -211,3 +211,14 @@ export const WebhookJWTAlgorithmList = [
'ps512',
'none',
] as const;
export enum AgentDialogueMode {
Conversational = 'conversational',
Task = 'task',
Webhook = 'Webhook',
}
export const initialBeginValues = {
mode: AgentDialogueMode.Conversational,
prologue: `Hi! I'm your assistant. What can I do for you?`,
};

View File

@ -1,7 +1,7 @@
import { FileUploadProps } from '@/components/file-upload';
import { useHandleFilterSubmit } from '@/components/list-filter-bar/use-handle-filter-submit';
import message from '@/components/ui/message';
import { AgentGlobals } from '@/constants/agent';
import { AgentGlobals, initialBeginValues } from '@/constants/agent';
import {
IAgentLogsRequest,
IAgentLogsResponse,
@ -76,6 +76,7 @@ export const EmptyDsl = {
data: {
label: 'Begin',
name: 'begin',
form: initialBeginValues,
},
sourcePosition: 'left',
targetPosition: 'right',

View File

@ -278,7 +278,7 @@ export const useRunDocument = () => {
const ret = await kbService.document_run({
doc_ids: documentIds,
run,
...option,
...(option || {}),
});
const code = get(ret, 'data.code');
if (code === 0) {

View File

@ -116,7 +116,7 @@ export interface ITenantInfo {
tts_id: string;
}
export type ChunkDocType = 'image' | 'table';
export type ChunkDocType = 'image' | 'table' | 'text';
export interface IChunk {
available_int: number; // Whether to enable, 0: not enabled, 1: enabled

View File

@ -949,6 +949,8 @@ Beispiel: Virtual Hosted Style`,
'Verbinden Sie Ihre Dropbox, um Dateien und Ordner von einem ausgewählten Konto zu synchronisieren.',
boxDescription:
'Verbinden Sie Ihr Box-Laufwerk, um Dateien und Ordner zu synchronisieren.',
githubDescription:
'Verbinden Sie GitHub, um Pull Requests und Issues zur Recherche zu synchronisieren.',
dropboxAccessTokenTip:
'Generieren Sie ein langlebiges Zugriffstoken in der Dropbox App Console mit den Bereichen files.metadata.read, files.content.read und sharing.read.',
moodleDescription:

View File

@ -147,6 +147,8 @@ Procedural Memory: Learned skills, habits, and automated procedures.`,
action: 'Action',
},
config: {
memorySizeTooltip: `Accounts for each message's content + its embedding vector (≈ Content + Dimensions × 8 Bytes).
Example: A 1 KB message with 1024-dim embedding uses ~9 KB. The 5 MB default limit holds ~500 such messages.`,
avatar: 'Avatar',
description: 'Description',
memorySize: 'Memory size',
@ -181,6 +183,8 @@ Procedural Memory: Learned skills, habits, and automated procedures.`,
},
knowledgeDetails: {
metadata: {
toMetadataSetting: 'Generation settings',
toMetadataSettingTip: 'Set auto-metadata in Configuration.',
descriptionTip:
'Provide descriptions or examples to guide LLM extract values for this field. If left empty, it will rely on the field name.',
restrictTDefinedValuesTip:
@ -931,12 +935,16 @@ Example: Virtual Hosted Style`,
dropboxDescription:
'Connect your Dropbox to sync files and folders from a chosen account.',
boxDescription: 'Connect your Box drive to sync files and folders.',
githubDescription:
'Connect GitHub to sync pull requests and issues for retrieval.',
airtableDescription:
'Connect to Airtable and synchronize files from a specified table within a designated workspace.',
gitlabDescription:
'Connect GitLab to sync repositories, issues, merge requests, and related documentation.',
asanaDescription:
'Connect to Asana and synchronize files from a specified workspace.',
imapDescription:
'Connect to your IMAP mailbox to sync emails for knowledge retrieval.',
dropboxAccessTokenTip:
'Generate a long-lived access token in the Dropbox App Console with files.metadata.read, files.content.read, and sharing.read scopes.',
moodleDescription:

View File

@ -747,12 +747,16 @@ export default {
'Синхронизируйте страницы и базы данных из Notion для извлечения знаний.',
boxDescription:
'Подключите ваш диск Box для синхронизации файлов и папок.',
githubDescription:
'Подключите GitHub для синхронизации содержимого Pull Request и Issue для поиска.',
airtableDescription:
'Подключите Airtable и синхронизируйте файлы из указанной таблицы в заданном рабочем пространстве.',
gitlabDescription:
'Подключите GitLab для синхронизации репозиториев, задач, merge requests и связанной документации.',
asanaDescription:
'Подключите Asana и синхронизируйте файлы из рабочего пространства.',
imapDescription:
'Подключите почтовый ящик IMAP для синхронизации писем из указанных почтовых ящиков (mailboxes) с целью поиска и анализа знаний.',
google_driveDescription:
'Подключите ваш Google Drive через OAuth и синхронизируйте определенные папки или диски.',
gmailDescription:

View File

@ -124,12 +124,11 @@ export default {
forgetMessageTip: '确定遗忘吗?',
messageDescription: '记忆提取使用高级设置中的提示词和温度值进行配置。',
copied: '已复制!',
contentEmbed: '内容嵌入',
content: '内容',
delMessageWarn: `遗忘后,代理将无法检索此消息。`,
forgetMessage: '遗忘消息',
sessionId: '会话ID',
agent: '代理',
agent: '智能体',
type: '类型',
validDate: '有效日期',
forgetAt: '遗忘于',
@ -138,6 +137,8 @@ export default {
action: '操作',
},
config: {
memorySizeTooltip: `记录每条消息的内容 + 其嵌入向量(≈ 内容 + 维度 × 8 字节)。
例如:一条带有 1024 维嵌入的 1 KB 消息大约使用 9 KB。5 MB 的默认限制大约可容纳 500 条此类消息。`,
avatar: '头像',
description: '描述',
memorySize: '记忆大小',
@ -172,6 +173,8 @@ export default {
},
knowledgeDetails: {
metadata: {
toMetadataSettingTip: '在配置中设置自动元数据',
toMetadataSetting: '生成设置',
descriptionTip:
'提供描述或示例来指导大语言模型为此字段提取值。如果留空,将依赖字段名称。',
restrictTDefinedValuesTip:
@ -552,10 +555,10 @@ export default {
注意您需要指定的条目类型。</p>`,
tag: `<p>使用“Tag”分块方法的知识库用作标签集.其他知识库可以把标签集当中的标签按照相似度匹配到自己对应的文本块中,对这些知识库的查询也将根据此标签集对自己进行标记。</p>
<p>标签集<b>不会</b>直接参与 RAG 检索过程。</p>
<p>标签集中的每个文本分块都是相互独立的标签和标签描述的文本对。</p>
<p>标签集中的每个文本分块都是相互独立的标签和标签描述的文本对。</p>
<p>Tag 分块方法支持<b>XLSX</b>和<b>CSV/TXT</b>文件格式。</p>
<p>如果文件为<b>XLSX</b>格式,则它应该包含两列无标题:一列用于标签描述,另一列用于标签,标签描述列位于标签列之前。支持多个工作表,只要列结构正确即可。</p>
<p>如果文件为<b>XLSX</b>格式,则它应该包含无标题的两列:一列用于标签描述,另一列用于标签,标签描述列位于标签列之前。支持多个工作表,只要列结构正确即可。</p>
<p>如果文件为 <b>CSV/TXT</b> 格式,则必须使用 UTF-8 编码并以 TAB 作为分隔符来分隔内容和标签。</p>
<p>在标签列中,标签之间使用英文逗号分隔。</p>
<i>不符合上述规则的文本行将被忽略。</i>
@ -861,10 +864,14 @@ General实体和关系提取提示来自 GitHub - microsoft/graphrag基于
'请上传由 Google Console 生成的 OAuth JSON。如果仅包含 client credentials请通过浏览器授权一次以获取长期有效的刷新 Token。',
dropboxDescription: '连接 Dropbox同步指定账号下的文件与文件夹。',
boxDescription: '连接你的 Box 云盘以同步文件和文件夹。',
githubDescription:
'连接 GitHub可同步 Pull Request 与 Issue 内容用于检索。',
airtableDescription: '连接 Airtable同步指定工作区下指定表格中的文件。',
gitlabDescription:
'连接 GitLab同步仓库、Issue、合并请求MR及相关文档内容。',
asanaDescription: '连接 Asana同步工作区中的文件。',
imapDescription:
'连接你的 IMAP 邮箱同步指定mailboxes中的邮件用于知识检索与分析',
r2Description: '连接你的 Cloudflare R2 存储桶以导入和同步文件。',
dropboxAccessTokenTip:
'请在 Dropbox App Console 生成 Access Token并勾选 files.metadata.read、files.content.read、sharing.read 等必要权限。',

View File

@ -1,4 +1,4 @@
import { useMemo, useState } from 'react';
import { useLayoutEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { useNavigate } from 'umi';
@ -12,7 +12,12 @@ import {
useReactTable,
} from '@tanstack/react-table';
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query';
import {
keepPreviousData,
useMutation,
useQuery,
useQueryClient,
} from '@tanstack/react-query';
import {
LucideClipboardList,
@ -125,12 +130,14 @@ function AdminUserManagement() {
queryFn: async () => (await listRoles()).data.data.roles,
enabled: IS_ENTERPRISE,
retry: false,
placeholderData: keepPreviousData,
});
const { data: usersList } = useQuery({
queryKey: ['admin/listUsers'],
queryFn: async () => (await listUsers()).data.data,
retry: false,
placeholderData: keepPreviousData,
});
// Delete user mutation
@ -354,8 +361,16 @@ function AdminUserManagement() {
getFilteredRowModel: getFilteredRowModel(),
getSortedRowModel: getSortedRowModel(),
getPaginationRowModel: getPaginationRowModel(),
autoResetPageIndex: false,
});
useLayoutEffect(() => {
if (table.getState().pagination.pageIndex > table.getPageCount()) {
table.setPageIndex(Math.max(0, table.getPageCount() - 1));
}
}, [usersList, table]);
return (
<>
<Card className="!shadow-none relative h-full bg-transparent overflow-hidden">
@ -538,7 +553,7 @@ function AdminUserManagement() {
<CardFooter className="flex items-center justify-end">
<RAGFlowPagination
total={usersList?.length ?? 0}
total={table.getFilteredRowModel().rows.length}
current={table.getState().pagination.pageIndex + 1}
pageSize={table.getState().pagination.pageSize}
onChange={(page, pageSize) => {

View File

@ -1,16 +1,19 @@
import { IBeginNode } from '@/interfaces/database/flow';
import { BaseNode } from '@/interfaces/database/flow';
import { cn } from '@/lib/utils';
import { NodeProps, Position } from '@xyflow/react';
import get from 'lodash/get';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import {
AgentDialogueMode,
BeginQueryType,
BeginQueryTypeIconMap,
NodeHandleId,
Operator,
} from '../../constant';
import { BeginQuery } from '../../interface';
import { BeginFormSchemaType } from '../../form/begin-form/schema';
import { useBuildWebhookUrl } from '../../hooks/use-build-webhook-url';
import { useIsPipeline } from '../../hooks/use-is-pipeline';
import OperatorIcon from '../../operator-icon';
import { LabelCard } from './card';
import { CommonHandle } from './handle';
@ -18,10 +21,21 @@ import { RightHandleStyle } from './handle-icon';
import styles from './index.less';
import { NodeWrapper } from './node-wrapper';
// TODO: do not allow other nodes to connect to this node
function InnerBeginNode({ data, id, selected }: NodeProps<IBeginNode>) {
function InnerBeginNode({
data,
id,
selected,
}: NodeProps<BaseNode<BeginFormSchemaType>>) {
const { t } = useTranslation();
const inputs: Record<string, BeginQuery> = get(data, 'form.inputs', {});
const inputs = get(data, 'form.inputs', {});
const mode = data.form?.mode;
const isWebhookMode = mode === AgentDialogueMode.Webhook;
const url = useBuildWebhookUrl();
const isPipeline = useIsPipeline();
return (
<NodeWrapper selected={selected} id={id}>
@ -34,29 +48,46 @@ function InnerBeginNode({ data, id, selected }: NodeProps<IBeginNode>) {
id={NodeHandleId.Start}
></CommonHandle>
<section className="flex items-center gap-2">
<section className="flex items-center gap-2">
<OperatorIcon name={data.label as Operator}></OperatorIcon>
<div className="truncate text-center font-semibold text-sm">
{t(`flow.begin`)}
</div>
</section>
<section className={cn(styles.generateParameters, 'flex gap-2 flex-col')}>
{Object.entries(inputs).map(([key, val], idx) => {
const Icon = BeginQueryTypeIconMap[val.type as BeginQueryType];
return (
<LabelCard key={idx} className={cn('flex gap-1.5 items-center')}>
<Icon className="size-3.5" />
<label htmlFor="" className="text-accent-primary text-sm italic">
{key}
</label>
<LabelCard className="py-0.5 truncate flex-1">
{val.name}
{isPipeline || (
<div className="text-accent-primary mt-2 p-1 bg-bg-accent w-fit rounded-sm text-xs">
{t(`flow.${isWebhookMode ? 'webhook.name' : mode}`)}
</div>
)}
{isWebhookMode ? (
<LabelCard className="mt-2 flex gap-1 items-center">
<span className="font-bold">URL</span>
<span className="flex-1 truncate">{url}</span>
</LabelCard>
) : (
<section
className={cn(styles.generateParameters, 'flex gap-2 flex-col')}
>
{Object.entries(inputs).map(([key, val], idx) => {
const Icon = BeginQueryTypeIconMap[val.type as BeginQueryType];
return (
<LabelCard key={idx} className={cn('flex gap-1.5 items-center')}>
<Icon className="size-3.5" />
<label
htmlFor=""
className="text-accent-primary text-sm italic"
>
{key}
</label>
<LabelCard className="py-0.5 truncate flex-1">
{val.name}
</LabelCard>
<span className="flex-1">{val.optional ? 'Yes' : 'No'}</span>
</LabelCard>
<span className="flex-1">{val.optional ? 'Yes' : 'No'}</span>
</LabelCard>
);
})}
</section>
);
})}
</section>
)}
</NodeWrapper>
);
}

View File

@ -15,19 +15,15 @@ import {
initialLlmBaseValues,
} from '@/constants/agent';
export {
AgentDialogueMode,
AgentStructuredOutputField,
JsonSchemaDataType,
Operator,
initialBeginValues,
} from '@/constants/agent';
export * from './pipeline';
export enum AgentDialogueMode {
Conversational = 'conversational',
Task = 'task',
Webhook = 'Webhook',
}
import { ModelVariableType } from '@/constants/knowledge';
import { t } from 'i18next';
@ -109,11 +105,6 @@ export const initialRetrievalValues = {
},
};
export const initialBeginValues = {
mode: AgentDialogueMode.Conversational,
prologue: `Hi! I'm your assistant. What can I do for you?`,
};
export const initialRewriteQuestionValues = {
...initialLlmBaseValues,
language: '',
@ -750,6 +741,8 @@ export const NodeMap = {
[Operator.Loop]: 'loopNode',
[Operator.LoopStart]: 'loopStartNode',
[Operator.ExitLoop]: 'exitLoopNode',
[Operator.ExcelProcessor]: 'ragNode',
[Operator.PDFGenerator]: 'ragNode',
};
export enum BeginQueryType {

View File

@ -3,13 +3,16 @@ import { CopyToClipboardWithText } from '@/components/copy-to-clipboard';
import NumberInput from '@/components/originui/number-input';
import { SelectWithSearch } from '@/components/originui/select-with-search';
import { RAGFlowFormItem } from '@/components/ragflow-form';
import { Label } from '@/components/ui/label';
import { MultiSelect } from '@/components/ui/multi-select';
import { Separator } from '@/components/ui/separator';
import { Textarea } from '@/components/ui/textarea';
import { useBuildWebhookUrl } from '@/pages/agent/hooks/use-build-webhook-url';
import { buildOptions } from '@/utils/form';
import { upperFirst } from 'lodash';
import { useCallback } from 'react';
import { useFormContext, useWatch } from 'react-hook-form';
import { useTranslation } from 'react-i18next';
import { useParams } from 'umi';
import {
RateLimitPerList,
WebhookMaxBodySize,
@ -22,7 +25,10 @@ import { Auth } from './auth';
import { WebhookRequestSchema } from './request-schema';
import { WebhookResponse } from './response';
const RateLimitPerOptions = buildOptions(RateLimitPerList);
const RateLimitPerOptions = RateLimitPerList.map((x) => ({
value: x,
label: upperFirst(x),
}));
const RequestLimitMap = {
[WebhookRateLimitPer.Second]: 100,
@ -33,7 +39,6 @@ const RequestLimitMap = {
export function WebHook() {
const { t } = useTranslation();
const { id } = useParams();
const form = useFormContext();
const rateLimitPer = useWatch({
@ -45,7 +50,7 @@ export function WebHook() {
return RequestLimitMap[rateLimitPer as keyof typeof RequestLimitMap] ?? 100;
}, []);
const text = `${location.protocol}//${location.host}/api/v1/webhook/${id}`;
const text = useBuildWebhookUrl();
return (
<>
@ -74,33 +79,36 @@ export function WebHook() {
></SelectWithSearch>
</RAGFlowFormItem>
<Auth></Auth>
<RAGFlowFormItem
name="security.rate_limit.limit"
label={t('flow.webhook.limit')}
>
<NumberInput
max={getLimitRateLimitPerMax(rateLimitPer)}
className="w-full"
></NumberInput>
</RAGFlowFormItem>
<RAGFlowFormItem
name="security.rate_limit.per"
label={t('flow.webhook.per')}
>
{(field) => (
<SelectWithSearch
options={RateLimitPerOptions}
value={field.value}
onChange={(val) => {
field.onChange(val);
form.setValue(
'security.rate_limit.limit',
getLimitRateLimitPerMax(val),
);
}}
></SelectWithSearch>
)}
</RAGFlowFormItem>
<section>
<Label>{t('flow.webhook.limit')}</Label>
<div className="flex items-center mt-1 gap-2">
<RAGFlowFormItem
name="security.rate_limit.limit"
className="flex-1"
>
<NumberInput
max={getLimitRateLimitPerMax(rateLimitPer)}
className="w-full"
></NumberInput>
</RAGFlowFormItem>
<Separator className="w-2" />
<RAGFlowFormItem name="security.rate_limit.per">
{(field) => (
<SelectWithSearch
options={RateLimitPerOptions}
value={field.value}
onChange={(val) => {
field.onChange(val);
form.setValue(
'security.rate_limit.limit',
getLimitRateLimitPerMax(val),
);
}}
></SelectWithSearch>
)}
</RAGFlowFormItem>
</div>
</section>
<RAGFlowFormItem
name="security.max_body_size"
label={t('flow.webhook.maxBodySize')}

View File

@ -179,6 +179,8 @@ export const useInitializeOperatorParams = () => {
[Operator.Loop]: initialLoopValues,
[Operator.LoopStart]: {},
[Operator.ExitLoop]: {},
[Operator.PDFGenerator]: {},
[Operator.ExcelProcessor]: {},
};
}, [llmId]);

View File

@ -0,0 +1,8 @@
import { useParams } from 'umi';
export function useBuildWebhookUrl() {
const { id } = useParams();
const text = `${location.protocol}//${location.host}/api/v1/webhook/${id}`;
return text;
}

View File

@ -8,7 +8,7 @@ import {
TooltipContent,
TooltipTrigger,
} from '@/components/ui/tooltip';
import { IChunk } from '@/interfaces/database/knowledge';
import type { ChunkDocType, IChunk } from '@/interfaces/database/knowledge';
import { cn } from '@/lib/utils';
import { CheckedState } from '@radix-ui/react-checkbox';
import classNames from 'classnames';
@ -67,6 +67,10 @@ const ChunkCard = ({
setEnabled(available === 1);
}, [available]);
const chunkType =
((item.doc_type_kwd &&
String(item.doc_type_kwd)?.toLowerCase()) as ChunkDocType) || 'text';
return (
<Card
className={classNames('relative flex-none', styles.chunkCard, {
@ -81,9 +85,7 @@ const ChunkCard = ({
bg-bg-card rounded-bl-2xl rounded-tr-lg
border-l-0.5 border-b-0.5 border-border-button"
>
{t(
`chunk.docType.${item.doc_type_kwd ? String(item.doc_type_kwd).toLowerCase() : 'text'}`,
)}
{t(`chunk.docType.${chunkType}`)}
</span>
<div className="flex items-start justify-between gap-2">

View File

@ -22,6 +22,7 @@ import { Switch } from '@/components/ui/switch';
import { Textarea } from '@/components/ui/textarea';
import { useFetchChunk } from '@/hooks/use-chunk-request';
import { IModalProps } from '@/interfaces/common';
import type { ChunkDocType } from '@/interfaces/database/knowledge';
import React, { useCallback, useEffect, useState } from 'react';
import { FieldValues, FormProvider, useForm } from 'react-hook-form';
import { useTranslation } from 'react-i18next';
@ -151,20 +152,25 @@ const ChunkCreatingModal: React.FC<IModalProps<any> & kFProps> = ({
<FormField
control={form.control}
name="doc_type_kwd"
render={({ field }) => (
<FormItem>
<FormLabel>{t(`chunk.type`)}</FormLabel>
<FormControl>
<Input
type="text"
value={t(
`chunk.docType.${field.value ? String(field.value).toLowerCase() : 'text'}`,
)}
readOnly
/>
</FormControl>
</FormItem>
)}
render={({ field }) => {
const chunkType =
((field.value &&
String(field.value)?.toLowerCase()) as ChunkDocType) ||
'text';
return (
<FormItem>
<FormLabel>{t(`chunk.type`)}</FormLabel>
<FormControl>
<Input
type="text"
value={t(`chunk.docType.${chunkType}`)}
readOnly
/>
</FormControl>
</FormItem>
);
}}
/>
)}

View File

@ -129,6 +129,7 @@ export const util = {
metaDataSettingJSONToMetaDataTableData(
data: IMetaDataReturnJSONSettings,
): IMetaDataTableData[] {
if (!Array.isArray(data)) return [];
return data.map((item) => {
return {
field: item.key,

View File

@ -15,6 +15,7 @@ import {
TableRow,
} from '@/components/ui/table';
import { useSetModalState } from '@/hooks/common-hooks';
import { Routes } from '@/routes';
import {
ColumnDef,
flexRender,
@ -27,6 +28,7 @@ import {
import { Plus, Settings, Trash2 } from 'lucide-react';
import { useCallback, useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { useHandleMenuClick } from '../../sidebar/hooks';
import {
MetadataDeleteMap,
MetadataType,
@ -78,7 +80,7 @@ export const ManageMetadataModal = (props: IManageModalProps) => {
addUpdateValue,
addDeleteValue,
} = useManageMetaDataModal(originalTableData, metadataType, otherData);
const { handleMenuClick } = useHandleMenuClick();
const {
visible: manageValuesVisible,
showModal: showManageValuesModal,
@ -335,67 +337,87 @@ export const ManageMetadataModal = (props: IManageModalProps) => {
success?.(res);
}}
>
<div className="flex flex-col gap-2">
<div className="flex items-center justify-between">
<div>{t('knowledgeDetails.metadata.metadata')}</div>
{isCanAdd && (
<Button
variant={'ghost'}
className="border border-border-button"
onClick={handAddValueRow}
>
<Plus />
</Button>
)}
</div>
<Table rootClassName="max-h-[800px]">
<TableHeader>
{table.getHeaderGroups().map((headerGroup) => (
<TableRow key={headerGroup.id}>
{headerGroup.headers.map((header) => (
<TableHead key={header.id}>
{header.isPlaceholder
? null
: flexRender(
header.column.columnDef.header,
header.getContext(),
)}
</TableHead>
))}
</TableRow>
))}
</TableHeader>
<TableBody className="relative">
{table.getRowModel().rows?.length ? (
table.getRowModel().rows.map((row) => (
<TableRow
key={row.id}
data-state={row.getIsSelected() && 'selected'}
className="group"
>
{row.getVisibleCells().map((cell) => (
<TableCell key={cell.id}>
{flexRender(
cell.column.columnDef.cell,
cell.getContext(),
)}
</TableCell>
<>
<div className="flex flex-col gap-2">
<div className="flex items-center justify-between">
<div>{t('knowledgeDetails.metadata.metadata')}</div>
{metadataType === MetadataType.Manage && false && (
<Button
variant={'ghost'}
className="border border-border-button"
type="button"
onClick={handleMenuClick(Routes.DataSetSetting, {
openMetadata: true,
})}
>
{t('knowledgeDetails.metadata.toMetadataSetting')}
</Button>
)}
{isCanAdd && (
<Button
variant={'ghost'}
className="border border-border-button"
type="button"
onClick={handAddValueRow}
>
<Plus />
</Button>
)}
</div>
<Table rootClassName="max-h-[800px]">
<TableHeader>
{table.getHeaderGroups().map((headerGroup) => (
<TableRow key={headerGroup.id}>
{headerGroup.headers.map((header) => (
<TableHead key={header.id}>
{header.isPlaceholder
? null
: flexRender(
header.column.columnDef.header,
header.getContext(),
)}
</TableHead>
))}
</TableRow>
))
) : (
<TableRow>
<TableCell
colSpan={columns.length}
className="h-24 text-center"
>
<Empty type={EmptyType.Data} />
</TableCell>
</TableRow>
)}
</TableBody>
</Table>
</div>
))}
</TableHeader>
<TableBody className="relative">
{table.getRowModel().rows?.length ? (
table.getRowModel().rows.map((row) => (
<TableRow
key={row.id}
data-state={row.getIsSelected() && 'selected'}
className="group"
>
{row.getVisibleCells().map((cell) => (
<TableCell key={cell.id}>
{flexRender(
cell.column.columnDef.cell,
cell.getContext(),
)}
</TableCell>
))}
</TableRow>
))
) : (
<TableRow>
<TableCell
colSpan={columns.length}
className="h-24 text-center"
>
<Empty type={EmptyType.Data} />
</TableCell>
</TableRow>
)}
</TableBody>
</Table>
</div>
{metadataType === MetadataType.Manage && (
<div className=" absolute bottom-6 left-5 text-text-secondary text-sm">
{t('knowledgeDetails.metadata.toMetadataSettingTip')}
</div>
)}
</>
</Modal>
{manageValuesVisible && (
<ManageValuesModal

View File

@ -25,12 +25,13 @@ import { useComposeLlmOptionsByModelTypes } from '@/hooks/use-llm-request';
import { cn } from '@/lib/utils';
import { t } from 'i18next';
import { Settings } from 'lucide-react';
import { useMemo, useState } from 'react';
import { useCallback, useEffect, useMemo, useState } from 'react';
import {
ControllerRenderProps,
FieldValues,
useFormContext,
} from 'react-hook-form';
import { useLocation } from 'umi';
import {
MetadataType,
useManageMetadata,
@ -368,6 +369,7 @@ export function AutoMetadata({
otherData?: Record<string, any>;
}) {
// get metadata field
const location = useLocation();
const form = useFormContext();
const {
manageMetadataVisible,
@ -377,6 +379,29 @@ export function AutoMetadata({
config: metadataConfig,
} = useManageMetadata();
const handleClickOpenMetadata = useCallback(() => {
const metadata = form.getValues('parser_config.metadata');
const tableMetaData = util.metaDataSettingJSONToMetaDataTableData(metadata);
showManageMetadataModal({
metadata: tableMetaData,
isCanAdd: true,
type: type,
record: otherData,
});
}, [form, otherData, showManageMetadataModal, type]);
useEffect(() => {
const locationState = location.state as
| { openMetadata?: boolean }
| undefined;
if (locationState?.openMetadata) {
setTimeout(() => {
handleClickOpenMetadata();
}, 100);
locationState.openMetadata = false;
}
}, [location, handleClickOpenMetadata]);
const autoMetadataField: FormFieldConfig = {
name: 'parser_config.enable_metadata',
label: t('knowledgeConfiguration.autoMetadata'),
@ -386,21 +411,7 @@ export function AutoMetadata({
tooltip: t('knowledgeConfiguration.autoMetadataTip'),
render: (fieldProps: ControllerRenderProps) => (
<div className="flex items-center justify-between">
<Button
type="button"
variant="ghost"
onClick={() => {
const metadata = form.getValues('parser_config.metadata');
const tableMetaData =
util.metaDataSettingJSONToMetaDataTableData(metadata);
showManageMetadataModal({
metadata: tableMetaData,
isCanAdd: true,
type: type,
record: otherData,
});
}}
>
<Button type="button" variant="ghost" onClick={handleClickOpenMetadata}>
<div className="flex items-center gap-2">
<Settings />
{t('knowledgeConfiguration.settings')}

View File

@ -79,6 +79,7 @@ export default function Dataset() {
useRowSelection();
const {
chunkNum,
list,
visible: reparseDialogVisible,
hideModal: hideReparseDialogModal,
@ -218,7 +219,7 @@ export default function Dataset() {
// hidden={isZeroChunk || isRunning}
hidden={false}
handleOperationIconClick={handleOperationIconClick}
chunk_num={0}
chunk_num={chunkNum}
visible={reparseDialogVisible}
hideModal={hideReparseDialogModal}
></ReparseDialog>

View File

@ -72,7 +72,7 @@ export function ParsingStatusCell({
const isRunning = isParserRunning(run);
const isZeroChunk = chunk_num === 0;
const handleOperationIconClick = (option: {
const handleOperationIconClick = (option?: {
delete: boolean;
apply_kb: boolean;
}) => {
@ -183,8 +183,8 @@ export function ParsingStatusCell({
)}
{reparseDialogVisible && (
<ReparseDialog
// hidden={isZeroChunk || isRunning}
hidden={false}
hidden={isRunning}
// hidden={false}
handleOperationIconClick={handleOperationIconClick}
chunk_num={chunk_num}
visible={reparseDialogVisible}

View File

@ -2,141 +2,154 @@ import { ConfirmDeleteDialog } from '@/components/confirm-delete-dialog';
import {
DynamicForm,
DynamicFormRef,
FormFieldConfig,
FormFieldType,
} from '@/components/dynamic-form';
import { Checkbox } from '@/components/ui/checkbox';
import { DialogProps } from '@radix-ui/react-dialog';
import { t } from 'i18next';
import { useCallback, useState } from 'react';
import { memo, useCallback, useEffect, useRef, useState } from 'react';
import { ControllerRenderProps } from 'react-hook-form';
import { useTranslation } from 'react-i18next';
export const ReparseDialog = ({
handleOperationIconClick,
chunk_num,
hidden = false,
visible = true,
hideModal,
children,
}: DialogProps & {
chunk_num: number;
handleOperationIconClick: (options: {
delete: boolean;
apply_kb: boolean;
}) => void;
visible: boolean;
hideModal: () => void;
hidden?: boolean;
}) => {
const [formInstance, setFormInstance] = useState<DynamicFormRef | null>(null);
export const ReparseDialog = memo(
({
handleOperationIconClick,
chunk_num,
hidden = false,
visible = true,
hideModal,
}: DialogProps & {
chunk_num: number;
handleOperationIconClick: (options?: {
delete: boolean;
apply_kb: boolean;
}) => void;
visible: boolean;
hideModal: () => void;
hidden?: boolean;
}) => {
const [defaultValues, setDefaultValues] = useState<any>(null);
const [fields, setFields] = useState<FormFieldConfig[]>([]);
const { t } = useTranslation();
const handleOperationIconClickRef = useRef(handleOperationIconClick);
const hiddenRef = useRef(hidden);
const formCallbackRef = useCallback((node: DynamicFormRef | null) => {
if (node) {
setFormInstance(node);
console.log('Form instance assigned:', node);
} else {
console.log('Form instance removed');
}
}, []);
useEffect(() => {
handleOperationIconClickRef.current = handleOperationIconClick;
hiddenRef.current = hidden;
});
const handleCancel = useCallback(() => {
// handleOperationIconClick(false);
hideModal?.();
formInstance?.reset();
}, [formInstance]);
const handleSave = useCallback(async () => {
const instance = formInstance;
if (!instance) {
console.error('Form instance is null');
return;
}
const check = await instance.trigger();
if (check) {
instance.submit();
const formValues = instance.getValues();
console.log(formValues);
handleOperationIconClick({
delete: formValues.delete,
apply_kb: formValues.apply_kb,
useEffect(() => {
if (hiddenRef.current) {
handleOperationIconClickRef.current();
}
}, []);
useEffect(() => {
setDefaultValues({
delete: chunk_num > 0,
apply_kb: false,
});
}
}, [formInstance, handleOperationIconClick]);
// useEffect(() => {
// if (!hidden) {
// const timer = setTimeout(() => {
// if (!formInstance) {
// console.warn(
// 'Form ref is still null after component should be mounted',
// );
// } else {
// console.log('Form ref is properly set');
// }
// }, 1000);
// return () => clearTimeout(timer);
// }
// }, [hidden, formInstance]);
return (
<ConfirmDeleteDialog
title={t(`knowledgeDetails.parseFile`)}
onOk={() => handleSave()}
onCancel={() => handleCancel()}
hidden={hidden}
open={visible}
okButtonText={t('common.confirm')}
content={{
title: t(`knowledgeDetails.parseFileTip`),
node: (
<div>
<DynamicForm.Root
onSubmit={(data) => {
console.log('submit', data);
const deleteField = {
name: 'delete',
label: '',
type: FormFieldType.Checkbox,
render: (fieldProps: ControllerRenderProps) => (
<div className="flex items-center text-text-secondary p-5 border border-border-button rounded-lg">
<Checkbox
{...fieldProps}
checked={fieldProps.value}
onCheckedChange={(checked: boolean) => {
fieldProps.onChange(checked);
}}
ref={formCallbackRef}
fields={[
{
name: 'delete',
label: '',
type: FormFieldType.Checkbox,
render: (fieldProps) => (
<div className="flex items-center text-text-secondary p-5 border border-border-button rounded-lg">
<Checkbox
{...fieldProps}
onCheckedChange={(checked: boolean) => {
fieldProps.onChange(checked);
}}
/>
<span className="ml-2">
{chunk_num > 0
? t(`knowledgeDetails.redo`, { chunkNum: chunk_num })
: t('knowledgeDetails.redoAll')}
</span>
</div>
),
},
{
name: 'apply_kb',
label: '',
type: FormFieldType.Checkbox,
render: (fieldProps) => (
<div className="flex items-center text-text-secondary p-5 border border-border-button rounded-lg">
<Checkbox
{...fieldProps}
onCheckedChange={(checked: boolean) => {
fieldProps.onChange(checked);
}}
/>
<span className="ml-2">
{t('knowledgeDetails.applyAutoMetadataSettings')}
</span>
</div>
),
},
]}
>
{/* <DynamicForm.CancelButton
/>
<span className="ml-2">
{chunk_num > 0
? t(`knowledgeDetails.redo`, {
chunkNum: chunk_num,
})
: t('knowledgeDetails.redoAll')}
</span>
</div>
),
};
const applyKBField = {
name: 'apply_kb',
label: '',
type: FormFieldType.Checkbox,
defaultValue: false,
render: (fieldProps: ControllerRenderProps) => (
<div className="flex items-center text-text-secondary p-5 border border-border-button rounded-lg">
<Checkbox
{...fieldProps}
checked={fieldProps.value}
onCheckedChange={(checked: boolean) => {
fieldProps.onChange(checked);
}}
/>
<span className="ml-2">
{t('knowledgeDetails.applyAutoMetadataSettings')}
</span>
</div>
),
};
if (chunk_num > 0) {
setFields([deleteField, applyKBField]);
}
if (chunk_num <= 0) {
setFields([applyKBField]);
}
}, [chunk_num, t]);
const formCallbackRef = useRef<DynamicFormRef>(null);
const handleCancel = useCallback(() => {
// handleOperationIconClick(false);
hideModal?.();
// formInstance?.reset();
formCallbackRef?.current?.reset();
}, [formCallbackRef, hideModal]);
const handleSave = useCallback(async () => {
// const instance = formInstance;
const instance = formCallbackRef?.current;
if (!instance) {
console.error('Form instance is null');
return;
}
const check = await instance.trigger();
if (check) {
instance.submit();
const formValues = instance.getValues();
console.log(formValues);
handleOperationIconClick({
delete: formValues.delete,
apply_kb: formValues.apply_kb,
});
}
}, [formCallbackRef, handleOperationIconClick]);
return (
<ConfirmDeleteDialog
title={t(`knowledgeDetails.parseFile`)}
onOk={() => handleSave()}
onCancel={() => handleCancel()}
hidden={hidden}
open={visible}
okButtonText={t('common.confirm')}
content={{
title: t(`knowledgeDetails.parseFileTip`),
node: (
<div>
<DynamicForm.Root
onSubmit={(data) => {
console.log('submit', data);
}}
ref={formCallbackRef}
fields={fields}
defaultValues={defaultValues}
>
{/* <DynamicForm.CancelButton
handleCancel={() => handleOperationIconClick(false)}
cancelText={t('common.cancel')}
/>
@ -144,12 +157,13 @@ export const ReparseDialog = ({
buttonText={t('common.confirm')}
submitFunc={handleSave}
/> */}
</DynamicForm.Root>
</div>
),
}}
>
{/* {children} */}
</ConfirmDeleteDialog>
);
};
</DynamicForm.Root>
</div>
),
}}
>
{/* {children} */}
</ConfirmDeleteDialog>
);
},
);
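The ReparseDialog rewrite above memoizes the component and mirrors `handleOperationIconClick` and `hidden` into refs, so the mount-only effect can auto-confirm a hidden dialog without re-running when the parent recreates the callback. A minimal sketch of that latest-ref pattern, using a hypothetical `onConfirm` prop in place of the real handler:

```tsx
import { memo, useEffect, useRef } from 'react';

type AutoConfirmProps = {
  hidden?: boolean;
  onConfirm: (options?: { delete: boolean; apply_kb: boolean }) => void;
};

// Illustrative only: props are copied into refs on every render, so the
// mount-only effect always sees the latest values without listing them
// as dependencies (and without breaking memoization).
export const AutoConfirm = memo(
  ({ hidden = false, onConfirm }: AutoConfirmProps) => {
    const onConfirmRef = useRef(onConfirm);
    const hiddenRef = useRef(hidden);

    useEffect(() => {
      onConfirmRef.current = onConfirm;
      hiddenRef.current = hidden;
    });

    useEffect(() => {
      // Runs once on mount; a hidden dialog confirms immediately with defaults.
      if (hiddenRef.current) {
        onConfirmRef.current();
      }
    }, []);

    return null;
  },
);
```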

View File

@ -10,7 +10,7 @@ import {
} from '@/hooks/use-document-request';
import { IDocumentInfo } from '@/interfaces/database/document';
import { Ban, CircleCheck, CircleX, Play, Trash2 } from 'lucide-react';
import { useCallback } from 'react';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { toast } from 'sonner';
import { DocumentType, RunningStatus } from './constant';
@ -32,6 +32,16 @@ export function useBulkOperateDataset({
const { setDocumentStatus } = useSetDocumentStatus();
const { removeDocument } = useRemoveDocument();
const { visible, showModal, hideModal } = useSetModalState();
const chunkNum = useMemo(() => {
if (!documents.length) {
return 0;
}
return documents.reduce((acc, cur) => {
return acc + cur.chunk_num;
}, 0);
}, [documents]);
const runDocument = useCallback(
async (run: number, option?: { delete: boolean; apply_kb: boolean }) => {
const nonVirtualKeys = selectedRowKeys.filter(
@ -54,7 +64,7 @@ export function useBulkOperateDataset({
);
const handleRunClick = useCallback(
(option: { delete: boolean; apply_kb: boolean }) => {
(option?: { delete: boolean; apply_kb: boolean }) => {
runDocument(1, option);
},
[runDocument],
@ -132,5 +142,5 @@ export function useBulkOperateDataset({
},
];
return { list, visible, hideModal, showModal, handleRunClick };
return { chunkNum, list, visible, hideModal, showModal, handleRunClick };
}
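The new `chunkNum` is the sum of `chunk_num` over the listed documents, memoized on `documents`, and the dataset page now forwards it to `ReparseDialog` instead of the hard-coded 0. The aggregation itself, pulled out of React for a quick check against made-up documents:

```ts
type Doc = { chunk_num: number };

// Same reduce as in useBulkOperateDataset, outside the useMemo for illustration.
function totalChunks(documents: Doc[]): number {
  if (!documents.length) {
    return 0;
  }
  return documents.reduce((acc, cur) => acc + cur.chunk_num, 0);
}

// totalChunks([{ chunk_num: 3 }, { chunk_num: 0 }, { chunk_num: 5 }]) === 8
```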

View File

@ -10,7 +10,7 @@ export const useHandleRunDocumentByIds = (id: string) => {
const handleRunDocumentByIds = async (
documentId: string,
isRunning: boolean,
option: { delete: boolean; apply_kb: boolean },
option?: { delete: boolean; apply_kb: boolean },
) => {
if (isLoading) {
return;
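With `option` now optional here as well, callers that only want to (re)run a document can omit the flags, while the dialog-confirmed path still passes them explicitly. A hedged sketch of the two call shapes (the handler is assumed to return a promise, matching the async definition above):

```ts
type RunFn = (
  documentId: string,
  isRunning: boolean,
  option?: { delete: boolean; apply_kb: boolean },
) => Promise<void>;

async function rerunExamples(run: RunFn, documentId: string) {
  // Default path, e.g. triggered from an auto-confirmed hidden dialog.
  await run(documentId, false);

  // Explicit path: flags collected by the ReparseDialog form.
  await run(documentId, false, { delete: true, apply_kb: false });
}
```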

View File

@ -38,21 +38,27 @@ interface ProcessLogModalProps {
}
const InfoItem: React.FC<{
overflowTip?: boolean;
label: string;
value: string | React.ReactNode;
className?: string;
}> = ({ label, value, className = '' }) => {
}> = ({ label, value, className = '', overflowTip = false }) => {
return (
<div className={`flex flex-col mb-4 ${className}`}>
<span className="text-text-secondary text-sm">{label}</span>
<Tooltip>
<TooltipTrigger asChild>
<span className="text-text-primary mt-1 truncate w-full">
{value}
</span>
</TooltipTrigger>
<TooltipContent>{value}</TooltipContent>
</Tooltip>
{overflowTip && (
<Tooltip>
<TooltipTrigger asChild>
<span className="text-text-primary mt-1 truncate w-full">
{value}
</span>
</TooltipTrigger>
<TooltipContent>{value}</TooltipContent>
</Tooltip>
)}
{!overflowTip && (
<span className="text-text-primary mt-1 truncate w-full">{value}</span>
)}
</div>
);
};
@ -139,6 +145,7 @@ const ProcessLogModal: React.FC<ProcessLogModalProps> = ({
return (
<div className="w-1/2" key={key}>
<InfoItem
overflowTip={true}
label={t(key)}
value={logInfo[key as keyof typeof logInfo]}
/>

View File

@ -7,8 +7,8 @@ export const useHandleMenuClick = () => {
const { id } = useParams();
const handleMenuClick = useCallback(
(key: Routes) => () => {
navigate(`${Routes.DatasetBase}${key}/${id}`);
(key: Routes, data?: any) => () => {
navigate(`${Routes.DatasetBase}${key}/${id}`, { state: data });
},
[id, navigate],
);
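Together with the AutoMetadata change earlier in this diff, the extra `data` argument lets a menu item deep-link into the metadata settings: the caller navigates with router state, and the configuration page reads `location.state.openMetadata` to open the modal. A minimal sketch; the `Routes` values and component name are assumptions, and the hooks are the same umi router exports used elsewhere in this diff:

```tsx
import { useCallback } from 'react';
import { useNavigate, useParams } from 'umi';

// Assumed route values, for illustration only.
enum Routes {
  DatasetBase = '/dataset',
  DatasetSetting = '/setting',
}

export function MetadataShortcut() {
  const navigate = useNavigate();
  const { id } = useParams();

  // Mirrors useHandleMenuClick: the optional data becomes router state,
  // which AutoMetadata's effect later reads as location.state.openMetadata.
  const handleMenuClick = useCallback(
    (key: Routes, data?: unknown) => () => {
      navigate(`${Routes.DatasetBase}${key}/${id}`, { state: data });
    },
    [id, navigate],
  );

  return (
    <button
      onClick={handleMenuClick(Routes.DatasetSetting, { openMetadata: true })}
    >
      Metadata settings
    </button>
  );
}
```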

View File

@ -18,6 +18,7 @@ import {
} from '@/components/ui/table';
import { Pagination } from '@/interfaces/common';
import { replaceText } from '@/pages/dataset/process-log-modal';
import { MemoryOptions } from '@/pages/memories/constants';
import {
ColumnDef,
ColumnFiltersState,
@ -99,7 +100,12 @@ export function MemoryTable({
header: () => <span>{t('memory.messages.type')}</span>,
cell: ({ row }) => (
<div className="text-sm font-medium capitalize">
{row.getValue('message_type')}
{row.getValue('message_type')
? MemoryOptions(t).find(
(item) =>
item.value === (row.getValue('message_type') as string),
)?.label
: row.getValue('message_type')}
</div>
),
},
@ -117,13 +123,13 @@ export function MemoryTable({
<div className="text-sm ">{row.getValue('forget_at')}</div>
),
},
{
accessorKey: 'source_id',
header: () => <span>{t('memory.messages.source')}</span>,
cell: ({ row }) => (
<div className="text-sm ">{row.getValue('source_id')}</div>
),
},
// {
// accessorKey: 'source_id',
// header: () => <span>{t('memory.messages.source')}</span>,
// cell: ({ row }) => (
// <div className="text-sm ">{row.getValue('source_id')}</div>
// ),
// },
{
accessorKey: 'status',
header: () => <span>{t('memory.messages.enable')}</span>,
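The type column now maps the raw `message_type` to its localized label from `MemoryOptions(t)`, leaving the value untouched when it is empty. The lookup can be factored into a small helper; the option shape below is inferred from the diff, and the explicit fallback to the raw value is an addition the cell renderer leaves implicit:

```ts
// Inferred shape of the entries returned by MemoryOptions(t).
type MemoryOption = { value: string; label: string };

export function messageTypeLabel(
  options: MemoryOption[],
  messageType?: string,
): string | undefined {
  if (!messageType) {
    return messageType; // keep empty/undefined values as-is
  }
  return (
    options.find((item) => item.value === messageType)?.label ?? messageType
  );
}

// Usage in the cell renderer (sketch):
// messageTypeLabel(MemoryOptions(t), row.getValue('message_type'))
```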

View File

@ -92,6 +92,7 @@ export const MemoryModelForm = () => {
label: t('memory.config.memorySize') + ' (Bytes)',
type: FormFieldType.Number,
horizontal: true,
tooltip: t('memory.config.memorySizeTooltip'),
// placeholder: t('memory.config.memorySizePlaceholder'),
required: false,
}}

View File

@ -27,6 +27,8 @@ export enum DataSourceKey {
AIRTABLE = 'airtable',
GITLAB = 'gitlab',
ASANA = 'asana',
IMAP = 'imap',
GITHUB = 'github',
// SHAREPOINT = 'sharepoint',
// SLACK = 'slack',
// TEAMS = 'teams',
@ -121,6 +123,16 @@ export const generateDataSourceInfo = (t: TFunction) => {
description: t(`setting.${DataSourceKey.ASANA}Description`),
icon: <SvgIcon name={'data-source/asana'} width={38} />,
},
[DataSourceKey.GITHUB]: {
name: 'GitHub',
description: t(`setting.${DataSourceKey.GITHUB}Description`),
icon: <SvgIcon name={'data-source/github'} width={38} />,
},
[DataSourceKey.IMAP]: {
name: 'IMAP',
description: t(`setting.${DataSourceKey.IMAP}Description`),
icon: <SvgIcon name={'data-source/imap'} width={38} />,
},
};
};
@ -648,7 +660,7 @@ export const DataSourceFormFields = {
{
label: 'Access Token',
name: 'config.credentials.airtable_access_token',
type: FormFieldType.Text,
type: FormFieldType.Password,
required: true,
},
{
@ -716,7 +728,7 @@ export const DataSourceFormFields = {
{
label: 'API Token',
name: 'config.credentials.asana_api_token_secret',
type: FormFieldType.Text,
type: FormFieldType.Password,
required: true,
},
{
@ -738,6 +750,78 @@ export const DataSourceFormFields = {
required: false,
},
],
[DataSourceKey.GITHUB]: [
{
label: 'Repository Owner',
name: 'config.repository_owner',
type: FormFieldType.Text,
required: true,
},
{
label: 'Repository Name',
name: 'config.repository_name',
type: FormFieldType.Text,
required: true,
},
{
label: 'GitHub Access Token',
name: 'config.credentials.github_access_token',
type: FormFieldType.Password,
required: true,
},
{
label: 'Include Pull Requests',
name: 'config.include_pull_requests',
type: FormFieldType.Checkbox,
required: false,
defaultValue: false,
},
{
label: 'Include Issues',
name: 'config.include_issues',
type: FormFieldType.Checkbox,
required: false,
defaultValue: false,
},
],
[DataSourceKey.IMAP]: [
{
label: 'Username',
name: 'config.credentials.imap_username',
type: FormFieldType.Text,
required: true,
},
{
label: 'Password',
name: 'config.credentials.imap_password',
type: FormFieldType.Password,
required: true,
},
{
label: 'Host',
name: 'config.imap_host',
type: FormFieldType.Text,
required: true,
},
{
label: 'Port',
name: 'config.imap_port',
type: FormFieldType.Number,
required: true,
},
{
label: 'Mailboxes',
name: 'config.imap_mailbox',
type: FormFieldType.Tag,
required: false,
},
{
label: 'Poll Range',
name: 'config.poll_range',
type: FormFieldType.Number,
required: false,
},
],
};
export const DataSourceFormDefaultValues = {
@ -964,4 +1048,32 @@ export const DataSourceFormDefaultValues = {
},
},
},
[DataSourceKey.GITHUB]: {
name: '',
source: DataSourceKey.GITHUB,
config: {
repository_owner: '',
repository_name: '',
include_pull_requests: false,
include_issues: false,
credentials: {
github_access_token: '',
},
},
},
[DataSourceKey.IMAP]: {
name: '',
source: DataSourceKey.IMAP,
config: {
name: '',
imap_host: '',
imap_port: 993,
imap_mailbox: [],
poll_range: 30,
credentials: {
imap_username: '',
imap_password: '',
},
},
},
};
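The GitHub and IMAP entries follow the existing pattern: `generateDataSourceInfo` supplies the card metadata, `DataSourceFormFields` drives the dynamic form, and `DataSourceFormDefaultValues` seeds it. A hedged sketch of how the GitHub form might be assembled; the constants import path is assumed and the `DynamicForm` wiring is simplified:

```tsx
import { DynamicForm } from '@/components/dynamic-form';
import {
  DataSourceFormDefaultValues,
  DataSourceFormFields,
  DataSourceKey,
} from './constant'; // assumed path of the constants shown above

// Field definitions and defaults for a source are looked up by the same key.
const githubFields = DataSourceFormFields[DataSourceKey.GITHUB];
const githubDefaults = DataSourceFormDefaultValues[DataSourceKey.GITHUB];

export function GithubDataSourceForm() {
  return (
    <DynamicForm.Root
      fields={githubFields}
      defaultValues={githubDefaults}
      onSubmit={(data) => {
        // Submit handler is application-specific; logged here for the sketch.
        console.log('create GitHub data source', data);
      }}
    />
  );
}
```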

Some files were not shown because too many files have changed in this diff.