Compare commits


154 Commits

Author SHA1 Message Date
ff4239c7cf Docs: Updated descriptions on metadata filtering (#10518)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-13 17:33:04 +08:00
cf5867b146 Feat: Merge title splitter and token splitter into chunker category #9869 (#10517)
### What problem does this PR solve?

Feat: Merge title splitter and token splitter into chunker category
#9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 15:46:14 +08:00
77481ab3ab Fix: Optimized the login page and fixed some known issues. #9869 (#10514)
### What problem does this PR solve?

Fix: Optimized the login page and fixed some known issues. #9869

- Added the FlipCard3D component to implement a 3D flip effect on the
login/registration forms.
- Adjusted the Spotlight component to support custom positioning and
color configurations.
- Updated the route to point to the new login page /login-next.
- Added a cancel interface to the auto-generate function.
- Fixed scroll bar issues in PDF preview.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 15:31:36 +08:00
9c53b3336a Fix: The Context Generator(Transformer) node can only be followed by a Tokenizer(Indexer) and a Context Generator(Transformer). #9869 (#10515)
### What problem does this PR solve?

Fix: The Context Generator node can only be followed by a Tokenizer and
a Context Generator. #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 14:37:30 +08:00
24481f0332 Fix: Update LM Studio models support, refer to #8116 (#10509)
### What problem does this PR solve?

Fix: Update LM Studio models support, refer to #8116

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update
2025-10-13 13:58:08 +08:00
4e6b84bb41 Feat: add trino support (#10512)
### What problem does this PR solve?
issue:
[#10296](https://github.com/infiniflow/ragflow/issues/10296)
change:
- ExeSQL: support connecting to Trino.
- Validation: the password can be empty only when db_type === "trino";
  all other database types keep the existing non-empty requirement (see
  the sketch below).
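
The rule reduces to a small check; this is a hedged, backend-style sketch with hypothetical names, not the PR's actual frontend validation code:

```python
def validate_exesql_config(db_type: str, password: str) -> None:
    """Hypothetical helper: only Trino may connect with an empty password."""
    if db_type != "trino" and not password:
        raise ValueError(f"password is required for db_type={db_type!r}")

validate_exesql_config("trino", "")        # OK
validate_exesql_config("mysql", "secret")  # OK
# validate_exesql_config("mysql", "")      # would raise ValueError
```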

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 13:57:40 +08:00
65c3f0406c Fix: maintain backward compatibility for KB tasks (#10508)
### What problem does this PR solve?

Maintain backward compatibility for KB tasks

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 11:53:48 +08:00
7fb8b30cc2 fix: decode before format to json (#10506)
### What problem does this PR solve?

Decode bytes before formatting them as JSON.
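
A minimal illustration of the fix described, assuming a UTF-8 byte payload (not the PR's exact code path):

```python
import json

raw: bytes = b'{"component": "HTTP", "status": 200}'
# Decode the bytes first; formatting raw bytes as JSON would fail or
# produce a quoted byte-string instead of structured data.
data = json.loads(raw.decode("utf-8"))
print(json.dumps(data, ensure_ascii=False, indent=2))
```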

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 11:11:06 +08:00
acca3640f7 Feat: Modify the background color of the canvas #9869 (#10507)
### What problem does this PR solve?

Feat: Modify the background color of the canvas #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 11:10:54 +08:00
58836d84fe Fix: Mcp reset error, #10497 (#10498)
### What problem does this PR solve?

Fix #10497

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 09:34:44 +08:00
ad56137a59 Feat: OpenSearch support for newly released embedding models (#10494)
### What problem does this PR solve?

Fixes issue: https://github.com/infiniflow/ragflow/issues/10402

Newly released embedding models produce vectors with up to 4096
dimensions, while the current OpenSearch mapping caps dimensions at 1536.
In my tests, a 4096-dimension vector fell back to a plain float field,
which OpenSearch cannot use for vector search.

OpenSearch itself supports up to 16000 dimensions by default with the
Faiss vector engine. According to:
https://docs.opensearch.org/2.19/field-types/supported-field-types/knn-methods-engines/

I raised the OpenSearch dimension limit to 10240, which I think will be
sufficient for the foreseeable future.

In my tests it worked well on my own server (the field is treated as
knn_vector) using qwen3-embedding:8b as the embedding model:
<img width="1338" height="790" alt="image"
src="https://github.com/user-attachments/assets/a9b2d284-fcf6-4cea-859a-6aadccf36ace"
/>
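
For reference, a hedged sketch of what a higher-dimension knn_vector mapping looks like in OpenSearch; the field name and parameters here are illustrative, not RAGFlow's actual index template:

```python
# Illustrative OpenSearch index body for a 4096-dimension embedding field.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "q_4096_vec": {           # hypothetical field name
                "type": "knn_vector",
                "dimension": 4096,    # now within the raised 10240 limit
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
            }
        }
    },
}
```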


### Type of change

- [x] New Feature (non-breaking change which adds functionality)


By the way, I will still focus on the stuff about
Elasticsearch/Opensearch as search engines and vector databases.

Co-authored-by: 张雨豪 <zhangyh80@chinatelecom.cn>
2025-10-11 19:58:12 +08:00
2828e321bc Fix: remove lang for audio. (#10496)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 19:38:07 +08:00
932781ea4e Fix: incorrect agent template #10393 (#10491)
### What problem does this PR solve?

Fix: incorrect agent template #10493

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 19:37:42 +08:00
5200711441 Feat: add support for multi-column PDF parsing (#10475)
### What problem does this PR solve?

Add support for multi-column PDF parsing. #9878, #9919.

Two-column sample:
<img width="1885" height="1020" alt="image"
src="https://github.com/user-attachments/assets/0270c028-2db8-4ca6-a4b7-cd5830882d28"
/>

Three-column sample: 
<img width="1881" height="992" alt="image"
src="https://github.com/user-attachments/assets/9ee88844-d5b1-4927-9e4e-3bd810d6e03a"
/>

Single-column sample:
<img width="1883" height="1042" alt="image"
src="https://github.com/user-attachments/assets/e93d3d18-43c3-4067-b5fa-e454ed0ab093"
/>



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 18:46:09 +08:00
c21cea2038 Fix: Added table of contents extraction functionality and optimized form item layout #9869 (#10492)
### What problem does this PR solve?

Fix: Added table of contents extraction functionality and optimized form
item layout #9869

- Added `EnableTocToggle` component to toggle table of contents
extraction on and off
- Added multiple parser configuration components (such as naive, book,
laws, etc.), displaying different parser components based on built-in
slicing methods

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 18:45:55 +08:00
6a0f448419 Feat: Modify the default style of the agent node anchor #9869 (#10489)
### What problem does this PR solve?

Feat: Modify the default style of the agent node anchor #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 18:45:38 +08:00
7d2f65671f Feat: debugging toc part. (#10486)
### What problem does this PR solve?

#10436

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 18:45:21 +08:00
a0d5f81098 Feat: include author, journal name, volume, issue, page, and DOI in PubMed search results (#10481)
### What problem does this PR solve?

issue:
[#6571](https://github.com/infiniflow/ragflow/issues/6571)
change:
include author, journal name, volume, issue, page, and DOI in PubMed
search results

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 16:00:16 +08:00
52f26f4643 feat(login): Refactor the login page and add dynamic background and highlight effects #9869 (#10482)
### What problem does this PR solve?

Refactor(login): Refactor the login page and add dynamic background and
highlight effects #9869

### Type of change
- [x] Refactoring
2025-10-11 15:24:42 +08:00
313e92dd9b Fix: Fixed the issue where the connection lines of placeholder nodes in the agent canvas could not be displayed #9869 (#10485)
### What problem does this PR solve?

Fix: Fixed the issue where the connection lines of placeholder nodes in
the agent canvas could not be displayed #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 15:24:27 +08:00
fee757eb41 Fix: Disable reasoning on Gemini 2.5 Flash by default (#10477)
### What problem does this PR solve?

Gemini 2.5 Flash models use reasoning by default, and there is currently
no way to disable this behaviour. This leads to very long response times
(> 1 min). The default behaviour should be that reasoning is disabled
and configurable.
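
One way to switch off thinking is to set a zero thinking budget via Google's google-genai SDK; this is a hedged sketch and not necessarily how RAGFlow's own Gemini integration applies the fix:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: direct Gemini API access
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize RAGFlow in one sentence.",
    config=types.GenerateContentConfig(
        # A thinking budget of 0 disables reasoning tokens on Flash models.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```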

issue #10474 

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 10:22:51 +08:00
b5ddc7ca05 fix: return type annotation for get_urls() in download_deps (#10478)
### What problem does this PR solve?

Fixes the return type annotation for the `get_urls` function in
`download_deps.py`

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 09:49:09 +08:00
534fa60b2a Fix: Agent.reset() argument wrong #10463 & Unable to converse with agent through Python API. #10415 (#10472)
### What problem does this PR solve?
Fix: Agent.reset() argument wrong #10463 & Unable to converse with agent
through Python API. #10415

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 20:44:05 +08:00
390b2b8f26 Feat: Adjust the style of the delete button on the agent side #9869 (#10473)
### What problem does this PR solve?

Feat: Adjust the style of the delete button on the agent side #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 19:52:08 +08:00
0283e4098f Fix #10408 (#10471)
### What problem does this PR solve?

Google Cloud model does not work correctly with gemini-2.5 models
Close #10408

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-10 19:18:24 +08:00
2cdba3d1e6 Fix: Fixed the issue where swagger apidocs could not be opened properly(#9522) (#10461)
### What problem does this PR solve?

Fix: Fixed the issue where swagger apidocs could not be opened
properly(#9522)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: virgilwong <hyhvirgil@gmail.com>
2025-10-10 18:36:20 +08:00
eb0b37d7ee typo: fix _entity_index_dilimiter_key to entity_index_delimiter_key (#10465)
### What problem does this PR solve?

typo: fix _entity_index_dilimiter_key to entity_index_delimiter_key

### Type of change

- [x] Refactoring
2025-10-10 18:36:00 +08:00
198e52e990 Feat: Added toc enhance field to chat and retrieval operator configuration #10436 (#10470)
### What problem does this PR solve?

Feat: Added toc enhance field to chat and retrieval operator
configuration #10436

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 18:35:43 +08:00
a50ccf77f9 Fix: Fixed some discovered bugs #9869 (#10466)
### What problem does this PR solve?

Fix: Bug fixes #9869
- Adjusted the breadcrumb display logic on the data flow results page
- Added the default display of "Local Upload" to the Source field in the
dataset overview table
- Replaced the original Mindmap Task field with the GraphRAG Task field
on the dataset settings page
- Optimized the build button status check criteria and adjusted the
progress information display logic
- Introduced a Tooltip in the parsing status cell component and removed
redundant Button components

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 17:16:34 +08:00
deaf15a08b Fix: typo (#10469)
### What problem does this PR solve?

Fix some typo in document.

### Type of change

- [x] Documentation Update
2025-10-10 17:08:20 +08:00
0d8791936e Feat: TOC retrieval (#10456)
### What problem does this PR solve?

#10436

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 17:07:55 +08:00
5d167cd772 feat: support qwq reasoning models with non-stream output (#10468)
### What problem does this PR solve?
issue:
[#6193](https://github.com/infiniflow/ragflow/issues/6193)
change:
support qwq reasoning models with non-stream output

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 16:38:04 +08:00
f35c5ed119 Fix: Fixed an issue where parser configurations could be added infinitely #9869 (#10464)
### What problem does this PR solve?

Fix: Fixed an issue where parser configurations could be added
infinitely #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 16:30:13 +08:00
fc46d6bb87 Fix: Fixed the issue where the operator added by clicking the plus sign in the data flow would overlap with the original section #9886 (#10458)
### What problem does this PR solve?

Fix: Fixed the issue where the operator added by clicking the plus sign
in the data flow would overlap with the original section #9886

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 13:10:18 +08:00
8252b1c5c0 bump infinity (#10422)
### What problem does this PR solve?

bump infinity to v0.6.0-dev7
Needs https://github.com/infiniflow/infinity/pull/3016

### Type of change
- [x] Other (please describe): Infinity
2025-10-10 12:41:45 +08:00
c802a6ffdd Feat: Add prompts for toc relevance check according to #10436 (#10457)
### What problem does this PR solve?

Feat: Add prompts for toc relevance check according to #10436

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 11:44:46 +08:00
9b06734ced Feat: add total in List dataset API (#10448)
### What problem does this PR solve?

Feat: add total in List dataset API,  solved #10360 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 11:20:55 +08:00
6ab4c1a6e9 Refactor: improve how NvidiaCV calculates the total token count from the response (#10455)
### What problem does this PR solve?
Improve how NvidiaCV calculates the total token count from the response.

### Type of change
- [x] Refactoring
2025-10-10 11:03:40 +08:00
f631073ac2 Fix OCR GPU provider mem limit handling (#10407)
### What problem does this PR solve?

- Running DeepDoc OCR on large PDFs inside the GPU docker-compose setup
would intermittently fail with
[ONNXRuntimeError] ... p2o.Clip.6 ... Available memory of 0 is smaller
than requested bytes ...
- Root cause: load_model() in deepdoc/vision/ocr.py treated
device_id=None as-is.
torch.cuda.device_count() > device_id then raised a TypeError, the
helper returned False, and ONNXRuntime quietly fell back to
CPUExecutionProvider with
the hard-coded 512 MB limit, which then triggered the allocator failure.
- Environment where this reproduces: Windows 11, AMD 5900x, 64 GB RAM,
RTX 3090 (24 GB), docker-compose-gpu.yml from upstream, default DeepDoc
+ GraphRAG
parser settings, ingesting heavy PDF such as 《内科学》(第10版).pdf (~180 MB).

  Fixes:

- Normalize device_id to 0 when it is None before calling any CUDA APIs,
so the GPU path is considered available.
- Allow configuring the CUDA provider’s memory cap via
OCR_GPU_MEM_LIMIT_MB (default 2048 MB) and expose
OCR_ARENA_EXTEND_STRATEGY; the calculated byte
  limit is logged to confirm the effective settings.

  After the change, ragflow_server.log shows for example
load_model ... uses GPU (device 0, gpu_mem_limit=21474836480,
arena_strategy=kNextPowerOfTwo) and the same document finishes OCR
without allocator errors.
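
A hedged sketch of the provider setup described above, using ONNX Runtime's documented CUDAExecutionProvider options (the model path is hypothetical):

```python
import os
import onnxruntime as ort

device_id = None
device_id = 0 if device_id is None else device_id  # normalize before touching any CUDA API

mem_limit_mb = int(os.environ.get("OCR_GPU_MEM_LIMIT_MB", "2048"))
arena_strategy = os.environ.get("OCR_ARENA_EXTEND_STRATEGY", "kNextPowerOfTwo")

providers = [(
    "CUDAExecutionProvider",
    {
        "device_id": device_id,
        "gpu_mem_limit": mem_limit_mb * 1024 * 1024,   # bytes
        "arena_extend_strategy": arena_strategy,
    },
)]
session = ort.InferenceSession("ocr_det.onnx", providers=providers)  # hypothetical model file
```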

  ### Type of change

  - [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 11:03:12 +08:00
8aabc2807c Feat: Pipeline Docx file supports Markdown output (#10439)
### What problem does this PR solve?

Pipeline Docx file supports Markdown output.

<img width="1242" height="755" alt="image"
src="https://github.com/user-attachments/assets/63cca75b-20b9-4a90-a01c-c0c2fccf1f2a"
/>

<img width="1227" height="717" alt="image"
src="https://github.com/user-attachments/assets/0dcb94b2-7ba0-48d5-9231-dc6e5c4b4192"
/>


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 09:39:15 +08:00
d931c33ced Fix typos: retrievaler -> retriever (#10372)
### What problem does this PR solve?

Fix typos

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-10 09:17:36 +08:00
f4324e89d9 Feat: Importing data flow files from the list page #9869 (#10446)
### What problem does this PR solve?

Feat: Importing data flow files from the list page #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 19:03:29 +08:00
f04c9e2937 Fix: correctly update parser method & correct vllm pdf parser (#10441)
### What problem does this PR solve?

Fix: correctly update parser method

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-09 19:03:12 +08:00
1fc2889f98 Feat: Replace the collapse icon #9869 (#10442)
### What problem does this PR solve?

Feat: Replace the collapse icon #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 17:21:38 +08:00
ee0c38da66 fix:update searxng_url logic (#10440)
### What problem does this PR solve?
issue:
[#10417](https://github.com/infiniflow/ragflow/issues/10417)
change:
Adjusted the `searxng_url` priority logic to ensure the
frontend-provided URL takes precedence over the model’s default
configuration. This allows user-specified SearXNG endpoints to be
correctly applied during execution, improving flexibility across
different environments.
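
The priority rule amounts to a one-line fallback; an illustrative sketch with hypothetical variable names:

```python
frontend_conf = {"searxng_url": "http://my-searxng:8080"}      # value supplied by the UI
model_default_conf = {"searxng_url": "http://localhost:8888"}  # model's default configuration

# Frontend-provided URL takes precedence; fall back to the default otherwise.
searxng_url = frontend_conf.get("searxng_url") or model_default_conf.get("searxng_url")
print(searxng_url)  # -> http://my-searxng:8080
```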

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-09 16:56:23 +08:00
c1806e1ab2 Fix: Bug fixes and removed previous settings page code #9869 (#10435)
### What problem does this PR solve?

Fix: Bug fixes and removed previous settings page code
- Modified the default text color class name in the FileStatusBadge
component
- Adjusted the FilterType interface definition for ListFilterBar to
support JSX labels and an optional count field
- Removed redundant changeRaptor callback functions and comment blocks
in RaptorFormFields
- Corrected the isHorizontal check logic in SliderInputFormField
- Optimized the Command component's scrolling behavior and enhanced its
event handling
- Adjusted the TooltipContent style to support automatic scrollbar
display
- Removed the old settings page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-10-09 16:55:27 +08:00
66d0d44a00 Feat: HTTP component supports variables (#10432)
### What problem does this PR solve?

HTTP component supports variables. #10382




![http1](https://github.com/user-attachments/assets/196a2a5b-461c-455c-8896-ec2efe7c0a13)


![http2](https://github.com/user-attachments/assets/0ab97cb0-323c-456e-b556-6f416d52e59f)


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 16:05:58 +08:00
2078d88c28 Feat: Modify the translation file of the workflow #9869 (#10430)
### What problem does this PR solve?
Feat: Modify the translation file of the workflow #9869


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 13:55:36 +08:00
7734ad7fcd Doc: guide for admin service (#10431)
### What problem does this PR solve?

Add guide document for admin cli and service.

### Type of change

- [x] Documentation Update
2025-10-09 13:48:02 +08:00
1a47e136e3 Feat: Adds a new feature that enables the LLM to extract a structured table of contents (TOC) directly from plain text. (#10428)
### What problem does this PR solve?

**Adds a new feature that enables the LLM to extract a structured table
of contents (TOC) directly from plain text.**
_This implementation prioritizes efficiency over reasoning — the model
runs in a strictly deterministic mode (thinking disabled) to minimize
latency.
As a result, overall performance may be less optimal, but the extraction
speed and consistency are guaranteed._

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 13:47:31 +08:00
cbf04ee470 Feat: Use data pipeline to visualize the parsing configuration of the knowledge base (#10423)
### What problem does this PR solve?

#9869

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-10-09 12:36:19 +08:00
ef0aecea3b Refactor: fix admin exception (#10400)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-09 11:15:33 +08:00
dfc5fa1f4d Feat: add DeerAPI support (#10303)
### Related issues
#10078 

### What problem does this PR solve?
Integrate DeerAPI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

Co-authored-by: DeerAPI <tensor.null@gmail.com>
2025-10-09 11:14:49 +08:00
f341dc03b8 Feature/agent UI style optimization (#10385)
### What problem does this PR solve?

Hi team,  @ZhenhangTung @KevinHuSh  @cike8899 
About #10384 , I've completed the UI optimization adjustments for the
Agent page according to our previous discussions and the design draft
sketches provided by @Naomi. The main modifications include:

1. Adjusted the style and content of the placeholder-node.
2. Adjusted the location of the dropdown (to the right of the
placeholder-node).
3. Adjusted the tooltip position spacing when the mouse hovers in the
dropdown menu.
4. Hid the thick scroll bar on the dropdown component.
5. Highlighted the connection line when dragging to generate a
placeholder-node.

<img width="1323" height="509" alt="Image"
src="https://github.com/user-attachments/assets/0d366f7f-477d-4c00-bb58-d5d58b3a745f"
/>

Please review the related code modifications when you have time. Let me
know if further adjustments are needed!
Thanks!

### Type of change

- [x] Other (please describe): UI Enhancement

---------

Co-authored-by: leonlai <leonlai@futurefab.ai>
2025-10-09 11:12:12 +08:00
4585edc20e Refactor: improve cv model logic (#10414)
Improve how the total token count is obtained.

### Type of change
- [x] Refactoring
2025-10-09 09:47:36 +08:00
dba9158f9a Fix MY_GITHUB_TOKEN (#10404)
### What problem does this PR solve?

Fix MY_GITHUB_TOKEN

### Type of change

- [x] Other (please describe): CI
2025-10-03 16:49:58 +08:00
82f572ff95 Check workflow duplication (#10399)
### What problem does this PR solve?

Check workflow duplication

### Type of change

- [x] Other (please describe): CI
2025-10-02 10:57:08 +08:00
518a00630e Fix highlight with infinity (#10345)
Fix highlight with infinity
Fix on OpenSUSE Tumbleweed

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 19:15:01 +08:00
aa61ae24dc Doc: Fixed a typo (#10391)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-09-30 18:57:34 +08:00
fb950079ef Feat/service manage (#10381)
### What problem does this PR solve?

- Admin service supports SHOW SERVICE <id>.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

issue: #10241
2025-09-30 16:23:09 +08:00
aec8c15e7e fix: reset chat state when creating new dialog (#10380)
### What problem does this PR solve?
issue:
[Question]: New Chat Creation Renames Edited Chat Instead of Creating a
New One #10373
change:
reset chat state when creating new dialog

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 16:05:38 +08:00
7c620bdc69 Fix: unexpected operation of document management (#10366)
### What problem does this PR solve?

 Unexpected operation of document management.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 15:20:38 +08:00
e7dde69584 Fix: reset tools in/out-put. (#10378)
### What problem does this PR solve?

#10361

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 15:13:18 +08:00
d6eded1959 Refactor: move exceptions to common folder (#10383)
### What problem does this PR solve?

as title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-09-30 15:10:04 +08:00
80f851922a Feat: add support for LongCat-Flash-Thinking and Claude Sonnet 4.5 (#10374)
### What problem does this PR solve?

Add support for LongCat-Flash-Thinking and Claude Sonnet 4.5.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-30 12:04:14 +08:00
17757930a3 Feat: add support for international Dashscope service (#10356)
### What problem does this PR solve?

 Add support for international Dashscope service. #10340 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 14:49:45 +08:00
a8883905a7 Feat: Add baseUrl to the Tongyi Qianwen model configuration modal #10340 (#10357)
### What problem does this PR solve?

Feat: Add baseUrl to the Tongyi Qianwen model configuration modal #10340

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-29 12:44:18 +08:00
8426cbbd02 Feat: Keep connection status during generating agent by drag and drop … (#10141)
### What problem does this PR solve?

About issue #10140

In version 0.20.1, we implemented generating a new node through mouse
drag and drop. Creating a workflow module like in Coze, where there is
not only a dropdown menu but also an intermediate (placeholder) node
after the drag and drop is completed, would improve the user experience.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 10:28:19 +08:00
0b759f559c Fix: invalid user can login from OSS (#10348)
### What problem does this PR solve?

An invalid user can log in from OSS
https://github.com/infiniflow/ragflow/issues/10293

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-29 10:16:31 +08:00
2d5d10ecbf Feat/admin drop user (#10342)
### What problem does this PR solve?

- Admin client support drop user.

Issue: #10241 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 10:16:13 +08:00
954bd5a1c2 Fix: unexpected operation of file management (#10343)
### What problem does this PR solve?

Fix unexpected operation of file management.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 19:44:01 +08:00
ccb1c269e8 fix: Handling Null Values in SQL Execution Results (#10332)
### What problem does this PR solve?

Close #10324
The agent reported an error "undefined" after using the ExeSql tool.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

now:
<img width="641" height="687" alt="img"
src="https://github.com/user-attachments/assets/eb3bbd27-4cf8-42ce-939c-fc012a93efa0"
/>
2025-09-28 15:40:06 +08:00
6dfb0c245c Fix: The dataset uses the new id to obtain the knowledge graph #10333 (#10339)
### What problem does this PR solve?

Fix: The dataset uses the new id to obtain the knowledge graph #10333

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 15:39:49 +08:00
72d1047a8f Fix: update Japanese translations in ja.ts (#10338)
### What problem does this PR solve?

This PR addresses [issue
#9962](https://github.com/infiniflow/ragflow/issues/9962).
It updates the Japanese translations in `web/src/locales/ja.ts`.  

For this contribution, the scope is intentionally limited to **Chat**
and **Knowledge Base** related UI texts, ensuring focused and
incremental improvement without affecting other modules.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 15:39:05 +08:00
bece37e6c8 Fix: floating widget matches the style of the original one (#10317)
### What problem does this PR solve?

These changes are intended to implement the remaining functionalities of
the fullscreen widget.

The question arises: how should document previews of PDFs be displayed
in this floating widget?
- simply enlarge the widget window
- implement zoom in/out
- render outside the iframe?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 13:58:10 +08:00
59cb0eb8bc fix: remove ibm-db dependency and refactor import order (#10330)
### What problem does this PR solve?
issue: 
#10326
change:
 remove ibm-db dependency and refactor import order

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 12:19:32 +08:00
fc56217eb3 Fix: The enterprise version of the knowledge graph cannot be displayed. #10333 (#10334)
### What problem does this PR solve?
Fix: The enterprise version of the knowledge graph cannot be displayed.
#10333
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 12:18:58 +08:00
723cf9443e Fix: After setting a user's is_active to 0, the user can still log in to RAGFlow. (#10325)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10293

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 12:18:01 +08:00
bd94b5dfb5 feat: add IBM DB2 support (#10306)
### What problem does this PR solve?

issue:#5617
change:add IBM DB2 support in ExeSQL 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 14:55:19 +08:00
ef59c5bab9 FIX: Rename the CometEmbed and CometSeq2txt classes to CometAPIEmbed and CometAPISeq2txt, and correct supported_models.mdx. (#10298)
### What problem does this PR solve?

Rename the CometEmbed and CometSeq2txt classes to CometAPIEmbed and
CometAPISeq2txt, and correct supported_models.mdx.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-26 10:50:56 +08:00
62b7c655c5 Refactor: migrate the function to specific file (#10201)
### What problem does this PR solve?

Move base64 related function to api/common/base64.py

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-09-25 23:37:50 +08:00
b0b866c8fd Refactor: move some functions out of api/utils/__init__.py (#10216)
### What problem does this PR solve?

Refactor import modules.

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-09-25 18:04:49 +08:00
3a831d0c28 Fixed the issue where database connections were interrupted under high concurrency (#10126)
### What problem does this PR solve?

Fixed the issue where database connections were interrupted under high
concurrency

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-09-25 17:03:43 +08:00
9e323a9351 Feat(nlp): add "怎么办" pattern to question word removal (#10284)
### What problem does this PR solve?

Added "怎么办" to the regex pattern in rmWWW method to improve query
cleaning by removing this common question phrase along with other
question words.
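
Illustratively (the real pattern in rmWWW is much longer; this only shows the idea of stripping common Chinese question words, now including "怎么办"):

```python
import re

question_words = re.compile(r"(怎么办|怎么样|怎么|如何|为什么|是什么)")
query = "服务器连不上怎么办"
print(question_words.sub("", query))  # -> 服务器连不上
```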

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 16:47:56 +08:00
7ac95b759b Feat/admin service (#10233)
### What problem does this PR solve?

- Admin client support show user and create user command.
- Admin client support alter user password and active status.
- Admin client support list user datasets.

issue: #10241

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 16:15:15 +08:00
daea357940 Fix: invalid COMPONENT_EXEC_TIMEOUT (#10278)
### What problem does this PR solve?

Fix invalid COMPONENT_EXEC_TIMEOUT. #10273

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 14:11:09 +08:00
4aa1abd8e5 Refactor: move encrypt/decrypt to one file (#10203)
### What problem does this PR solve?

Move base64 related function to api/common/base64.py

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-25 12:53:03 +08:00
922b5c652d Refactor: fix typos (#10200)
### What problem does this PR solve?

1. Fix typos
2. Rename function
3. Use English to write comment

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-25 12:05:43 +08:00
aaa97874c6 fix: replace traceback.print_exc() with logging.exception(e) in conversation_app.py (#10275)

### What problem does this PR solve?
issue:
#10188
change:
This PR replaces traceback.print_exc() with logging.exception(e) in
conversation_app.py to ensure that full error tracebacks are captured by
the logging system instead of being written only to stderr.
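
The change boils down to swapping the traceback call for a logging call; a minimal sketch:

```python
import logging

def risky_operation():
    raise ValueError("boom")

try:
    risky_operation()
except Exception as e:
    # Unlike traceback.print_exc(), which only writes to stderr,
    # logging.exception records the full traceback through the logging system.
    logging.exception(e)
```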

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 11:45:44 +08:00
193d93d820 Refactor: Improve the logic clean conf for ZhipuChat (#10274)
### What problem does this PR solve?
Improve the logic clean conf for ZhipuChat

### Type of change
- [x] Refactoring
2025-09-25 10:28:03 +08:00
4058715df7 Docs: Knowledge base renamed to dataset. (#10269)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-09-25 09:45:27 +08:00
3f595029d7 fix: Wrong Qwen model IDs (#10272)
### What problem does this PR solve?
fix: Wrong Qwen model IDs
[Bug]: ERROR: litellm.NotFoundError: DashscopeException - The model
Qwen/Qwen3-Omni-Flash does not exist or you do not have access to it.
change: delete the wrong Qwen model IDs

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 09:43:44 +08:00
e8f5a4da56 add model: qwen3-max and qwen3-vl series (#10256)
### What problem does this PR solve?
Add the qwen3-max and qwen3-vl model series.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 20:00:53 +08:00
a9472e3652 add Qwen models (#10263)
### What problem does this PR solve?

add Qwen models

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 16:52:12 +08:00
4dd48b60f3 Fix: Russian language config.ts (#10250)
### What problem does this PR solve?

Fix ru language

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-24 12:48:41 +08:00
e4ab8ba2de UI: Update Russian language ru.ts (#10251)
### What problem does this PR solve?


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 12:48:13 +08:00
a1f848bfe0 Fix: max_tokens must be at least 1, got -950, BadRequestError (#10252)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/10235
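
A hedged sketch of the kind of guard that prevents a negative completion budget (names are illustrative, not the PR's exact code):

```python
def safe_max_tokens(context_limit: int, prompt_tokens: int) -> int:
    """Never ask the provider for fewer than 1 completion token."""
    return max(1, context_limit - prompt_tokens)

print(safe_max_tokens(1024, 1974))  # -> 1 instead of -950
```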

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-09-24 10:49:34 +08:00
f2309ff93e UI: Add Russian language (#10249)
Add Russian language

### What problem does this PR solve?


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 10:18:47 +08:00
38be53cf31 fix: prevent list index out of range in chat streaming (#10238)
### What problem does this PR solve?
issue:
[Bug]: ERROR: list index out of range #10188
change:
fix a potential list index out of range error in chat response parsing
by adding explicit checks for empty choices.
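
A hedged sketch of the guard described, written against an OpenAI-style streaming response (attribute names are assumptions):

```python
def collect_stream(stream) -> str:
    """Accumulate streamed chat content, skipping chunks with an empty choices list."""
    answer = ""
    for chunk in stream:
        # Some providers emit keep-alive or usage-only chunks with no choices;
        # indexing choices[0] on those raises "list index out of range".
        if not getattr(chunk, "choices", None):
            continue
        delta = chunk.choices[0].delta
        if delta and getattr(delta, "content", None):
            answer += delta.content
    return answer
```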

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-23 19:59:39 +08:00
65a06d62d8 Flow text processing bug (#10246)
### What problem does this PR solve?
@KevinHuSh 

Hello, my submission this morning did not fully resolve this issue.
After further investigation, I have decided to delete the two lines of
regular-expression processing that were added this morning.

```
remove 2 lines
modify 1 line
```
I mounted the updated code via Docker Compose and verified that it no
longer reports '\ m' errors.

<img width="1050" height="447" alt="image"
src="https://github.com/user-attachments/assets/2aaf1b86-04ac-45ce-a2f1-052fed620e80"
/>

[my earlier pull request](https://github.com/infiniflow/ragflow/pull/10211)

<img width="1000" height="603" alt="image"
src="https://github.com/user-attachments/assets/fb3909ef-00ee-46c6-a26f-e64736777291"
/>

Thanks for your code Review

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: mxc <mxc@example.com>
2025-09-23 19:59:13 +08:00
10cbbb76f8 revert gpt5 integration (#10228)
### What problem does this PR solve?

  Revert back to chat.completions.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe):
  Revert back to chat.completions.
2025-09-23 16:06:12 +08:00
1c84d1b562 Fix: azure OpenAI retry (#10213)
### What problem does this PR solve?

Currently, Azure OpenAI returns one-minute quota-limit responses when
the chat API is used. This change is needed in order to be able to
process almost any document using models deployed in Azure Foundry.
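
A generic retry-with-backoff sketch for quota-limited calls; the PR may implement retries differently (for example via the SDK's built-in options):

```python
import random
import time

def call_with_retry(fn, attempts: int = 5):
    """Retry a rate-limited call with exponential backoff and jitter."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:          # e.g. an Azure OpenAI 429 quota response
            if i == attempts - 1:
                raise
            time.sleep(min(60, 2 ** i + random.random()))
```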

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-23 12:19:28 +08:00
4eb7659499 Fix bug: broken import from rag.prompts.prompts (#10217)
### What problem does this PR solve?

Fix broken imports

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-23 10:19:25 +08:00
46a61e5aff Fix: string merge bug about agent TextProcessing. (\m) (#10211)
### What problem does this PR solve?
An error occurred while merging strings containing '\m' in the agent's
Text Processing function.

The fix converts '\ m' to 'm' using a regular expression.

In my example this doesn't change the original meaning; it still reads
as "math".

<img width="1227" height="1056" alt="image"
src="https://github.com/user-attachments/assets/9306a8ca-bb97-47bf-b91f-77acfce49875"
/>


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: mxc <mxc@example.com>
2025-09-23 10:16:11 +08:00
da82566304 Fix: resolve hash collisions by switching to UUID & correct logic for always-true statements & update GPT API integration & support qianwen-deepresearch (#10208)
### What problem does this PR solve?

Fix: resolve hash collisions by switching to UUID & correct logic for
always-true statements, solved: #10165
Feat: Update GPT API integration, solved: #10204
Feat: Support qianwen-deepresearch, solved: #10163
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-23 09:34:30 +08:00
c8b79dfed4 The retrieval component needs to support returning JSON data (#10170) (#10171)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 17:28:29 +08:00
da80fa40bc fix python_api example (#10196)
### What problem does this PR solve?

Fix the code example in the Python API example.

### Type of change

- [x] Documentation Update
2025-09-22 17:27:25 +08:00
94dbd4aac9 Refactor: use the same implement for total token count from res (#10197)
### What problem does this PR solve?
use the same implement for total token count from res

### Type of change

- [x] Refactoring
2025-09-22 17:17:06 +08:00
ca9f30e1a1 Add tree_merge for law parsers, significantly outperforming hierarchical_merge (#10202)
### What problem does this PR solve?
Add tree_merge for law parsers, significantly outperforming
hierarchical_merge. Solves #8637.
1. Add tree_merge for law parsers, including build_tree and get_tree via
DFS.
2. Add a copyright statement for helath_utils.
### Type of change

- [x] Documentation Update
- [x] Performance Improvement
2025-09-22 16:33:21 +08:00
2e4295d5ca Chat Widget (#10187)
### What problem does this PR solve?

Add a chat widget. I'll probably need some assistance to get this ready
for merge!

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
2025-09-22 11:03:33 +08:00
d11b1628a1 Feat: add admin CLI and admin service (#10186)
### What problem does this PR solve?

Introduce new feature: RAGFlow system admin service and CLI

### Introduction

Admin Service is a dedicated management component designed to monitor,
maintain, and administrate the RAGFlow system. It provides comprehensive
tools for ensuring system stability, performing operational tasks, and
managing users and permissions efficiently.

The service offers monitoring of critical components, including the
RAGFlow server, Task Executor processes, and dependent services such as
MySQL, Infinity / Elasticsearch, Redis, and MinIO. It automatically
checks their health status, resource usage, and uptime, and performs
restarts in case of failures to minimize downtime.

For user and system management, it supports listing, creating,
modifying, and deleting users and their associated resources like
knowledge bases and Agents.

Built with scalability and reliability in mind, the Admin Service
ensures smooth system operation and simplifies maintenance workflows.

It consists of a server-side Service and a command-line client (CLI),
both implemented in Python. User commands are parsed using the Lark
parsing toolkit.

- **Admin Service**: A backend service that interfaces with the RAGFlow
system to execute administrative operations and monitor its status.
- **Admin CLI**: A command-line interface that allows users to connect
to the Admin Service and issue commands for system management.

### Starting the Admin Service

1. Before starting the Admin Service, please make sure the RAGFlow
system is already running.

2.  Run the service script:
    ```bash
    python admin/admin_server.py
    ```
The service will start and listen for incoming connections from the CLI
on the configured port.

### Using the Admin CLI

1.  Ensure the Admin Service is running.
2.  Launch the CLI client:
    ```bash
    python admin/admin_client.py -h 0.0.0.0 -p 9381
    ```
## Supported Commands
Commands are case-insensitive and must be terminated with a semicolon
(`;`).
### Service Management Commands
-  [x] `LIST SERVICES;`
    -   Lists all available services within the RAGFlow system.
-  [ ] `SHOW SERVICE <id>;`
- Shows detailed status information for the service identified by
`<id>`.
-  [ ] `STARTUP SERVICE <id>;`
    -   Attempts to start the service identified by `<id>`.
-  [ ] `SHUTDOWN SERVICE <id>;`
- Attempts to gracefully shut down the service identified by `<id>`.
-  [ ] `RESTART SERVICE <id>;`
    -   Attempts to restart the service identified by `<id>`.
### User Management Commands
-  [x] `LIST USERS;`
    -   Lists all users known to the system.
-  [ ] `SHOW USER '<username>';`
- Shows details and permissions for the specified user. The username
must be enclosed in single or double quotes.
-  [ ] `DROP USER '<username>';`
    -   Removes the specified user from the system. Use with caution.
-  [ ] `ALTER USER PASSWORD '<username>' '<new_password>';`
    -   Changes the password for the specified user.
### Data and Agent Commands
-  [ ] `LIST DATASETS OF '<username>';`
    -   Lists the datasets associated with the specified user.
-  [ ] `LIST AGENTS OF '<username>';`
    -   Lists the agents associated with the specified user.
### Meta-Commands
Meta-commands are prefixed with a backslash (`\`).
-   `\?` or `\help`
    -   Shows help information for the available commands.
-   `\q` or `\quit`
    -   Exits the CLI application.
## Examples
```commandline
admin> list users;
+-------------------------------+------------------------+-----------+-------------+
| create_date                   | email                  | is_active | nickname    |
+-------------------------------+------------------------+-----------+-------------+
| Fri, 22 Nov 2024 16:03:41 GMT | jeffery@infiniflow.org | 1         | Jeffery     |
| Fri, 22 Nov 2024 16:10:55 GMT | aya@infiniflow.org     | 1         | Waterdancer |
+-------------------------------+------------------------+-----------+-------------+
admin> list services;
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| extra                                                                                     | host      | id | name          | port  | service_type   |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| {}                                                                                        | 0.0.0.0   | 0  | ragflow_0     | 9380  | ragflow_server |
| {'meta_type': 'mysql', 'password': 'infini_rag_flow', 'username': 'root'}                 | localhost | 1  | mysql         | 5455  | meta_data      |
| {'password': 'infini_rag_flow', 'store_type': 'minio', 'user': 'rag_flow'}                | localhost | 2  | minio         | 9000  | file_store     |
| {'password': 'infini_rag_flow', 'retrieval_type': 'elasticsearch', 'username': 'elastic'} | localhost | 3  | elasticsearch | 1200  | retrieval      |
| {'db_name': 'default_db', 'retrieval_type': 'infinity'}                                   | localhost | 4  | infinity      | 23817 | retrieval      |
| {'database': 1, 'mq_type': 'redis', 'password': 'infini_rag_flow'}                        | localhost | 5  | redis         | 6379  | message_queue  |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
```

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-22 10:37:49 +08:00
45f9f428db Fix: enable scrolling at chat setting (#10184)
### What problem does this PR solve?

This PR is related to
[#9961](https://github.com/infiniflow/ragflow/issues/9961).
In the Chat Settings screen, the textarea did not support scrolling when
the content grew longer than its visible area, which made it less
convenient to use.
Also, there was no Japanese placeholder text to guide users on what to
enter in the field.

This PR improves the user experience by:
- Adding `overflow-y-auto` to the textarea so that long content can be
scrolled smoothly.
- Introducing a placeholder (`メッセージを入力してください...`) to provide clearer
guidance for users.


https://github.com/user-attachments/assets/95553331-087b-42c5-a41d-5dfe08047bae

### What has been considered

As an alternative solution, I explored replacing the textarea with the
existing `PromptEditor` component.
However, this approach triggered a `canvas not found.` alert.  
The current implementation of `PromptEditor` internally attempts to
fetch **agent (canvas) information**, but in the Chat Settings screen no
such ID exists. As a result, the API call fails and the backend returns
`canvas not found.`.

One possible workaround would be to extend `PromptEditor` with a
**“disable variable picker” flag**, ensuring that plugins are not loaded
in contexts like Chat Settings. While feasible, this would have a
broader impact across the codebase.

Given these considerations, I decided to address the issue in a simpler
way by applying a Tailwind utility (`overflow-y-auto`). Since the UI
design is expected to change in the future, this solution is considered
sufficient for now.
<img width="1501" height="794" alt="Screenshot 2025-09-20 at 15 00 12"
src="https://github.com/user-attachments/assets/85578ee8-489f-4ede-b3af-bafd7afe95bd"
/>


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)  
- [ ] New Feature (non-breaking change which adds functionality)  
- [ ] Documentation Update  
- [ ] Refactoring  
- [ ] Performance Improvement  
- [ ] Other (please describe):
2025-09-22 10:37:34 +08:00
902703d145 Fix: skip tag query if tag kbs are invalid (#10168)
### What problem does this PR solve?

Skip `tag_query` step if `tag_kbs` are empty. 
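
The guard amounts to an early return; a hedged sketch with illustrative names:

```python
def tag_query(question: str, tag_kbs: list) -> list:
    """Skip the tag query entirely when no valid tag knowledge bases are configured."""
    if not tag_kbs:
        return []
    # ... otherwise build and run the actual tag query (omitted) ...
    return [f"tags for {question!r} from {kb}" for kb in tag_kbs]

print(tag_query("what is RAGFlow?", []))  # -> []
```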

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 19:12:18 +08:00
7ccca2143c perf: add get_all_kb_doc_count func to simplify kb.doc_num updating (#10169)
### What problem does this PR solve?

Add get_all_kb_doc_count func to simplify kb.doc_num updating.

### Type of change

- [x] Performance Improvement
2025-09-19 19:11:50 +08:00
70ce02faf4 Feat: add support for Anthropic third-party API (#10173)
### What problem does this PR solve?
issue:
[Bug]: anthropic model have not baseurl selecting,need add #8546
change:
This PR adds support for using Anthropic models through a third-party
API by allowing a custom base_url.
It ensures compatibility with both the official Anthropic endpoint and
external providers.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 19:06:14 +08:00
3f1741c8c6 Docs: How to accelerate question answering (#10179)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-09-19 18:18:46 +08:00
6c24ad7966 fix: correct rerank_model condition logic (#10174)
### What problem does this PR solve?

fix the rerank_model condition logic by correcting the np.isclose check.
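
Illustratively, the kind of check involved; variable names and the exact condition are assumptions, since the PR only states that the np.isclose check was corrected:

```python
import numpy as np

rerank_model = object()            # stand-in for a configured reranker
vector_similarity_weight = 0.3

# Rerank only when a reranker exists and the vector-similarity weight is
# not effectively 1.0 (i.e. vector similarity does not fully dominate).
if rerank_model is not None and not np.isclose(vector_similarity_weight, 1.0):
    print("rerank the retrieved candidates")
```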

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 16:02:10 +08:00
4846589599 Docs: Input and output variables defined in the Input and Output sections must also be implemented in your code. (#10162)
### What problem does this PR solve?
 
#10089 

### Type of change

- [x] Documentation Update
2025-09-19 11:35:58 +08:00
a24547aa66 Support server health check by http://localhost:<port>/v1/system/healthz (#10150)
### What problem does this PR solve?

Support server health check. Solved issue: #10106
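
For example, against a default deployment (port 9380 is an assumption; adjust to your setup):

```python
import requests

resp = requests.get("http://localhost:9380/v1/system/healthz", timeout=5)
print(resp.status_code, resp.text)
```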

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 11:11:07 +08:00
a04c5247ab Feat: Add file convert to document API just like file2document_app.py (#10158)
### What problem does this PR solve?

Add file convert to document API just like file2document_app.py

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 09:59:54 +08:00
ed6a76dcc0 Add Firecrawl integration for RAGFlow (#10152)
## 🚀 Firecrawl Integration for RAGFlow

This PR implements the Firecrawl integration for RAGFlow as requested in
issue https://github.com/firecrawl/firecrawl/issues/2167

### Features Implemented

- **Data Source Integration**: Firecrawl appears as a selectable data
source in RAGFlow
- **Configuration Management**: Users can input Firecrawl API keys
through RAGFlow's interface
- **Web Scraping**: Supports single URL scraping, website crawling, and
batch processing
- **Content Processing**: Converts scraped content to RAGFlow's document
format with chunking
- **Error Handling**: Comprehensive error handling for rate limits,
failed requests, and malformed content
- **UI Components**: Complete UI schema and workflow components for
RAGFlow integration

### 📁 Files Added

- `intergrations/firecrawl/` - Complete integration package
- `intergrations/firecrawl/integration.py` - RAGFlow integration entry
point
- `intergrations/firecrawl/firecrawl_connector.py` - API communication
- `intergrations/firecrawl/firecrawl_config.py` - Configuration
management
- `intergrations/firecrawl/firecrawl_processor.py` - Content processing
- `intergrations/firecrawl/firecrawl_ui.py` - UI components
- `intergrations/firecrawl/ragflow_integration.py` - Main integration
class
- `intergrations/firecrawl/README.md` - Complete documentation
- `intergrations/firecrawl/example_usage.py` - Usage examples

### 🧪 Testing

The integration has been thoroughly tested with:
- Configuration validation
- Connection testing
- Content processing and chunking
- UI component rendering
- Error handling scenarios

### 📋 Acceptance Criteria Met

- Integration appears as selectable data source in RAGFlow's data source options
- Users can input Firecrawl API keys through RAGFlow's configuration interface
- Successfully scrapes content from provided URLs and imports into RAGFlow's document store
- Handles common edge cases (rate limits, failed requests, malformed content)
- Includes basic documentation and README updates
- Code follows RAGFlow's existing patterns and coding standards

### Related Issue

https://github.com/firecrawl/firecrawl/issues/2167

---------

Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
2025-09-19 09:58:17 +08:00
a0ccbec8bd Fix: knowledge base's embedded model form layout and dependency imports in the main branch. #9869 (#10160)
### What problem does this PR solve?

Fix: Fixed the knowledge base's embedded model form layout and
dependency imports in the main branch.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 09:57:21 +08:00
4693c5382a Feat: migrate OpenAI-compatible chats to LiteLLM (#10148)
### What problem does this PR solve?

Migrate OpenAI-compatible chats to LiteLLM.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 17:16:59 +08:00
ff3b4d0dcd Fix: Merge different types of models from the same manufacturer #10146 (#10157)
### What problem does this PR solve?

Fix: Merge different types of models from the same manufacturer #10146

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 17:15:54 +08:00
62d35b1b73 Fix: handle zero (#10149)
### What problem does this PR solve?

Handle zero and nan in calculate.
#10125

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 16:28:03 +08:00
91b609447d Fix: embedding model failure in CometAPI (#10137)
### What problem does this PR solve?

Related PR:
Feat: add CometAPI to LLMFactory and update related mappings #10119 

Change:
Fixes the issue where the embedding model in CometAPI was not being
called correctly

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: TensorNull <tensor.null@gmail.com>
2025-09-18 14:49:47 +08:00
c353840244 Feat: add support for KB document basic info (#10134)
### What problem does this PR solve?

Add support for KB document basic info

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 09:52:33 +08:00
f12b9fdcd4 Feat: add CometAPI to LLMFactory and update related mappings (#10119)
### Related issues
#10078

### What problem does this PR solve?
Integrate CometAPI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2025-09-18 09:51:29 +08:00
80ede65bbe Docs: Updated database types supported by the Execute SQL tool (#10113)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-09-18 09:47:35 +08:00
52cf186028 Correct the text of vectorSimilarityWeight in zh.ts (#10128)
### What problem does this PR solve?

The original Chinese text for vectorSimilarityWeight was "相似度相似度权重", which is
obviously a malformed phrase. It has now been changed to "向量相似度权重",
aligning it with the English version 'Vector similarity weight'.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 09:46:54 +08:00
ea0f1d47a5 Support image recognition for url links in Markdown file, fix log error in code_exec (#10139)
### What problem does this PR solve?

Support image recognition with image links in markdown files, solved
issue: #8755
Fixed log info error in code_exec, solved issue: #10064
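
As a rough illustration of the kind of pre-processing this implies (not the actual RAGFlow code), image links first have to be pulled out of the Markdown before they can be fetched and run through OCR or a VLM:

```python
# Illustrative only: extract remote image URLs from Markdown text.
import re

MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^\s)]+)\)")

def extract_image_links(markdown_text: str) -> list[str]:
    return MD_IMAGE.findall(markdown_text)

print(extract_image_links("See ![chart](https://example.com/chart.png) for details."))
```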

### Type of change (8755)

- [x] New Feature (non-breaking change which adds functionality)

### Type of change (10064)

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 09:44:17 +08:00
9fe7c92217 Build(deps): Bump axios from 1.9.0 to 1.12.0 in /sandbox/sandbox_base_image/nodejs (#10091)
Bumps [axios](https://github.com/axios/axios) from 1.9.0 to 1.12.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/axios/axios/releases">axios's
releases</a>.</em></p>
<blockquote>
<h2>Release v1.12.0</h2>
<h2>Release notes:</h2>
<h3>Bug Fixes</h3>
<ul>
<li>adding build artifacts (<a
href="9ec86de257">9ec86de</a>)</li>
<li>dont add dist on release (<a
href="a2edc3606a">a2edc36</a>)</li>
<li><strong>fetch-adapter:</strong> set correct Content-Type for Node
FormData (<a
href="https://redirect.github.com/axios/axios/issues/6998">#6998</a>)
(<a
href="a9f47afbf3">a9f47af</a>)</li>
<li><strong>node:</strong> enforce maxContentLength for data: URLs (<a
href="https://redirect.github.com/axios/axios/issues/7011">#7011</a>)
(<a
href="945435fc51">945435f</a>)</li>
<li>package exports (<a
href="https://redirect.github.com/axios/axios/issues/5627">#5627</a>)
(<a
href="aa78ac23fc">aa78ac2</a>)</li>
<li><strong>params:</strong> removing '[' and ']' from URL encode
exclude characters (<a
href="https://redirect.github.com/axios/axios/issues/3316">#3316</a>)
(<a
href="https://redirect.github.com/axios/axios/issues/5715">#5715</a>)
(<a
href="6d84189349">6d84189</a>)</li>
<li>release pr run (<a
href="fd7f404488">fd7f404</a>)</li>
<li><strong>types:</strong> change the type guard on isCancel (<a
href="https://redirect.github.com/axios/axios/issues/5595">#5595</a>)
(<a
href="0dbb7fd4f6">0dbb7fd</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li><strong>adapter:</strong> surface low‑level network error details;
attach original error via cause (<a
href="https://redirect.github.com/axios/axios/issues/6982">#6982</a>)
(<a
href="78b290c57c">78b290c</a>)</li>
<li><strong>fetch:</strong> add fetch, Request, Response env config
variables for the adapter; (<a
href="https://redirect.github.com/axios/axios/issues/7003">#7003</a>)
(<a
href="c959ff2901">c959ff2</a>)</li>
<li>support reviver on JSON.parse (<a
href="https://redirect.github.com/axios/axios/issues/5926">#5926</a>)
(<a
href="2a9763426e">2a97634</a>),
closes <a
href="https://redirect.github.com/axios/axios/issues/5924">#5924</a></li>
<li><strong>types:</strong> extend AxiosResponse interface to include
custom headers type (<a
href="https://redirect.github.com/axios/axios/issues/6782">#6782</a>)
(<a
href="7960d34ede">7960d34</a>)</li>
</ul>
<h3>Contributors to this release</h3>
<ul>
<li><!-- raw HTML omitted --> <a
href="https://github.com/WillianAgostini" title="+132/-16760
([#7002](https://github.com/axios/axios/issues/7002)
[#5926](https://github.com/axios/axios/issues/5926)
[#6782](https://github.com/axios/axios/issues/6782) )">Willian
Agostini</a></li>
<li><!-- raw HTML omitted --> <a
href="https://github.com/DigitalBrainJS" title="+4263/-293
([#7006](https://github.com/axios/axios/issues/7006)
[#7003](https://github.com/axios/axios/issues/7003) )">Dmitriy
Mozgovoy</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/mkhani01"
title="+111/-15 ([#6982](https://github.com/axios/axios/issues/6982)
)">khani</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/AmeerAssadi"
title="+123/-0 ([#7011](https://github.com/axios/axios/issues/7011)
)">Ameer Assadi</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/emiedonmokumo"
title="+55/-35 ([#6998](https://github.com/axios/axios/issues/6998)
)">Emiedonmokumo Dick-Boro</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/opsysdebug"
title="+8/-8 ([#6980](https://github.com/axios/axios/issues/6980)
)">Zeroday BYTE</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/jasonsaayman"
title="+7/-7 ([#6985](https://github.com/axios/axios/issues/6985)
[#6985](https://github.com/axios/axios/issues/6985) )">Jason
Saayman</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/HealGaren"
title="+5/-7 ([#5715](https://github.com/axios/axios/issues/5715)
)">최예찬</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/gligorkot"
title="+3/-1 ([#5627](https://github.com/axios/axios/issues/5627)
)">Gligor Kotushevski</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/adimit"
title="+2/-1 ([#5595](https://github.com/axios/axios/issues/5595)
)">Aleksandar Dimitrov</a></li>
</ul>
<h2>Release v1.11.0</h2>
<h2>Release notes:</h2>
<h3>Bug Fixes</h3>
<ul>
<li>form-data npm pakcage (<a
href="https://redirect.github.com/axios/axios/issues/6970">#6970</a>)
(<a
href="e72c193722">e72c193</a>)</li>
<li>prevent RangeError when using large Buffers (<a
href="https://redirect.github.com/axios/axios/issues/6961">#6961</a>)
(<a
href="a2214ca1bc">a2214ca</a>)</li>
<li><strong>types:</strong> resolve type discrepancies between ESM and
CJS TypeScript declaration files (<a
href="https://redirect.github.com/axios/axios/issues/6956">#6956</a>)
(<a
href="8517aa16f8">8517aa1</a>)</li>
</ul>
<h3>Contributors to this release</h3>
<ul>
<li><!-- raw HTML omitted --> <a href="https://github.com/izzygld"
title="+186/-93 ([#6970](https://github.com/axios/axios/issues/6970)
)">izzy goldman</a></li>
<li><!-- raw HTML omitted --> <a
href="https://github.com/manishsahanidev" title="+70/-0
([#6961](https://github.com/axios/axios/issues/6961) )">Manish
Sahani</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/noritaka1166"
title="+12/-10 ([#6938](https://github.com/axios/axios/issues/6938)
[#6939](https://github.com/axios/axios/issues/6939) )">Noritaka
Kobayashi</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/jrnail23"
title="+13/-2 ([#6956](https://github.com/axios/axios/issues/6956)
)">James Nail</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/Tejaswi1305"
title="+1/-1 ([#6894](https://github.com/axios/axios/issues/6894)
)">Tejaswi1305</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/axios/axios/blob/v1.x/CHANGELOG.md">axios's
changelog</a>.</em></p>
<blockquote>
<h1><a
href="https://github.com/axios/axios/compare/v1.11.0...v1.12.0">1.12.0</a>
(2025-09-11)</h1>
<h3>Bug Fixes</h3>
<ul>
<li>adding build artifacts (<a
href="9ec86de257">9ec86de</a>)</li>
<li>dont add dist on release (<a
href="a2edc3606a">a2edc36</a>)</li>
<li><strong>fetch-adapter:</strong> set correct Content-Type for Node
FormData (<a
href="https://redirect.github.com/axios/axios/issues/6998">#6998</a>)
(<a
href="a9f47afbf3">a9f47af</a>)</li>
<li><strong>node:</strong> enforce maxContentLength for data: URLs (<a
href="https://redirect.github.com/axios/axios/issues/7011">#7011</a>)
(<a
href="945435fc51">945435f</a>)</li>
<li>package exports (<a
href="https://redirect.github.com/axios/axios/issues/5627">#5627</a>)
(<a
href="aa78ac23fc">aa78ac2</a>)</li>
<li><strong>params:</strong> removing '[' and ']' from URL encode
exclude characters (<a
href="https://redirect.github.com/axios/axios/issues/3316">#3316</a>)
(<a
href="https://redirect.github.com/axios/axios/issues/5715">#5715</a>)
(<a
href="6d84189349">6d84189</a>)</li>
<li>release pr run (<a
href="fd7f404488">fd7f404</a>)</li>
<li><strong>types:</strong> change the type guard on isCancel (<a
href="https://redirect.github.com/axios/axios/issues/5595">#5595</a>)
(<a
href="0dbb7fd4f6">0dbb7fd</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li><strong>adapter:</strong> surface low‑level network error details;
attach original error via cause (<a
href="https://redirect.github.com/axios/axios/issues/6982">#6982</a>)
(<a
href="78b290c57c">78b290c</a>)</li>
<li><strong>fetch:</strong> add fetch, Request, Response env config
variables for the adapter; (<a
href="https://redirect.github.com/axios/axios/issues/7003">#7003</a>)
(<a
href="c959ff2901">c959ff2</a>)</li>
<li>support reviver on JSON.parse (<a
href="https://redirect.github.com/axios/axios/issues/5926">#5926</a>)
(<a
href="2a9763426e">2a97634</a>),
closes <a
href="https://redirect.github.com/axios/axios/issues/5924">#5924</a></li>
<li><strong>types:</strong> extend AxiosResponse interface to include
custom headers type (<a
href="https://redirect.github.com/axios/axios/issues/6782">#6782</a>)
(<a
href="7960d34ede">7960d34</a>)</li>
</ul>
<h3>Contributors to this release</h3>
<ul>
<li><!-- raw HTML omitted --> <a
href="https://github.com/WillianAgostini" title="+132/-16760
([#7002](https://github.com/axios/axios/issues/7002)
[#5926](https://github.com/axios/axios/issues/5926)
[#6782](https://github.com/axios/axios/issues/6782) )">Willian
Agostini</a></li>
<li><!-- raw HTML omitted --> <a
href="https://github.com/DigitalBrainJS" title="+4263/-293
([#7006](https://github.com/axios/axios/issues/7006)
[#7003](https://github.com/axios/axios/issues/7003) )">Dmitriy
Mozgovoy</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/mkhani01"
title="+111/-15 ([#6982](https://github.com/axios/axios/issues/6982)
)">khani</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/AmeerAssadi"
title="+123/-0 ([#7011](https://github.com/axios/axios/issues/7011)
)">Ameer Assadi</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/emiedonmokumo"
title="+55/-35 ([#6998](https://github.com/axios/axios/issues/6998)
)">Emiedonmokumo Dick-Boro</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/opsysdebug"
title="+8/-8 ([#6980](https://github.com/axios/axios/issues/6980)
)">Zeroday BYTE</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/jasonsaayman"
title="+7/-7 ([#6985](https://github.com/axios/axios/issues/6985)
[#6985](https://github.com/axios/axios/issues/6985) )">Jason
Saayman</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/HealGaren"
title="+5/-7 ([#5715](https://github.com/axios/axios/issues/5715)
)">최예찬</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/gligorkot"
title="+3/-1 ([#5627](https://github.com/axios/axios/issues/5627)
)">Gligor Kotushevski</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/adimit"
title="+2/-1 ([#5595](https://github.com/axios/axios/issues/5595)
)">Aleksandar Dimitrov</a></li>
</ul>
<h1><a
href="https://github.com/axios/axios/compare/v1.10.0...v1.11.0">1.11.0</a>
(2025-07-22)</h1>
<h3>Bug Fixes</h3>
<ul>
<li>form-data npm pakcage (<a
href="https://redirect.github.com/axios/axios/issues/6970">#6970</a>)
(<a
href="e72c193722">e72c193</a>)</li>
<li>prevent RangeError when using large Buffers (<a
href="https://redirect.github.com/axios/axios/issues/6961">#6961</a>)
(<a
href="a2214ca1bc">a2214ca</a>)</li>
<li><strong>types:</strong> resolve type discrepancies between ESM and
CJS TypeScript declaration files (<a
href="https://redirect.github.com/axios/axios/issues/6956">#6956</a>)
(<a
href="8517aa16f8">8517aa1</a>)</li>
</ul>
<h3>Contributors to this release</h3>
<ul>
<li><!-- raw HTML omitted --> <a href="https://github.com/izzygld"
title="+186/-93 ([#6970](https://github.com/axios/axios/issues/6970)
)">izzy goldman</a></li>
<li><!-- raw HTML omitted --> <a
href="https://github.com/manishsahanidev" title="+70/-0
([#6961](https://github.com/axios/axios/issues/6961) )">Manish
Sahani</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/noritaka1166"
title="+12/-10 ([#6938](https://github.com/axios/axios/issues/6938)
[#6939](https://github.com/axios/axios/issues/6939) )">Noritaka
Kobayashi</a></li>
<li><!-- raw HTML omitted --> <a href="https://github.com/jrnail23"
title="+13/-2 ([#6956](https://github.com/axios/axios/issues/6956)
)">James Nail</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0d8ad6e1de"><code>0d8ad6e</code></a>
chore(release): v1.12.0 (<a
href="https://redirect.github.com/axios/axios/issues/7013">#7013</a>)</li>
<li><a
href="fd7f404488"><code>fd7f404</code></a>
fix: release pr run</li>
<li><a
href="a2edc3606a"><code>a2edc36</code></a>
fix: dont add dist on release</li>
<li><a
href="9ec86de257"><code>9ec86de</code></a>
fix: adding build artifacts</li>
<li><a
href="945435fc51"><code>945435f</code></a>
fix(node): enforce maxContentLength for data: URLs (<a
href="https://redirect.github.com/axios/axios/issues/7011">#7011</a>)</li>
<li><a
href="28e5e3016d"><code>28e5e30</code></a>
chore(sponsor): update sponsor block (<a
href="https://redirect.github.com/axios/axios/issues/7005">#7005</a>)</li>
<li><a
href="d03f245a40"><code>d03f245</code></a>
chore(CI): fixed release info script to use npm registry instead of git
as fi...</li>
<li><a
href="a0bc911379"><code>a0bc911</code></a>
chore: removing dist files from src (<a
href="https://redirect.github.com/axios/axios/issues/7002">#7002</a>)</li>
<li><a
href="c959ff2901"><code>c959ff2</code></a>
feat(fetch): add fetch, Request, Response env config variables for the
adapte...</li>
<li><a
href="a9f47afbf3"><code>a9f47af</code></a>
fix(fetch-adapter): set correct Content-Type for Node FormData (<a
href="https://redirect.github.com/axios/axios/issues/6998">#6998</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/axios/axios/compare/v1.9.0...v1.12.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=axios&package-manager=npm_and_yarn&previous-version=1.9.0&new-version=1.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 09:41:24 +08:00
d353f7f7f8 Feat/parse audio (#10133)
### What problem does this PR solve?

Dataflow supports audio, and fixes giteeAI's sequence2text model.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 09:31:32 +08:00
f3738b06f1 Fixes session_id passing in agent_openai completion. (#10124)
### What problem does this PR solve?

An exception occurs if you pass session_id to the agent_openai completion,
because session_id is passed explicitly as well as inside **req, so it ends up
being sent twice. The logic for picking one of session_id, id, and metadata.id
also seemed odd, so it has been cleaned up a little.

See #10111 

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-09-17 17:54:06 +08:00
5a8bc88147 Docs: Removed /v1 from Ollama base URLs (#10067)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-09-17 13:48:29 +08:00
04ef5b2783 Fix: usage of postgresql -> postgres for db_type (#10120)
### What problem does this PR solve?

This PR fixes incorrect naming for PostgreSQL usage by replacing all
instances of `postgresql` with the correct `postgres` in the `db_type`
field. This resolves potential configuration errors and ensures
consistency when specifying the database type.

Also fixed handling of None for `get_queue_length`
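
For reference, a hedged example of the corrected value (the db_type field follows this PR; the remaining values are placeholders):

```python
# db_type must be "postgres", not "postgresql"; other values are placeholders.
exesql_config = {
    "db_type": "postgres",
    "host": "127.0.0.1",
    "port": 5432,
    "database": "ragflow",
    "username": "postgres",
    "password": "example",
}
```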

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: cucusenok <BP-116: updated readme.md>
2025-09-17 10:30:45 +08:00
c9ea22ef69 Fix: set default chunk_token_num in html_parser (#10118)
### What problem does this PR solve?

issue:
[Bug]: Agent component (HTTP Request) "'>' not supported between
instances of 'int' and 'NoneType'"
[#10096](https://github.com/infiniflow/ragflow/issues/10096)

Change:
When the Invoke class instantiates HtmlParser without providing the
chunk_token_num parameter, the value defaults to None, leading to a
comparison error with block_token_count.

This change sets the default chunk_token_num to 512 to prevent such
errors.
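
A minimal sketch of the defaulting pattern described above (names are illustrative, not the exact HtmlParser code):

```python
# Fall back to 512 when chunk_token_num is not provided, so comparisons
# against block_token_count never see None.
def chunk_sections(sections, chunk_token_num=None):
    chunk_token_num = 512 if chunk_token_num is None else int(chunk_token_num)
    chunks, current, block_token_count = [], [], 0
    for sec in sections:
        tokens = len(sec.split())  # stand-in for a real token counter
        if current and block_token_count + tokens > chunk_token_num:
            chunks.append("\n".join(current))
            current, block_token_count = [], 0
        current.append(sec)
        block_token_count += tokens
    if current:
        chunks.append("\n".join(current))
    return chunks
```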
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: BadwomanCraZY <511528396@qq.com>
2025-09-17 09:36:31 +08:00
152111fd9d Feat/parse img (#10112)
### What problem does this PR solve?

support parse image by OCR or VLM.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 17:53:37 +08:00
86f6da2f74 Feat: add support for the Ascend table structure recognizer (#10110)
### What problem does this PR solve?

Add support for the Ascend table structure recognizer.

Use the environment variable `TABLE_STRUCTURE_RECOGNIZER_TYPE=ascend` to
enable the Ascend table structure recognizer.
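
Hedged sketch of the switch (the variable name comes from this PR; the surrounding code is illustrative, not the actual loading path):

```python
# Select the recognizer based on the environment variable.
import os

if os.getenv("TABLE_STRUCTURE_RECOGNIZER_TYPE", "").lower() == "ascend":
    print("Using the Ascend table structure recognizer")
else:
    print("Using the default table structure recognizer")
```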

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 13:57:06 +08:00
8c00cbc87a Fix(agent template): wrap template variables in curly braces (#10109)
### What problem does this PR solve?

Updated SQL assistant template to wrap variables like 'sys.query' and
'Agent:WickedGoatsDivide@content' in curly braces for better template
variable syntax consistency.
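
To illustrate why the braces matter (a hedged sketch, not the template engine itself): wrapped variables can be found and substituted with a simple pattern, whereas bare identifiers are indistinguishable from ordinary prompt text.

```python
# Illustrative substitution of {variable} references in a template string.
import re

values = {"sys.query": "What is RAGFlow?"}
template = "Answer the question: {sys.query}"

rendered = re.sub(
    r"\{([a-zA-Z_][a-zA-Z0-9_.@:-]*)\}",
    lambda m: str(values.get(m.group(1), "")),
    template,
)
print(rendered)  # -> Answer the question: What is RAGFlow?
```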

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-16 13:56:56 +08:00
41e808f4e6 Docs: Added an Execute SQL tool reference (#10108)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-09-16 11:39:56 +08:00
bc0281040b Feat: add support for the Ascend layout recognizer (#10105)
### What problem does this PR solve?

Supports Ascend layout recognizer.

Use the environment variable `LAYOUT_RECOGNIZER_TYPE=ascend` to enable
the Ascend layout recognizer, and `ASCEND_LAYOUT_RECOGNIZER_DEVICE_ID=n`
(for example, n=0) to specify the Ascend device ID.

Ensure that you have installed the [ais
tools](https://gitee.com/ascend/tools/tree/master/ais-bench_workload/tool/ais_bench)
properly.
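
Hedged sketch of the two variables in use (the values are examples; this is not the actual loading code):

```python
# Enable the Ascend layout recognizer and pick device 0.
import os

os.environ.setdefault("LAYOUT_RECOGNIZER_TYPE", "ascend")
os.environ.setdefault("ASCEND_LAYOUT_RECOGNIZER_DEVICE_ID", "0")

print(os.environ["LAYOUT_RECOGNIZER_TYPE"],
      os.environ["ASCEND_LAYOUT_RECOGNIZER_DEVICE_ID"])
```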

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 09:51:15 +08:00
341a7b1473 Fix: judge not empty before delete (#10099)
### What problem does this PR solve?

Check that the session is not empty before deleting it.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-15 17:49:52 +08:00
c29c395390 Fix: The same model appears twice in the drop-down box. #10102 (#10103)
### What problem does this PR solve?

Fix: The same model appears twice in the drop-down box. #10102

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-15 16:38:08 +08:00
a23a0f230c feat: add multiple docker tags (latest, latest_full, latest_slim) to … (#10040)
…release workflow (#10039)  
This change updates the GitHub Actions workflow to push additional
stable tags alongside version tags, enabling automated update tools like
Watchtower to detect and pull the latest images correctly.
Refs:
[https://github.com/infiniflow/ragflow/issues/10039](https://github.com/infiniflow/ragflow/issues/10039)

### What problem does this PR solve?  
Automated container update tools such as Watchtower rely on stable tags
like `latest` to identify the newest images. Previously, only
version-specific tags were pushed, which prevented these tools from
detecting new releases automatically. This PR adds multiple stable tags
(`latest-full`, `latest-slim`) alongside version tags to the Docker
image publishing workflow, ensuring smooth and reliable automated
updates without manual tag management.

### Type of change  
- [ ] Bug Fix (non-breaking change which fixes an issue)  
- [x] New Feature (non-breaking change which adds functionality)  
- [ ] Documentation Update  
- [ ] Refactoring  
- [ ] Performance Improvement  
- [ ] Other (please describe):

---------

Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-13 21:44:53 +08:00
2a88ce6be1 Fix: terminate onnx inference session manually (#10076)
### What problem does this PR solve?

terminate onnx inference session and release memory manually.

Issue #5050 
Issue #9992 
Issue #8805
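
For context, a generic sketch of releasing an onnxruntime session manually (onnxruntime exposes no explicit close(); dropping the reference and forcing a GC pass is the usual approach — this is not the exact fix in this PR):

```python
# Generic cleanup pattern; "model.onnx" is a placeholder path.
import gc
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
# ... run inference ...
del session   # drop the last reference to the session
gc.collect()  # encourage the runtime to free native buffers promptly
```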

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-12 17:18:26 +08:00
664b781d62 Feat: Translate the fields of the embedded dialog box on the agent page #3221 (#10072)
### What problem does this PR solve?

Feat: Translate the fields of the embedded dialog box on the agent page
#3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-12 16:01:12 +08:00
65571e5254 Feat: dataflow supports text (#10058)
### What problem does this PR solve?

dataflow supports text.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-11 19:03:51 +08:00
aa30f20730 Feat: Agent component support inserting variables(#10048) (#10055)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-11 19:03:19 +08:00
b9b278d441 Docs: How to connect to an MCP server as a client (#10043)
### What problem does this PR solve?

#9769 

### Type of change


- [x] Documentation Update
2025-09-11 19:02:50 +08:00
e1d86cfee3 Feat: add TokenPony model provider (#9932)
### What problem does this PR solve?

Add TokenPony as a LLM provider

Co-authored-by: huangzl <huangzl@shinemo.com>
2025-09-11 17:25:31 +08:00
8ebd07337f The chat dialog box cannot be fully displayed on a small screen #10034 (#10049)
### What problem does this PR solve?

The chat dialog box cannot be fully displayed on a small screen #10034

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 13:32:23 +08:00
dd584d57b0 Fix: Hide dataflow related functions #9869 (#10045)
### What problem does this PR solve?

Fix: Hide dataflow related functions #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 12:02:26 +08:00
3d39b96c6f Fix: token num exceed (#10046)
### What problem does this PR solve?

Fix text input exceeding the token limit when using SiliconFlow's embedding
models BAAI/bge-large-zh-v1.5 and BAAI/bge-large-en-v1.5 by truncating the
input before it is sent.
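
A hedged sketch of the truncation step (tiktoken is only a stand-in tokenizer here, and the 512-token limit is an assumption rather than the exact limit of the BGE models):

```python
# Truncate text to a token budget before calling the embedding API.
import tiktoken

def truncate_to_tokens(text: str, max_tokens: int = 512) -> str:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return text if len(tokens) <= max_tokens else enc.decode(tokens[:max_tokens])

safe_text = truncate_to_tokens("a very long passage ... " * 500)
```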

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 12:02:12 +08:00
272 changed files with 9106 additions and 8174 deletions

View File

@@ -25,7 +25,7 @@ jobs:
      - name: Check out code
        uses: actions/checkout@v4
        with:
-         token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
+         token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable
          fetch-depth: 0
          fetch-tags: true
@@ -69,7 +69,7 @@ jobs:
        # https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release
        uses: softprops/action-gh-release@v2
        with:
-         token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
+         token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable
          prerelease: ${{ env.PRERELEASE }}
          tag_name: ${{ env.RELEASE_TAG }}
          # The body field does not support environment variable substitution directly.

View File

@@ -34,12 +34,10 @@ jobs:
      # https://github.com/hmarr/debug-action
      #- uses: hmarr/debug-action@v2
-     - name: Show who triggered this workflow
+     - name: Ensure workspace ownership
        run: |
          echo "Workflow triggered by ${{ github.event_name }}"
+         echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
-     - name: Ensure workspace ownership
-       run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
      # https://github.com/actions/checkout/issues/1781
      - name: Check out code
@@ -48,6 +46,44 @@ jobs:
        fetch-depth: 0
        fetch-tags: true
- name: Check workflow duplication
if: ${{ !cancelled() && !failure() && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci')) }}
run: |
if [[ ${{ github.event_name }} != 'pull_request' ]]; then
HEAD=$(git rev-parse HEAD)
# Find a PR that introduced a given commit
gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"
PR_NUMBER=$(gh pr list --search ${HEAD} --state merged --json number --jq .[0].number)
echo "HEAD=${HEAD}"
echo "PR_NUMBER=${PR_NUMBER}"
if [[ -n ${PR_NUMBER} ]]; then
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
if [[ -f ${PR_SHA_FP} ]]; then
read -r PR_SHA PR_RUN_ID < "${PR_SHA_FP}"
# Calculate the hash of the current workspace content
HEAD_SHA=$(git rev-parse HEAD^{tree})
if [[ ${HEAD_SHA} == ${PR_SHA} ]]; then
echo "Cancel myself since the workspace content hash is the same with PR #${PR_NUMBER} merged. See ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${PR_RUN_ID} for details."
gh run cancel ${GITHUB_RUN_ID}
while true; do
status=$(gh run view ${GITHUB_RUN_ID} --json status -q .status)
[ "$status" = "completed" ] && break
sleep 5
done
exit 1
fi
fi
fi
else
PR_NUMBER=${{ github.event.pull_request.number }}
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
# Calculate the hash of the current workspace content
PR_SHA=$(git rev-parse HEAD^{tree})
echo "PR #${PR_NUMBER} workspace content hash: ${PR_SHA}"
mkdir -p ${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}
echo "${PR_SHA} ${GITHUB_RUN_ID}" > ${PR_SHA_FP}
fi
      # https://github.com/astral-sh/ruff-action
      - name: Static check with Ruff
        uses: astral-sh/ruff-action@v3
@@ -59,11 +95,11 @@ jobs:
        run: |
          RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
          sudo docker pull ubuntu:22.04
-         sudo docker build --progress=plain --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
+         sudo DOCKER_BUILDKIT=1 docker build --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
      - name: Build ragflow:nightly
        run: |
-         sudo docker build --progress=plain --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
+         sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
      - name: Start ragflow:nightly-slim
        run: |

View File

@@ -341,11 +341,13 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
 5. If your operating system does not have jemalloc, please install it as follows:
    ```bash
-   # ubuntu
+   # Ubuntu
    sudo apt-get install libjemalloc-dev
-   # centos
+   # CentOS
    sudo yum install jemalloc
-   # mac
+   # OpenSUSE
+   sudo zypper install jemalloc
+   # macOS
    sudo brew install jemalloc
    ```

View File

@ -1,3 +1,19 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import argparse
import base64
@@ -390,6 +406,22 @@ class AdminCLI:
service_id: int = command['number']
print(f"Showing service: {service_id}")
url = f'http://{self.host}:{self.port}/api/v1/admin/services/{service_id}'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
res_data = res_json['data']
if res_data['alive']:
print(f"Service {res_data['service_name']} is alive. Detail:")
if isinstance(res_data['message'], str):
print(res_data['message'])
else:
self._print_table_simple(res_data['message'])
else:
print(f"Service {res_data['service_name']} is down. Detail: {res_data['message']}")
else:
print(f"Fail to show service, code: {res_json['code']}, message: {res_json['message']}")
def _handle_restart_service(self, command):
service_id: int = command['number']
print(f"Restart service {service_id}")

View File

@ -1,3 +1,18 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import signal

View File

@ -1,9 +1,26 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import uuid
from functools import wraps
from flask import request, jsonify
-from exceptions import AdminException
+from api.common.exceptions import AdminException
from api.db.init_data import encode_to_base64
from api.db.services import UserService

View File

@ -1,3 +1,20 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import threading
from enum import Enum
@@ -32,9 +49,11 @@ class BaseConfig(BaseModel):
    host: str
    port: int
    service_type: str
+   detail_func_name: str
    def to_dict(self) -> dict[str, Any]:
-       return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port, 'service_type': self.service_type}
+       return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port,
+               'service_type': self.service_type}
class MetaConfig(BaseConfig):
@ -209,7 +228,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
name: str = f'ragflow_{ragflow_count}' name: str = f'ragflow_{ragflow_count}'
host: str = v['host'] host: str = v['host']
http_port: int = v['http_port'] http_port: int = v['http_port']
config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port, service_type="ragflow_server") config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port,
service_type="ragflow_server", detail_func_name="check_ragflow_server_alive")
configurations.append(config) configurations.append(config)
id_count += 1 id_count += 1
case "es": case "es":
@ -222,7 +242,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
password: str = v.get('password') password: str = v.get('password')
config = ElasticsearchConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval", config = ElasticsearchConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
retrieval_type="elasticsearch", retrieval_type="elasticsearch",
username=username, password=password) username=username, password=password,
detail_func_name="get_es_cluster_stats")
configurations.append(config) configurations.append(config)
id_count += 1 id_count += 1
@ -234,7 +255,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
port = int(parts[1]) port = int(parts[1])
database: str = v.get('db_name', 'default_db') database: str = v.get('db_name', 'default_db')
config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval", retrieval_type="infinity", config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval", retrieval_type="infinity",
db_name=database) db_name=database, detail_func_name="get_infinity_status")
configurations.append(config) configurations.append(config)
id_count += 1 id_count += 1
case "minio": case "minio":
@ -246,7 +267,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
user = v.get('user') user = v.get('user')
password = v.get('password') password = v.get('password')
config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password, service_type="file_store", config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password, service_type="file_store",
store_type="minio") store_type="minio", detail_func_name="check_minio_alive")
configurations.append(config) configurations.append(config)
id_count += 1 id_count += 1
case "redis": case "redis":
@ -258,7 +279,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
password = v.get('password') password = v.get('password')
db: int = v.get('db') db: int = v.get('db')
config = RedisConfig(id=id_count, name=name, host=host, port=port, password=password, database=db, config = RedisConfig(id=id_count, name=name, host=host, port=port, password=password, database=db,
service_type="message_queue", mq_type="redis") service_type="message_queue", mq_type="redis", detail_func_name="get_redis_info")
configurations.append(config) configurations.append(config)
id_count += 1 id_count += 1
case "mysql": case "mysql":
@ -268,7 +289,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
username = v.get('user') username = v.get('user')
password = v.get('password') password = v.get('password')
config = MySQLConfig(id=id_count, name=name, host=host, port=port, username=username, password=password, config = MySQLConfig(id=id_count, name=name, host=host, port=port, username=username, password=password,
service_type="meta_data", meta_type="mysql") service_type="meta_data", meta_type="mysql", detail_func_name="get_mysql_status")
configurations.append(config) configurations.append(config)
id_count += 1 id_count += 1
case "admin": case "admin":

View File

@ -0,0 +1,15 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

View File

@ -1,15 +1,34 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import jsonify
-def success_response(data=None, message="Success", code = 0):
+def success_response(data=None, message="Success", code=0):
    return jsonify({
        "code": code,
        "message": message,
        "data": data
    }), 200
def error_response(message="Error", code=-1, data=None):
    return jsonify({
        "code": code,
        "message": message,
        "data": data
    }), 400

View File

@ -1,9 +1,26 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import Blueprint, request
from auth import login_verify
from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr
-from exceptions import AdminException
+from api.common.exceptions import AdminException
admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
@ -42,7 +59,7 @@ def create_user():
res = UserMgr.create_user(username, password, role) res = UserMgr.create_user(username, password, role)
if res["success"]: if res["success"]:
user_info = res["user_info"] user_info = res["user_info"]
user_info.pop("password") # do not return password user_info.pop("password") # do not return password
return success_response(user_info, "User created successfully") return success_response(user_info, "User created successfully")
else: else:
return error_response("create user failed") return error_response("create user failed")
@ -102,6 +119,7 @@ def alter_user_activate_status(username):
except Exception as e: except Exception as e:
return error_response(str(e), 500) return error_response(str(e), 500)
@admin_bp.route('/users/<username>', methods=['GET']) @admin_bp.route('/users/<username>', methods=['GET'])
@login_verify @login_verify
def get_user_details(username): def get_user_details(username):
@ -114,6 +132,7 @@ def get_user_details(username):
except Exception as e: except Exception as e:
return error_response(str(e), 500) return error_response(str(e), 500)
@admin_bp.route('/users/<username>/datasets', methods=['GET']) @admin_bp.route('/users/<username>/datasets', methods=['GET'])
@login_verify @login_verify
def get_user_datasets(username): def get_user_datasets(username):

View File

@ -1,3 +1,20 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
from werkzeug.security import check_password_hash
from api.db import ActiveEnum
@@ -7,16 +24,20 @@ from api.db.services.canvas_service import UserCanvasService
 from api.db.services.user_service import TenantService
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.utils.crypt import decrypt
-from exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
+from api.utils import health_utils
+from api.common.exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
 from config import SERVICE_CONFIGS

 class UserMgr:
     @staticmethod
     def get_all_users():
         users = UserService.get_all_users()
         result = []
         for user in users:
-            result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date, 'is_active': user.is_active})
+            result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date,
+                           'is_active': user.is_active})
         return result
@staticmethod @staticmethod
@ -110,6 +131,7 @@ class UserMgr:
UserService.update_user(usr.id, {"is_active": target_status}) UserService.update_user(usr.id, {"is_active": target_status})
return f"Turn {_activate_status} user activate status successfully!" return f"Turn {_activate_status} user activate status successfully!"
class UserServiceMgr: class UserServiceMgr:
@staticmethod @staticmethod
@ -148,6 +170,7 @@ class UserServiceMgr:
'canvas_category': r['canvas_category'] 'canvas_category': r['canvas_category']
} for r in res] } for r in res]
class ServiceMgr: class ServiceMgr:
@staticmethod @staticmethod
@ -164,7 +187,22 @@ class ServiceMgr:
    @staticmethod
    def get_service_details(service_id: int):
-        raise AdminException("get_service_details: not implemented")
+        service_id = int(service_id)
configs = SERVICE_CONFIGS.configs
service_config_mapping = {
c.id: {
'name': c.name,
'detail_func_name': c.detail_func_name
} for c in configs
}
service_info = service_config_mapping.get(service_id, {})
if not service_info:
raise AdminException(f"Invalid service_id: {service_id}")
detail_func = getattr(health_utils, service_info.get('detail_func_name'))
res = detail_func()
res.update({'service_name': service_info.get('name')})
return res
@staticmethod @staticmethod
def shutdown_service(service_id: int): def shutdown_service(service_id: int):

View File

@ -203,7 +203,6 @@ class Canvas(Graph):
self.history = [] self.history = []
self.retrieval = [] self.retrieval = []
self.memory = [] self.memory = []
for k in self.globals.keys(): for k in self.globals.keys():
if isinstance(self.globals[k], str): if isinstance(self.globals[k], str):
self.globals[k] = "" self.globals[k] = ""
@ -292,7 +291,6 @@ class Canvas(Graph):
"thoughts": self.get_component_thoughts(self.path[i]) "thoughts": self.get_component_thoughts(self.path[i])
}) })
_run_batch(idx, to) _run_batch(idx, to)
# post processing of components invocation # post processing of components invocation
for i in range(idx, to): for i in range(idx, to):
cpn = self.get_component(self.path[i]) cpn = self.get_component(self.path[i])
@ -393,7 +391,6 @@ class Canvas(Graph):
self.path = path self.path = path
yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips}) yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips})
return return
self.path = self.path[:idx] self.path = self.path[:idx]
if not self.error: if not self.error:
yield decorate("workflow_finished", yield decorate("workflow_finished",

View File

@ -346,3 +346,11 @@ Respond immediately with your final comprehensive answer.
return "Error occurred." return "Error occurred."
def reset(self, temp=False):
"""
Reset all tools if they have a reset method. This avoids errors for tools like MCPToolCallSession.
"""
for k, cpn in self.tools.items():
if hasattr(cpn, "reset") and callable(cpn.reset):
cpn.reset()

View File

@@ -19,11 +19,12 @@ import os
 import re
 import time
 from abc import ABC
 import requests
-from agent.component.base import ComponentBase, ComponentParamBase
 from api.utils.api_utils import timeout
 from deepdoc.parser import HtmlParser
+from agent.component.base import ComponentBase, ComponentParamBase

 class InvokeParam(ComponentParamBase):
@@ -43,11 +44,11 @@ class InvokeParam(ComponentParamBase):
         self.datatype = "json"  # New parameter to determine data posting type
     def check(self):
-        self.check_valid_value(self.method.lower(), "Type of content from the crawler", ['get', 'post', 'put'])
+        self.check_valid_value(self.method.lower(), "Type of content from the crawler", ["get", "post", "put"])
         self.check_empty(self.url, "End point URL")
         self.check_positive_integer(self.timeout, "Timeout time in second")
         self.check_boolean(self.clean_html, "Clean HTML")
-        self.check_valid_value(self.datatype.lower(), "Data post type", ['json', 'formdata']) # Check for valid datapost value
+        self.check_valid_value(self.datatype.lower(), "Data post type", ["json", "formdata"])  # Check for valid datapost value

 class Invoke(ComponentBase, ABC):
@ -63,6 +64,18 @@ class Invoke(ComponentBase, ABC):
args[para["key"]] = self._canvas.get_variable_value(para["ref"]) args[para["key"]] = self._canvas.get_variable_value(para["ref"])
url = self._param.url.strip() url = self._param.url.strip()
def replace_variable(match):
var_name = match.group(1)
try:
value = self._canvas.get_variable_value(var_name)
return str(value or "")
except Exception:
return ""
# {base_url} or {component_id@variable_name}
url = re.sub(r"\{([a-zA-Z_][a-zA-Z0-9_.@-]*)\}", replace_variable, url)
if url.find("http") != 0: if url.find("http") != 0:
url = "http://" + url url = "http://" + url
@ -75,52 +88,32 @@ class Invoke(ComponentBase, ABC):
proxies = {"http": self._param.proxy, "https": self._param.proxy} proxies = {"http": self._param.proxy, "https": self._param.proxy}
last_e = "" last_e = ""
for _ in range(self._param.max_retries+1): for _ in range(self._param.max_retries + 1):
try: try:
if method == 'get': if method == "get":
response = requests.get(url=url, response = requests.get(url=url, params=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
params=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
sections = HtmlParser()(None, response.content) sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections)) self.set_output("result", "\n".join(sections))
else: else:
self.set_output("result", response.text) self.set_output("result", response.text)
if method == 'put': if method == "put":
if self._param.datatype.lower() == 'json': if self._param.datatype.lower() == "json":
response = requests.put(url=url, response = requests.put(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
else: else:
response = requests.put(url=url, response = requests.put(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
sections = HtmlParser()(None, response.content) sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections)) self.set_output("result", "\n".join(sections))
else: else:
self.set_output("result", response.text) self.set_output("result", response.text)
if method == 'post': if method == "post":
if self._param.datatype.lower() == 'json': if self._param.datatype.lower() == "json":
response = requests.post(url=url, response = requests.post(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
else: else:
response = requests.post(url=url, response = requests.post(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
self.set_output("result", "\n".join(sections)) self.set_output("result", "\n".join(sections))
else: else:

File diff suppressed because one or more lines are too long

View File

@@ -156,8 +156,8 @@ class CodeExec(ToolBase, ABC):
             self.set_output("_ERROR", "construct code request error: " + str(e))
         try:
-            resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
-            logging.info(f"http://{settings.SANDBOX_HOST}:9385/run", code_req, resp.status_code)
+            resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
+            logging.info(f"http://{settings.SANDBOX_HOST}:9385/run, code_req: {code_req}, resp.status_code {resp.status_code}:")
             if resp.status_code != 200:
                 resp.raise_for_status()
             body = resp.json()

View File

@@ -53,12 +53,13 @@ class ExeSQLParam(ToolParamBase):
         self.max_records = 1024
     def check(self):
-        self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgres', 'mariadb', 'mssql', 'IBM DB2'])
+        self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgres', 'mariadb', 'mssql', 'IBM DB2', 'trino'])
         self.check_empty(self.database, "Database name")
         self.check_empty(self.username, "database username")
         self.check_empty(self.host, "IP Address")
         self.check_positive_integer(self.port, "IP Port")
-        self.check_empty(self.password, "Database password")
+        if self.db_type != "trino":
+            self.check_empty(self.password, "Database password")
         self.check_positive_integer(self.max_records, "Maximum number of records")
         if self.database == "rag_flow":
             if self.host == "ragflow-mysql":
@ -123,6 +124,45 @@ class ExeSQL(ToolBase, ABC):
r'PWD=' + self._param.password r'PWD=' + self._param.password
) )
db = pyodbc.connect(conn_str) db = pyodbc.connect(conn_str)
elif self._param.db_type == 'trino':
try:
import trino
from trino.auth import BasicAuthentication
except Exception:
raise Exception("Missing dependency 'trino'. Please install: pip install trino")
def _parse_catalog_schema(db: str):
if not db:
return None, None
if "." in db:
c, s = db.split(".", 1)
elif "/" in db:
c, s = db.split("/", 1)
else:
c, s = db, "default"
return c, s
catalog, schema = _parse_catalog_schema(self._param.database)
if not catalog:
raise Exception("For Trino, `database` must be 'catalog.schema' or at least 'catalog'.")
http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
auth = None
if http_scheme == "https" and self._param.password:
auth = BasicAuthentication(self._param.username, self._param.password)
try:
db = trino.dbapi.connect(
host=self._param.host,
port=int(self._param.port or 8080),
user=self._param.username or "ragflow",
catalog=catalog,
schema=schema or "default",
http_scheme=http_scheme,
auth=auth
)
except Exception as e:
raise Exception("Database Connection Failed! \n" + str(e))
elif self._param.db_type == 'IBM DB2': elif self._param.db_type == 'IBM DB2':
import ibm_db import ibm_db
conn_str = ( conn_str = (

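For Trino, the database value is read as catalog.schema (or catalog/schema, with the schema defaulting to "default"), TLS is toggled through the TRINO_USE_TLS environment variable, and basic auth is only attached when HTTPS is in use. A condensed sketch of that connection logic, assuming the trino client package is installed:

```python
import os
import trino
from trino.auth import BasicAuthentication

def connect_trino(host, port, user, password, database):
    # "catalog.schema" or "catalog/schema"; a bare catalog falls back to schema "default"
    sep = "." if "." in database else "/" if "/" in database else None
    catalog, schema = database.split(sep, 1) if sep else (database, "default")
    http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
    auth = BasicAuthentication(user, password) if http_scheme == "https" and password else None
    return trino.dbapi.connect(
        host=host, port=int(port or 8080), user=user or "ragflow",
        catalog=catalog, schema=schema or "default",
        http_scheme=http_scheme, auth=auth,
    )

# e.g. connect_trino("trino.local", 8080, "analyst", "", "hive.sales").cursor().execute("SELECT 1")
```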
View File

@ -85,13 +85,7 @@ class PubMed(ToolBase, ABC):
self._retrieve_chunks(pubmedcnt.findall("PubmedArticle"), self._retrieve_chunks(pubmedcnt.findall("PubmedArticle"),
get_title=lambda child: child.find("MedlineCitation").find("Article").find("ArticleTitle").text, get_title=lambda child: child.find("MedlineCitation").find("Article").find("ArticleTitle").text,
get_url=lambda child: "https://pubmed.ncbi.nlm.nih.gov/" + child.find("MedlineCitation").find("PMID").text, get_url=lambda child: "https://pubmed.ncbi.nlm.nih.gov/" + child.find("MedlineCitation").find("PMID").text,
get_content=lambda child: child.find("MedlineCitation").find("Article").find("Abstract").find("AbstractText").text if child.find("MedlineCitation").find("Article").find("Abstract") else "No abstract available") get_content=lambda child: self._format_pubmed_content(child),)
return self.output("formalized_content") return self.output("formalized_content")
except Exception as e: except Exception as e:
last_e = e last_e = e
@ -104,5 +98,50 @@ class PubMed(ToolBase, ABC):
assert False, self.output() assert False, self.output()
def _format_pubmed_content(self, child):
"""Extract structured reference info from PubMed XML"""
def safe_find(path):
node = child
for p in path.split("/"):
if node is None:
return None
node = node.find(p)
return node.text if node is not None and node.text else None
title = safe_find("MedlineCitation/Article/ArticleTitle") or "No title"
abstract = safe_find("MedlineCitation/Article/Abstract/AbstractText") or "No abstract available"
journal = safe_find("MedlineCitation/Article/Journal/Title") or "Unknown Journal"
volume = safe_find("MedlineCitation/Article/Journal/JournalIssue/Volume") or "-"
issue = safe_find("MedlineCitation/Article/Journal/JournalIssue/Issue") or "-"
pages = safe_find("MedlineCitation/Article/Pagination/MedlinePgn") or "-"
# Authors
authors = []
for author in child.findall(".//AuthorList/Author"):
lastname = safe_find("LastName") or ""
forename = safe_find("ForeName") or ""
fullname = f"{forename} {lastname}".strip()
if fullname:
authors.append(fullname)
authors_str = ", ".join(authors) if authors else "Unknown Authors"
# DOI
doi = None
for eid in child.findall(".//ArticleId"):
if eid.attrib.get("IdType") == "doi":
doi = eid.text
break
return (
f"Title: {title}\n"
f"Authors: {authors_str}\n"
f"Journal: {journal}\n"
f"Volume: {volume}\n"
f"Issue: {issue}\n"
f"Pages: {pages}\n"
f"DOI: {doi or '-'}\n"
f"Abstract: {abstract.strip()}"
)
def thoughts(self) -> str: def thoughts(self) -> str:
return "Looking for scholarly papers on `{}`,” prioritising reputable sources.".format(self.get_input().get("query", "-_-!")) return "Looking for scholarly papers on `{}`,” prioritising reputable sources.".format(self.get_input().get("query", "-_-!"))

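The new _format_pubmed_content helper replaces a long chain of .find() calls with a path-based lookup that tolerates missing nodes, so absent abstracts or journals degrade to placeholders instead of raising AttributeError. A self-contained sketch of that safe-traversal idea with xml.etree.ElementTree (the sample XML is invented for illustration):

```python
import xml.etree.ElementTree as ET

def safe_find(node, path):
    """Follow a '/'-separated path; return the node text, or None if any hop is missing."""
    for part in path.split("/"):
        if node is None:
            return None
        node = node.find(part)
    return node.text if node is not None and node.text else None

article = ET.fromstring(
    "<PubmedArticle><MedlineCitation><Article>"
    "<ArticleTitle>Example title</ArticleTitle>"
    "</Article></MedlineCitation></PubmedArticle>"
)
print(safe_find(article, "MedlineCitation/Article/ArticleTitle"))           # Example title
print(safe_find(article, "MedlineCitation/Article/Abstract/AbstractText"))  # None -> caller supplies a default
```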
View File

@ -57,6 +57,7 @@ class RetrievalParam(ToolParamBase):
self.empty_response = "" self.empty_response = ""
self.use_kg = False self.use_kg = False
self.cross_languages = [] self.cross_languages = []
self.toc_enhance = False
def check(self): def check(self):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold") self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
@ -121,7 +122,7 @@ class Retrieval(ToolBase, ABC):
if kbs: if kbs:
query = re.sub(r"^user[:\s]*", "", query, flags=re.IGNORECASE) query = re.sub(r"^user[:\s]*", "", query, flags=re.IGNORECASE)
kbinfos = settings.retrievaler.retrieval( kbinfos = settings.retriever.retrieval(
query, query,
embd_mdl, embd_mdl,
[kb.tenant_id for kb in kbs], [kb.tenant_id for kb in kbs],
@ -134,8 +135,13 @@ class Retrieval(ToolBase, ABC):
rerank_mdl=rerank_mdl, rerank_mdl=rerank_mdl,
rank_feature=label_question(query, kbs), rank_feature=label_question(query, kbs),
) )
if self._param.toc_enhance:
chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT)
cks = settings.retriever.retrieval_by_toc(query, kbinfos["chunks"], [kb.tenant_id for kb in kbs], chat_mdl, self._param.top_n)
if cks:
kbinfos["chunks"] = cks
if self._param.use_kg: if self._param.use_kg:
ck = settings.kg_retrievaler.retrieval(query, ck = settings.kg_retriever.retrieval(query,
[kb.tenant_id for kb in kbs], [kb.tenant_id for kb in kbs],
kb_ids, kb_ids,
embd_mdl, embd_mdl,
@ -146,7 +152,7 @@ class Retrieval(ToolBase, ABC):
kbinfos = {"chunks": [], "doc_aggs": []} kbinfos = {"chunks": [], "doc_aggs": []}
if self._param.use_kg and kbs: if self._param.use_kg and kbs:
ck = settings.kg_retrievaler.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT)) ck = settings.kg_retriever.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
if ck["content_with_weight"]: if ck["content_with_weight"]:
ck["content"] = ck["content_with_weight"] ck["content"] = ck["content_with_weight"]
del ck["content_with_weight"] del ck["content_with_weight"]

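The new toc_enhance flag post-processes the normal retrieval result: a TOC-guided pass is run over the retrieved chunks, and its output replaces them only when it returns something. A hedged sketch of that gating pattern (retriever.retrieval_by_toc and the chat-model argument mirror the call shape in the diff and are assumed interfaces):

```python
def apply_toc_enhance(retriever, query, kbinfos, tenant_ids, chat_mdl, top_n, toc_enhance=False):
    """Swap in TOC-guided chunks only when the flag is set and the TOC pass finds anything."""
    if not toc_enhance:
        return kbinfos
    cks = retriever.retrieval_by_toc(query, kbinfos["chunks"], tenant_ids, chat_mdl, top_n)
    if cks:                      # empty result -> keep the original vector-search chunks
        kbinfos["chunks"] = cks
    return kbinfos
```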
View File

@ -85,7 +85,7 @@ class SearXNG(ToolBase, ABC):
self.set_output("formalized_content", "") self.set_output("formalized_content", "")
return "" return ""
searxng_url = (kwargs.get("searxng_url") or getattr(self._param, "searxng_url", "") or "").strip() searxng_url = (getattr(self._param, "searxng_url", "") or kwargs.get("searxng_url") or "").strip()
# In try-run, if no URL configured, just return empty instead of raising # In try-run, if no URL configured, just return empty instead of raising
if not searxng_url: if not searxng_url:
self.set_output("formalized_content", "") self.set_output("formalized_content", "")

View File

@ -536,7 +536,7 @@ def list_chunks():
) )
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id) kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
res = settings.retrievaler.chunk_list(doc_id, tenant_id, kb_ids) res = settings.retriever.chunk_list(doc_id, tenant_id, kb_ids)
res = [ res = [
{ {
"content": res_item["content_with_weight"], "content": res_item["content_with_weight"],
@ -884,7 +884,7 @@ def retrieval():
if req.get("keyword", False): if req.get("keyword", False):
chat_mdl = LLMBundle(kbs[0].tenant_id, LLMType.CHAT) chat_mdl = LLMBundle(kbs[0].tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question) question += keyword_extraction(chat_mdl, question)
ranks = settings.retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size, ranks = settings.retriever.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top, similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight= highlight, doc_ids, rerank_mdl=rerank_mdl, highlight= highlight,
rank_feature=label_question(question, kbs)) rank_feature=label_question(question, kbs))

View File

@ -409,6 +409,49 @@ def test_db_connect():
ibm_db.fetch_assoc(stmt) ibm_db.fetch_assoc(stmt)
ibm_db.close(conn) ibm_db.close(conn)
return get_json_result(data="Database Connection Successful!") return get_json_result(data="Database Connection Successful!")
elif req["db_type"] == 'trino':
def _parse_catalog_schema(db: str):
if not db:
return None, None
if "." in db:
c, s = db.split(".", 1)
elif "/" in db:
c, s = db.split("/", 1)
else:
c, s = db, "default"
return c, s
try:
import trino
import os
from trino.auth import BasicAuthentication
except Exception:
return server_error_response("Missing dependency 'trino'. Please install: pip install trino")
catalog, schema = _parse_catalog_schema(req["database"])
if not catalog:
return server_error_response("For Trino, 'database' must be 'catalog.schema' or at least 'catalog'.")
http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
auth = None
if http_scheme == "https" and req.get("password"):
auth = BasicAuthentication(req.get("username") or "ragflow", req["password"])
conn = trino.dbapi.connect(
host=req["host"],
port=int(req["port"] or 8080),
user=req["username"] or "ragflow",
catalog=catalog,
schema=schema or "default",
http_scheme=http_scheme,
auth=auth
)
cur = conn.cursor()
cur.execute("SELECT 1")
cur.fetchall()
cur.close()
conn.close()
return get_json_result(data="Database Connection Successful!")
else: else:
return server_error_response("Unsupported database type.") return server_error_response("Unsupported database type.")
if req["db_type"] != 'mssql': if req["db_type"] != 'mssql':

View File

@ -60,7 +60,7 @@ def list_chunk():
} }
if "available_int" in req: if "available_int" in req:
query["available_int"] = int(req["available_int"]) query["available_int"] = int(req["available_int"])
sres = settings.retrievaler.search(query, search.index_name(tenant_id), kb_ids, highlight=True) sres = settings.retriever.search(query, search.index_name(tenant_id), kb_ids, highlight=True)
res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()} res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
for id in sres.ids: for id in sres.ids:
d = { d = {
@ -346,7 +346,7 @@ def retrieval_test():
question += keyword_extraction(chat_mdl, question) question += keyword_extraction(chat_mdl, question)
labels = label_question(question, [kb]) labels = label_question(question, [kb])
ranks = settings.retrievaler.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size, ranks = settings.retriever.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
float(req.get("similarity_threshold", 0.0)), float(req.get("similarity_threshold", 0.0)),
float(req.get("vector_similarity_weight", 0.3)), float(req.get("vector_similarity_weight", 0.3)),
top, top,
@ -354,7 +354,7 @@ def retrieval_test():
rank_feature=labels rank_feature=labels
) )
if use_kg: if use_kg:
ck = settings.kg_retrievaler.retrieval(question, ck = settings.kg_retriever.retrieval(question,
tenant_ids, tenant_ids,
kb_ids, kb_ids,
embd_mdl, embd_mdl,
@ -384,7 +384,7 @@ def knowledge_graph():
"doc_ids": [doc_id], "doc_ids": [doc_id],
"knowledge_graph_kwd": ["graph", "mind_map"] "knowledge_graph_kwd": ["graph", "mind_map"]
} }
sres = settings.retrievaler.search(req, search.index_name(tenant_id), kb_ids) sres = settings.retriever.search(req, search.index_name(tenant_id), kb_ids)
obj = {"graph": {}, "mind_map": {}} obj = {"graph": {}, "mind_map": {}}
for id in sres.ids[:2]: for id in sres.ids[:2]:
ty = sres.field[id]["knowledge_graph_kwd"] ty = sres.field[id]["knowledge_graph_kwd"]

View File

@ -24,6 +24,7 @@ from flask import request
from flask_login import current_user, login_required from flask_login import current_user, login_required
from api import settings from api import settings
from api.common.check_team_permission import check_kb_team_permission
from api.constants import FILE_NAME_LEN_LIMIT, IMG_BASE64_PREFIX from api.constants import FILE_NAME_LEN_LIMIT, IMG_BASE64_PREFIX
from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileSource, FileType, ParserType, TaskStatus from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileSource, FileType, ParserType, TaskStatus
from api.db.db_models import File, Task from api.db.db_models import File, Task
@ -68,8 +69,10 @@ def upload():
e, kb = KnowledgebaseService.get_by_id(kb_id) e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e: if not e:
raise LookupError("Can't find this knowledgebase!") raise LookupError("Can't find this knowledgebase!")
err, files = FileService.upload_document(kb, file_objs, current_user.id) if not check_kb_team_permission(kb, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
err, files = FileService.upload_document(kb, file_objs, current_user.id)
if err: if err:
return get_json_result(data=files, message="\n".join(err), code=settings.RetCode.SERVER_ERROR) return get_json_result(data=files, message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
@ -94,6 +97,8 @@ def web_crawl():
e, kb = KnowledgebaseService.get_by_id(kb_id) e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e: if not e:
raise LookupError("Can't find this knowledgebase!") raise LookupError("Can't find this knowledgebase!")
if check_kb_team_permission(kb, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
blob = html2pdf(url) blob = html2pdf(url)
if not blob: if not blob:
@ -552,8 +557,8 @@ def get(doc_id):
@login_required @login_required
@validate_request("doc_id") @validate_request("doc_id")
def change_parser(): def change_parser():
req = request.json
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id): if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
@ -577,7 +582,7 @@ def change_parser():
settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id) settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)
try: try:
if "pipeline_id" in req: if "pipeline_id" in req and req["pipeline_id"] != "":
if doc.pipeline_id == req["pipeline_id"]: if doc.pipeline_id == req["pipeline_id"]:
return get_json_result(data=True) return get_json_result(data=True)
DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"]}) DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"]})

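Across this file the direct tenant-ID comparison is replaced by check_kb_team_permission, so team members with access to the knowledge base can act on it, and change_parser only switches pipelines when a non-empty pipeline_id is supplied. A small sketch of the guard as used in upload() (the helper is assumed to return True when the user may write to the knowledge base; the return value stands in for get_json_result):

```python
def upload_with_guard(kb, user_id, file_objs, check_kb_team_permission, upload_document):
    """Sketch of the permission gate added in front of FileService.upload_document."""
    if not check_kb_team_permission(kb, user_id):
        # mirrors get_json_result(data=False, message="No authorization.", code=AUTHENTICATION_ERROR)
        return {"data": False, "message": "No authorization."}
    return upload_document(kb, file_objs, user_id)
```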
View File

@ -21,6 +21,7 @@ import flask
from flask import request from flask import request
from flask_login import login_required, current_user from flask_login import login_required, current_user
from api.common.check_team_permission import check_file_team_permission
from api.db.services.document_service import DocumentService from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService from api.db.services.file2document_service import File2DocumentService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
@ -246,7 +247,7 @@ def rm():
return get_data_error_result(message="File or Folder not found!") return get_data_error_result(message="File or Folder not found!")
if not file.tenant_id: if not file.tenant_id:
return get_data_error_result(message="Tenant not found!") return get_data_error_result(message="Tenant not found!")
if file.tenant_id != current_user.id: if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
if file.source_type == FileSource.KNOWLEDGEBASE: if file.source_type == FileSource.KNOWLEDGEBASE:
continue continue
@ -294,7 +295,7 @@ def rename():
e, file = FileService.get_by_id(req["file_id"]) e, file = FileService.get_by_id(req["file_id"])
if not e: if not e:
return get_data_error_result(message="File not found!") return get_data_error_result(message="File not found!")
if file.tenant_id != current_user.id: if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
if file.type != FileType.FOLDER.value \ if file.type != FileType.FOLDER.value \
and pathlib.Path(req["name"].lower()).suffix != pathlib.Path( and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
@ -332,7 +333,7 @@ def get(file_id):
e, file = FileService.get_by_id(file_id) e, file = FileService.get_by_id(file_id)
if not e: if not e:
return get_data_error_result(message="Document not found!") return get_data_error_result(message="Document not found!")
if file.tenant_id != current_user.id: if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
blob = STORAGE_IMPL.get(file.parent_id, file.location) blob = STORAGE_IMPL.get(file.parent_id, file.location)
@ -373,7 +374,7 @@ def move():
return get_data_error_result(message="File or Folder not found!") return get_data_error_result(message="File or Folder not found!")
if not file.tenant_id: if not file.tenant_id:
return get_data_error_result(message="Tenant not found!") return get_data_error_result(message="Tenant not found!")
if file.tenant_id != current_user.id: if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
fe, _ = FileService.get_by_id(parent_id) fe, _ = FileService.get_by_id(parent_id)
if not fe: if not fe:

View File

@ -36,8 +36,10 @@ from api import settings
from rag.nlp import search from rag.nlp import search
from api.constants import DATASET_NAME_LIMIT from api.constants import DATASET_NAME_LIMIT
from rag.settings import PAGERANK_FLD from rag.settings import PAGERANK_FLD
from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL from rag.utils.storage_factory import STORAGE_IMPL
@manager.route('/create', methods=['post']) # noqa: F821 @manager.route('/create', methods=['post']) # noqa: F821
@login_required @login_required
@validate_request("name") @validate_request("name")
@ -281,7 +283,7 @@ def list_tags(kb_id):
tenants = UserTenantService.get_tenants_by_user_id(current_user.id) tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
tags = [] tags = []
for tenant in tenants: for tenant in tenants:
tags += settings.retrievaler.all_tags(tenant["tenant_id"], [kb_id]) tags += settings.retriever.all_tags(tenant["tenant_id"], [kb_id])
return get_json_result(data=tags) return get_json_result(data=tags)
@ -300,7 +302,7 @@ def list_tags_from_kbs():
tenants = UserTenantService.get_tenants_by_user_id(current_user.id) tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
tags = [] tags = []
for tenant in tenants: for tenant in tenants:
tags += settings.retrievaler.all_tags(tenant["tenant_id"], kb_ids) tags += settings.retriever.all_tags(tenant["tenant_id"], kb_ids)
return get_json_result(data=tags) return get_json_result(data=tags)
@ -361,7 +363,7 @@ def knowledge_graph(kb_id):
obj = {"graph": {}, "mind_map": {}} obj = {"graph": {}, "mind_map": {}}
if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), kb_id): if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), kb_id):
return get_json_result(data=obj) return get_json_result(data=obj)
sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [kb_id]) sres = settings.retriever.search(req, search.index_name(kb.tenant_id), [kb_id])
if not len(sres.ids): if not len(sres.ids):
return get_json_result(data=obj) return get_json_result(data=obj)
@ -759,18 +761,25 @@ def delete_kb_task():
match pipeline_task_type: match pipeline_task_type:
case PipelineTaskType.GRAPH_RAG: case PipelineTaskType.GRAPH_RAG:
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id) settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
kb_task_id = "graphrag_task_id" kb_task_id_field = "graphrag_task_id"
task_id = kb.graphrag_task_id
kb_task_finish_at = "graphrag_task_finish_at" kb_task_finish_at = "graphrag_task_finish_at"
case PipelineTaskType.RAPTOR: case PipelineTaskType.RAPTOR:
kb_task_id = "raptor_task_id" kb_task_id_field = "raptor_task_id"
task_id = kb.raptor_task_id
kb_task_finish_at = "raptor_task_finish_at" kb_task_finish_at = "raptor_task_finish_at"
case PipelineTaskType.MINDMAP: case PipelineTaskType.MINDMAP:
kb_task_id = "mindmap_task_id" kb_task_id_field = "mindmap_task_id"
task_id = kb.mindmap_task_id
kb_task_finish_at = "mindmap_task_finish_at" kb_task_finish_at = "mindmap_task_finish_at"
case _: case _:
return get_error_data_result(message="Internal Error: Invalid task type") return get_error_data_result(message="Internal Error: Invalid task type")
ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id: "", kb_task_finish_at: None}) def cancel_task(task_id):
REDIS_CONN.set(f"{task_id}-cancel", "x")
cancel_task(task_id)
ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id_field: "", kb_task_finish_at: None})
if not ok: if not ok:
return server_error_response(f"Internal error: cannot delete task {pipeline_task_type}") return server_error_response(f"Internal error: cannot delete task {pipeline_task_type}")

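Deleting a KB-level task now also signals cancellation out-of-band: the API writes a `<task_id>-cancel` key to Redis and clears the task fields on the knowledge base, and the running worker is expected to poll that key between units of work. A minimal sketch of both sides of the handshake, assuming a redis-py style client like REDIS_CONN:

```python
def request_cancel(redis_conn, task_id):
    # API side: drop a flag the running task can observe
    redis_conn.set(f"{task_id}-cancel", "x")

def run_task(redis_conn, task_id, steps):
    # Worker side: check the flag between steps (parsing, embedding, graph building, ...)
    for step in steps:
        if redis_conn.get(f"{task_id}-cancel"):
            return "cancelled"
        step()
    return "done"
```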
View File

@ -1,8 +1,26 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import Response from flask import Response
from flask_login import login_required from flask_login import login_required
from api.utils.api_utils import get_json_result from api.utils.api_utils import get_json_result
from plugin import GlobalPluginManager from plugin import GlobalPluginManager
@manager.route('/llm_tools', methods=['GET']) # noqa: F821 @manager.route('/llm_tools', methods=['GET']) # noqa: F821
@login_required @login_required
def llm_tools() -> Response: def llm_tools() -> Response:

View File

@ -25,6 +25,7 @@ from api.utils.api_utils import get_data_error_result, get_error_data_result, ge
from api.utils.api_utils import get_result from api.utils.api_utils import get_result
from flask import request from flask import request
@manager.route('/agents', methods=['GET']) # noqa: F821 @manager.route('/agents', methods=['GET']) # noqa: F821
@token_required @token_required
def list_agents(tenant_id): def list_agents(tenant_id):
@ -41,7 +42,7 @@ def list_agents(tenant_id):
desc = False desc = False
else: else:
desc = True desc = True
canvas = UserCanvasService.get_list(tenant_id,page_number,items_per_page,orderby,desc,id,title) canvas = UserCanvasService.get_list(tenant_id, page_number, items_per_page, orderby, desc, id, title)
return get_result(data=canvas) return get_result(data=canvas)
@ -93,7 +94,7 @@ def update_agent(tenant_id: str, agent_id: str):
req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False) req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
req["dsl"] = json.loads(req["dsl"]) req["dsl"] = json.loads(req["dsl"])
if req.get("title") is not None: if req.get("title") is not None:
req["title"] = req["title"].strip() req["title"] = req["title"].strip()

View File

@ -215,7 +215,8 @@ def delete(tenant_id):
continue continue
kb_id_instance_pairs.append((kb_id, kb)) kb_id_instance_pairs.append((kb_id, kb))
if len(error_kb_ids) > 0: if len(error_kb_ids) > 0:
return get_error_permission_result(message=f"""User '{tenant_id}' lacks permission for datasets: '{", ".join(error_kb_ids)}'""") return get_error_permission_result(
message=f"""User '{tenant_id}' lacks permission for datasets: '{", ".join(error_kb_ids)}'""")
errors = [] errors = []
success_count = 0 success_count = 0
@ -232,7 +233,8 @@ def delete(tenant_id):
] ]
) )
File2DocumentService.delete_by_document_id(doc.id) File2DocumentService.delete_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kb.name]) FileService.filter_delete(
[File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kb.name])
if not KnowledgebaseService.delete_by_id(kb_id): if not KnowledgebaseService.delete_by_id(kb_id):
errors.append(f"Delete dataset error for {kb_id}") errors.append(f"Delete dataset error for {kb_id}")
continue continue
@ -329,7 +331,8 @@ def update(tenant_id, dataset_id):
try: try:
kb = KnowledgebaseService.get_or_none(id=dataset_id, tenant_id=tenant_id) kb = KnowledgebaseService.get_or_none(id=dataset_id, tenant_id=tenant_id)
if kb is None: if kb is None:
return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{dataset_id}'") return get_error_permission_result(
message=f"User '{tenant_id}' lacks permission for dataset '{dataset_id}'")
if req.get("parser_config"): if req.get("parser_config"):
req["parser_config"] = deep_merge(kb.parser_config, req["parser_config"]) req["parser_config"] = deep_merge(kb.parser_config, req["parser_config"])
@ -341,7 +344,8 @@ def update(tenant_id, dataset_id):
del req["parser_config"] del req["parser_config"]
if "name" in req and req["name"].lower() != kb.name.lower(): if "name" in req and req["name"].lower() != kb.name.lower():
exists = KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value) exists = KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id,
status=StatusEnum.VALID.value)
if exists: if exists:
return get_error_data_result(message=f"Dataset name '{req['name']}' already exists") return get_error_data_result(message=f"Dataset name '{req['name']}' already exists")
@ -349,7 +353,8 @@ def update(tenant_id, dataset_id):
if not req["embd_id"]: if not req["embd_id"]:
req["embd_id"] = kb.embd_id req["embd_id"] = kb.embd_id
if kb.chunk_num != 0 and req["embd_id"] != kb.embd_id: if kb.chunk_num != 0 and req["embd_id"] != kb.embd_id:
return get_error_data_result(message=f"When chunk_num ({kb.chunk_num}) > 0, embedding_model must remain {kb.embd_id}") return get_error_data_result(
message=f"When chunk_num ({kb.chunk_num}) > 0, embedding_model must remain {kb.embd_id}")
ok, err = verify_embedding_availability(req["embd_id"], tenant_id) ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok: if not ok:
return err return err
@ -359,10 +364,12 @@ def update(tenant_id, dataset_id):
return get_error_argument_result(message="'pagerank' can only be set when doc_engine is elasticsearch") return get_error_argument_result(message="'pagerank' can only be set when doc_engine is elasticsearch")
if req["pagerank"] > 0: if req["pagerank"] > 0:
settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]}, search.index_name(kb.tenant_id), kb.id) settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]},
search.index_name(kb.tenant_id), kb.id)
else: else:
# Elasticsearch requires PAGERANK_FLD be non-zero! # Elasticsearch requires PAGERANK_FLD be non-zero!
settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD}, search.index_name(kb.tenant_id), kb.id) settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD},
search.index_name(kb.tenant_id), kb.id)
if not KnowledgebaseService.update_by_id(kb.id, req): if not KnowledgebaseService.update_by_id(kb.id, req):
return get_error_data_result(message="Update dataset error.(Database error)") return get_error_data_result(message="Update dataset error.(Database error)")
@ -454,7 +461,7 @@ def list_datasets(tenant_id):
return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{name}'") return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{name}'")
tenants = TenantService.get_joined_tenants_by_user_id(tenant_id) tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
kbs = KnowledgebaseService.get_list( kbs, total = KnowledgebaseService.get_list(
[m["tenant_id"] for m in tenants], [m["tenant_id"] for m in tenants],
tenant_id, tenant_id,
args["page"], args["page"],
@ -468,14 +475,15 @@ def list_datasets(tenant_id):
response_data_list = [] response_data_list = []
for kb in kbs: for kb in kbs:
response_data_list.append(remap_dictionary_keys(kb)) response_data_list.append(remap_dictionary_keys(kb))
return get_result(data=response_data_list) return get_result(data=response_data_list, total=total)
except OperationalError as e: except OperationalError as e:
logging.exception(e) logging.exception(e)
return get_error_data_result(message="Database operation failed") return get_error_data_result(message="Database operation failed")
@manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['GET']) # noqa: F821 @manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['GET']) # noqa: F821
@token_required @token_required
def knowledge_graph(tenant_id,dataset_id): def knowledge_graph(tenant_id, dataset_id):
if not KnowledgebaseService.accessible(dataset_id, tenant_id): if not KnowledgebaseService.accessible(dataset_id, tenant_id):
return get_result( return get_result(
data=False, data=False,
@ -491,7 +499,7 @@ def knowledge_graph(tenant_id,dataset_id):
obj = {"graph": {}, "mind_map": {}} obj = {"graph": {}, "mind_map": {}}
if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), dataset_id): if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), dataset_id):
return get_result(data=obj) return get_result(data=obj)
sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [dataset_id]) sres = settings.retriever.search(req, search.index_name(kb.tenant_id), [dataset_id])
if not len(sres.ids): if not len(sres.ids):
return get_result(data=obj) return get_result(data=obj)
@ -507,14 +515,16 @@ def knowledge_graph(tenant_id,dataset_id):
if "nodes" in obj["graph"]: if "nodes" in obj["graph"]:
obj["graph"]["nodes"] = sorted(obj["graph"]["nodes"], key=lambda x: x.get("pagerank", 0), reverse=True)[:256] obj["graph"]["nodes"] = sorted(obj["graph"]["nodes"], key=lambda x: x.get("pagerank", 0), reverse=True)[:256]
if "edges" in obj["graph"]: if "edges" in obj["graph"]:
node_id_set = { o["id"] for o in obj["graph"]["nodes"] } node_id_set = {o["id"] for o in obj["graph"]["nodes"]}
filtered_edges = [o for o in obj["graph"]["edges"] if o["source"] != o["target"] and o["source"] in node_id_set and o["target"] in node_id_set] filtered_edges = [o for o in obj["graph"]["edges"] if
o["source"] != o["target"] and o["source"] in node_id_set and o["target"] in node_id_set]
obj["graph"]["edges"] = sorted(filtered_edges, key=lambda x: x.get("weight", 0), reverse=True)[:128] obj["graph"]["edges"] = sorted(filtered_edges, key=lambda x: x.get("weight", 0), reverse=True)[:128]
return get_result(data=obj) return get_result(data=obj)
@manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['DELETE']) # noqa: F821 @manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['DELETE']) # noqa: F821
@token_required @token_required
def delete_knowledge_graph(tenant_id,dataset_id): def delete_knowledge_graph(tenant_id, dataset_id):
if not KnowledgebaseService.accessible(dataset_id, tenant_id): if not KnowledgebaseService.accessible(dataset_id, tenant_id):
return get_result( return get_result(
data=False, data=False,
@ -522,6 +532,7 @@ def delete_knowledge_graph(tenant_id,dataset_id):
code=settings.RetCode.AUTHENTICATION_ERROR code=settings.RetCode.AUTHENTICATION_ERROR
) )
_, kb = KnowledgebaseService.get_by_id(dataset_id) _, kb = KnowledgebaseService.get_by_id(dataset_id)
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), dataset_id) settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]},
search.index_name(kb.tenant_id), dataset_id)
return get_result(data=True) return get_result(data=True)

View File

@ -1,4 +1,4 @@
# #
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved. # Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
@ -31,6 +31,89 @@ from api.db.services.dialog_service import meta_filter, convert_conditions
@apikey_required @apikey_required
@validate_request("knowledge_id", "query") @validate_request("knowledge_id", "query")
def retrieval(tenant_id): def retrieval(tenant_id):
"""
Dify-compatible retrieval API
---
tags:
- SDK
security:
- ApiKeyAuth: []
parameters:
- in: body
name: body
required: true
schema:
type: object
required:
- knowledge_id
- query
properties:
knowledge_id:
type: string
description: Knowledge base ID
query:
type: string
description: Query text
use_kg:
type: boolean
description: Whether to use knowledge graph
default: false
retrieval_setting:
type: object
description: Retrieval configuration
properties:
score_threshold:
type: number
description: Similarity threshold
default: 0.0
top_k:
type: integer
description: Number of results to return
default: 1024
metadata_condition:
type: object
description: Metadata filter condition
properties:
conditions:
type: array
items:
type: object
properties:
name:
type: string
description: Field name
comparison_operator:
type: string
description: Comparison operator
value:
type: string
description: Field value
responses:
200:
description: Retrieval succeeded
schema:
type: object
properties:
records:
type: array
items:
type: object
properties:
content:
type: string
description: Content text
score:
type: number
description: Similarity score
title:
type: string
description: Document title
metadata:
type: object
description: Metadata info
404:
description: Knowledge base or document not found
"""
req = request.json req = request.json
question = req["query"] question = req["query"]
kb_id = req["knowledge_id"] kb_id = req["knowledge_id"]
@ -38,9 +121,9 @@ def retrieval(tenant_id):
retrieval_setting = req.get("retrieval_setting", {}) retrieval_setting = req.get("retrieval_setting", {})
similarity_threshold = float(retrieval_setting.get("score_threshold", 0.0)) similarity_threshold = float(retrieval_setting.get("score_threshold", 0.0))
top = int(retrieval_setting.get("top_k", 1024)) top = int(retrieval_setting.get("top_k", 1024))
metadata_condition = req.get("metadata_condition",{}) metadata_condition = req.get("metadata_condition", {})
metas = DocumentService.get_meta_by_kbs([kb_id]) metas = DocumentService.get_meta_by_kbs([kb_id])
doc_ids = [] doc_ids = []
try: try:
@ -50,12 +133,12 @@ def retrieval(tenant_id):
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id) embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
print(metadata_condition) print(metadata_condition)
print("after",convert_conditions(metadata_condition)) # print("after", convert_conditions(metadata_condition))
doc_ids.extend(meta_filter(metas, convert_conditions(metadata_condition))) doc_ids.extend(meta_filter(metas, convert_conditions(metadata_condition)))
print("doc_ids",doc_ids) # print("doc_ids", doc_ids)
if not doc_ids and metadata_condition is not None: if not doc_ids and metadata_condition is not None:
doc_ids = ['-999'] doc_ids = ['-999']
ranks = settings.retrievaler.retrieval( ranks = settings.retriever.retrieval(
question, question,
embd_mdl, embd_mdl,
kb.tenant_id, kb.tenant_id,
@ -70,17 +153,17 @@ def retrieval(tenant_id):
) )
if use_kg: if use_kg:
ck = settings.kg_retrievaler.retrieval(question, ck = settings.kg_retriever.retrieval(question,
[tenant_id], [tenant_id],
[kb_id], [kb_id],
embd_mdl, embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT)) LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]: if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck) ranks["chunks"].insert(0, ck)
records = [] records = []
for c in ranks["chunks"]: for c in ranks["chunks"]:
e, doc = DocumentService.get_by_id( c["doc_id"]) e, doc = DocumentService.get_by_id(c["doc_id"])
c.pop("vector", None) c.pop("vector", None)
meta = getattr(doc, 'meta_fields', {}) meta = getattr(doc, 'meta_fields', {})
meta["doc_id"] = c["doc_id"] meta["doc_id"] = c["doc_id"]
@ -100,5 +183,3 @@ def retrieval(tenant_id):
) )
logging.exception(e) logging.exception(e)
return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR) return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)

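Per the schema documented above, a request carries knowledge_id and query plus an optional retrieval_setting and metadata_condition, and the response is a list of scored records. A hypothetical client call matching that contract (the endpoint path, port and bearer-token header are assumptions, not taken from the diff):

```python
import requests

payload = {
    "knowledge_id": "kb_123",
    "query": "What is RAGFlow?",
    "use_kg": False,
    "retrieval_setting": {"score_threshold": 0.2, "top_k": 8},
    "metadata_condition": {
        "conditions": [
            {"name": "author", "comparison_operator": "contains", "value": "Smith"}
        ]
    },
}
resp = requests.post(
    "http://localhost:9380/api/v1/dify/retrieval",     # hypothetical base URL and path
    headers={"Authorization": "Bearer YOUR_API_KEY"},   # @apikey_required expects an API key
    json=payload,
    timeout=30,
)
for record in resp.json().get("records", []):
    print(record["score"], record["title"])
```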
View File

@ -458,7 +458,7 @@ def list_docs(dataset_id, tenant_id):
required: false required: false
default: true default: true
description: Order in descending. description: Order in descending.
- in: query - in: query
name: create_time_from name: create_time_from
type: integer type: integer
required: false required: false
@ -982,7 +982,7 @@ def list_chunks(tenant_id, dataset_id, document_id):
_ = Chunk(**final_chunk) _ = Chunk(**final_chunk)
elif settings.docStoreConn.indexExist(search.index_name(tenant_id), dataset_id): elif settings.docStoreConn.indexExist(search.index_name(tenant_id), dataset_id):
sres = settings.retrievaler.search(query, search.index_name(tenant_id), [dataset_id], emb_mdl=None, highlight=True) sres = settings.retriever.search(query, search.index_name(tenant_id), [dataset_id], emb_mdl=None, highlight=True)
res["total"] = sres.total res["total"] = sres.total
for id in sres.ids: for id in sres.ids:
d = { d = {
@ -1446,7 +1446,7 @@ def retrieval_test(tenant_id):
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT) chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question) question += keyword_extraction(chat_mdl, question)
ranks = settings.retrievaler.retrieval( ranks = settings.retriever.retrieval(
question, question,
embd_mdl, embd_mdl,
tenant_ids, tenant_ids,
@ -1462,7 +1462,7 @@ def retrieval_test(tenant_id):
rank_feature=label_question(question, kbs), rank_feature=label_question(question, kbs),
) )
if use_kg: if use_kg:
ck = settings.kg_retrievaler.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT)) ck = settings.kg_retriever.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]: if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck) ranks["chunks"].insert(0, ck)

View File

@ -1,3 +1,20 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import pathlib import pathlib
import re import re
@ -17,7 +34,8 @@ from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type from api.utils.file_utils import filename_type
from rag.utils.storage_factory import STORAGE_IMPL from rag.utils.storage_factory import STORAGE_IMPL
@manager.route('/file/upload', methods=['POST']) # noqa: F821
@manager.route('/file/upload', methods=['POST']) # noqa: F821
@token_required @token_required
def upload(tenant_id): def upload(tenant_id):
""" """
@ -44,22 +62,22 @@ def upload(tenant_id):
type: object type: object
properties: properties:
data: data:
type: array type: array
items: items:
type: object type: object
properties: properties:
id: id:
type: string type: string
description: File ID description: File ID
name: name:
type: string type: string
description: File name description: File name
size: size:
type: integer type: integer
description: File size in bytes description: File size in bytes
type: type:
type: string type: string
description: File type (e.g., document, folder) description: File type (e.g., document, folder)
""" """
pf_id = request.form.get("parent_id") pf_id = request.form.get("parent_id")
@ -97,12 +115,14 @@ def upload(tenant_id):
e, file = FileService.get_by_id(file_id_list[len_id_list - 1]) e, file = FileService.get_by_id(file_id_list[len_id_list - 1])
if not e: if not e:
return get_json_result(data=False, message="Folder not found!", code=404) return get_json_result(data=False, message="Folder not found!", code=404)
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names, len_id_list) last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names,
len_id_list)
else: else:
e, file = FileService.get_by_id(file_id_list[len_id_list - 2]) e, file = FileService.get_by_id(file_id_list[len_id_list - 2])
if not e: if not e:
return get_json_result(data=False, message="Folder not found!", code=404) return get_json_result(data=False, message="Folder not found!", code=404)
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names, len_id_list) last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names,
len_id_list)
filetype = filename_type(file_obj_names[file_len - 1]) filetype = filename_type(file_obj_names[file_len - 1])
location = file_obj_names[file_len - 1] location = file_obj_names[file_len - 1]
@ -129,7 +149,7 @@ def upload(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/create', methods=['POST']) # noqa: F821 @manager.route('/file/create', methods=['POST']) # noqa: F821
@token_required @token_required
def create(tenant_id): def create(tenant_id):
""" """
@ -207,7 +227,7 @@ def create(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/list', methods=['GET']) # noqa: F821 @manager.route('/file/list', methods=['GET']) # noqa: F821
@token_required @token_required
def list_files(tenant_id): def list_files(tenant_id):
""" """
@ -299,7 +319,7 @@ def list_files(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/root_folder', methods=['GET']) # noqa: F821 @manager.route('/file/root_folder', methods=['GET']) # noqa: F821
@token_required @token_required
def get_root_folder(tenant_id): def get_root_folder(tenant_id):
""" """
@ -335,7 +355,7 @@ def get_root_folder(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/parent_folder', methods=['GET']) # noqa: F821 @manager.route('/file/parent_folder', methods=['GET']) # noqa: F821
@token_required @token_required
def get_parent_folder(): def get_parent_folder():
""" """
@ -380,7 +400,7 @@ def get_parent_folder():
return server_error_response(e) return server_error_response(e)
@manager.route('/file/all_parent_folder', methods=['GET']) # noqa: F821 @manager.route('/file/all_parent_folder', methods=['GET']) # noqa: F821
@token_required @token_required
def get_all_parent_folders(tenant_id): def get_all_parent_folders(tenant_id):
""" """
@ -428,7 +448,7 @@ def get_all_parent_folders(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/rm', methods=['POST']) # noqa: F821 @manager.route('/file/rm', methods=['POST']) # noqa: F821
@token_required @token_required
def rm(tenant_id): def rm(tenant_id):
""" """
@ -502,7 +522,7 @@ def rm(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/rename', methods=['POST']) # noqa: F821 @manager.route('/file/rename', methods=['POST']) # noqa: F821
@token_required @token_required
def rename(tenant_id): def rename(tenant_id):
""" """
@ -542,7 +562,8 @@ def rename(tenant_id):
if not e: if not e:
return get_json_result(message="File not found!", code=404) return get_json_result(message="File not found!", code=404)
if file.type != FileType.FOLDER.value and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(file.name.lower()).suffix: if file.type != FileType.FOLDER.value and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
file.name.lower()).suffix:
return get_json_result(data=False, message="The extension of file can't be changed", code=400) return get_json_result(data=False, message="The extension of file can't be changed", code=400)
for existing_file in FileService.query(name=req["name"], pf_id=file.parent_id): for existing_file in FileService.query(name=req["name"], pf_id=file.parent_id):
@ -562,9 +583,9 @@ def rename(tenant_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/get/<file_id>', methods=['GET']) # noqa: F821 @manager.route('/file/get/<file_id>', methods=['GET']) # noqa: F821
@token_required @token_required
def get(tenant_id,file_id): def get(tenant_id, file_id):
""" """
Download a file. Download a file.
--- ---
@ -610,7 +631,7 @@ def get(tenant_id,file_id):
return server_error_response(e) return server_error_response(e)
@manager.route('/file/mv', methods=['POST']) # noqa: F821 @manager.route('/file/mv', methods=['POST']) # noqa: F821
@token_required @token_required
def move(tenant_id): def move(tenant_id):
""" """
@ -669,6 +690,7 @@ def move(tenant_id):
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)
@manager.route('/file/convert', methods=['POST']) # noqa: F821 @manager.route('/file/convert', methods=['POST']) # noqa: F821
@token_required @token_required
def convert(tenant_id): def convert(tenant_id):
@ -735,4 +757,4 @@ def convert(tenant_id):
file2documents.append(file2document.to_json()) file2documents.append(file2document.to_json())
return get_json_result(data=file2documents) return get_json_result(data=file2documents)
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)

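The SDK file routes above are token-protected counterparts of the web UI's file manager. A hypothetical upload call against /file/upload (base URL, token handling and the multipart field name are assumptions; parent_id is read from the form and is assumed to fall back to the user's root folder when omitted):

```python
import requests

BASE = "http://localhost:9380/api/v1"                  # hypothetical deployment URL
HEADERS = {"Authorization": "Bearer RAGFLOW_API_KEY"}

with open("report.pdf", "rb") as fh:
    resp = requests.post(
        f"{BASE}/file/upload",
        headers=HEADERS,
        files={"file": ("report.pdf", fh)},            # multipart field name assumed
        data={"parent_id": ""},                        # empty/omitted -> root folder (assumed fallback)
    )
print(resp.json())   # expected shape: {"data": [{"id": ..., "name": "report.pdf", "size": ..., "type": ...}]}
```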
View File

@ -36,7 +36,8 @@ from api.db.services.llm_service import LLMBundle
from api.db.services.search_service import SearchService from api.db.services.search_service import SearchService
from api.db.services.user_service import UserTenantService from api.db.services.user_service import UserTenantService
from api.utils import get_uuid from api.utils import get_uuid
from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_json_result, get_result, server_error_response, token_required, validate_request from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_json_result, \
get_result, server_error_response, token_required, validate_request
from rag.app.tag import label_question from rag.app.tag import label_question
from rag.prompts.template import load_prompt from rag.prompts.template import load_prompt
from rag.prompts.generator import cross_languages, gen_meta_filter, keyword_extraction, chunks_format from rag.prompts.generator import cross_languages, gen_meta_filter, keyword_extraction, chunks_format
@ -88,7 +89,8 @@ def create_agent_session(tenant_id, agent_id):
canvas.reset() canvas.reset()
cvs.dsl = json.loads(str(canvas)) cvs.dsl = json.loads(str(canvas))
conv = {"id": session_id, "dialog_id": cvs.id, "user_id": user_id, "message": [{"role": "assistant", "content": canvas.get_prologue()}], "source": "agent", "dsl": cvs.dsl} conv = {"id": session_id, "dialog_id": cvs.id, "user_id": user_id,
"message": [{"role": "assistant", "content": canvas.get_prologue()}], "source": "agent", "dsl": cvs.dsl}
API4ConversationService.save(**conv) API4ConversationService.save(**conv)
conv["agent_id"] = conv.pop("dialog_id") conv["agent_id"] = conv.pop("dialog_id")
return get_result(data=conv) return get_result(data=conv)
@ -279,7 +281,7 @@ def chat_completion_openai_like(tenant_id, chat_id):
reasoning_match = re.search(r"<think>(.*?)</think>", answer, flags=re.DOTALL) reasoning_match = re.search(r"<think>(.*?)</think>", answer, flags=re.DOTALL)
if reasoning_match: if reasoning_match:
reasoning_part = reasoning_match.group(1) reasoning_part = reasoning_match.group(1)
content_part = answer[reasoning_match.end() :] content_part = answer[reasoning_match.end():]
else: else:
reasoning_part = "" reasoning_part = ""
content_part = answer content_part = answer
@ -324,7 +326,8 @@ def chat_completion_openai_like(tenant_id, chat_id):
response["choices"][0]["delta"]["content"] = None response["choices"][0]["delta"]["content"] = None
response["choices"][0]["delta"]["reasoning_content"] = None response["choices"][0]["delta"]["reasoning_content"] = None
response["choices"][0]["finish_reason"] = "stop" response["choices"][0]["finish_reason"] = "stop"
response["usage"] = {"prompt_tokens": len(prompt), "completion_tokens": token_used, "total_tokens": len(prompt) + token_used} response["usage"] = {"prompt_tokens": len(prompt), "completion_tokens": token_used,
"total_tokens": len(prompt) + token_used}
if need_reference: if need_reference:
response["choices"][0]["delta"]["reference"] = chunks_format(last_ans.get("reference", [])) response["choices"][0]["delta"]["reference"] = chunks_format(last_ans.get("reference", []))
response["choices"][0]["delta"]["final_content"] = last_ans.get("answer", "") response["choices"][0]["delta"]["final_content"] = last_ans.get("answer", "")
@ -559,7 +562,8 @@ def list_agent_session(tenant_id, agent_id):
desc = True desc = True
# dsl defaults to True in all cases except for False and false # dsl defaults to True in all cases except for False and false
include_dsl = request.args.get("dsl") != "False" and request.args.get("dsl") != "false" include_dsl = request.args.get("dsl") != "False" and request.args.get("dsl") != "false"
total, convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id, user_id, include_dsl) total, convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id,
user_id, include_dsl)
if not convs: if not convs:
return get_result(data=[]) return get_result(data=[])
for conv in convs: for conv in convs:
@ -581,7 +585,8 @@ def list_agent_session(tenant_id, agent_id):
if message_num != 0 and messages[message_num]["role"] != "user": if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = [] chunk_list = []
# Add boundary and type checks to prevent KeyError # Add boundary and type checks to prevent KeyError
if chunk_num < len(conv["reference"]) and conv["reference"][chunk_num] is not None and isinstance(conv["reference"][chunk_num], dict) and "chunks" in conv["reference"][chunk_num]: if chunk_num < len(conv["reference"]) and conv["reference"][chunk_num] is not None and isinstance(
conv["reference"][chunk_num], dict) and "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"] chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks: for chunk in chunks:
# Ensure chunk is a dictionary before calling get method # Ensure chunk is a dictionary before calling get method
@ -639,13 +644,16 @@ def delete(tenant_id, chat_id):
if errors: if errors:
if success_count > 0: if success_count > 0:
return get_result(data={"success_count": success_count, "errors": errors}, message=f"Partially deleted {success_count} sessions with {len(errors)} errors") return get_result(data={"success_count": success_count, "errors": errors},
message=f"Partially deleted {success_count} sessions with {len(errors)} errors")
else: else:
return get_error_data_result(message="; ".join(errors)) return get_error_data_result(message="; ".join(errors))
if duplicate_messages: if duplicate_messages:
if success_count > 0: if success_count > 0:
return get_result(message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors", data={"success_count": success_count, "errors": duplicate_messages}) return get_result(
message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors",
data={"success_count": success_count, "errors": duplicate_messages})
else: else:
return get_error_data_result(message=";".join(duplicate_messages)) return get_error_data_result(message=";".join(duplicate_messages))
@ -691,13 +699,16 @@ def delete_agent_session(tenant_id, agent_id):
if errors: if errors:
if success_count > 0: if success_count > 0:
return get_result(data={"success_count": success_count, "errors": errors}, message=f"Partially deleted {success_count} sessions with {len(errors)} errors") return get_result(data={"success_count": success_count, "errors": errors},
message=f"Partially deleted {success_count} sessions with {len(errors)} errors")
else: else:
return get_error_data_result(message="; ".join(errors)) return get_error_data_result(message="; ".join(errors))
if duplicate_messages: if duplicate_messages:
if success_count > 0: if success_count > 0:
return get_result(message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors", data={"success_count": success_count, "errors": duplicate_messages}) return get_result(
message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors",
data={"success_count": success_count, "errors": duplicate_messages})
else: else:
return get_error_data_result(message=";".join(duplicate_messages)) return get_error_data_result(message=";".join(duplicate_messages))
@ -730,7 +741,9 @@ def ask_about(tenant_id):
for ans in ask(req["question"], req["kb_ids"], uid): for ans in ask(req["question"], req["kb_ids"], uid):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n" yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e: except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n" yield "data:" + json.dumps(
{"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n" yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream") resp = Response(stream(), mimetype="text/event-stream")
@ -882,7 +895,9 @@ def begin_inputs(agent_id):
return get_error_data_result(f"Can't find agent by ID: {agent_id}") return get_error_data_result(f"Can't find agent by ID: {agent_id}")
canvas = Canvas(json.dumps(cvs.dsl), objs[0].tenant_id) canvas = Canvas(json.dumps(cvs.dsl), objs[0].tenant_id)
return get_result(data={"title": cvs.title, "avatar": cvs.avatar, "inputs": canvas.get_component_input_form("begin"), "prologue": canvas.get_prologue(), "mode": canvas.get_mode()}) return get_result(
data={"title": cvs.title, "avatar": cvs.avatar, "inputs": canvas.get_component_input_form("begin"),
"prologue": canvas.get_prologue(), "mode": canvas.get_mode()})
@manager.route("/searchbots/ask", methods=["POST"]) # noqa: F821 @manager.route("/searchbots/ask", methods=["POST"]) # noqa: F821
@ -911,7 +926,9 @@ def ask_about_embedded():
for ans in ask(req["question"], req["kb_ids"], uid, search_config=search_config): for ans in ask(req["question"], req["kb_ids"], uid, search_config=search_config):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n" yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e: except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n" yield "data:" + json.dumps(
{"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n" yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream") resp = Response(stream(), mimetype="text/event-stream")
@ -978,7 +995,8 @@ def retrieval_test_embedded():
tenant_ids.append(tenant.tenant_id) tenant_ids.append(tenant.tenant_id)
break break
else: else:
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=settings.RetCode.OPERATING_ERROR) return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.",
code=settings.RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_ids[0]) e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e: if not e:
@ -998,11 +1016,13 @@ def retrieval_test_embedded():
question += keyword_extraction(chat_mdl, question) question += keyword_extraction(chat_mdl, question)
labels = label_question(question, [kb]) labels = label_question(question, [kb])
ranks = settings.retrievaler.retrieval( ranks = settings.retriever.retrieval(
question, embd_mdl, tenant_ids, kb_ids, page, size, similarity_threshold, vector_similarity_weight, top, doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"), rank_feature=labels question, embd_mdl, tenant_ids, kb_ids, page, size, similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"), rank_feature=labels
) )
if use_kg: if use_kg:
ck = settings.kg_retrievaler.retrieval(question, tenant_ids, kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT)) ck = settings.kg_retriever.retrieval(question, tenant_ids, kb_ids, embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]: if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck) ranks["chunks"].insert(0, ck)
@ -1013,7 +1033,8 @@ def retrieval_test_embedded():
return get_json_result(data=ranks) return get_json_result(data=ranks)
except Exception as e: except Exception as e:
if str(e).find("not_found") > 0: if str(e).find("not_found") > 0:
return get_json_result(data=False, message="No chunk found! Check the chunk status please!", code=settings.RetCode.DATA_ERROR) return get_json_result(data=False, message="No chunk found! Check the chunk status please!",
code=settings.RetCode.DATA_ERROR)
return server_error_response(e) return server_error_response(e)
@ -1082,7 +1103,8 @@ def detail_share_embedded():
if SearchService.query(tenant_id=tenant.tenant_id, id=search_id): if SearchService.query(tenant_id=tenant.tenant_id, id=search_id):
break break
else: else:
return get_json_result(data=False, message="Has no permission for this operation.", code=settings.RetCode.OPERATING_ERROR) return get_json_result(data=False, message="Has no permission for this operation.",
code=settings.RetCode.OPERATING_ERROR)
search = SearchService.get_detail(search_id) search = SearchService.get_detail(search_id)
if not search: if not search:


@ -162,7 +162,7 @@ def status():
task_executors = REDIS_CONN.smembers("TASKEXE") task_executors = REDIS_CONN.smembers("TASKEXE")
now = datetime.now().timestamp() now = datetime.now().timestamp()
for task_executor_id in task_executors: for task_executor_id in task_executors:
heartbeats = REDIS_CONN.zrangebyscore(task_executor_id, now - 60*30, now) heartbeats = REDIS_CONN.zrangebyscore(task_executor_id, now - 60 * 30, now)
heartbeats = [json.loads(heartbeat) for heartbeat in heartbeats] heartbeats = [json.loads(heartbeat) for heartbeat in heartbeats]
task_executor_heartbeats[task_executor_id] = heartbeats task_executor_heartbeats[task_executor_id] = heartbeats
except Exception: except Exception:
@ -178,6 +178,11 @@ def healthz():
return jsonify(result), (200 if all_ok else 500) return jsonify(result), (200 if all_ok else 500)
@manager.route("/ping", methods=["GET"]) # noqa: F821
def ping():
return "pong", 200
@manager.route("/new_token", methods=["POST"]) # noqa: F821 @manager.route("/new_token", methods=["POST"]) # noqa: F821
@login_required @login_required
def new_token(): def new_token():
@ -269,7 +274,8 @@ def token_list():
objs = [o.to_dict() for o in objs] objs = [o.to_dict() for o in objs]
for o in objs: for o in objs:
if not o["beta"]: if not o["beta"]:
o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace("ragflow-", "")[:32] o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace(
"ragflow-", "")[:32]
APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o) APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o)
return get_json_result(data=objs) return get_json_result(data=objs)
except Exception as e: except Exception as e:


@ -70,7 +70,8 @@ def create(tenant_id):
return get_data_error_result(message=f"{invite_user_email} is already in the team.") return get_data_error_result(message=f"{invite_user_email} is already in the team.")
if user_tenant_role == UserTenantRole.OWNER: if user_tenant_role == UserTenantRole.OWNER:
return get_data_error_result(message=f"{invite_user_email} is the owner of the team.") return get_data_error_result(message=f"{invite_user_email} is the owner of the team.")
return get_data_error_result(message=f"{invite_user_email} is in the team, but the role: {user_tenant_role} is invalid.") return get_data_error_result(
message=f"{invite_user_email} is in the team, but the role: {user_tenant_role} is invalid.")
UserTenantService.save( UserTenantService.save(
id=get_uuid(), id=get_uuid(),
@ -132,7 +133,8 @@ def tenant_list():
@login_required @login_required
def agree(tenant_id): def agree(tenant_id):
try: try:
UserTenantService.filter_update([UserTenant.tenant_id == tenant_id, UserTenant.user_id == current_user.id], {"role": UserTenantRole.NORMAL}) UserTenantService.filter_update([UserTenant.tenant_id == tenant_id, UserTenant.user_id == current_user.id],
{"role": UserTenantRole.NORMAL})
return get_json_result(data=True) return get_json_result(data=True)
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)


@ -0,0 +1,59 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from api.db import TenantPermission
from api.db.db_models import File, Knowledgebase
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
def check_kb_team_permission(kb: dict | Knowledgebase, other: str) -> bool:
kb = kb.to_dict() if isinstance(kb, Knowledgebase) else kb
kb_tenant_id = kb["tenant_id"]
if kb_tenant_id == other:
return True
if kb["permission"] != TenantPermission.TEAM:
return False
joined_tenants = TenantService.get_joined_tenants_by_user_id(other)
return any(tenant["tenant_id"] == kb_tenant_id for tenant in joined_tenants)
def check_file_team_permission(file: dict | File, other: str) -> bool:
file = file.to_dict() if isinstance(file, File) else file
file_tenant_id = file["tenant_id"]
if file_tenant_id == other:
return True
file_id = file["id"]
kb_ids = [kb_info["kb_id"] for kb_info in FileService.get_kb_id_by_file_id(file_id)]
for kb_id in kb_ids:
ok, kb = KnowledgebaseService.get_by_id(kb_id)
if not ok:
continue
if check_kb_team_permission(kb, other):
return True
return False
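
A minimal usage sketch for the two permission helpers above, assuming this new module is importable; the IDs below are placeholders, not real records:

from api.db.services.knowledgebase_service import KnowledgebaseService

ok, kb = KnowledgebaseService.get_by_id("kb_0001")            # hypothetical KB id
if ok and check_kb_team_permission(kb, "user_0002"):          # hypothetical user id
    print("user_0002 may access this knowledge base via team permission")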


@ -12,4 +12,27 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
#
class AdminException(Exception):
def __init__(self, message, code=400):
super().__init__(message)
self.type = "admin"
self.code = code
self.message = message
class UserNotFoundError(AdminException):
def __init__(self, username):
super().__init__(f"User '{username}' not found", 404)
class UserAlreadyExistsError(AdminException):
def __init__(self, username):
super().__init__(f"User '{username}' already exists", 409)
class CannotDeleteAdminError(AdminException):
def __init__(self):
super().__init__("Cannot delete admin account", 403)
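
A hedged usage sketch of the exception hierarchy above; the handler shown is illustrative, not an actual admin endpoint:

def remove_account(username: str):
    if username == "admin":                  # assumption: "admin" names the protected account
        raise CannotDeleteAdminError()
    # ... deletion would happen here; pretend the user does not exist
    raise UserNotFoundError(username)

try:
    remove_account("alice")
except AdminException as exc:
    print(exc.code, exc.message)             # 404 User 'alice' not found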


@ -641,7 +641,7 @@ class TenantLLM(DataBaseModel):
llm_factory = CharField(max_length=128, null=False, help_text="LLM factory name", index=True) llm_factory = CharField(max_length=128, null=False, help_text="LLM factory name", index=True)
model_type = CharField(max_length=128, null=True, help_text="LLM, Text Embedding, Image2Text, ASR", index=True) model_type = CharField(max_length=128, null=True, help_text="LLM, Text Embedding, Image2Text, ASR", index=True)
llm_name = CharField(max_length=128, null=True, help_text="LLM name", default="", index=True) llm_name = CharField(max_length=128, null=True, help_text="LLM name", default="", index=True)
api_key = CharField(max_length=2048, null=True, help_text="API KEY", index=True) api_key = TextField(null=True, help_text="API KEY")
api_base = CharField(max_length=255, null=True, help_text="API Base") api_base = CharField(max_length=255, null=True, help_text="API Base")
max_tokens = IntegerField(default=8192, index=True) max_tokens = IntegerField(default=8192, index=True)
used_tokens = IntegerField(default=0, index=True) used_tokens = IntegerField(default=0, index=True)
@ -1142,4 +1142,8 @@ def migrate_db():
migrate(migrator.add_column("knowledgebase", "mindmap_task_finish_at", CharField(null=True))) migrate(migrator.add_column("knowledgebase", "mindmap_task_finish_at", CharField(null=True)))
except Exception: except Exception:
pass pass
try:
migrate(migrator.alter_column_type("tenant_llm", "api_key", TextField(null=True, help_text="API KEY")))
except Exception:
pass
logging.disable(logging.NOTSET) logging.disable(logging.NOTSET)


@ -370,7 +370,7 @@ def chat(dialog, messages, stream=True, **kwargs):
chat_mdl.bind_tools(toolcall_session, tools) chat_mdl.bind_tools(toolcall_session, tools)
bind_models_ts = timer() bind_models_ts = timer()
retriever = settings.retrievaler retriever = settings.retriever
questions = [m["content"] for m in messages if m["role"] == "user"][-3:] questions = [m["content"] for m in messages if m["role"] == "user"][-3:]
attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else [] attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else []
if "doc_ids" in messages[-1]: if "doc_ids" in messages[-1]:
@ -466,13 +466,17 @@ def chat(dialog, messages, stream=True, **kwargs):
rerank_mdl=rerank_mdl, rerank_mdl=rerank_mdl,
rank_feature=label_question(" ".join(questions), kbs), rank_feature=label_question(" ".join(questions), kbs),
) )
if prompt_config.get("toc_enhance"):
cks = retriever.retrieval_by_toc(" ".join(questions), kbinfos["chunks"], tenant_ids, chat_mdl, dialog.top_n)
if cks:
kbinfos["chunks"] = cks
if prompt_config.get("tavily_api_key"): if prompt_config.get("tavily_api_key"):
tav = Tavily(prompt_config["tavily_api_key"]) tav = Tavily(prompt_config["tavily_api_key"])
tav_res = tav.retrieve_chunks(" ".join(questions)) tav_res = tav.retrieve_chunks(" ".join(questions))
kbinfos["chunks"].extend(tav_res["chunks"]) kbinfos["chunks"].extend(tav_res["chunks"])
kbinfos["doc_aggs"].extend(tav_res["doc_aggs"]) kbinfos["doc_aggs"].extend(tav_res["doc_aggs"])
if prompt_config.get("use_kg"): if prompt_config.get("use_kg"):
ck = settings.kg_retrievaler.retrieval(" ".join(questions), tenant_ids, dialog.kb_ids, embd_mdl, ck = settings.kg_retriever.retrieval(" ".join(questions), tenant_ids, dialog.kb_ids, embd_mdl,
LLMBundle(dialog.tenant_id, LLMType.CHAT)) LLMBundle(dialog.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]: if ck["content_with_weight"]:
kbinfos["chunks"].insert(0, ck) kbinfos["chunks"].insert(0, ck)
@ -658,7 +662,7 @@ Please write the SQL, only SQL, without any other explanations or text.
logging.debug(f"{question} get SQL(refined): {sql}") logging.debug(f"{question} get SQL(refined): {sql}")
tried_times += 1 tried_times += 1
return settings.retrievaler.sql_retrieval(sql, format="json"), sql return settings.retriever.sql_retrieval(sql, format="json"), sql
tbl, sql = get_table() tbl, sql = get_table()
if tbl is None: if tbl is None:
@ -752,7 +756,7 @@ def ask(question, kb_ids, tenant_id, chat_llm_name=None, search_config={}):
embedding_list = list(set([kb.embd_id for kb in kbs])) embedding_list = list(set([kb.embd_id for kb in kbs]))
is_knowledge_graph = all([kb.parser_id == ParserType.KG for kb in kbs]) is_knowledge_graph = all([kb.parser_id == ParserType.KG for kb in kbs])
retriever = settings.retrievaler if not is_knowledge_graph else settings.kg_retrievaler retriever = settings.retriever if not is_knowledge_graph else settings.kg_retriever
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embedding_list[0]) embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embedding_list[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, chat_llm_name) chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, chat_llm_name)
@ -848,7 +852,7 @@ def gen_mindmap(question, kb_ids, tenant_id, search_config={}):
if not doc_ids: if not doc_ids:
doc_ids = None doc_ids = None
ranks = settings.retrievaler.retrieval( ranks = settings.retriever.retrieval(
question=question, question=question,
embd_mdl=embd_mdl, embd_mdl=embd_mdl,
tenant_ids=tenant_ids, tenant_ids=tenant_ids,


@ -379,6 +379,7 @@ class KnowledgebaseService(CommonService):
# name: Optional name filter # name: Optional name filter
# Returns: # Returns:
# List of knowledge bases # List of knowledge bases
# Total count of knowledge bases
kbs = cls.model.select() kbs = cls.model.select()
if id: if id:
kbs = kbs.where(cls.model.id == id) kbs = kbs.where(cls.model.id == id)
@ -390,6 +391,7 @@ class KnowledgebaseService(CommonService):
cls.model.tenant_id == user_id)) cls.model.tenant_id == user_id))
& (cls.model.status == StatusEnum.VALID.value) & (cls.model.status == StatusEnum.VALID.value)
) )
if desc: if desc:
kbs = kbs.order_by(cls.model.getter_by(orderby).desc()) kbs = kbs.order_by(cls.model.getter_by(orderby).desc())
else: else:
@ -397,7 +399,7 @@ class KnowledgebaseService(CommonService):
kbs = kbs.paginate(page_number, items_per_page) kbs = kbs.paginate(page_number, items_per_page)
return list(kbs.dicts()) return list(kbs.dicts()), kbs.count()
@classmethod @classmethod
@DB.connection_context() @DB.connection_context()


@ -33,7 +33,8 @@ class MCPServerService(CommonService):
@classmethod @classmethod
@DB.connection_context() @DB.connection_context()
def get_servers(cls, tenant_id: str, id_list: list[str] | None, page_number, items_per_page, orderby, desc, keywords): def get_servers(cls, tenant_id: str, id_list: list[str] | None, page_number, items_per_page, orderby, desc,
keywords):
"""Retrieve all MCP servers associated with a tenant. """Retrieve all MCP servers associated with a tenant.
This method fetches all MCP servers for a given tenant, ordered by creation time. This method fetches all MCP servers for a given tenant, ordered by creation time.


@ -94,7 +94,8 @@ class SearchService(CommonService):
query = ( query = (
cls.model.select(*fields) cls.model.select(*fields)
.join(User, on=(cls.model.tenant_id == User.id)) .join(User, on=(cls.model.tenant_id == User.id))
.where(((cls.model.tenant_id.in_(joined_tenant_ids)) | (cls.model.tenant_id == user_id)) & (cls.model.status == StatusEnum.VALID.value)) .where(((cls.model.tenant_id.in_(joined_tenant_ids)) | (cls.model.tenant_id == user_id)) & (
cls.model.status == StatusEnum.VALID.value))
) )
if keywords: if keywords:


@ -165,7 +165,7 @@ class TaskService(CommonService):
] ]
tasks = ( tasks = (
cls.model.select(*fields).order_by(cls.model.from_page.asc(), cls.model.create_time.desc()) cls.model.select(*fields).order_by(cls.model.from_page.asc(), cls.model.create_time.desc())
.where(cls.model.doc_id == doc_id) .where(cls.model.doc_id == doc_id)
) )
tasks = list(tasks.dicts()) tasks = list(tasks.dicts())
if not tasks: if not tasks:
@ -205,18 +205,18 @@ class TaskService(CommonService):
cls.model.select( cls.model.select(
*[Document.id, Document.kb_id, Document.location, File.parent_id] *[Document.id, Document.kb_id, Document.location, File.parent_id]
) )
.join(Document, on=(cls.model.doc_id == Document.id)) .join(Document, on=(cls.model.doc_id == Document.id))
.join( .join(
File2Document, File2Document,
on=(File2Document.document_id == Document.id), on=(File2Document.document_id == Document.id),
join_type=JOIN.LEFT_OUTER, join_type=JOIN.LEFT_OUTER,
) )
.join( .join(
File, File,
on=(File2Document.file_id == File.id), on=(File2Document.file_id == File.id),
join_type=JOIN.LEFT_OUTER, join_type=JOIN.LEFT_OUTER,
) )
.where( .where(
Document.status == StatusEnum.VALID.value, Document.status == StatusEnum.VALID.value,
Document.run == TaskStatus.RUNNING.value, Document.run == TaskStatus.RUNNING.value,
~(Document.type == FileType.VIRTUAL.value), ~(Document.type == FileType.VIRTUAL.value),
@ -294,8 +294,8 @@ class TaskService(CommonService):
cls.model.update(progress=prog).where( cls.model.update(progress=prog).where(
(cls.model.id == id) & (cls.model.id == id) &
( (
(cls.model.progress != -1) & (cls.model.progress != -1) &
((prog == -1) | (prog > cls.model.progress)) ((prog == -1) | (prog > cls.model.progress))
) )
).execute() ).execute()
else: else:
@ -343,6 +343,7 @@ def queue_tasks(doc: dict, bucket: str, name: str, priority: int):
- Task digests are calculated for optimization and reuse - Task digests are calculated for optimization and reuse
- Previous task chunks may be reused if available - Previous task chunks may be reused if available
""" """
def new_task(): def new_task():
return { return {
"id": get_uuid(), "id": get_uuid(),
@ -515,7 +516,7 @@ def queue_dataflow(tenant_id:str, flow_id:str, task_id:str, doc_id:str=CANVAS_DE
task["file"] = file task["file"] = file
if not REDIS_CONN.queue_product( if not REDIS_CONN.queue_product(
get_svr_queue_name(priority), message=task get_svr_queue_name(priority), message=task
): ):
return False, "Can't access Redis. Please check the Redis' status." return False, "Can't access Redis. Please check the Redis' status."


@ -57,8 +57,10 @@ class TenantLLMService(CommonService):
@classmethod @classmethod
@DB.connection_context() @DB.connection_context()
def get_my_llms(cls, tenant_id): def get_my_llms(cls, tenant_id):
fields = [cls.model.llm_factory, LLMFactories.logo, LLMFactories.tags, cls.model.model_type, cls.model.llm_name, cls.model.used_tokens] fields = [cls.model.llm_factory, LLMFactories.logo, LLMFactories.tags, cls.model.model_type, cls.model.llm_name,
objs = cls.model.select(*fields).join(LLMFactories, on=(cls.model.llm_factory == LLMFactories.name)).where(cls.model.tenant_id == tenant_id, ~cls.model.api_key.is_null()).dicts() cls.model.used_tokens]
objs = cls.model.select(*fields).join(LLMFactories, on=(cls.model.llm_factory == LLMFactories.name)).where(
cls.model.tenant_id == tenant_id, ~cls.model.api_key.is_null()).dicts()
return list(objs) return list(objs)
@ -122,7 +124,8 @@ class TenantLLMService(CommonService):
model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": mdlnm, "api_base": ""} model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": mdlnm, "api_base": ""}
if not model_config: if not model_config:
if mdlnm == "flag-embedding": if mdlnm == "flag-embedding":
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "", "llm_name": llm_name, "api_base": ""} model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "", "llm_name": llm_name,
"api_base": ""}
else: else:
if not mdlnm: if not mdlnm:
raise LookupError(f"Type of {llm_type} model is not set.") raise LookupError(f"Type of {llm_type} model is not set.")
@ -137,27 +140,33 @@ class TenantLLMService(CommonService):
if llm_type == LLMType.EMBEDDING.value: if llm_type == LLMType.EMBEDDING.value:
if model_config["llm_factory"] not in EmbeddingModel: if model_config["llm_factory"] not in EmbeddingModel:
return return
return EmbeddingModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"]) return EmbeddingModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"],
base_url=model_config["api_base"])
if llm_type == LLMType.RERANK: if llm_type == LLMType.RERANK:
if model_config["llm_factory"] not in RerankModel: if model_config["llm_factory"] not in RerankModel:
return return
return RerankModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"]) return RerankModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"],
base_url=model_config["api_base"])
if llm_type == LLMType.IMAGE2TEXT.value: if llm_type == LLMType.IMAGE2TEXT.value:
if model_config["llm_factory"] not in CvModel: if model_config["llm_factory"] not in CvModel:
return return
return CvModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], lang, base_url=model_config["api_base"], **kwargs) return CvModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], lang,
base_url=model_config["api_base"], **kwargs)
if llm_type == LLMType.CHAT.value: if llm_type == LLMType.CHAT.value:
if model_config["llm_factory"] not in ChatModel: if model_config["llm_factory"] not in ChatModel:
return return
return ChatModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"], **kwargs) return ChatModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"],
base_url=model_config["api_base"], **kwargs)
if llm_type == LLMType.SPEECH2TEXT: if llm_type == LLMType.SPEECH2TEXT:
if model_config["llm_factory"] not in Seq2txtModel: if model_config["llm_factory"] not in Seq2txtModel:
return return
return Seq2txtModel[model_config["llm_factory"]](key=model_config["api_key"], model_name=model_config["llm_name"], lang=lang, base_url=model_config["api_base"]) return Seq2txtModel[model_config["llm_factory"]](key=model_config["api_key"],
model_name=model_config["llm_name"], lang=lang,
base_url=model_config["api_base"])
if llm_type == LLMType.TTS: if llm_type == LLMType.TTS:
if model_config["llm_factory"] not in TTSModel: if model_config["llm_factory"] not in TTSModel:
return return
@ -194,11 +203,14 @@ class TenantLLMService(CommonService):
try: try:
num = ( num = (
cls.model.update(used_tokens=cls.model.used_tokens + used_tokens) cls.model.update(used_tokens=cls.model.used_tokens + used_tokens)
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == llm_name, cls.model.llm_factory == llm_factory if llm_factory else True) .where(cls.model.tenant_id == tenant_id, cls.model.llm_name == llm_name,
cls.model.llm_factory == llm_factory if llm_factory else True)
.execute() .execute()
) )
except Exception: except Exception:
logging.exception("TenantLLMService.increase_usage got exception,Failed to update used_tokens for tenant_id=%s, llm_name=%s", tenant_id, llm_name) logging.exception(
"TenantLLMService.increase_usage got exception,Failed to update used_tokens for tenant_id=%s, llm_name=%s",
tenant_id, llm_name)
return 0 return 0
return num return num
@ -206,7 +218,9 @@ class TenantLLMService(CommonService):
@classmethod @classmethod
@DB.connection_context() @DB.connection_context()
def get_openai_models(cls): def get_openai_models(cls):
objs = cls.model.select().where((cls.model.llm_factory == "OpenAI"), ~(cls.model.llm_name == "text-embedding-3-small"), ~(cls.model.llm_name == "text-embedding-3-large")).dicts() objs = cls.model.select().where((cls.model.llm_factory == "OpenAI"),
~(cls.model.llm_name == "text-embedding-3-small"),
~(cls.model.llm_name == "text-embedding-3-large")).dicts()
return list(objs) return list(objs)
@classmethod @classmethod
@ -250,8 +264,9 @@ class LLM4Tenant:
langfuse_keys = TenantLangfuseService.filter_by_tenant(tenant_id=tenant_id) langfuse_keys = TenantLangfuseService.filter_by_tenant(tenant_id=tenant_id)
self.langfuse = None self.langfuse = None
if langfuse_keys: if langfuse_keys:
langfuse = Langfuse(public_key=langfuse_keys.public_key, secret_key=langfuse_keys.secret_key, host=langfuse_keys.host) langfuse = Langfuse(public_key=langfuse_keys.public_key, secret_key=langfuse_keys.secret_key,
host=langfuse_keys.host)
if langfuse.auth_check(): if langfuse.auth_check():
self.langfuse = langfuse self.langfuse = langfuse
trace_id = self.langfuse.create_trace_id() trace_id = self.langfuse.create_trace_id()
self.trace_context = {"trace_id": trace_id} self.trace_context = {"trace_id": trace_id}


@ -2,22 +2,22 @@ from api.db.db_models import UserCanvasVersion, DB
from api.db.services.common_service import CommonService from api.db.services.common_service import CommonService
from peewee import DoesNotExist from peewee import DoesNotExist
class UserCanvasVersionService(CommonService): class UserCanvasVersionService(CommonService):
model = UserCanvasVersion model = UserCanvasVersion
@classmethod @classmethod
@DB.connection_context() @DB.connection_context()
def list_by_canvas_id(cls, user_canvas_id): def list_by_canvas_id(cls, user_canvas_id):
try: try:
user_canvas_version = cls.model.select( user_canvas_version = cls.model.select(
*[cls.model.id, *[cls.model.id,
cls.model.create_time, cls.model.create_time,
cls.model.title, cls.model.title,
cls.model.create_date, cls.model.create_date,
cls.model.update_date, cls.model.update_date,
cls.model.user_canvas_id, cls.model.user_canvas_id,
cls.model.update_time] cls.model.update_time]
).where(cls.model.user_canvas_id == user_canvas_id) ).where(cls.model.user_canvas_id == user_canvas_id)
return user_canvas_version return user_canvas_version
except DoesNotExist: except DoesNotExist:
@ -46,18 +46,16 @@ class UserCanvasVersionService(CommonService):
@DB.connection_context() @DB.connection_context()
def delete_all_versions(cls, user_canvas_id): def delete_all_versions(cls, user_canvas_id):
try: try:
user_canvas_version = cls.model.select().where(cls.model.user_canvas_id == user_canvas_id).order_by(cls.model.create_time.desc()) user_canvas_version = cls.model.select().where(cls.model.user_canvas_id == user_canvas_id).order_by(
cls.model.create_time.desc())
if user_canvas_version.count() > 20: if user_canvas_version.count() > 20:
delete_ids = [] delete_ids = []
for i in range(20, user_canvas_version.count()): for i in range(20, user_canvas_version.count()):
delete_ids.append(user_canvas_version[i].id) delete_ids.append(user_canvas_version[i].id)
cls.delete_by_ids(delete_ids) cls.delete_by_ids(delete_ids)
return True return True
except DoesNotExist: except DoesNotExist:
return None return None
except Exception: except Exception:
return None return None
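
The pruning above keeps only the 20 newest versions per canvas. A standalone sketch of the same retention rule (row objects and ordering are assumed, not taken from the service):

def version_ids_to_delete(rows, keep=20):
    # rows assumed sorted newest-first, mirroring order_by(create_time.desc()) above
    return [row.id for row in rows[keep:]]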


@ -315,4 +315,4 @@ class UserTenantService(CommonService):
).first() ).first()
return user_tenant return user_tenant
except peewee.DoesNotExist: except peewee.DoesNotExist:
return None return None


@ -65,8 +65,8 @@ OAUTH_CONFIG = None
DOC_ENGINE = None DOC_ENGINE = None
docStoreConn = None docStoreConn = None
retrievaler = None retriever = None
kg_retrievaler = None kg_retriever = None
# user registration switch # user registration switch
REGISTER_ENABLED = 1 REGISTER_ENABLED = 1
@ -174,7 +174,7 @@ def init_settings():
OAUTH_CONFIG = get_base_config("oauth", {}) OAUTH_CONFIG = get_base_config("oauth", {})
global DOC_ENGINE, docStoreConn, retrievaler, kg_retrievaler global DOC_ENGINE, docStoreConn, retriever, kg_retriever
DOC_ENGINE = os.environ.get("DOC_ENGINE", "elasticsearch") DOC_ENGINE = os.environ.get("DOC_ENGINE", "elasticsearch")
# DOC_ENGINE = os.environ.get('DOC_ENGINE', "opensearch") # DOC_ENGINE = os.environ.get('DOC_ENGINE', "opensearch")
lower_case_doc_engine = DOC_ENGINE.lower() lower_case_doc_engine = DOC_ENGINE.lower()
@ -187,10 +187,10 @@ def init_settings():
else: else:
raise Exception(f"Not supported doc engine: {DOC_ENGINE}") raise Exception(f"Not supported doc engine: {DOC_ENGINE}")
retrievaler = search.Dealer(docStoreConn) retriever = search.Dealer(docStoreConn)
from graphrag import search as kg_search from graphrag import search as kg_search
kg_retrievaler = kg_search.KGSearch(docStoreConn) kg_retriever = kg_search.KGSearch(docStoreConn)
if int(os.environ.get("SANDBOX_ENABLED", "0")): if int(os.environ.get("SANDBOX_ENABLED", "0")):
global SANDBOX_HOST global SANDBOX_HOST


@ -51,15 +51,13 @@ from api import settings
from api.constants import REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC from api.constants import REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC
from api.db import ActiveEnum from api.db import ActiveEnum
from api.db.db_models import APIToken from api.db.db_models import APIToken
from api.db.services import UserService
from api.db.services.llm_service import LLMService
from api.db.services.tenant_llm_service import TenantLLMService
from api.utils.json import CustomJSONEncoder, json_dumps from api.utils.json import CustomJSONEncoder, json_dumps
from api.utils import get_uuid from api.utils import get_uuid
from rag.utils.mcp_tool_call_conn import MCPToolCallSession, close_multiple_mcp_toolcall_sessions from rag.utils.mcp_tool_call_conn import MCPToolCallSession, close_multiple_mcp_toolcall_sessions
requests.models.complexjson.dumps = functools.partial(json.dumps, cls=CustomJSONEncoder) requests.models.complexjson.dumps = functools.partial(json.dumps, cls=CustomJSONEncoder)
def serialize_for_json(obj): def serialize_for_json(obj):
""" """
Recursively serialize objects to make them JSON serializable. Recursively serialize objects to make them JSON serializable.
@ -68,8 +66,8 @@ def serialize_for_json(obj):
if hasattr(obj, '__dict__'): if hasattr(obj, '__dict__'):
# For objects with __dict__, try to serialize their attributes # For objects with __dict__, try to serialize their attributes
try: try:
return {key: serialize_for_json(value) for key, value in obj.__dict__.items() return {key: serialize_for_json(value) for key, value in obj.__dict__.items()
if not key.startswith('_')} if not key.startswith('_')}
except (AttributeError, TypeError): except (AttributeError, TypeError):
return str(obj) return str(obj)
elif hasattr(obj, '__name__'): elif hasattr(obj, '__name__'):
@ -85,6 +83,7 @@ def serialize_for_json(obj):
# Fallback: convert to string representation # Fallback: convert to string representation
return str(obj) return str(obj)
def request(**kwargs): def request(**kwargs):
sess = requests.Session() sess = requests.Session()
stream = kwargs.pop("stream", sess.stream) stream = kwargs.pop("stream", sess.stream)
@ -105,7 +104,8 @@ def request(**kwargs):
settings.HTTP_APP_KEY.encode("ascii"), settings.HTTP_APP_KEY.encode("ascii"),
prepped.path_url.encode("ascii"), prepped.path_url.encode("ascii"),
prepped.body if kwargs.get("json") else b"", prepped.body if kwargs.get("json") else b"",
urlencode(sorted(kwargs["data"].items()), quote_via=quote, safe="-._~").encode("ascii") if kwargs.get("data") and isinstance(kwargs["data"], dict) else b"", urlencode(sorted(kwargs["data"].items()), quote_via=quote, safe="-._~").encode(
"ascii") if kwargs.get("data") and isinstance(kwargs["data"], dict) else b"",
] ]
), ),
"sha1", "sha1",
@ -127,7 +127,7 @@ def request(**kwargs):
def get_exponential_backoff_interval(retries, full_jitter=False): def get_exponential_backoff_interval(retries, full_jitter=False):
"""Calculate the exponential backoff wait time.""" """Calculate the exponential backoff wait time."""
# Will be zero if factor equals 0 # Will be zero if factor equals 0
countdown = min(REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC * (2**retries)) countdown = min(REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC * (2 ** retries))
# Full jitter according to # Full jitter according to
# https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ # https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
if full_jitter: if full_jitter:
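
A self-contained sketch of the backoff formula above; the constants are illustrative placeholders, not the project's REQUEST_WAIT_SEC / REQUEST_MAX_WAIT_SEC values:

import random

WAIT_SEC, MAX_WAIT_SEC = 2, 300                        # assumed example values

def backoff_interval(retries: int, full_jitter: bool = False) -> float:
    countdown = min(MAX_WAIT_SEC, WAIT_SEC * (2 ** retries))
    return random.uniform(0, countdown) if full_jitter else countdown

print([backoff_interval(r) for r in range(6)])         # [2, 4, 8, 16, 32, 64]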
@ -158,11 +158,12 @@ def server_error_response(e):
if len(e.args) > 1: if len(e.args) > 1:
try: try:
serialized_data = serialize_for_json(e.args[1]) serialized_data = serialize_for_json(e.args[1])
return get_json_result(code= settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=serialized_data) return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=serialized_data)
except Exception: except Exception:
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=None) return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=None)
if repr(e).find("index_not_found_exception") >= 0: if repr(e).find("index_not_found_exception") >= 0:
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message="No chunk found, please upload file and parse it.") return get_json_result(code=settings.RetCode.EXCEPTION_ERROR,
message="No chunk found, please upload file and parse it.")
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e)) return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e))
@ -207,7 +208,8 @@ def validate_request(*args, **kwargs):
if no_arguments: if no_arguments:
error_string += "required argument are missing: {}; ".format(",".join(no_arguments)) error_string += "required argument are missing: {}; ".format(",".join(no_arguments))
if error_arguments: if error_arguments:
error_string += "required argument values: {}".format(",".join(["{}={}".format(a[0], a[1]) for a in error_arguments])) error_string += "required argument values: {}".format(
",".join(["{}={}".format(a[0], a[1]) for a in error_arguments]))
return get_json_result(code=settings.RetCode.ARGUMENT_ERROR, message=error_string) return get_json_result(code=settings.RetCode.ARGUMENT_ERROR, message=error_string)
return func(*_args, **_kwargs) return func(*_args, **_kwargs)
@ -222,7 +224,8 @@ def not_allowed_parameters(*params):
input_arguments = flask_request.json or flask_request.form.to_dict() input_arguments = flask_request.json or flask_request.form.to_dict()
for param in params: for param in params:
if param in input_arguments: if param in input_arguments:
return get_json_result(code=settings.RetCode.ARGUMENT_ERROR, message=f"Parameter {param} isn't allowed") return get_json_result(code=settings.RetCode.ARGUMENT_ERROR,
message=f"Parameter {param} isn't allowed")
return f(*args, **kwargs) return f(*args, **kwargs)
return wrapper return wrapper
@ -233,12 +236,14 @@ def not_allowed_parameters(*params):
def active_required(f): def active_required(f):
@wraps(f) @wraps(f)
def wrapper(*args, **kwargs): def wrapper(*args, **kwargs):
from api.db.services import UserService
user_id = current_user.id user_id = current_user.id
usr = UserService.filter_by_id(user_id) usr = UserService.filter_by_id(user_id)
# check is_active # check is_active
if not usr or not usr.is_active == ActiveEnum.ACTIVE.value: if not usr or not usr.is_active == ActiveEnum.ACTIVE.value:
return get_json_result(code=settings.RetCode.FORBIDDEN, message="User isn't active, please activate first.") return get_json_result(code=settings.RetCode.FORBIDDEN, message="User isn't active, please activate first.")
return f(*args, **kwargs) return f(*args, **kwargs)
return wrapper return wrapper
@ -259,7 +264,7 @@ def send_file_in_mem(data, filename):
return send_file(f, as_attachment=True, attachment_filename=filename) return send_file(f, as_attachment=True, attachment_filename=filename)
def get_json_result(code=settings.RetCode.SUCCESS, message="success", data=None): def get_json_result(code: settings.RetCode = settings.RetCode.SUCCESS, message="success", data=None):
response = {"code": code, "message": message, "data": data} response = {"code": code, "message": message, "data": data}
return jsonify(response) return jsonify(response)
@ -314,7 +319,7 @@ def construct_result(code=settings.RetCode.DATA_ERROR, message="data is missing"
return jsonify(response) return jsonify(response)
def construct_json_result(code=settings.RetCode.SUCCESS, message="success", data=None): def construct_json_result(code: settings.RetCode = settings.RetCode.SUCCESS, message="success", data=None):
if data is None: if data is None:
return jsonify({"code": code, "message": message}) return jsonify({"code": code, "message": message})
else: else:
@ -347,27 +352,39 @@ def token_required(func):
token = authorization_list[1] token = authorization_list[1]
objs = APIToken.query(token=token) objs = APIToken.query(token=token)
if not objs: if not objs:
return get_json_result(data=False, message="Authentication error: API key is invalid!", code=settings.RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message="Authentication error: API key is invalid!",
code=settings.RetCode.AUTHENTICATION_ERROR)
kwargs["tenant_id"] = objs[0].tenant_id kwargs["tenant_id"] = objs[0].tenant_id
return func(*args, **kwargs) return func(*args, **kwargs)
return decorated_function return decorated_function
def get_result(code=settings.RetCode.SUCCESS, message="", data=None): def get_result(code=settings.RetCode.SUCCESS, message="", data=None, total=None):
if code == 0: """
Standard API response format:
{
"code": 0,
"data": [...], # List or object, backward compatible
"total": 47, # Optional field for pagination
"message": "..." # Error or status message
}
"""
response = {"code": code}
if code == settings.RetCode.SUCCESS:
if data is not None: if data is not None:
response = {"code": code, "data": data} response["data"] = data
else: if total is not None:
response = {"code": code} response["total_datasets"] = total
else: else:
response = {"code": code, "message": message} response["message"] = message or "Error"
return jsonify(response) return jsonify(response)
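
A hedged sketch of calling the new get_result signature from a paginated endpoint; note that, as written above, the count is emitted under the key "total_datasets" even though the docstring calls it "total":

kbs, total = [{"id": "kb_1"}, {"id": "kb_2"}], 47      # hypothetical page of results
resp = get_result(data=kbs, total=total)
# body: {"code": 0, "data": [...], "total_datasets": 47}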
def get_error_data_result( def get_error_data_result(
message="Sorry! Data missing!", message="Sorry! Data missing!",
code=settings.RetCode.DATA_ERROR, code=settings.RetCode.DATA_ERROR,
): ):
result_dict = {"code": code, "message": message} result_dict = {"code": code, "message": message}
response = {} response = {}
@ -402,7 +419,8 @@ def get_parser_config(chunk_method, parser_config):
# Define default configurations for each chunking method # Define default configurations for each chunking method
key_mapping = { key_mapping = {
"naive": {"chunk_token_num": 512, "delimiter": r"\n", "html4excel": False, "layout_recognize": "DeepDOC", "raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}}, "naive": {"chunk_token_num": 512, "delimiter": r"\n", "html4excel": False, "layout_recognize": "DeepDOC",
"raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}},
"qa": {"raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}}, "qa": {"raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}},
"tag": None, "tag": None,
"resume": None, "resume": None,
@ -441,16 +459,16 @@ def get_parser_config(chunk_method, parser_config):
def get_data_openai( def get_data_openai(
id=None, id=None,
created=None, created=None,
model=None, model=None,
prompt_tokens=0, prompt_tokens=0,
completion_tokens=0, completion_tokens=0,
content=None, content=None,
finish_reason=None, finish_reason=None,
object="chat.completion", object="chat.completion",
param=None, param=None,
stream=False stream=False
): ):
total_tokens = prompt_tokens + completion_tokens total_tokens = prompt_tokens + completion_tokens
@ -524,6 +542,8 @@ def check_duplicate_ids(ids, id_type="item"):
def verify_embedding_availability(embd_id: str, tenant_id: str) -> tuple[bool, Response | None]: def verify_embedding_availability(embd_id: str, tenant_id: str) -> tuple[bool, Response | None]:
from api.db.services.llm_service import LLMService
from api.db.services.tenant_llm_service import TenantLLMService
""" """
Verifies availability of an embedding model for a specific tenant. Verifies availability of an embedding model for a specific tenant.
@ -562,7 +582,9 @@ def verify_embedding_availability(embd_id: str, tenant_id: str) -> tuple[bool, R
in_llm_service = bool(LLMService.query(llm_name=llm_name, fid=llm_factory, model_type="embedding")) in_llm_service = bool(LLMService.query(llm_name=llm_name, fid=llm_factory, model_type="embedding"))
tenant_llms = TenantLLMService.get_my_llms(tenant_id=tenant_id) tenant_llms = TenantLLMService.get_my_llms(tenant_id=tenant_id)
is_tenant_model = any(llm["llm_name"] == llm_name and llm["llm_factory"] == llm_factory and llm["model_type"] == "embedding" for llm in tenant_llms) is_tenant_model = any(
llm["llm_name"] == llm_name and llm["llm_factory"] == llm_factory and llm["model_type"] == "embedding" for
llm in tenant_llms)
is_builtin_model = embd_id in settings.BUILTIN_EMBEDDING_MODELS is_builtin_model = embd_id in settings.BUILTIN_EMBEDDING_MODELS
if not (is_builtin_model or is_tenant_model or in_llm_service): if not (is_builtin_model or is_tenant_model or in_llm_service):
@ -793,7 +815,9 @@ async def is_strong_enough(chat_model, embedding_model):
_ = await trio.to_thread.run_sync(lambda: embedding_model.encode(["Are you strong enough!?"])) _ = await trio.to_thread.run_sync(lambda: embedding_model.encode(["Are you strong enough!?"]))
if chat_model: if chat_model:
with trio.fail_after(30): with trio.fail_after(30):
res = await trio.to_thread.run_sync(lambda: chat_model.chat("Nothing special.", [{"role": "user", "content": "Are you strong enough!?"}], {})) res = await trio.to_thread.run_sync(lambda: chat_model.chat("Nothing special.", [{"role": "user",
"content": "Are you strong enough!?"}],
{}))
if res.find("**ERROR**") >= 0: if res.find("**ERROR**") >= 0:
raise Exception(res) raise Exception(res)


@ -21,3 +21,26 @@ def string_to_bytes(string):
def bytes_to_string(byte): def bytes_to_string(byte):
return byte.decode(encoding="utf-8") return byte.decode(encoding="utf-8")
def convert_bytes(size_in_bytes: int) -> str:
"""
Format size in bytes.
"""
if size_in_bytes == 0:
return "0 B"
units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
i = 0
size = float(size_in_bytes)
while size >= 1024 and i < len(units) - 1:
size /= 1024
i += 1
if i == 0 or size >= 100:
return f"{size:.0f} {units[i]}"
elif size >= 10:
return f"{size:.1f} {units[i]}"
else:
return f"{size:.2f} {units[i]}"


@ -13,14 +13,17 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import os
import requests
from timeit import default_timer as timer from timeit import default_timer as timer
from api import settings from api import settings
from api.db.db_models import DB from api.db.db_models import DB
from rag import settings as rag_settings
from rag.utils.redis_conn import REDIS_CONN from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.es_conn import ESConnection
from rag.utils.infinity_conn import InfinityConnection
def _ok_nok(ok: bool) -> str: def _ok_nok(ok: bool) -> str:
@ -65,6 +68,96 @@ def check_storage() -> tuple[bool, dict]:
return False, {"elapsed": f"{(timer() - st) * 1000.0:.1f}", "error": str(e)} return False, {"elapsed": f"{(timer() - st) * 1000.0:.1f}", "error": str(e)}
def get_es_cluster_stats() -> dict:
doc_engine = os.getenv('DOC_ENGINE', 'elasticsearch')
if doc_engine != 'elasticsearch':
raise Exception("Elasticsearch is not in use.")
try:
return {
"alive": True,
"message": ESConnection().get_cluster_stats()
}
except Exception as e:
return {
"alive": False,
"message": f"error: {str(e)}",
}
def get_infinity_status():
doc_engine = os.getenv('DOC_ENGINE', 'elasticsearch')
if doc_engine != 'infinity':
raise Exception("Infinity is not in use.")
try:
return {
"alive": True,
"message": InfinityConnection().health()
}
except Exception as e:
return {
"alive": False,
"message": f"error: {str(e)}",
}
def get_mysql_status():
try:
cursor = DB.execute_sql("SHOW PROCESSLIST;")
res_rows = cursor.fetchall()
headers = ['id', 'user', 'host', 'db', 'command', 'time', 'state', 'info']
cursor.close()
return {
"alive": True,
"message": [dict(zip(headers, r)) for r in res_rows]
}
except Exception as e:
return {
"alive": False,
"message": f"error: {str(e)}",
}
def check_minio_alive():
start_time = timer()
try:
response = requests.get(f'http://{rag_settings.MINIO["host"]}/minio/health/live')
if response.status_code == 200:
return {'alive': True, "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
else:
return {'alive': False, "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
except Exception as e:
return {
"alive": False,
"message": f"error: {str(e)}",
}
def get_redis_info():
try:
return {
"alive": True,
"message": REDIS_CONN.info()
}
except Exception as e:
return {
"alive": False,
"message": f"error: {str(e)}",
}
def check_ragflow_server_alive():
start_time = timer()
try:
response = requests.get(f'http://{settings.HOST_IP}:{settings.HOST_PORT}/v1/system/ping')
if response.status_code == 200:
return {'alive': True, "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
else:
return {'alive': False, "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
except Exception as e:
return {
"alive": False,
"message": f"error: {str(e)}",
}
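
A minimal aggregation sketch over the new probes; this is not the actual run_health_checks body, just one way the pieces could be combined:

def collect_status():
    report = {
        "mysql": get_mysql_status(),
        "redis": get_redis_info(),
        "minio": check_minio_alive(),
        "ragflow_server": check_ragflow_server_alive(),
    }
    return report, all(part.get("alive") for part in report.values())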
def run_health_checks() -> tuple[dict, bool]: def run_health_checks() -> tuple[dict, bool]:


@ -2816,6 +2816,13 @@
"tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,IMAGE2TEXT", "tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,IMAGE2TEXT",
"status": "1", "status": "1",
"llm": [ "llm": [
{
"llm_name":"THUDM/GLM-4.1V-9B-Thinking",
"tags":"LLM,CHAT,IMAGE2TEXT, 64k",
"max_tokens":64000,
"model_type":"chat",
"is_tools": false
},
{ {
"llm_name": "Qwen/Qwen3-Embedding-8B", "llm_name": "Qwen/Qwen3-Embedding-8B",
"tags": "TEXT EMBEDDING,TEXT RE-RANK,32k", "tags": "TEXT EMBEDDING,TEXT RE-RANK,32k",
@ -3145,13 +3152,6 @@
"model_type": "chat", "model_type": "chat",
"is_tools": true "is_tools": true
}, },
{
"llm_name": "Qwen/Qwen2-1.5B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{ {
"llm_name": "Pro/Qwen/Qwen2.5-Coder-7B-Instruct", "llm_name": "Pro/Qwen/Qwen2.5-Coder-7B-Instruct",
"tags": "LLM,CHAT,32k", "tags": "LLM,CHAT,32k",
@ -3159,13 +3159,6 @@
"model_type": "chat", "model_type": "chat",
"is_tools": false "is_tools": false
}, },
{
"llm_name": "Pro/Qwen/Qwen2-VL-7B-Instruct",
"tags": "LLM,CHAT,IMAGE2TEXT,32k",
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": false
},
{ {
"llm_name": "Pro/Qwen/Qwen2.5-7B-Instruct", "llm_name": "Pro/Qwen/Qwen2.5-7B-Instruct",
"tags": "LLM,CHAT,32k", "tags": "LLM,CHAT,32k",
@ -3533,6 +3526,13 @@
"model_type": "chat", "model_type": "chat",
"is_tools": true "is_tools": true
}, },
{
"llm_name": "claude-sonnet-4-5-20250929",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{ {
"llm_name": "claude-sonnet-4-20250514", "llm_name": "claude-sonnet-4-20250514",
"tags": "LLM,CHAT,IMAGE2TEXT,200k", "tags": "LLM,CHAT,IMAGE2TEXT,200k",
@ -4862,6 +4862,280 @@
"max_tokens": 8000, "max_tokens": 8000,
"model_type": "chat", "model_type": "chat",
"is_tools": true "is_tools": true
},
{
"llm_name": "LongCat-Flash-Thinking",
"tags": "LLM,CHAT,8000",
"max_tokens": 8000,
"model_type": "chat",
"is_tools": true
}
]
},
{
"name": "DeerAPI",
"logo": "",
"tags": "LLM,TEXT EMBEDDING,IMAGE2TEXT",
"status": "1",
"llm": [
{
"llm_name": "gpt-5-chat-latest",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "chatgpt-4o-latest",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-mini",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-nano",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4.1-mini",
"tags": "LLM,CHAT,1M",
"max_tokens": 1047576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4.1-nano",
"tags": "LLM,CHAT,1M",
"max_tokens": 1047576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4.1",
"tags": "LLM,CHAT,1M",
"max_tokens": 1047576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4o-mini",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "o4-mini-2025-04-16",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "o3-pro-2025-06-10",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-opus-4-1-20250805",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-opus-4-1-20250805-thinking",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514-thinking",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-3-7-sonnet-latest",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-5-haiku-latest",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gemini-2.5-pro",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.5-flash",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.5-flash-lite",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.0-flash",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "grok-4-0709",
"tags": "LLM,CHAT,131k",
"max_tokens": 131072,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "grok-3",
"tags": "LLM,CHAT,131k",
"max_tokens": 131072,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "grok-3-mini",
"tags": "LLM,CHAT,131k",
"max_tokens": 131072,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "grok-2-image-1212",
"tags": "LLM,CHAT,32k,IMAGE2TEXT",
"max_tokens": 32768,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "deepseek-v3.1",
"tags": "LLM,CHAT,64k",
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-v3",
"tags": "LLM,CHAT,64k",
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-r1-0528",
"tags": "LLM,CHAT,164k",
"max_tokens": 164000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-chat",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-reasoner",
"tags": "LLM,CHAT,64k",
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-30b-a3b",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-coder-plus-2025-07-22",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "text-embedding-ada-002",
"tags": "TEXT EMBEDDING,8K",
"max_tokens": 8191,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "text-embedding-3-small",
"tags": "TEXT EMBEDDING,8K",
"max_tokens": 8191,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "text-embedding-3-large",
"tags": "TEXT EMBEDDING,8K",
"max_tokens": 8191,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "whisper-1",
"tags": "SPEECH2TEXT",
"max_tokens": 26214400,
"model_type": "speech2text",
"is_tools": false
},
{
"llm_name": "tts-1",
"tags": "TTS",
"max_tokens": 2048,
"model_type": "tts",
"is_tools": false
} }
] ]
} }


@ -200,6 +200,61 @@
} }
} }
}, },
{
"knn_vector": {
"match": "*_2048_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 2048
}
}
},
{
"knn_vector": {
"match": "*_4096_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 4096
}
}
},
{
"knn_vector": {
"match": "*_6144_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 6144
}
}
},
{
"knn_vector": {
"match": "*_8192_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 8192
}
}
},
{
"knn_vector": {
"match": "*_10240_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 10240
}
}
},
{ {
"binary": { "binary": {
"match": "*_bin", "match": "*_bin",


@ -17,7 +17,6 @@
import re import re
import mistune
from markdown import markdown from markdown import markdown
@ -117,8 +116,6 @@ class MarkdownElementExtractor:
def __init__(self, markdown_content): def __init__(self, markdown_content):
self.markdown_content = markdown_content self.markdown_content = markdown_content
self.lines = markdown_content.split("\n") self.lines = markdown_content.split("\n")
self.ast_parser = mistune.create_markdown(renderer="ast")
self.ast_nodes = self.ast_parser(markdown_content)
def extract_elements(self): def extract_elements(self):
"""Extract individual elements (headers, code blocks, lists, etc.)""" """Extract individual elements (headers, code blocks, lists, etc.)"""


@ -15,11 +15,13 @@
# #
import logging import logging
import math
import os import os
import random import random
import re import re
import sys import sys
import threading import threading
from collections import Counter, defaultdict
from copy import deepcopy from copy import deepcopy
from io import BytesIO from io import BytesIO
from timeit import default_timer as timer from timeit import default_timer as timer
@ -349,9 +351,78 @@ class RAGFlowPdfParser:
self.boxes[i]["top"] += self.page_cum_height[self.boxes[i]["page_number"] - 1] self.boxes[i]["top"] += self.page_cum_height[self.boxes[i]["page_number"] - 1]
self.boxes[i]["bottom"] += self.page_cum_height[self.boxes[i]["page_number"] - 1] self.boxes[i]["bottom"] += self.page_cum_height[self.boxes[i]["page_number"] - 1]
def _text_merge(self): def _assign_column(self, boxes, zoomin=3):
if not boxes:
return boxes
if all("col_id" in b for b in boxes):
return boxes
by_page = defaultdict(list)
for b in boxes:
by_page[b["page_number"]].append(b)
page_info = {} # pg -> dict(page_w, left_edge, cand_cols)
counter = Counter()
for pg, bxs in by_page.items():
if not bxs:
page_info[pg] = {"page_w": 1.0, "left_edge": 0.0, "cand": 1}
counter[1] += 1
continue
if hasattr(self, "page_images") and self.page_images and len(self.page_images) >= pg:
page_w = self.page_images[pg - 1].size[0] / max(1, zoomin)
left_edge = 0.0
else:
xs0 = [box["x0"] for box in bxs]
xs1 = [box["x1"] for box in bxs]
left_edge = float(min(xs0))
page_w = max(1.0, float(max(xs1) - left_edge))
widths = [max(1.0, (box["x1"] - box["x0"])) for box in bxs]
median_w = float(np.median(widths)) if widths else 1.0
raw_cols = int(page_w / max(1.0, median_w))
# cand = raw_cols if (raw_cols >= 2 and median_w < page_w / raw_cols * 0.8) else 1
cand = raw_cols
page_info[pg] = {"page_w": page_w, "left_edge": left_edge, "cand": cand}
counter[cand] += 1
logging.info(f"[Page {pg}] median_w={median_w:.2f}, page_w={page_w:.2f}, raw_cols={raw_cols}, cand={cand}")
global_cols = counter.most_common(1)[0][0]
logging.info(f"Global column_num decided by majority: {global_cols}")
for pg, bxs in by_page.items():
if not bxs:
continue
page_w = page_info[pg]["page_w"]
left_edge = page_info[pg]["left_edge"]
if global_cols == 1:
for box in bxs:
box["col_id"] = 0
continue
for box in bxs:
w = box["x1"] - box["x0"]
if w >= 0.8 * page_w:
box["col_id"] = 0
continue
cx = 0.5 * (box["x0"] + box["x1"])
norm_cx = (cx - left_edge) / page_w
norm_cx = max(0.0, min(norm_cx, 0.999999))
box["col_id"] = int(min(global_cols - 1, norm_cx * global_cols))
return boxes
def _text_merge(self, zoomin=3):
# merge adjusted boxes # merge adjusted boxes
bxs = self.boxes bxs = self._assign_column(self.boxes, zoomin)
def end_with(b, txt): def end_with(b, txt):
txt = txt.strip() txt = txt.strip()
@ -367,9 +438,15 @@ class RAGFlowPdfParser:
while i < len(bxs) - 1: while i < len(bxs) - 1:
b = bxs[i] b = bxs[i]
b_ = bxs[i + 1] b_ = bxs[i + 1]
if b["page_number"] != b_["page_number"] or b.get("col_id") != b_.get("col_id"):
i += 1
continue
if b.get("layoutno", "0") != b_.get("layoutno", "1") or b.get("layout_type", "") in ["table", "figure", "equation"]: if b.get("layoutno", "0") != b_.get("layoutno", "1") or b.get("layout_type", "") in ["table", "figure", "equation"]:
i += 1 i += 1
continue continue
if abs(self._y_dis(b, b_)) < self.mean_height[bxs[i]["page_number"] - 1] / 3: if abs(self._y_dis(b, b_)) < self.mean_height[bxs[i]["page_number"] - 1] / 3:
# merge # merge
bxs[i]["x1"] = b_["x1"] bxs[i]["x1"] = b_["x1"]
@ -379,83 +456,108 @@ class RAGFlowPdfParser:
bxs.pop(i + 1) bxs.pop(i + 1)
continue continue
i += 1 i += 1
continue
dis_thr = 1
dis = b["x1"] - b_["x0"]
if b.get("layout_type", "") != "text" or b_.get("layout_type", "") != "text":
if end_with(b, "") or start_with(b_, ""):
dis_thr = -8
else:
i += 1
continue
if abs(self._y_dis(b, b_)) < self.mean_height[bxs[i]["page_number"] - 1] / 5 and dis >= dis_thr and b["x1"] < b_["x1"]:
# merge
bxs[i]["x1"] = b_["x1"]
bxs[i]["top"] = (b["top"] + b_["top"]) / 2
bxs[i]["bottom"] = (b["bottom"] + b_["bottom"]) / 2
bxs[i]["text"] += b_["text"]
bxs.pop(i + 1)
continue
i += 1
self.boxes = bxs self.boxes = bxs
def _naive_vertical_merge(self, zoomin=3): def _naive_vertical_merge(self, zoomin=3):
import math bxs = self._assign_column(self.boxes, zoomin)
bxs = Recognizer.sort_Y_firstly(self.boxes, np.median(self.mean_height) / 3)
column_width = np.median([b["x1"] - b["x0"] for b in self.boxes]) grouped = defaultdict(list)
if not column_width or math.isnan(column_width): for b in bxs:
column_width = self.mean_width[0] grouped[(b["page_number"], b.get("col_id", 0))].append(b)
self.column_num = int(self.page_images[0].size[0] / zoomin / column_width)
if column_width < self.page_images[0].size[0] / zoomin / self.column_num:
logging.info("Multi-column................... {} {}".format(column_width, self.page_images[0].size[0] / zoomin / self.column_num))
self.boxes = self.sort_X_by_page(self.boxes, column_width / self.column_num)
i = 0 merged_boxes = []
while i + 1 < len(bxs): for (pg, col), bxs in grouped.items():
b = bxs[i] bxs = sorted(bxs, key=lambda x: (x["top"], x["x0"]))
b_ = bxs[i + 1] if not bxs:
if b["page_number"] < b_["page_number"] and re.match(r"[0-9 •一—-]+$", b["text"]):
bxs.pop(i)
continue continue
if not b["text"].strip():
bxs.pop(i) mh = self.mean_height[pg - 1] if self.mean_height else np.median([b["bottom"] - b["top"] for b in bxs]) or 10
continue
concatting_feats = [ i = 0
b["text"].strip()[-1] in ",;:'\",、‘“;:-", while i + 1 < len(bxs):
len(b["text"].strip()) > 1 and b["text"].strip()[-2] in ",;:'\",‘“、;:", b = bxs[i]
b_["text"].strip() and b_["text"].strip()[0] in "。;?!?”)),,、:", b_ = bxs[i + 1]
]
# features for not concating if b["page_number"] < b_["page_number"] and re.match(r"[0-9 •一—-]+$", b["text"]):
feats = [ bxs.pop(i)
b.get("layoutno", 0) != b_.get("layoutno", 0), continue
b["text"].strip()[-1] in "。?!?",
self.is_english and b["text"].strip()[-1] in ".!?", if not b["text"].strip():
b["page_number"] == b_["page_number"] and b_["top"] - b["bottom"] > self.mean_height[b["page_number"] - 1] * 1.5, bxs.pop(i)
b["page_number"] < b_["page_number"] and abs(b["x0"] - b_["x0"]) > self.mean_width[b["page_number"] - 1] * 4, continue
]
# split features if not b["text"].strip() or b.get("layoutno") != b_.get("layoutno"):
detach_feats = [b["x1"] < b_["x0"], b["x0"] > b_["x1"]] i += 1
if (any(feats) and not any(concatting_feats)) or any(detach_feats): continue
logging.debug(
"{} {} {} {}".format( if b_["top"] - b["bottom"] > mh * 1.5:
b["text"], i += 1
b_["text"], continue
any(feats),
any(concatting_feats), overlap = max(0, min(b["x1"], b_["x1"]) - max(b["x0"], b_["x0"]))
if overlap / max(1, min(b["x1"] - b["x0"], b_["x1"] - b_["x0"])) < 0.3:
i += 1
continue
concatting_feats = [
b["text"].strip()[-1] in ",;:'\",、‘“;:-",
len(b["text"].strip()) > 1 and b["text"].strip()[-2] in ",;:'\",‘“、;:",
b_["text"].strip() and b_["text"].strip()[0] in "。;?!?”)),,、:",
]
# features for not concating
feats = [
b.get("layoutno", 0) != b_.get("layoutno", 0),
b["text"].strip()[-1] in "。?!?",
self.is_english and b["text"].strip()[-1] in ".!?",
b["page_number"] == b_["page_number"] and b_["top"] - b["bottom"] > self.mean_height[b["page_number"] - 1] * 1.5,
b["page_number"] < b_["page_number"] and abs(b["x0"] - b_["x0"]) > self.mean_width[b["page_number"] - 1] * 4,
]
# split features
detach_feats = [b["x1"] < b_["x0"], b["x0"] > b_["x1"]]
if (any(feats) and not any(concatting_feats)) or any(detach_feats):
logging.debug(
"{} {} {} {}".format(
b["text"],
b_["text"],
any(feats),
any(concatting_feats),
)
) )
) i += 1
i += 1 continue
continue
# merge up and down b["text"] = (b["text"].rstrip() + " " + b_["text"].lstrip()).strip()
b["bottom"] = b_["bottom"] b["bottom"] = b_["bottom"]
b["text"] += b_["text"] b["x0"] = min(b["x0"], b_["x0"])
b["x0"] = min(b["x0"], b_["x0"]) b["x1"] = max(b["x1"], b_["x1"])
b["x1"] = max(b["x1"], b_["x1"]) bxs.pop(i + 1)
bxs.pop(i + 1)
self.boxes = bxs merged_boxes.extend(bxs)
self.boxes = sorted(merged_boxes, key=lambda x: (x["page_number"], x.get("col_id", 0), x["top"]))
def _final_reading_order_merge(self, zoomin=3):
if not self.boxes:
return
self.boxes = self._assign_column(self.boxes, zoomin=zoomin)
pages = defaultdict(lambda: defaultdict(list))
for b in self.boxes:
pg = b["page_number"]
col = b.get("col_id", 0)
pages[pg][col].append(b)
for pg in pages:
for col in pages[pg]:
pages[pg][col].sort(key=lambda x: (x["top"], x["x0"]))
new_boxes = []
for pg in sorted(pages.keys()):
for col in sorted(pages[pg].keys()):
new_boxes.extend(pages[pg][col])
self.boxes = new_boxes
def _concat_downward(self, concat_between_pages=True): def _concat_downward(self, concat_between_pages=True):
self.boxes = Recognizer.sort_Y_firstly(self.boxes, 0) self.boxes = Recognizer.sort_Y_firstly(self.boxes, 0)
@ -997,7 +1099,7 @@ class RAGFlowPdfParser:
self.__ocr(i + 1, img, chars, zoomin, id) self.__ocr(i + 1, img, chars, zoomin, id)
if callback and i % 6 == 5: if callback and i % 6 == 5:
callback(prog=(i + 1) * 0.6 / len(self.page_images), msg="") callback((i + 1) * 0.6 / len(self.page_images))
async def __img_ocr_launcher(): async def __img_ocr_launcher():
def __ocr_preprocess(): def __ocr_preprocess():
@ -1048,7 +1150,7 @@ class RAGFlowPdfParser:
def parse_into_bboxes(self, fnm, callback=None, zoomin=3): def parse_into_bboxes(self, fnm, callback=None, zoomin=3):
start = timer() start = timer()
self.__images__(fnm, zoomin) self.__images__(fnm, zoomin, callback=callback)
if callback: if callback:
callback(0.40, "OCR finished ({:.2f}s)".format(timer() - start)) callback(0.40, "OCR finished ({:.2f}s)".format(timer() - start))
@ -1074,7 +1176,6 @@ class RAGFlowPdfParser:
def insert_table_figures(tbls_or_figs, layout_type): def insert_table_figures(tbls_or_figs, layout_type):
def min_rectangle_distance(rect1, rect2): def min_rectangle_distance(rect1, rect2):
import math
pn1, left1, right1, top1, bottom1 = rect1 pn1, left1, right1, top1, bottom1 = rect1
pn2, left2, right2, top2, bottom2 = rect2 pn2, left2, right2, top2, bottom2 = rect2
if right1 >= left2 and right2 >= left1 and bottom1 >= top2 and bottom2 >= top1: if right1 >= left2 and right2 >= left1 and bottom1 >= top2 and bottom2 >= top1:
@ -1091,27 +1192,39 @@ class RAGFlowPdfParser:
dy = top1 - bottom2 dy = top1 - bottom2
else: else:
dy = 0 dy = 0
return math.sqrt(dx*dx + dy*dy)# + (pn2-pn1)*10000 return math.sqrt(dx * dx + dy * dy) # + (pn2-pn1)*10000
for (img, txt), poss in tbls_or_figs: for (img, txt), poss in tbls_or_figs:
bboxes = [(i, (b["page_number"], b["x0"], b["x1"], b["top"], b["bottom"])) for i, b in enumerate(self.boxes)] bboxes = [(i, (b["page_number"], b["x0"], b["x1"], b["top"], b["bottom"])) for i, b in enumerate(self.boxes)]
dists = [(min_rectangle_distance((pn, left, right, top+self.page_cum_height[pn], bott+self.page_cum_height[pn]), rect),i) for i, rect in bboxes for pn, left, right, top, bott in poss] dists = [
(min_rectangle_distance((pn, left, right, top + self.page_cum_height[pn], bott + self.page_cum_height[pn]), rect), i) for i, rect in bboxes for pn, left, right, top, bott in poss
]
min_i = np.argmin(dists, axis=0)[0] min_i = np.argmin(dists, axis=0)[0]
min_i, rect = bboxes[dists[min_i][-1]] min_i, rect = bboxes[dists[min_i][-1]]
if isinstance(txt, list): if isinstance(txt, list):
txt = "\n".join(txt) txt = "\n".join(txt)
pn, left, right, top, bott = poss[0] pn, left, right, top, bott = poss[0]
if self.boxes[min_i]["bottom"] < top+self.page_cum_height[pn]: if self.boxes[min_i]["bottom"] < top + self.page_cum_height[pn]:
min_i += 1 min_i += 1
self.boxes.insert(min_i, { self.boxes.insert(
"page_number": pn+1, "x0": left, "x1": right, "top": top+self.page_cum_height[pn], "bottom": bott+self.page_cum_height[pn], "layout_type": layout_type, "text": txt, "image": img, min_i,
"positions": [[pn+1, int(left), int(right), int(top), int(bott)]] {
}) "page_number": pn + 1,
"x0": left,
"x1": right,
"top": top + self.page_cum_height[pn],
"bottom": bott + self.page_cum_height[pn],
"layout_type": layout_type,
"text": txt,
"image": img,
"positions": [[pn + 1, int(left), int(right), int(top), int(bott)]],
},
)
for b in self.boxes: for b in self.boxes:
b["position_tag"] = self._line_tag(b, zoomin) b["position_tag"] = self._line_tag(b, zoomin)
b["image"] = self.crop(b["position_tag"], zoomin) b["image"] = self.crop(b["position_tag"], zoomin)
b["positions"] = [[pos[0][-1]+1, *pos[1:]] for pos in RAGFlowPdfParser.extract_positions(b["position_tag"])] b["positions"] = [[pos[0][-1] + 1, *pos[1:]] for pos in RAGFlowPdfParser.extract_positions(b["position_tag"])]
insert_table_figures(tbls, "table") insert_table_figures(tbls, "table")
insert_table_figures(figs, "figure") insert_table_figures(figs, "figure")
@ -1129,7 +1242,7 @@ class RAGFlowPdfParser:
for tag in re.findall(r"@@[0-9-]+\t[0-9.\t]+##", txt): for tag in re.findall(r"@@[0-9-]+\t[0-9.\t]+##", txt):
pn, left, right, top, bottom = tag.strip("#").strip("@").split("\t") pn, left, right, top, bottom = tag.strip("#").strip("@").split("\t")
left, right, top, bottom = float(left), float(right), float(top), float(bottom) left, right, top, bottom = float(left), float(right), float(top), float(bottom)
poss.append(([int(p) - 1 for p in pn.split("-")], int(left), int(right), int(top), int(bottom))) poss.append(([int(p) - 1 for p in pn.split("-")], left, right, top, bottom))
return poss return poss
def crop(self, text, ZM=3, need_position=False): def crop(self, text, ZM=3, need_position=False):
@ -1274,12 +1387,16 @@ class VisionParser(RAGFlowPdfParser):
prompt=vision_llm_describe_prompt(page=pdf_page_num + 1), prompt=vision_llm_describe_prompt(page=pdf_page_num + 1),
callback=callback, callback=callback,
) )
if kwargs.get("callback"): if kwargs.get("callback"):
kwargs["callback"](idx * 1.0 / len(self.page_images), f"Processed: {idx + 1}/{len(self.page_images)}") kwargs["callback"](idx * 1.0 / len(self.page_images), f"Processed: {idx + 1}/{len(self.page_images)}")
if text: if text:
width, height = self.page_images[idx].size width, height = self.page_images[idx].size
all_docs.append((text, f"{pdf_page_num + 1} 0 {width / zoomin} 0 {height / zoomin}")) all_docs.append((
text,
f"@@{pdf_page_num + 1}\t{0.0:.1f}\t{width / zoomin:.1f}\t{0.0:.1f}\t{height / zoomin:.1f}##"
))
return all_docs, [] return all_docs, []

View File

@ -84,7 +84,8 @@ def load_model(model_dir, nm, device_id: int | None = None):
def cuda_is_available(): def cuda_is_available():
try: try:
import torch import torch
if torch.cuda.is_available() and torch.cuda.device_count() > device_id: target_id = 0 if device_id is None else device_id
if torch.cuda.is_available() and torch.cuda.device_count() > target_id:
return True return True
except Exception: except Exception:
return False return False
@ -100,10 +101,13 @@ def load_model(model_dir, nm, device_id: int | None = None):
# Shrink GPU memory after execution # Shrink GPU memory after execution
run_options = ort.RunOptions() run_options = ort.RunOptions()
if cuda_is_available(): if cuda_is_available():
gpu_mem_limit_mb = int(os.environ.get("OCR_GPU_MEM_LIMIT_MB", "2048"))
arena_strategy = os.environ.get("OCR_ARENA_EXTEND_STRATEGY", "kNextPowerOfTwo")
provider_device_id = 0 if device_id is None else device_id
cuda_provider_options = { cuda_provider_options = {
"device_id": device_id, # Use specific GPU "device_id": provider_device_id, # Use specific GPU
"gpu_mem_limit": 512 * 1024 * 1024, # Limit gpu memory "gpu_mem_limit": max(gpu_mem_limit_mb, 0) * 1024 * 1024,
"arena_extend_strategy": "kNextPowerOfTwo", # gpu memory allocation strategy "arena_extend_strategy": arena_strategy, # gpu memory allocation strategy
} }
sess = ort.InferenceSession( sess = ort.InferenceSession(
model_file_path, model_file_path,
@ -111,8 +115,8 @@ def load_model(model_dir, nm, device_id: int | None = None):
providers=['CUDAExecutionProvider'], providers=['CUDAExecutionProvider'],
provider_options=[cuda_provider_options] provider_options=[cuda_provider_options]
) )
run_options.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:" + str(device_id)) run_options.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:" + str(provider_device_id))
logging.info(f"load_model {model_file_path} uses GPU") logging.info(f"load_model {model_file_path} uses GPU (device {provider_device_id}, gpu_mem_limit={cuda_provider_options['gpu_mem_limit']}, arena_strategy={arena_strategy})")
else: else:
sess = ort.InferenceSession( sess = ort.InferenceSession(
model_file_path, model_file_path,
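
Note: the hunk above reads the OCR session's GPU memory limit and arena strategy from the environment variables `OCR_GPU_MEM_LIMIT_MB` (default 2048) and `OCR_ARENA_EXTEND_STRATEGY` (default `kNextPowerOfTwo`). A minimal sketch of overriding them before launching the server; the values below are illustrative only, not recommendations:

```bash
# Illustrative overrides; defaults are 2048 MB and kNextPowerOfTwo.
export OCR_GPU_MEM_LIMIT_MB=4096
export OCR_ARENA_EXTEND_STRATEGY=kSameAsRequested
```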

View File

@ -77,7 +77,7 @@ services:
container_name: ragflow-infinity container_name: ragflow-infinity
profiles: profiles:
- infinity - infinity
image: infiniflow/infinity:v0.6.0-dev5 image: infiniflow/infinity:v0.6.0-dev7
volumes: volumes:
- infinity_data:/var/infinity - infinity_data:/var/infinity
- ./infinity_conf.toml:/infinity_conf.toml - ./infinity_conf.toml:/infinity_conf.toml

View File

@ -98,7 +98,7 @@ Where:
- `mcp-host`: The MCP server's host address. - `mcp-host`: The MCP server's host address.
- `mcp-port`: The MCP server's listening port. - `mcp-port`: The MCP server's listening port.
- `mcp-base_url`: The address of the running RAGFlow server. - `mcp-base-url`: The address of the running RAGFlow server.
- `mcp-script-path`: The file path to the MCP servers main script. - `mcp-script-path`: The file path to the MCP servers main script.
- `mcp-mode`: The launch mode. - `mcp-mode`: The launch mode.
- `self-host`: (default) self-host mode. - `self-host`: (default) self-host mode.

View File

@ -21,6 +21,10 @@ Ensure that your metadata is in JSON format; otherwise, your updates will not be
![Input metadata](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/input_metadata.jpg) ![Input metadata](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/input_metadata.jpg)
## Related APIs
[Retrieve chunks](../../references/http_api_reference.md#retrieve-chunks)
## Frequently asked questions ## Frequently asked questions
### Can I set metadata for multiple documents at once? ### Can I set metadata for multiple documents at once?

View File

@ -0,0 +1,360 @@
# Admin CLI and Admin Service
The Admin CLI and Admin Service form a client-server suite for RAGFlow system administration. The Admin CLI is an interactive command-line client that sends instructions to the Admin Service and displays the results in real time. Together they provide real-time monitoring of system status, with visibility into the RAGFlow server and its dependent components, including MySQL, Elasticsearch, Redis, and MinIO. In administrator mode, they also provide user management: viewing users and performing critical operations such as user creation, password updates, activation status changes, and complete deletion of a user's data, even when the corresponding web interface features are disabled.
## Starting the Admin Service
1. Before starting the Admin Service, make sure the RAGFlow system is already running.
2. Switch to the ragflow/ directory and run the service script:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
python admin/admin_server.py
```
The service starts and listens for incoming CLI connections on the configured port. The default port is 9381.
## Using the Admin CLI
1. Ensure the Admin Service is running.
2. Launch the CLI client:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
python admin/admin_client.py -h 0.0.0.0 -p 9381
```
Enter the superuser's password to log in. The default password is `admin`.
## Supported Commands
Commands are case-insensitive and must be terminated with a semicolon (`;`), as shown in the example below.
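Because commands are case-insensitive, for instance, the following two invocations are treated identically (output omitted):
```
admin> list services;
admin> LIST SERVICES;
```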
### Service Management Commands
`LIST SERVICES;`
- Lists all available services within the RAGFlow system.
- [Example](#example-list-services)
`SHOW SERVICE <id>;`
- Shows detailed status information for the service identified by `<id>`.
- [Example](#example-show-service)
### User Management Commands
`LIST USERS;`
- Lists all users known to the system.
- [Example](#example-list-users)
`SHOW USER <username>;`
- Shows details and permissions for the specified user, where `<username>` is the user's **email**. It must be enclosed in single or double quotes.
- [Example](#example-show-user)
`CREATE USER <username> <password>;`
- Creates a user with the given username and password. Both must be enclosed in single or double quotes.
- [Example](#example-create-user)
`DROP USER <username>;`
- Removes the specified user from the system. Use with caution.
- [Example](#example-drop-user)
`ALTER USER PASSWORD <username> <new_password>;`
- Changes the password for the specified user.
- [Example](#example-alter-user-password)
`ALTER USER ACTIVE <username> <on/off>;`
- Sets the specified user's active status to on or off.
- [Example](#example-alter-user-active)
### Data and Agent Commands
`LIST DATASETS OF <username>;`
- Lists the datasets associated with the specified user.
- [Example](#example-list-datasets-of-user)
`LIST AGENTS OF <username>;`
- Lists the agents associated with the specified user.
- [Example](#example-list-agents-of-user)
### Meta-Commands
- `\?` or `\help`: Shows help information for the available commands.
- `\q` or `\quit`: Exits the CLI application.
- [Example](#example-meta-commands)
### Examples
<span id="example-list-services"></span>
- List all available services.
```
admin> list services;
command: list services;
Listing all services
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| extra | host | id | name | port | service_type |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| {} | 0.0.0.0 | 0 | ragflow_0 | 9380 | ragflow_server |
| {'meta_type': 'mysql', 'password': 'infini_rag_flow', 'username': 'root'} | localhost | 1 | mysql | 5455 | meta_data |
| {'password': 'infini_rag_flow', 'store_type': 'minio', 'user': 'rag_flow'} | localhost | 2 | minio | 9000 | file_store |
| {'password': 'infini_rag_flow', 'retrieval_type': 'elasticsearch', 'username': 'elastic'} | localhost | 3 | elasticsearch | 1200 | retrieval |
| {'db_name': 'default_db', 'retrieval_type': 'infinity'} | localhost | 4 | infinity | 23817 | retrieval |
| {'database': 1, 'mq_type': 'redis', 'password': 'infini_rag_flow'} | localhost | 5 | redis | 6379 | message_queue |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
```
<span id="example-show-service"></span>
- Show ragflow_server.
```
admin> show service 0;
command: show service 0;
Showing service: 0
Service ragflow_0 is alive. Detail:
Confirm elapsed: 26.0 ms.
```
- Show mysql.
```
admin> show service 1;
command: show service 1;
Showing service: 1
Service mysql is alive. Detail:
+---------+----------+------------------+------+------------------+------------------------+-------+-----------------+
| command | db | host | id | info | state | time | user |
+---------+----------+------------------+------+------------------+------------------------+-------+-----------------+
| Daemon | None | localhost | 5 | None | Waiting on empty queue | 16111 | event_scheduler |
| Sleep | rag_flow | 172.18.0.1:40046 | 1610 | None | | 2 | root |
| Query | rag_flow | 172.18.0.1:35882 | 1629 | SHOW PROCESSLIST | init | 0 | root |
+---------+----------+------------------+------+------------------+------------------------+-------+-----------------+
```
- Show minio.
```
admin> show service 2;
command: show service 2;
Showing service: 2
Service minio is alive. Detail:
Confirm elapsed: 2.1 ms.
```
- Show elasticsearch.
```
admin> show service 3;
command: show service 3;
Showing service: 3
Service elasticsearch is alive. Detail:
+----------------+------+--------------+---------+----------------+--------------+---------------+--------------+------------------------------+----------------------------+-----------------+-------+---------------+---------+-------------+---------------------+--------+------------+--------------------+
| cluster_name | docs | docs_deleted | indices | indices_shards | jvm_heap_max | jvm_heap_used | jvm_versions | mappings_deduplicated_fields | mappings_deduplicated_size | mappings_fields | nodes | nodes_version | os_mem | os_mem_used | os_mem_used_percent | status | store_size | total_dataset_size |
+----------------+------+--------------+---------+----------------+--------------+---------------+--------------+------------------------------+----------------------------+-----------------+-------+---------------+---------+-------------+---------------------+--------+------------+--------------------+
| docker-cluster | 717 | 86 | 37 | 42 | 3.76 GB | 1.74 GB | 21.0.1+12-29 | 6575 | 48.0 KB | 8521 | 1 | ['8.11.3'] | 7.52 GB | 4.55 GB | 61 | green | 4.60 MB | 4.60 MB |
+----------------+------+--------------+---------+----------------+--------------+---------------+--------------+------------------------------+----------------------------+-----------------+-------+---------------+---------+-------------+---------------------+--------+------------+--------------------+
```
- Show infinity.
```
admin> show service 4;
command: show service 4;
Showing service: 4
Fail to show service, code: 500, message: Infinity is not in use.
```
- Show redis.
```
admin> show service 5;
command: show service 5;
Showing service: 5
Service redis is alive. Detail:
+-----------------+-------------------+---------------------------+-------------------------+---------------+-------------+--------------------------+---------------------+-------------+
| blocked_clients | connected_clients | instantaneous_ops_per_sec | mem_fragmentation_ratio | redis_version | server_mode | total_commands_processed | total_system_memory | used_memory |
+-----------------+-------------------+---------------------------+-------------------------+---------------+-------------+--------------------------+---------------------+-------------+
| 0 | 2 | 1 | 10.41 | 7.2.4 | standalone | 10446 | 30.84G | 1.10M |
+-----------------+-------------------+---------------------------+-------------------------+---------------+-------------+--------------------------+---------------------+-------------+
```
<span id="example-list-users"></span>
- List all users.
```
admin> list users;
command: list users;
Listing all users
+-------------------------------+----------------------+-----------+----------+
| create_date | email | is_active | nickname |
+-------------------------------+----------------------+-----------+----------+
| Mon, 22 Sep 2025 10:59:04 GMT | admin@ragflow.io | 1 | admin |
| Sun, 14 Sep 2025 17:36:27 GMT | lynn_inf@hotmail.com | 1 | Lynn |
+-------------------------------+----------------------+-----------+----------+
```
<span id="example-show-user"></span>
- Show the specified user.
```
admin> show user "admin@ragflow.io";
command: show user "admin@ragflow.io";
Showing user: admin@ragflow.io
+-------------------------------+------------------+-----------+--------------+------------------+--------------+----------+-----------------+---------------+--------+-------------------------------+
| create_date | email | is_active | is_anonymous | is_authenticated | is_superuser | language | last_login_time | login_channel | status | update_date |
+-------------------------------+------------------+-----------+--------------+------------------+--------------+----------+-----------------+---------------+--------+-------------------------------+
| Mon, 22 Sep 2025 10:59:04 GMT | admin@ragflow.io | 1 | 0 | 1 | True | Chinese | None | None | 1 | Mon, 22 Sep 2025 10:59:04 GMT |
+-------------------------------+------------------+-----------+--------------+------------------+--------------+----------+-----------------+---------------+--------+-------------------------------+
```
<span id="example-create-user"></span>
- Create a new user.
```
admin> create user "example@ragflow.io" "psw";
command: create user "example@ragflow.io" "psw";
Create user: example@ragflow.io, password: psw, role: user
+----------------------------------+--------------------+----------------------------------+--------------+---------------+----------+
| access_token | email | id | is_superuser | login_channel | nickname |
+----------------------------------+--------------------+----------------------------------+--------------+---------------+----------+
| 5cdc6d1e9df111f099b543aee592c6bf | example@ragflow.io | 5cdc6ca69df111f099b543aee592c6bf | False | password | |
+----------------------------------+--------------------+----------------------------------+--------------+---------------+----------+
```
<span id="example-alter-user-password"></span>
- Alter user password.
```
admin> alter user password "example@ragflow.io" "newpsw";
command: alter user password "example@ragflow.io" "newpsw";
Alter user: example@ragflow.io, password: newpsw
Password updated successfully!
```
<span id="example-alter-user-active"></span>
- Alter a user's active status (turn it off).
```
admin> alter user active "example@ragflow.io" off;
command: alter user active "example@ragflow.io" off;
Alter user example@ragflow.io activate status, turn off.
Turn off user activate status successfully!
```
<span id="example-drop-user"></span>
- Drop user.
```
admin> Drop user "example@ragflow.io";
command: Drop user "example@ragflow.io";
Drop user: example@ragflow.io
Successfully deleted user. Details:
Start to delete owned tenant.
- Deleted 2 tenant-LLM records.
- Deleted 0 langfuse records.
- Deleted 1 tenant.
- Deleted 1 user-tenant records.
- Deleted 1 user.
Delete done!
```
The user's owned data is deleted at the same time.
<span id="example-list-datasets-of-user"></span>
- List the specified user's datasets.
```
admin> list datasets of "lynn_inf@hotmail.com";
command: list datasets of "lynn_inf@hotmail.com";
Listing all datasets of user: lynn_inf@hotmail.com
+-----------+-------------------------------+---------+----------+---------------+------------+--------+-----------+-------------------------------+
| chunk_num | create_date | doc_num | language | name | permission | status | token_num | update_date |
+-----------+-------------------------------+---------+----------+---------------+------------+--------+-----------+-------------------------------+
| 29 | Mon, 15 Sep 2025 11:56:59 GMT | 12 | Chinese | test_dataset | me | 1 | 12896 | Fri, 19 Sep 2025 17:50:58 GMT |
| 4 | Sun, 28 Sep 2025 11:49:31 GMT | 6 | Chinese | dataset_share | team | 1 | 1121 | Sun, 28 Sep 2025 14:41:03 GMT |
+-----------+-------------------------------+---------+----------+---------------+------------+--------+-----------+-------------------------------+
```
<span id="example-list-agents-of-user"></span>
- List the specified user's agents.
```
admin> list agents of "lynn_inf@hotmail.com";
command: list agents of "lynn_inf@hotmail.com";
Listing all agents of user: lynn_inf@hotmail.com
+-----------------+-------------+------------+-----------------+
| canvas_category | canvas_type | permission | title |
+-----------------+-------------+------------+-----------------+
| agent_canvas | None | team | research_helper |
+-----------------+-------------+------------+-----------------+
```
<span id="example-meta-commands"></span>
- Show help information.
```
admin> \help
command: \help
Commands:
LIST SERVICES
SHOW SERVICE <service>
STARTUP SERVICE <service>
SHUTDOWN SERVICE <service>
RESTART SERVICE <service>
LIST USERS
SHOW USER <user>
DROP USER <user>
CREATE USER <user> <password>
ALTER USER PASSWORD <user> <new_password>
ALTER USER ACTIVE <user> <on/off>
LIST DATASETS OF <user>
LIST AGENTS OF <user>
Meta Commands:
\?, \h, \help Show this help
\q, \quit, \exit Quit the CLI
```
- Exit
```
admin> \q
command: \q
Goodbye!
```

View File

@ -830,7 +830,8 @@ Success:
"update_time": 1728533243536, "update_time": 1728533243536,
"vector_similarity_weight": 0.3 "vector_similarity_weight": 0.3
} }
] ],
"total": 1
} }
``` ```
@ -1280,7 +1281,7 @@ Success:
"update_time": 1728897061948 "update_time": 1728897061948
} }
], ],
"total": 1 "total_datasets": 1
} }
} }
``` ```
@ -1822,7 +1823,21 @@ curl --request POST \
{ {
"question": "What is advantage of ragflow?", "question": "What is advantage of ragflow?",
"dataset_ids": ["b2a62730759d11ef987d0242ac120004"], "dataset_ids": ["b2a62730759d11ef987d0242ac120004"],
"document_ids": ["77df9ef4759a11ef8bdd0242ac120004"] "document_ids": ["77df9ef4759a11ef8bdd0242ac120004"],
"metadata_condition": {
"conditions": [
{
"name": "author",
"comparison_operator": "=",
"value": "Toby"
},
{
"name": "url",
"comparison_operator": "not contains",
"value": "amd"
}
]
}
}' }'
``` ```
@ -1857,7 +1872,25 @@ curl --request POST \
- `"cross_languages"`: (*Body parameter*) `list[string]` - `"cross_languages"`: (*Body parameter*) `list[string]`
The languages that should be translated into, in order to achieve keywords retrievals in different languages. The languages that should be translated into, in order to achieve keywords retrievals in different languages.
- `"metadata_condition"`: (*Body parameter*), `object` - `"metadata_condition"`: (*Body parameter*), `object`
The metadata condition for filtering chunks. The metadata condition used for filtering chunks:
- `"conditions"`: (*Body parameter*), `array`
A list of metadata filter conditions.
- `"name"`: `string` - The metadata field name to filter by, e.g., `"author"`, `"company"`, `"url"`. Make sure the field has been set on the target documents before filtering on it. See [Set metadata](../guides/dataset/set_metadata.md) for details.
- `"comparison_operator"`: `string` - The comparison operator. One of:
- `"contains"`
- `"not contains"`
- `"start with"`
- `"empty"`
- `"not empty"`
- `"="`
- `"≠"`
- `">"`
- `"<"`
- `"≥"`
- `"≤"`
- `"value"`: `string` - The value to compare against. A minimal example of a complete `metadata_condition` object follows this list.
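For reference, a minimal sketch of a complete `metadata_condition` object built from the fields described above; the field name and value are placeholders, not defaults:

```json
{
  "metadata_condition": {
    "conditions": [
      {
        "name": "author",
        "comparison_operator": "contains",
        "value": "Toby"
      }
    ]
  }
}
```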
#### Response #### Response
Success: Success:

View File

@ -33,7 +33,7 @@ A complete list of models supported by RAGFlow, which will continue to expand.
| Jina | | :heavy_check_mark: | :heavy_check_mark: | | | | | Jina | | :heavy_check_mark: | :heavy_check_mark: | | | |
| LeptonAI | :heavy_check_mark: | | | | | | | LeptonAI | :heavy_check_mark: | | | | | |
| LocalAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | | | LocalAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| LM-Studio | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | | LM-Studio | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| MiniMax | :heavy_check_mark: | | | | | | | MiniMax | :heavy_check_mark: | | | | | |
| Mistral | :heavy_check_mark: | :heavy_check_mark: | | | | | | Mistral | :heavy_check_mark: | :heavy_check_mark: | | | | |
| ModelScope | :heavy_check_mark: | | | | | | | ModelScope | :heavy_check_mark: | | | | | |
@ -66,6 +66,7 @@ A complete list of models supported by RAGFlow, which will continue to expand.
| DeepInfra | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | | DeepInfra | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: |
| 302.AI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | | 302.AI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| CometAPI | :heavy_check_mark: | :heavy_check_mark: | | | | | | CometAPI | :heavy_check_mark: | :heavy_check_mark: | | | | |
| DeerAPI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | :heavy_check_mark: |
```mdx-code-block ```mdx-code-block
</APITable> </APITable>

View File

@ -6,7 +6,6 @@
# dependencies = [ # dependencies = [
# "huggingface-hub", # "huggingface-hub",
# "nltk", # "nltk",
# "argparse",
# ] # ]
# /// # ///
@ -17,7 +16,7 @@ import os
import urllib.request import urllib.request
import argparse import argparse
def get_urls(use_china_mirrors=False) -> Union[str, list[str]]: def get_urls(use_china_mirrors=False) -> list[Union[str, list[str]]]:
if use_china_mirrors: if use_china_mirrors:
return [ return [
"http://mirrors.tuna.tsinghua.edu.cn/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb", "http://mirrors.tuna.tsinghua.edu.cn/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb",

View File

@ -60,7 +60,7 @@ class EntityResolution(Extractor):
self._llm = llm_invoker self._llm = llm_invoker
self._resolution_prompt = ENTITY_RESOLUTION_PROMPT self._resolution_prompt = ENTITY_RESOLUTION_PROMPT
self._record_delimiter_key = "record_delimiter" self._record_delimiter_key = "record_delimiter"
self._entity_index_dilimiter_key = "entity_index_delimiter" self._entity_index_delimiter_key = "entity_index_delimiter"
self._resolution_result_delimiter_key = "resolution_result_delimiter" self._resolution_result_delimiter_key = "resolution_result_delimiter"
self._input_text_key = "input_text" self._input_text_key = "input_text"
@ -77,7 +77,7 @@ class EntityResolution(Extractor):
**prompt_variables, **prompt_variables,
self._record_delimiter_key: prompt_variables.get(self._record_delimiter_key) self._record_delimiter_key: prompt_variables.get(self._record_delimiter_key)
or DEFAULT_RECORD_DELIMITER, or DEFAULT_RECORD_DELIMITER,
self._entity_index_dilimiter_key: prompt_variables.get(self._entity_index_dilimiter_key) self._entity_index_delimiter_key: prompt_variables.get(self._entity_index_delimiter_key)
or DEFAULT_ENTITY_INDEX_DELIMITER, or DEFAULT_ENTITY_INDEX_DELIMITER,
self._resolution_result_delimiter_key: prompt_variables.get(self._resolution_result_delimiter_key) self._resolution_result_delimiter_key: prompt_variables.get(self._resolution_result_delimiter_key)
or DEFAULT_RESOLUTION_RESULT_DELIMITER, or DEFAULT_RESOLUTION_RESULT_DELIMITER,
@ -185,7 +185,7 @@ class EntityResolution(Extractor):
result = self._process_results(len(candidate_resolution_i[1]), response, result = self._process_results(len(candidate_resolution_i[1]), response,
self.prompt_variables.get(self._record_delimiter_key, self.prompt_variables.get(self._record_delimiter_key,
DEFAULT_RECORD_DELIMITER), DEFAULT_RECORD_DELIMITER),
self.prompt_variables.get(self._entity_index_dilimiter_key, self.prompt_variables.get(self._entity_index_delimiter_key,
DEFAULT_ENTITY_INDEX_DELIMITER), DEFAULT_ENTITY_INDEX_DELIMITER),
self.prompt_variables.get(self._resolution_result_delimiter_key, self.prompt_variables.get(self._resolution_result_delimiter_key,
DEFAULT_RESOLUTION_RESULT_DELIMITER)) DEFAULT_RESOLUTION_RESULT_DELIMITER))

View File

@ -55,7 +55,7 @@ async def run_graphrag(
start = trio.current_time() start = trio.current_time()
tenant_id, kb_id, doc_id = row["tenant_id"], str(row["kb_id"]), row["doc_id"] tenant_id, kb_id, doc_id = row["tenant_id"], str(row["kb_id"]), row["doc_id"]
chunks = [] chunks = []
for d in settings.retrievaler.chunk_list(doc_id, tenant_id, [kb_id], fields=["content_with_weight", "doc_id"], sort_by_position=True): for d in settings.retriever.chunk_list(doc_id, tenant_id, [kb_id], fields=["content_with_weight", "doc_id"], sort_by_position=True):
chunks.append(d["content_with_weight"]) chunks.append(d["content_with_weight"])
with trio.fail_after(max(120, len(chunks) * 60 * 10) if enable_timeout_assertion else 10000000000): with trio.fail_after(max(120, len(chunks) * 60 * 10) if enable_timeout_assertion else 10000000000):
@ -170,7 +170,7 @@ async def run_graphrag_for_kb(
chunks = [] chunks = []
current_chunk = "" current_chunk = ""
for d in settings.retrievaler.chunk_list( for d in settings.retriever.chunk_list(
doc_id, doc_id,
tenant_id, tenant_id,
[kb_id], [kb_id],

View File

@ -62,7 +62,7 @@ async def main():
chunks = [ chunks = [
d["content_with_weight"] d["content_with_weight"]
for d in settings.retrievaler.chunk_list( for d in settings.retriever.chunk_list(
args.doc_id, args.doc_id,
args.tenant_id, args.tenant_id,
[kb_id], [kb_id],

View File

@ -63,7 +63,7 @@ async def main():
chunks = [ chunks = [
d["content_with_weight"] d["content_with_weight"]
for d in settings.retrievaler.chunk_list( for d in settings.retriever.chunk_list(
args.doc_id, args.doc_id,
args.tenant_id, args.tenant_id,
[kb_id], [kb_id],

View File

@ -92,10 +92,7 @@ def dict_has_keys_with_types(data: dict, expected_fields: list[tuple[str, type]]
def get_llm_cache(llmnm, txt, history, genconf): def get_llm_cache(llmnm, txt, history, genconf):
hasher = xxhash.xxh64() hasher = xxhash.xxh64()
hasher.update(str(llmnm).encode("utf-8")) hasher.update((str(llmnm)+str(txt)+str(history)+str(genconf)).encode("utf-8"))
hasher.update(str(txt).encode("utf-8"))
hasher.update(str(history).encode("utf-8"))
hasher.update(str(genconf).encode("utf-8"))
k = hasher.hexdigest() k = hasher.hexdigest()
bin = REDIS_CONN.get(k) bin = REDIS_CONN.get(k)
@ -106,11 +103,7 @@ def get_llm_cache(llmnm, txt, history, genconf):
def set_llm_cache(llmnm, txt, v, history, genconf): def set_llm_cache(llmnm, txt, v, history, genconf):
hasher = xxhash.xxh64() hasher = xxhash.xxh64()
hasher.update(str(llmnm).encode("utf-8")) hasher.update((str(llmnm)+str(txt)+str(history)+str(genconf)).encode("utf-8"))
hasher.update(str(txt).encode("utf-8"))
hasher.update(str(history).encode("utf-8"))
hasher.update(str(genconf).encode("utf-8"))
k = hasher.hexdigest() k = hasher.hexdigest()
REDIS_CONN.set(k, v.encode("utf-8"), 24 * 3600) REDIS_CONN.set(k, v.encode("utf-8"), 24 * 3600)
@ -341,7 +334,7 @@ def get_relation(tenant_id, kb_id, from_ent_name, to_ent_name, size=1):
ents = list(set(ents)) ents = list(set(ents))
conds = {"fields": ["content_with_weight"], "size": size, "from_entity_kwd": ents, "to_entity_kwd": ents, "knowledge_graph_kwd": ["relation"]} conds = {"fields": ["content_with_weight"], "size": size, "from_entity_kwd": ents, "to_entity_kwd": ents, "knowledge_graph_kwd": ["relation"]}
res = [] res = []
es_res = settings.retrievaler.search(conds, search.index_name(tenant_id), [kb_id] if isinstance(kb_id, str) else kb_id) es_res = settings.retriever.search(conds, search.index_name(tenant_id), [kb_id] if isinstance(kb_id, str) else kb_id)
for id in es_res.ids: for id in es_res.ids:
try: try:
if size == 1: if size == 1:
@ -398,7 +391,7 @@ async def does_graph_contains(tenant_id, kb_id, doc_id):
async def get_graph_doc_ids(tenant_id, kb_id) -> list[str]: async def get_graph_doc_ids(tenant_id, kb_id) -> list[str]:
conds = {"fields": ["source_id"], "removed_kwd": "N", "size": 1, "knowledge_graph_kwd": ["graph"]} conds = {"fields": ["source_id"], "removed_kwd": "N", "size": 1, "knowledge_graph_kwd": ["graph"]}
res = await trio.to_thread.run_sync(lambda: settings.retrievaler.search(conds, search.index_name(tenant_id), [kb_id])) res = await trio.to_thread.run_sync(lambda: settings.retriever.search(conds, search.index_name(tenant_id), [kb_id]))
doc_ids = [] doc_ids = []
if res.total == 0: if res.total == 0:
return doc_ids return doc_ids
@ -409,7 +402,7 @@ async def get_graph_doc_ids(tenant_id, kb_id) -> list[str]:
async def get_graph(tenant_id, kb_id, exclude_rebuild=None): async def get_graph(tenant_id, kb_id, exclude_rebuild=None):
conds = {"fields": ["content_with_weight", "removed_kwd", "source_id"], "size": 1, "knowledge_graph_kwd": ["graph"]} conds = {"fields": ["content_with_weight", "removed_kwd", "source_id"], "size": 1, "knowledge_graph_kwd": ["graph"]}
res = await trio.to_thread.run_sync(settings.retrievaler.search, conds, search.index_name(tenant_id), [kb_id]) res = await trio.to_thread.run_sync(settings.retriever.search, conds, search.index_name(tenant_id), [kb_id])
if not res.total == 0: if not res.total == 0:
for id in res.ids: for id in res.ids:
try: try:
@ -562,7 +555,7 @@ def merge_tuples(list1, list2):
async def get_entity_type2samples(idxnms, kb_ids: list): async def get_entity_type2samples(idxnms, kb_ids: list):
es_res = await trio.to_thread.run_sync(lambda: settings.retrievaler.search({"knowledge_graph_kwd": "ty2ents", "kb_id": kb_ids, "size": 10000, "fields": ["content_with_weight"]}, idxnms, kb_ids)) es_res = await trio.to_thread.run_sync(lambda: settings.retriever.search({"knowledge_graph_kwd": "ty2ents", "kb_id": kb_ids, "size": 10000, "fields": ["content_with_weight"]}, idxnms, kb_ids))
res = defaultdict(list) res = defaultdict(list)
for id in es_res.ids: for id in es_res.ids:

View File

@ -96,7 +96,7 @@ ragflow:
infinity: infinity:
image: image:
repository: infiniflow/infinity repository: infiniflow/infinity
tag: v0.6.0-dev5 tag: v0.6.0-dev7
pullPolicy: IfNotPresent pullPolicy: IfNotPresent
pullSecrets: [] pullSecrets: []
storage: storage:

View File

@ -46,7 +46,7 @@ dependencies = [
"html-text==0.6.2", "html-text==0.6.2",
"httpx[socks]==0.27.2", "httpx[socks]==0.27.2",
"huggingface-hub>=0.25.0,<0.26.0", "huggingface-hub>=0.25.0,<0.26.0",
"infinity-sdk==0.6.0.dev5", "infinity-sdk==0.6.0.dev7",
"infinity-emb>=0.0.66,<0.0.67", "infinity-emb>=0.0.66,<0.0.67",
"itsdangerous==2.1.2", "itsdangerous==2.1.2",
"json-repair==0.35.0", "json-repair==0.35.0",
@ -119,7 +119,7 @@ dependencies = [
"graspologic>=3.4.1,<4.0.0", "graspologic>=3.4.1,<4.0.0",
"mini-racer>=0.12.4,<0.13.0", "mini-racer>=0.12.4,<0.13.0",
"pyodbc>=5.2.0,<6.0.0", "pyodbc>=5.2.0,<6.0.0",
"pyicu>=2.13.1,<3.0.0", "pyicu>=2.15.3,<3.0.0",
"flasgger>=0.9.7.1,<0.10.0", "flasgger>=0.9.7.1,<0.10.0",
"xxhash>=3.5.0,<4.0.0", "xxhash>=3.5.0,<4.0.0",
"trio>=0.29.0", "trio>=0.29.0",
@ -133,6 +133,8 @@ dependencies = [
"litellm>=1.74.15.post1", "litellm>=1.74.15.post1",
"flask-mail>=0.10.0", "flask-mail>=0.10.0",
"lark>=1.2.2", "lark>=1.2.2",
"mammoth>=1.11.0",
"markdownify>=1.2.0",
] ]
[project.optional-dependencies] [project.optional-dependencies]
@ -159,7 +161,7 @@ test = [
] ]
[[tool.uv.index]] [[tool.uv.index]]
url = "https://mirrors.aliyun.com/pypi/simple" url = "https://pypi.tuna.tsinghua.edu.cn/simple"
[tool.setuptools] [tool.setuptools]
packages = [ packages = [

View File

@ -256,6 +256,49 @@ class Docx(DocxParser):
tbls.append(((None, html), "")) tbls.append(((None, html), ""))
return new_line, tbls return new_line, tbls
def to_markdown(self, filename=None, binary=None, inline_images: bool = True):
"""
This function uses mammoth, licensed under the BSD 2-Clause License.
"""
import base64
import uuid
import mammoth
from markdownify import markdownify
docx_file = BytesIO(binary) if binary else open(filename, "rb")
def _convert_image_to_base64(image):
try:
with image.open() as image_file:
image_bytes = image_file.read()
encoded = base64.b64encode(image_bytes).decode("utf-8")
base64_url = f"data:{image.content_type};base64,{encoded}"
alt_name = f"img_{uuid.uuid4().hex[:8]}"
return {"src": base64_url, "alt": alt_name}
except Exception as e:
logging.warning(f"Failed to convert image to base64: {e}")
return {"src": "", "alt": "image"}
try:
if inline_images:
result = mammoth.convert_to_html(docx_file, convert_image=mammoth.images.img_element(_convert_image_to_base64))
else:
result = mammoth.convert_to_html(docx_file)
html = result.value
markdown_text = markdownify(html)
return markdown_text
finally:
if not binary:
docx_file.close()
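
A hedged usage sketch of the new `to_markdown` helper; the import path and the no-argument constructor are assumptions for illustration and are not taken from the diff:

```python
# Hypothetical usage sketch; the module path is an assumption.
from rag.app.naive import Docx

parser = Docx()  # assumes the parser can be constructed without arguments
markdown_text = parser.to_markdown(filename="example.docx", inline_images=False)
print(markdown_text[:200])  # preview the converted Markdown
```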
class Pdf(PdfParser): class Pdf(PdfParser):
def __init__(self): def __init__(self):
@ -285,7 +328,7 @@ class Pdf(PdfParser):
callback(0.65, "Table analysis ({:.2f}s)".format(timer() - start)) callback(0.65, "Table analysis ({:.2f}s)".format(timer() - start))
start = timer() start = timer()
self._text_merge() self._text_merge(zoomin=zoomin)
callback(0.67, "Text merged ({:.2f}s)".format(timer() - start)) callback(0.67, "Text merged ({:.2f}s)".format(timer() - start))
if separate_tables_figures: if separate_tables_figures:
@ -297,6 +340,7 @@ class Pdf(PdfParser):
tbls = self._extract_table_figure(True, zoomin, True, True) tbls = self._extract_table_figure(True, zoomin, True, True)
self._naive_vertical_merge() self._naive_vertical_merge()
self._concat_downward() self._concat_downward()
self._final_reading_order_merge()
# self._filter_forpages() # self._filter_forpages()
logging.info("layouts cost: {}s".format(timer() - first_start)) logging.info("layouts cost: {}s".format(timer() - first_start))
return [(b["text"], self._line_tag(b, zoomin)) for b in self.boxes], tbls return [(b["text"], self._line_tag(b, zoomin)) for b in self.boxes], tbls
@ -512,7 +556,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
callback(0.2, "Visual model detected. Attempting to enhance figure extraction...") callback(0.2, "Visual model detected. Attempting to enhance figure extraction...")
except Exception: except Exception:
vision_model = None vision_model = None
if vision_model: if vision_model:
# Process images for each section # Process images for each section
section_images = [] section_images = []

View File

@ -133,14 +133,14 @@ def label_question(question, kbs):
if tag_kb_ids: if tag_kb_ids:
all_tags = get_tags_from_cache(tag_kb_ids) all_tags = get_tags_from_cache(tag_kb_ids)
if not all_tags: if not all_tags:
all_tags = settings.retrievaler.all_tags_in_portion(kb.tenant_id, tag_kb_ids) all_tags = settings.retriever.all_tags_in_portion(kb.tenant_id, tag_kb_ids)
set_tags_to_cache(tags=all_tags, kb_ids=tag_kb_ids) set_tags_to_cache(tags=all_tags, kb_ids=tag_kb_ids)
else: else:
all_tags = json.loads(all_tags) all_tags = json.loads(all_tags)
tag_kbs = KnowledgebaseService.get_by_ids(tag_kb_ids) tag_kbs = KnowledgebaseService.get_by_ids(tag_kb_ids)
if not tag_kbs: if not tag_kbs:
return tags return tags
tags = settings.retrievaler.tag_query(question, tags = settings.retriever.tag_query(question,
list(set([kb.tenant_id for kb in tag_kbs])), list(set([kb.tenant_id for kb in tag_kbs])),
tag_kb_ids, tag_kb_ids,
all_tags, all_tags,

View File

@ -52,7 +52,7 @@ class Benchmark:
run = defaultdict(dict) run = defaultdict(dict)
query_list = list(qrels.keys()) query_list = list(qrels.keys())
for query in query_list: for query in query_list:
ranks = settings.retrievaler.retrieval(query, self.embd_mdl, self.tenant_id, [self.kb.id], 1, 30, ranks = settings.retriever.retrieval(query, self.embd_mdl, self.tenant_id, [self.kb.id], 1, 30,
0.0, self.vector_similarity_weight) 0.0, self.vector_similarity_weight)
if len(ranks["chunks"]) == 0: if len(ranks["chunks"]) == 0:
print(f"deleted query: {query}") print(f"deleted query: {query}")

View File

@ -1,299 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import random
import trio
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from deepdoc.parser.pdf_parser import RAGFlowPdfParser
from graphrag.utils import chat_limiter, get_llm_cache, set_llm_cache
from rag.flow.base import ProcessBase, ProcessParamBase
from rag.flow.chunker.schema import ChunkerFromUpstream
from rag.nlp import naive_merge, naive_merge_with_images, concat_img
from rag.prompts.prompts import keyword_extraction, question_proposal, detect_table_of_contents, \
table_of_contents_index, toc_transformer
from rag.utils import num_tokens_from_string
class ChunkerParam(ProcessParamBase):
def __init__(self):
super().__init__()
self.method_options = [
# General
"general",
"onetable",
# Customer Service
"q&a",
"manual",
# Recruitment
"resume",
# Education & Research
"book",
"paper",
"laws",
"presentation",
"toc" # table of contents
# Other
# "Tag" # TODO: Other method
]
self.method = "general"
self.chunk_token_size = 512
self.delimiter = "\n"
self.overlapped_percent = 0
self.page_rank = 0
self.auto_keywords = 0
self.auto_questions = 0
self.tag_sets = []
self.llm_setting = {"llm_id": "", "lang": "Chinese"}
def check(self):
self.check_valid_value(self.method.lower(), "Chunk method abnormal.", self.method_options)
self.check_positive_integer(self.chunk_token_size, "Chunk token size.")
self.check_nonnegative_number(self.page_rank, "Page rank value: (0, 10]")
self.check_nonnegative_number(self.auto_keywords, "Auto-keyword value: (0, 10]")
self.check_nonnegative_number(self.auto_questions, "Auto-question value: (0, 10]")
self.check_decimal_float(self.overlapped_percent, "Overlapped percentage: [0, 1)")
def get_input_form(self) -> dict[str, dict]:
return {}
class Chunker(ProcessBase):
component_name = "Chunker"
def _general(self, from_upstream: ChunkerFromUpstream):
self.callback(random.randint(1, 5) / 100.0, "Start to chunk via `General`.")
if from_upstream.output_format in ["markdown", "text", "html"]:
if from_upstream.output_format == "markdown":
payload = from_upstream.markdown_result
elif from_upstream.output_format == "text":
payload = from_upstream.text_result
else: # == "html"
payload = from_upstream.html_result
if not payload:
payload = ""
cks = naive_merge(
payload,
self._param.chunk_token_size,
self._param.delimiter,
self._param.overlapped_percent,
)
return [{"text": c} for c in cks]
# json
sections, section_images = [], []
for o in from_upstream.json_result or []:
sections.append((o.get("text", ""), o.get("position_tag", "")))
section_images.append(o.get("image"))
chunks, images = naive_merge_with_images(
sections,
section_images,
self._param.chunk_token_size,
self._param.delimiter,
self._param.overlapped_percent,
)
return [
{
"text": RAGFlowPdfParser.remove_tag(c),
"image": img,
"positions": RAGFlowPdfParser.extract_positions(c),
}
for c, img in zip(chunks, images)
]
def _q_and_a(self, from_upstream: ChunkerFromUpstream):
pass
def _resume(self, from_upstream: ChunkerFromUpstream):
pass
def _manual(self, from_upstream: ChunkerFromUpstream):
pass
def _table(self, from_upstream: ChunkerFromUpstream):
pass
def _paper(self, from_upstream: ChunkerFromUpstream):
pass
def _book(self, from_upstream: ChunkerFromUpstream):
pass
    def _laws(self, from_upstream: ChunkerFromUpstream):
        pass

    def _presentation(self, from_upstream: ChunkerFromUpstream):
        pass

    def _one(self, from_upstream: ChunkerFromUpstream):
        pass

    def _toc(self, from_upstream: ChunkerFromUpstream):
        self.callback(random.randint(1, 5) / 100.0, "Start to chunk via `ToC`.")
        if from_upstream.output_format in ["markdown", "text", "html"]:
            return

        # json: accumulate sections and roughly 1024-token pages for ToC detection
        sections, section_images, page_1024, tc_arr = [], [], [""], [0]
        for o in from_upstream.json_result or []:
            txt = o.get("text", "")
            tc = num_tokens_from_string(txt)
            page_1024[-1] += "\n" + txt
            tc_arr[-1] += tc
            if tc_arr[-1] > 1024:
                page_1024.append("")
                tc_arr.append(0)
            sections.append((o.get("text", ""), o.get("position_tag", "")))
            section_images.append(o.get("image"))
            print(len(sections), o)

        llm_setting = self._param.llm_setting
        chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT, llm_name=llm_setting["llm_id"], lang=llm_setting["lang"])
        self.callback(random.randint(5, 15) / 100.0, "Start to detect table of contents...")
        toc_secs = detect_table_of_contents(page_1024, chat_mdl)
        if toc_secs:
            self.callback(random.randint(25, 35) / 100.0, "Start to extract table of contents...")
            toc_arr = toc_transformer(toc_secs, chat_mdl)
            toc_arr = [it for it in toc_arr if it.get("structure")]
            print(json.dumps(toc_arr, ensure_ascii=False, indent=2), flush=True)

            self.callback(random.randint(35, 75) / 100.0, "Start to link table of contents...")
            toc_arr = table_of_contents_index(toc_arr, [t for t, _ in sections], chat_mdl)
            # Fill the gap between each linked ToC entry and the next one that has indices
            for i in range(len(toc_arr) - 1):
                if not toc_arr[i].get("indices"):
                    continue
                for j in range(i + 1, len(toc_arr)):
                    if toc_arr[j].get("indices"):
                        if toc_arr[j]["indices"][0] - toc_arr[i]["indices"][-1] > 1:
                            toc_arr[i]["indices"].extend([x for x in range(toc_arr[i]["indices"][-1] + 1, toc_arr[j]["indices"][0])])
                        break
            # put all sections ahead of toc_arr[0] into it
            # for i in range(len(toc_arr)):
            #     if toc_arr[i].get("indices") and toc_arr[i]["indices"][0]:
            #         toc_arr[i]["indices"] = [x for x in range(toc_arr[i]["indices"][-1]+1)]
            #         break
            # put all sections after toc_arr[-1] into it
            for i in range(len(toc_arr) - 1, -1, -1):
                if toc_arr[i].get("indices") and toc_arr[i]["indices"][-1]:
                    toc_arr[i]["indices"] = [x for x in range(toc_arr[i]["indices"][0], len(sections))]
                    break
            print(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n", json.dumps(toc_arr, ensure_ascii=False, indent=2), flush=True)

            # Merge the sections belonging to each ToC entry into a single chunk (and image)
            chunks, images = [], []
            for it in toc_arr:
                if not it.get("indices"):
                    continue
                txt = ""
                img = None
                for i in it["indices"]:
                    idx = i
                    txt += "\n" + sections[idx][0] + "\t" + sections[idx][1]
                    if img and section_images[idx]:
                        img = concat_img(img, section_images[idx])
                    elif section_images[idx]:
                        img = section_images[idx]
                it["indices"] = []
                if not txt:
                    continue
                it["indices"] = [len(chunks)]
                print(it, "KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK\n", txt)
                chunks.append(txt)
                images.append(img)

            self.callback(1, "Done")
            return [
                {
                    "text": RAGFlowPdfParser.remove_tag(c),
                    "image": img,
                    "positions": RAGFlowPdfParser.extract_positions(c),
                }
                for c, img in zip(chunks, images)
            ]

        self.callback(message="No table of contents detected.")

    async def _invoke(self, **kwargs):
        function_map = {
            "general": self._general,
            "q&a": self._q_and_a,
            "resume": self._resume,
            "manual": self._manual,
            "table": self._table,
            "paper": self._paper,
            "book": self._book,
            "laws": self._laws,
            "presentation": self._presentation,
            "one": self._one,
        }
        try:
            from_upstream = ChunkerFromUpstream.model_validate(kwargs)
        except Exception as e:
            self.set_output("_ERROR", f"Input error: {str(e)}")
            return

        chunks = function_map[self._param.method](from_upstream)
        llm_setting = self._param.llm_setting

        async def auto_keywords():
            nonlocal chunks, llm_setting
            chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT, llm_name=llm_setting["llm_id"], lang=llm_setting["lang"])

            async def doc_keyword_extraction(chat_mdl, ck, topn):
                cached = get_llm_cache(chat_mdl.llm_name, ck["text"], "keywords", {"topn": topn})
                if not cached:
                    async with chat_limiter:
                        cached = await trio.to_thread.run_sync(lambda: keyword_extraction(chat_mdl, ck["text"], topn))
                    set_llm_cache(chat_mdl.llm_name, ck["text"], cached, "keywords", {"topn": topn})
                if cached:
                    ck["keywords"] = cached.split(",")

            async with trio.open_nursery() as nursery:
                for ck in chunks:
                    nursery.start_soon(doc_keyword_extraction, chat_mdl, ck, self._param.auto_keywords)

        async def auto_questions():
            nonlocal chunks, llm_setting
            chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT, llm_name=llm_setting["llm_id"], lang=llm_setting["lang"])

            async def doc_question_proposal(chat_mdl, ck, topn):
                cached = get_llm_cache(chat_mdl.llm_name, ck["text"], "question", {"topn": topn})
                if not cached:
                    async with chat_limiter:
                        cached = await trio.to_thread.run_sync(lambda: question_proposal(chat_mdl, ck["text"], topn))
                    set_llm_cache(chat_mdl.llm_name, ck["text"], cached, "question", {"topn": topn})
                if cached:
                    ck["questions"] = cached.split("\n")

            async with trio.open_nursery() as nursery:
                for ck in chunks:
                    nursery.start_soon(doc_question_proposal, chat_mdl, ck, self._param.auto_questions)

        async with trio.open_nursery() as nursery:
            if self._param.auto_questions:
                nursery.start_soon(auto_questions)
            if self._param.auto_keywords:
                nursery.start_soon(auto_keywords)

        if self._param.page_rank:
            for ck in chunks:
                ck["page_rank"] = self._param.page_rank

        self.set_output("chunks", chunks)

View File

@ -1,37 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Literal

from pydantic import BaseModel, ConfigDict, Field


class ChunkerFromUpstream(BaseModel):
    created_time: float | None = Field(default=None, alias="_created_time")
    elapsed_time: float | None = Field(default=None, alias="_elapsed_time")

    name: str
    file: dict | None = Field(default=None)
    output_format: Literal["json", "markdown", "text", "html"] | None = Field(default=None)

    json_result: list[dict[str, Any]] | None = Field(default=None, alias="json")
    markdown_result: str | None = Field(default=None, alias="markdown")
    text_result: str | None = Field(default=None, alias="text")
    html_result: list[str] | None = Field(default=None, alias="html")

    model_config = ConfigDict(populate_by_name=True, extra="forbid")

    # def to_dict(self, *, exclude_none: bool = True) -> dict:
    #     return self.model_dump(by_alias=True, exclude_none=exclude_none)
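
A minimal usage sketch (the payload below is invented) of how an upstream output validates against this model; the aliases `json`, `markdown`, `text`, and `html` populate the corresponding `*_result` fields, and unknown keys are rejected because of `extra="forbid"`:

```python
# Hypothetical payload from an upstream parser step.
payload = {
    "name": "contract.pdf",
    "output_format": "json",
    "json": [  # alias for json_result
        {"text": "Chapter 1 General Provisions ...", "position_tag": "", "image": None},
    ],
}

upstream = ChunkerFromUpstream.model_validate(payload)
print(upstream.json_result[0]["text"])
```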

View File

@ -52,6 +52,7 @@ class ParserParam(ProcessParamBase):
], ],
"word": [ "word": [
"json", "json",
"markdown",
], ],
"slides": [ "slides": [
"json", "json",
@ -183,8 +184,6 @@ class ParserParam(ProcessParamBase):
audio_config = self.setups.get("audio", "") audio_config = self.setups.get("audio", "")
if audio_config: if audio_config:
self.check_empty(audio_config.get("llm_id"), "Audio VLM") self.check_empty(audio_config.get("llm_id"), "Audio VLM")
audio_language = audio_config.get("lang", "")
self.check_empty(audio_language, "Language")
email_config = self.setups.get("email", "") email_config = self.setups.get("email", "")
if email_config: if email_config:
@ -247,13 +246,15 @@ class Parser(ProcessBase):
conf = self._param.setups["word"] conf = self._param.setups["word"]
self.set_output("output_format", conf["output_format"]) self.set_output("output_format", conf["output_format"])
docx_parser = Docx() docx_parser = Docx()
sections, tbls = docx_parser(name, binary=blob)
sections = [{"text": section[0], "image": section[1]} for section in sections if section]
sections.extend([{"text": tb, "image": None} for ((_,tb), _) in tbls])
# json
assert conf.get("output_format") == "json", "have to be json for doc"
if conf.get("output_format") == "json": if conf.get("output_format") == "json":
sections, tbls = docx_parser(name, binary=blob)
sections = [{"text": section[0], "image": section[1]} for section in sections if section]
sections.extend([{"text": tb, "image": None} for ((_,tb), _) in tbls])
self.set_output("json", sections) self.set_output("json", sections)
elif conf.get("output_format") == "markdown":
markdown_text = docx_parser.to_markdown(name, binary=blob)
self.set_output("markdown", markdown_text)
def _slides(self, name, blob): def _slides(self, name, blob):
from deepdoc.parser.ppt_parser import RAGFlowPptParser as ppt_parser from deepdoc.parser.ppt_parser import RAGFlowPptParser as ppt_parser
@ -345,15 +346,13 @@ class Parser(ProcessBase):
conf = self._param.setups["audio"] conf = self._param.setups["audio"]
self.set_output("output_format", conf["output_format"]) self.set_output("output_format", conf["output_format"])
lang = conf["lang"]
_, ext = os.path.splitext(name) _, ext = os.path.splitext(name)
with tempfile.NamedTemporaryFile(suffix=ext) as tmpf: with tempfile.NamedTemporaryFile(suffix=ext) as tmpf:
tmpf.write(blob) tmpf.write(blob)
tmpf.flush() tmpf.flush()
tmp_path = os.path.abspath(tmpf.name) tmp_path = os.path.abspath(tmpf.name)
seq2txt_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.SPEECH2TEXT, lang=lang) seq2txt_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.SPEECH2TEXT)
txt = seq2txt_mdl.transcription(tmp_path) txt = seq2txt_mdl.transcription(tmp_path)
self.set_output("text", txt) self.set_output("text", txt)
@ -363,6 +362,7 @@ class Parser(ProcessBase):
email_content = {} email_content = {}
conf = self._param.setups["email"] conf = self._param.setups["email"]
self.set_output("output_format", conf["output_format"])
target_fields = conf["fields"] target_fields = conf["fields"]
_, ext = os.path.splitext(name) _, ext = os.path.splitext(name)
@ -400,8 +400,8 @@ class Parser(ProcessBase):
_add_content(msg, msg.get_content_type()) _add_content(msg, msg.get_content_type())
email_content["text"] = body_text email_content["text"] = "\n".join(body_text)
email_content["text_html"] = body_html email_content["text_html"] = "\n".join(body_html)
# get attachment # get attachment
if "attachments" in target_fields: if "attachments" in target_fields:
attachments = [] attachments = []
@ -411,7 +411,7 @@ class Parser(ProcessBase):
dispositions = content_disposition.strip().split(";") dispositions = content_disposition.strip().split(";")
if dispositions[0].lower() == "attachment": if dispositions[0].lower() == "attachment":
filename = part.get_filename() filename = part.get_filename()
payload = part.get_payload(decode=True) payload = part.get_payload(decode=True).decode(part.get_content_charset())
attachments.append({ attachments.append({
"filename": filename, "filename": filename,
"payload": payload, "payload": payload,
@ -439,15 +439,16 @@ class Parser(ProcessBase):
} }
# get body # get body
if "body" in target_fields: if "body" in target_fields:
email_content["text"] = msg.body # usually empty. try text_html instead email_content["text"] = msg.body[0] if isinstance(msg.body, list) and msg.body else msg.body
email_content["text_html"] = msg.htmlBody if not email_content["text"] and msg.htmlBody:
email_content["text"] = msg.htmlBody[0] if isinstance(msg.htmlBody, list) and msg.htmlBody else msg.htmlBody
# get attachments # get attachments
if "attachments" in target_fields: if "attachments" in target_fields:
attachments = [] attachments = []
for t in msg.attachments: for t in msg.attachments:
attachments.append({ attachments.append({
"filename": t.name, "filename": t.name,
"payload": t.data # binary "payload": t.data.decode("utf-8")
}) })
email_content["attachments"] = attachments email_content["attachments"] = attachments

View File

@ -25,7 +25,7 @@ class SplitterFromUpstream(BaseModel):
file: dict | None = Field(default=None) file: dict | None = Field(default=None)
chunks: list[dict[str, Any]] | None = Field(default=None) chunks: list[dict[str, Any]] | None = Field(default=None)
output_format: Literal["json", "markdown", "text", "html"] | None = Field(default=None) output_format: Literal["json", "markdown", "text", "html", "chunks"] | None = Field(default=None)
json_result: list[dict[str, Any]] | None = Field(default=None, alias="json") json_result: list[dict[str, Any]] | None = Field(default=None, alias="json")
markdown_result: str | None = Field(default=None, alias="markdown") markdown_result: str | None = Field(default=None, alias="markdown")

View File

@ -126,7 +126,7 @@ class Tokenizer(ProcessBase):
if ck.get("summary"): if ck.get("summary"):
ck["content_ltks"] = rag_tokenizer.tokenize(str(ck["summary"])) ck["content_ltks"] = rag_tokenizer.tokenize(str(ck["summary"]))
ck["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(ck["content_ltks"]) ck["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(ck["content_ltks"])
else: elif ck.get("text"):
ck["content_ltks"] = rag_tokenizer.tokenize(ck["text"]) ck["content_ltks"] = rag_tokenizer.tokenize(ck["text"])
ck["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(ck["content_ltks"]) ck["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(ck["content_ltks"])
if i % 100 == 99: if i % 100 == 99:
@ -155,6 +155,8 @@ class Tokenizer(ProcessBase):
for i, ck in enumerate(chunks): for i, ck in enumerate(chunks):
ck["title_tks"] = rag_tokenizer.tokenize(re.sub(r"\.[a-zA-Z]+$", "", from_upstream.name)) ck["title_tks"] = rag_tokenizer.tokenize(re.sub(r"\.[a-zA-Z]+$", "", from_upstream.name))
ck["title_sm_tks"] = rag_tokenizer.fine_grained_tokenize(ck["title_tks"]) ck["title_sm_tks"] = rag_tokenizer.fine_grained_tokenize(ck["title_tks"])
if not ck.get("text"):
continue
ck["content_ltks"] = rag_tokenizer.tokenize(ck["text"]) ck["content_ltks"] = rag_tokenizer.tokenize(ck["text"])
ck["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(ck["content_ltks"]) ck["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(ck["content_ltks"])
if i % 100 == 99: if i % 100 == 99:

View File

@ -132,7 +132,7 @@ class Base(ABC):
"tool_choice", "tool_choice",
"logprobs", "logprobs",
"top_logprobs", "top_logprobs",
"extra_headers", "extra_headers"
} }
gen_conf = {k: v for k, v in gen_conf.items() if k in allowed_conf} gen_conf = {k: v for k, v in gen_conf.items() if k in allowed_conf}
@ -141,6 +141,22 @@ class Base(ABC):
def _chat(self, history, gen_conf, **kwargs): def _chat(self, history, gen_conf, **kwargs):
logging.info("[HISTORY]" + json.dumps(history, ensure_ascii=False, indent=2)) logging.info("[HISTORY]" + json.dumps(history, ensure_ascii=False, indent=2))
if self.model_name.lower().find("qwq") >= 0:
logging.info(f"[INFO] {self.model_name} detected as reasoning model, using _chat_streamly")
final_ans = ""
tol_token = 0
for delta, tol in self._chat_streamly(history, gen_conf, with_reasoning=False, **kwargs):
if delta.startswith("<think>") or delta.endswith("</think>"):
continue
final_ans += delta
tol_token = tol
if len(final_ans.strip()) == 0:
final_ans = "**ERROR**: Empty response from reasoning model"
return final_ans.strip(), tol_token
if self.model_name.lower().find("qwen3") >= 0: if self.model_name.lower().find("qwen3") >= 0:
kwargs["extra_body"] = {"enable_thinking": False} kwargs["extra_body"] = {"enable_thinking": False}
@ -1166,13 +1182,43 @@ class GoogleChat(Base):
else: else:
if "max_tokens" in gen_conf: if "max_tokens" in gen_conf:
gen_conf["max_output_tokens"] = gen_conf["max_tokens"] gen_conf["max_output_tokens"] = gen_conf["max_tokens"]
del gen_conf["max_tokens"]
for k in list(gen_conf.keys()): for k in list(gen_conf.keys()):
if k not in ["temperature", "top_p", "max_output_tokens"]: if k not in ["temperature", "top_p", "max_output_tokens"]:
del gen_conf[k] del gen_conf[k]
return gen_conf return gen_conf
def _get_thinking_config(self, gen_conf):
"""Extract and create ThinkingConfig from gen_conf.
Default behavior for Vertex AI Generative Models: thinking_budget=0 (disabled)
unless explicitly specified by the user. This does not apply to Claude models.
Users can override by setting thinking_budget in gen_conf/llm_setting:
- 0: Disabled (default)
- 1-24576: Manual budget
- -1: Auto (model decides)
"""
# Claude models don't support ThinkingConfig
if "claude" in self.model_name:
gen_conf.pop("thinking_budget", None)
return None
# For Vertex AI Generative Models, default to thinking disabled
thinking_budget = gen_conf.pop("thinking_budget", 0)
if thinking_budget is not None:
try:
import vertexai.generative_models as glm # type: ignore
return glm.ThinkingConfig(thinking_budget=thinking_budget)
except Exception:
pass
return None
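
A short illustrative sketch (the `GoogleChat` instance and conversation are placeholders) of how a caller can surface `thinking_budget` through `gen_conf`; the key is popped here, so it never leaks into the cleaned generation config:

```python
# Hypothetical setup: `google_chat` is an initialized GoogleChat instance.
gen_conf = {
    "temperature": 0.2,
    "max_tokens": 1024,
    "thinking_budget": -1,  # -1: let the model decide; 0 disables thinking (default)
}
answer, total_tokens = google_chat._chat(
    [{"role": "user", "content": "Summarize the attached report."}],
    gen_conf,
)
```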
def _chat(self, history, gen_conf={}, **kwargs): def _chat(self, history, gen_conf={}, **kwargs):
system = history[0]["content"] if history and history[0]["role"] == "system" else "" system = history[0]["content"] if history and history[0]["role"] == "system" else ""
thinking_config = self._get_thinking_config(gen_conf)
gen_conf = self._clean_conf(gen_conf)
if "claude" in self.model_name: if "claude" in self.model_name:
response = self.client.messages.create( response = self.client.messages.create(
model=self.model_name, model=self.model_name,
@ -1205,7 +1251,10 @@ class GoogleChat(Base):
} }
] ]
response = self.client.generate_content(hist, generation_config=gen_conf) if thinking_config:
response = self.client.generate_content(hist, generation_config=gen_conf, thinking_config=thinking_config)
else:
response = self.client.generate_content(hist, generation_config=gen_conf)
ans = response.text ans = response.text
return ans, response.usage_metadata.total_token_count return ans, response.usage_metadata.total_token_count
@ -1234,9 +1283,13 @@ class GoogleChat(Base):
yield total_tokens yield total_tokens
else: else:
response = None
total_tokens = 0
self.client._system_instruction = system self.client._system_instruction = system
thinking_config = self._get_thinking_config(gen_conf)
if "max_tokens" in gen_conf: if "max_tokens" in gen_conf:
gen_conf["max_output_tokens"] = gen_conf["max_tokens"] gen_conf["max_output_tokens"] = gen_conf["max_tokens"]
del gen_conf["max_tokens"]
for k in list(gen_conf.keys()): for k in list(gen_conf.keys()):
if k not in ["temperature", "top_p", "max_output_tokens"]: if k not in ["temperature", "top_p", "max_output_tokens"]:
del gen_conf[k] del gen_conf[k]
@ -1244,18 +1297,26 @@ class GoogleChat(Base):
if "role" in item and item["role"] == "assistant": if "role" in item and item["role"] == "assistant":
item["role"] = "model" item["role"] = "model"
if "content" in item: if "content" in item:
item["parts"] = item.pop("content") item["parts"] = [
{
"text": item.pop("content"),
}
]
ans = "" ans = ""
try: try:
response = self.model.generate_content(history, generation_config=gen_conf, stream=True) if thinking_config:
response = self.client.generate_content(history, generation_config=gen_conf, thinking_config=thinking_config, stream=True)
else:
response = self.client.generate_content(history, generation_config=gen_conf, stream=True)
for resp in response: for resp in response:
ans = resp.text ans = resp.text
total_tokens += num_tokens_from_string(ans)
yield ans yield ans
except Exception as e: except Exception as e:
yield ans + "\n**ERROR**: " + str(e) yield ans + "\n**ERROR**: " + str(e)
yield response._chunks[-1].usage_metadata.total_token_count yield total_tokens
class GPUStackChat(Base): class GPUStackChat(Base):
@ -1275,6 +1336,14 @@ class TokenPonyChat(Base):
if not base_url: if not base_url:
base_url = "https://ragflow.vip-api.tokenpony.cn/v1" base_url = "https://ragflow.vip-api.tokenpony.cn/v1"
class DeerAPIChat(Base):
_FACTORY_NAME = "DeerAPI"
def __init__(self, key, model_name, base_url="https://api.deerapi.com/v1", **kwargs):
if not base_url:
base_url = "https://api.deerapi.com/v1"
super().__init__(key, model_name, base_url, **kwargs)
class LiteLLMBase(ABC): class LiteLLMBase(ABC):
_FACTORY_NAME = [ _FACTORY_NAME = [

View File

@ -26,7 +26,7 @@ from openai.lib.azure import AzureOpenAI
from zhipuai import ZhipuAI from zhipuai import ZhipuAI
from rag.nlp import is_english from rag.nlp import is_english
from rag.prompts.generator import vision_llm_describe_prompt from rag.prompts.generator import vision_llm_describe_prompt
from rag.utils import num_tokens_from_string from rag.utils import num_tokens_from_string, total_token_count_from_response
class Base(ABC): class Base(ABC):
@ -125,7 +125,7 @@ class Base(ABC):
b64 = base64.b64encode(data).decode("utf-8") b64 = base64.b64encode(data).decode("utf-8")
return f"data:{mime};base64,{b64}" return f"data:{mime};base64,{b64}"
with BytesIO() as buffered: with BytesIO() as buffered:
fmt = "JPEG" fmt = "jpeg"
try: try:
image.save(buffered, format="JPEG") image.save(buffered, format="JPEG")
except Exception: except Exception:
@ -133,10 +133,10 @@ class Base(ABC):
buffered.seek(0) buffered.seek(0)
buffered.truncate() buffered.truncate()
image.save(buffered, format="PNG") image.save(buffered, format="PNG")
fmt = "PNG" fmt = "png"
data = buffered.getvalue() data = buffered.getvalue()
b64 = base64.b64encode(data).decode("utf-8") b64 = base64.b64encode(data).decode("utf-8")
mime = f"image/{fmt.lower()}" mime = f"image/{fmt}"
return f"data:{mime};base64,{b64}" return f"data:{mime};base64,{b64}"
def prompt(self, b64): def prompt(self, b64):
@ -178,7 +178,7 @@ class GptV4(Base):
model=self.model_name, model=self.model_name,
messages=self.prompt(b64), messages=self.prompt(b64),
) )
return res.choices[0].message.content.strip(), res.usage.total_tokens return res.choices[0].message.content.strip(), total_token_count_from_response(res)
def describe_with_prompt(self, image, prompt=None): def describe_with_prompt(self, image, prompt=None):
b64 = self.image2base64(image) b64 = self.image2base64(image)
@ -186,7 +186,7 @@ class GptV4(Base):
model=self.model_name, model=self.model_name,
messages=self.vision_llm_prompt(b64, prompt), messages=self.vision_llm_prompt(b64, prompt),
) )
return res.choices[0].message.content.strip(), res.usage.total_tokens return res.choices[0].message.content.strip(),total_token_count_from_response(res)
class AzureGptV4(GptV4): class AzureGptV4(GptV4):
@ -522,11 +522,10 @@ class GeminiCV(Base):
) )
b64 = self.image2base64(image) b64 = self.image2base64(image)
with BytesIO(base64.b64decode(b64)) as bio: with BytesIO(base64.b64decode(b64)) as bio:
img = open(bio) with open(bio) as img:
input = [prompt, img] input = [prompt, img]
res = self.model.generate_content(input) res = self.model.generate_content(input)
img.close() return res.text, total_token_count_from_response(res)
return res.text, res.usage_metadata.total_token_count
def describe_with_prompt(self, image, prompt=None): def describe_with_prompt(self, image, prompt=None):
from PIL.Image import open from PIL.Image import open
@ -534,11 +533,10 @@ class GeminiCV(Base):
b64 = self.image2base64(image) b64 = self.image2base64(image)
vision_prompt = prompt if prompt else vision_llm_describe_prompt() vision_prompt = prompt if prompt else vision_llm_describe_prompt()
with BytesIO(base64.b64decode(b64)) as bio: with BytesIO(base64.b64decode(b64)) as bio:
img = open(bio) with open(bio) as img:
input = [vision_prompt, img] input = [vision_prompt, img]
res = self.model.generate_content(input) res = self.model.generate_content(input)
img.close() return res.text, total_token_count_from_response(res)
return res.text, res.usage_metadata.total_token_count
def chat(self, system, history, gen_conf, images=[]): def chat(self, system, history, gen_conf, images=[]):
generation_config = dict(temperature=gen_conf.get("temperature", 0.3), top_p=gen_conf.get("top_p", 0.7)) generation_config = dict(temperature=gen_conf.get("temperature", 0.3), top_p=gen_conf.get("top_p", 0.7))
@ -547,7 +545,7 @@ class GeminiCV(Base):
self._form_history(system, history, images), self._form_history(system, history, images),
generation_config=generation_config) generation_config=generation_config)
ans = response.text ans = response.text
return ans, response.usage_metadata.total_token_count return ans, total_token_count_from_response(ans)
except Exception as e: except Exception as e:
return "**ERROR**: " + str(e), 0 return "**ERROR**: " + str(e), 0
@ -570,10 +568,7 @@ class GeminiCV(Base):
except Exception as e: except Exception as e:
yield ans + "\n**ERROR**: " + str(e) yield ans + "\n**ERROR**: " + str(e)
if response and hasattr(response, "usage_metadata") and hasattr(response.usage_metadata, "total_token_count"): yield total_token_count_from_response(response)
yield response.usage_metadata.total_token_count
else:
yield 0
class NvidiaCV(Base): class NvidiaCV(Base):
@ -619,7 +614,7 @@ class NvidiaCV(Base):
response = response.json() response = response.json()
return ( return (
response["choices"][0]["message"]["content"].strip(), response["choices"][0]["message"]["content"].strip(),
response["usage"]["total_tokens"], total_token_count_from_response(response),
) )
def _request(self, msg, gen_conf={}): def _request(self, msg, gen_conf={}):
@ -642,7 +637,7 @@ class NvidiaCV(Base):
response = self._request(vision_prompt) response = self._request(vision_prompt)
return ( return (
response["choices"][0]["message"]["content"].strip(), response["choices"][0]["message"]["content"].strip(),
response["usage"]["total_tokens"], total_token_count_from_response(response)
) )
def chat(self, system, history, gen_conf, images=[], **kwargs): def chat(self, system, history, gen_conf, images=[], **kwargs):
@ -650,7 +645,7 @@ class NvidiaCV(Base):
response = self._request(self._form_history(system, history, images), gen_conf) response = self._request(self._form_history(system, history, images), gen_conf)
return ( return (
response["choices"][0]["message"]["content"].strip(), response["choices"][0]["message"]["content"].strip(),
response["usage"]["total_tokens"], total_token_count_from_response(response)
) )
except Exception as e: except Exception as e:
return "**ERROR**: " + str(e), 0 return "**ERROR**: " + str(e), 0
@ -661,7 +656,7 @@ class NvidiaCV(Base):
response = self._request(self._form_history(system, history, images), gen_conf) response = self._request(self._form_history(system, history, images), gen_conf)
cnt = response["choices"][0]["message"]["content"] cnt = response["choices"][0]["message"]["content"]
if "usage" in response and "total_tokens" in response["usage"]: if "usage" in response and "total_tokens" in response["usage"]:
total_tokens += response["usage"]["total_tokens"] total_tokens += total_token_count_from_response(response)
for resp in cnt: for resp in cnt:
yield resp yield resp
except Exception as e: except Exception as e:

View File

@ -33,7 +33,7 @@ from zhipuai import ZhipuAI
from api import settings from api import settings
from api.utils.file_utils import get_home_cache_dir from api.utils.file_utils import get_home_cache_dir
from api.utils.log_utils import log_exception from api.utils.log_utils import log_exception
from rag.utils import num_tokens_from_string, truncate, total_token_count_from_response from rag.utils import num_tokens_from_string, truncate
class Base(ABC): class Base(ABC):
@ -52,7 +52,15 @@ class Base(ABC):
raise NotImplementedError("Please implement encode method!") raise NotImplementedError("Please implement encode method!")
def total_token_count(self, resp): def total_token_count(self, resp):
return total_token_count_from_response(resp) try:
return resp.usage.total_tokens
except Exception:
pass
try:
return resp["usage"]["total_tokens"]
except Exception:
pass
return 0
class DefaultEmbedding(Base): class DefaultEmbedding(Base):
@ -936,7 +944,6 @@ class GiteeEmbed(SILICONFLOWEmbed):
base_url = "https://ai.gitee.com/v1/embeddings" base_url = "https://ai.gitee.com/v1/embeddings"
super().__init__(key, model_name, base_url) super().__init__(key, model_name, base_url)
class DeepInfraEmbed(OpenAIEmbed): class DeepInfraEmbed(OpenAIEmbed):
_FACTORY_NAME = "DeepInfra" _FACTORY_NAME = "DeepInfra"
@ -962,3 +969,11 @@ class CometAPIEmbed(OpenAIEmbed):
if not base_url: if not base_url:
base_url = "https://api.cometapi.com/v1" base_url = "https://api.cometapi.com/v1"
super().__init__(key, model_name, base_url) super().__init__(key, model_name, base_url)
class DeerAPIEmbed(OpenAIEmbed):
_FACTORY_NAME = "DeerAPI"
def __init__(self, key, model_name, base_url="https://api.deerapi.com/v1"):
if not base_url:
base_url = "https://api.deerapi.com/v1"
super().__init__(key, model_name, base_url)

View File

@ -244,3 +244,12 @@ class CometAPISeq2txt(Base):
base_url = "https://api.cometapi.com/v1" base_url = "https://api.cometapi.com/v1"
self.client = OpenAI(api_key=key, base_url=base_url) self.client = OpenAI(api_key=key, base_url=base_url)
self.model_name = model_name self.model_name = model_name
class DeerAPISeq2txt(Base):
_FACTORY_NAME = "DeerAPI"
def __init__(self, key, model_name="whisper-1", base_url="https://api.deerapi.com/v1", **kwargs):
if not base_url:
base_url = "https://api.deerapi.com/v1"
self.client = OpenAI(api_key=key, base_url=base_url)
self.model_name = model_name

View File

@ -402,3 +402,11 @@ class CometAPITTS(OpenAITTS):
if not base_url: if not base_url:
base_url = "https://api.cometapi.com/v1" base_url = "https://api.cometapi.com/v1"
super().__init__(key, model_name, base_url, **kwargs) super().__init__(key, model_name, base_url, **kwargs)
class DeerAPITTS(OpenAITTS):
_FACTORY_NAME = "DeerAPI"
def __init__(self, key, model_name, base_url="https://api.deerapi.com/v1", **kwargs):
if not base_url:
base_url = "https://api.deerapi.com/v1"
super().__init__(key, model_name, base_url, **kwargs)

View File

@ -613,13 +613,13 @@ def naive_merge(sections: str | list, chunk_token_num=128, delimiter="\n。
dels = get_delimiters(delimiter) dels = get_delimiters(delimiter)
for sec, pos in sections: for sec, pos in sections:
if num_tokens_from_string(sec) < chunk_token_num: if num_tokens_from_string(sec) < chunk_token_num:
add_chunk(sec, pos) add_chunk("\n"+sec, pos)
continue continue
split_sec = re.split(r"(%s)" % dels, sec, flags=re.DOTALL) split_sec = re.split(r"(%s)" % dels, sec, flags=re.DOTALL)
for sub_sec in split_sec: for sub_sec in split_sec:
if re.match(f"^{dels}$", sub_sec): if re.match(f"^{dels}$", sub_sec):
continue continue
add_chunk(sub_sec, pos) add_chunk("\n"+sub_sec, pos)
return cks return cks
@ -669,13 +669,13 @@ def naive_merge_with_images(texts, images, chunk_token_num=128, delimiter="\n。
for sub_sec in split_sec: for sub_sec in split_sec:
if re.match(f"^{dels}$", sub_sec): if re.match(f"^{dels}$", sub_sec):
continue continue
add_chunk(sub_sec, image, text_pos) add_chunk("\n"+sub_sec, image, text_pos)
else: else:
split_sec = re.split(r"(%s)" % dels, text) split_sec = re.split(r"(%s)" % dels, text)
for sub_sec in split_sec: for sub_sec in split_sec:
if re.match(f"^{dels}$", sub_sec): if re.match(f"^{dels}$", sub_sec):
continue continue
add_chunk(sub_sec, image) add_chunk("\n"+sub_sec, image)
return cks, result_images return cks, result_images
@ -757,7 +757,7 @@ def naive_merge_docx(sections, chunk_token_num=128, delimiter="\n。"):
for sub_sec in split_sec: for sub_sec in split_sec:
if re.match(f"^{dels}$", sub_sec): if re.match(f"^{dels}$", sub_sec):
continue continue
add_chunk(sub_sec, image,"") add_chunk("\n"+sub_sec, image,"")
line = "" line = ""
if line: if line:
@ -765,7 +765,7 @@ def naive_merge_docx(sections, chunk_token_num=128, delimiter="\n。"):
for sub_sec in split_sec: for sub_sec in split_sec:
if re.match(f"^{dels}$", sub_sec): if re.match(f"^{dels}$", sub_sec):
continue continue
add_chunk(sub_sec, image,"") add_chunk("\n"+sub_sec, image,"")
return cks, images return cks, images

View File

@ -13,12 +13,14 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import json
import logging import logging
import re import re
import math import math
from collections import OrderedDict from collections import OrderedDict
from dataclasses import dataclass from dataclasses import dataclass
from rag.prompts.generator import relevant_chunks_with_toc
from rag.settings import TAG_FLD, PAGERANK_FLD from rag.settings import TAG_FLD, PAGERANK_FLD
from rag.utils import rmSpace, get_float from rag.utils import rmSpace, get_float
from rag.nlp import rag_tokenizer, query from rag.nlp import rag_tokenizer, query
@ -514,3 +516,63 @@ class Dealer:
tag_fea = sorted([(a, round(0.1*(c + 1) / (cnt + S) / max(1e-6, all_tags.get(a, 0.0001)))) for a, c in aggs], tag_fea = sorted([(a, round(0.1*(c + 1) / (cnt + S) / max(1e-6, all_tags.get(a, 0.0001)))) for a, c in aggs],
key=lambda x: x[1] * -1)[:topn_tags] key=lambda x: x[1] * -1)[:topn_tags]
return {a.replace(".", "_"): max(1, c) for a, c in tag_fea} return {a.replace(".", "_"): max(1, c) for a, c in tag_fea}
def retrieval_by_toc(self, query:str, chunks:list[dict], tenant_ids:list[str], chat_mdl, topn: int=6):
if not chunks:
return []
idx_nms = [index_name(tid) for tid in tenant_ids]
ranks, doc_id2kb_id = {}, {}
for ck in chunks:
if ck["doc_id"] not in ranks:
ranks[ck["doc_id"]] = 0
ranks[ck["doc_id"]] += ck["similarity"]
doc_id2kb_id[ck["doc_id"]] = ck["kb_id"]
doc_id = sorted(ranks.items(), key=lambda x: x[1]*-1.)[0][0]
kb_ids = [doc_id2kb_id[doc_id]]
es_res = self.dataStore.search(["content_with_weight"], [], {"doc_id": doc_id, "toc_kwd": "toc"}, [], OrderByExpr(), 0, 128, idx_nms,
kb_ids)
toc = []
dict_chunks = self.dataStore.getFields(es_res, ["content_with_weight"])
for _, doc in dict_chunks.items():
try:
toc.extend(json.loads(doc["content_with_weight"]))
except Exception as e:
logging.exception(e)
if not toc:
return chunks
ids = relevant_chunks_with_toc(query, toc, chat_mdl, topn*2)
if not ids:
return chunks
vector_size = 1024
id2idx = {ck["chunk_id"]: i for i, ck in enumerate(chunks)}
for cid, sim in ids:
if cid in id2idx:
chunks[id2idx[cid]]["similarity"] += sim
continue
chunk = self.dataStore.get(cid, idx_nms, kb_ids)
d = {
"chunk_id": cid,
"content_ltks": chunk["content_ltks"],
"content_with_weight": chunk["content_with_weight"],
"doc_id": doc_id,
"docnm_kwd": chunk.get("docnm_kwd", ""),
"kb_id": chunk["kb_id"],
"important_kwd": chunk.get("important_kwd", []),
"image_id": chunk.get("img_id", ""),
"similarity": sim,
"vector_similarity": sim,
"term_similarity": sim,
"vector": [0.0] * vector_size,
"positions": chunk.get("position_int", []),
"doc_type_kwd": chunk.get("doc_type_kwd", "")
}
for k in chunk.keys():
if k[-4:] == "_vec":
d["vector"] = chunk[k]
vector_size = len(chunk[k])
break
chunks.append(d)
return sorted(chunks, key=lambda x:x["similarity"]*-1)[:topn]
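
A hedged usage sketch (`dealer`, `chat_mdl`, and the chunk dicts are placeholders): the method expects first-pass retrieval results carrying `doc_id`, `kb_id`, `chunk_id`, and `similarity`, picks the best-scoring document, loads its stored ToC chunk, and boosts or appends the chunks the LLM deems relevant:

```python
# Illustrative only: `dealer` is an initialized Dealer, `chat_mdl` a chat-capable LLMBundle.
first_pass = [
    {"chunk_id": "c1", "doc_id": "d1", "kb_id": "kb1", "similarity": 0.82},
    {"chunk_id": "c2", "doc_id": "d1", "kb_id": "kb1", "similarity": 0.41},
]
reranked = dealer.retrieval_by_toc("What is the warranty period?", first_pass, ["tenant_1"], chat_mdl, topn=6)
```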

View File

@ -0,0 +1,53 @@
You are given a JSON array of TOC (table of contents) items. Each item has at least {"title": string} and may include an existing hierarchical level for the title.
Task
- For each item, assign a depth label using Arabic numerals only: top-level = 1, second-level = 2, third-level = 3, etc.
- Multiple items may share the same depth (e.g., many 1s, many 2s).
- Do not use dotted numbering (no 1.1/1.2). Use a single digit string per item indicating its depth only.
- Preserve the original item order exactly. Do not insert, delete, or reorder.
- Decide levels yourself to keep a coherent hierarchy. Keep peers at the same depth.
Output
- Return a valid JSON array only (no extra text).
- Each element must be {"level": "1|2|3", "title": <original title string>}.
- title must be the original title string.
Examples
Example A (chapters with sections)
Input:
["Chapter 1 Methods", "Section 1 Definition", "Section 2 Process", "Chapter 2 Experiment"]
Output:
[
{"level":"1","title":"Chapter 1 Methods"},
{"level":"2","title":"Section 1 Definition"},
{"level":"2","title":"Section 2 Process"},
{"level":"1","title":"Chapter 2 Experiment"}
]
Example B (parts with chapters)
Input:
["Part I Theory", "Chapter 1 Basics", "Chapter 2 Methods", "Part II Applications", "Chapter 3 Case Studies"]
Output:
[
{"level":"1","title":"Part I Theory"},
{"level":"2","title":"Chapter 1 Basics"},
{"level":"2","title":"Chapter 2 Methods"},
{"level":"1","title":"Part II Applications"},
{"level":"2","title":"Chapter 3 Case Studies"}
]
Example C (plain headings)
Input:
["Introduction", "Background and Motivation", "Related Work", "Methodology", "Evaluation"]
Output:
[
{"level":"1","title":"Introduction"},
{"level":"2","title":"Background and Motivation"},
{"level":"2","title":"Related Work"},
{"level":"1","title":"Methodology"},
{"level":"1","title":"Evaluation"}
]
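
To illustrate the depth-label contract, here is a small hypothetical helper (not part of the repository) that renders the prompt's output as an indented outline:

```python
def render_outline(toc_with_levels):
    # toc_with_levels: [{"level": "1", "title": "..."}, ...] as specified above
    lines = []
    for item in toc_with_levels:
        depth = int(item.get("level", "1"))
        lines.append("  " * (depth - 1) + item["title"])
    return "\n".join(lines)

print(render_outline([
    {"level": "1", "title": "Chapter 1 Methods"},
    {"level": "2", "title": "Section 1 Definition"},
    {"level": "2", "title": "Section 2 Process"},
]))
```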

View File

@ -21,7 +21,9 @@ from copy import deepcopy
from typing import Tuple from typing import Tuple
import jinja2 import jinja2
import json_repair import json_repair
import trio
from api.utils import hash_str2int from api.utils import hash_str2int
from rag.nlp import rag_tokenizer
from rag.prompts.template import load_prompt from rag.prompts.template import load_prompt
from rag.settings import TAG_FLD from rag.settings import TAG_FLD
from rag.utils import encoder, num_tokens_from_string from rag.utils import encoder, num_tokens_from_string
@ -29,7 +31,7 @@ from rag.utils import encoder, num_tokens_from_string
STOP_TOKEN="<|STOP|>" STOP_TOKEN="<|STOP|>"
COMPLETE_TASK="complete_task" COMPLETE_TASK="complete_task"
INPUT_UTILIZATION = 0.5
def get_value(d, k1, k2): def get_value(d, k1, k2):
return d.get(k1, d.get(k2)) return d.get(k1, d.get(k2))
@ -439,12 +441,18 @@ def gen_meta_filter(chat_mdl, meta_data:dict, query: str) -> list:
return [] return []
def gen_json(system_prompt:str, user_prompt:str, chat_mdl): def gen_json(system_prompt:str, user_prompt:str, chat_mdl, gen_conf = None):
from graphrag.utils import get_llm_cache, set_llm_cache
cached = get_llm_cache(chat_mdl.llm_name, system_prompt, user_prompt, gen_conf)
if cached:
return json_repair.loads(cached)
_, msg = message_fit_in(form_message(system_prompt, user_prompt), chat_mdl.max_length) _, msg = message_fit_in(form_message(system_prompt, user_prompt), chat_mdl.max_length)
ans = chat_mdl.chat(msg[0]["content"], msg[1:]) ans = chat_mdl.chat(msg[0]["content"], msg[1:],gen_conf=gen_conf)
ans = re.sub(r"(^.*</think>|```json\n|```\n*$)", "", ans, flags=re.DOTALL) ans = re.sub(r"(^.*</think>|```json\n|```\n*$)", "", ans, flags=re.DOTALL)
try: try:
return json_repair.loads(ans) res = json_repair.loads(ans)
set_llm_cache(chat_mdl.llm_name, system_prompt, ans, user_prompt, gen_conf)
return res
except Exception: except Exception:
logging.exception(f"Loading json failure: {ans}") logging.exception(f"Loading json failure: {ans}")
@ -649,4 +657,140 @@ def toc_transformer(toc_pages, chat_mdl):
return last_complete return last_complete
TOC_LEVELS = load_prompt("assign_toc_levels")
def assign_toc_levels(toc_secs, chat_mdl, gen_conf = {"temperature": 0.2}):
if not toc_secs:
return []
return gen_json(
PROMPT_JINJA_ENV.from_string(TOC_LEVELS).render(),
str(toc_secs),
chat_mdl,
gen_conf
)
TOC_FROM_TEXT_SYSTEM = load_prompt("toc_from_text_system")
TOC_FROM_TEXT_USER = load_prompt("toc_from_text_user")
# Generate TOC from text chunks with text llms
async def gen_toc_from_text(txt_info: dict, chat_mdl, callback=None):
try:
ans = gen_json(
PROMPT_JINJA_ENV.from_string(TOC_FROM_TEXT_SYSTEM).render(),
PROMPT_JINJA_ENV.from_string(TOC_FROM_TEXT_USER).render(text="\n".join([json.dumps(d, ensure_ascii=False) for d in txt_info["chunks"]])),
chat_mdl,
gen_conf={"temperature": 0.0, "top_p": 0.9}
)
print(ans, "::::::::::::::::::::::::::::::::::::", flush=True)
txt_info["toc"] = ans if ans else []
if callback:
callback(msg="")
except Exception as e:
logging.exception(e)
def split_chunks(chunks, max_length: int):
"""
Pack chunks into batches according to max_length; each batch is a list of single-key {chunk_index: chunk_text} dicts.
Do not split a single chunk, even if it exceeds max_length.
"""
result = []
batch, batch_tokens = [], 0
for idx, chunk in enumerate(chunks):
t = num_tokens_from_string(chunk)
if batch_tokens + t > max_length:
result.append(batch)
batch, batch_tokens = [], 0
batch.append({idx: chunk})
batch_tokens += t
if batch:
result.append(batch)
return result
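
A quick usage sketch of the batching behaviour (token counts are illustrative): chunks are packed greedily until the budget would be exceeded, and an over-budget chunk still ends up in a batch of its own:

```python
# Illustrative only: assume each chunk is roughly 400 tokens and the budget is 1024.
batches = split_chunks(["chunk A ...", "chunk B ...", "chunk C ..."], max_length=1024)
# -> roughly [[{0: "chunk A ..."}, {1: "chunk B ..."}], [{2: "chunk C ..."}]]
```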
async def run_toc_from_text(chunks, chat_mdl, callback=None):
input_budget = int(chat_mdl.max_length * INPUT_UTILIZATION) - num_tokens_from_string(
TOC_FROM_TEXT_USER + TOC_FROM_TEXT_SYSTEM
)
input_budget = 1024 if input_budget > 1024 else input_budget
chunk_sections = split_chunks(chunks, input_budget)
titles = []
chunks_res = []
async with trio.open_nursery() as nursery:
for i, chunk in enumerate(chunk_sections):
if not chunk:
continue
chunks_res.append({"chunks": chunk})
nursery.start_soon(gen_toc_from_text, chunks_res[-1], chat_mdl, callback)
for chunk in chunks_res:
titles.extend(chunk.get("toc", []))
print(titles, ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>")
# Filter out entries with title == -1
prune = len(titles) > 512
max_len = 12 if prune else 22
filtered = []
for x in titles:
if not x.get("title") or x["title"] == "-1":
continue
if len(rag_tokenizer.tokenize(x["title"]).split(" ")) > max_len:
continue
if re.match(r"[0-9,.()/ -]+$", x["title"]):
continue
filtered.append(x)
logging.info(f"\n\nFiltered TOC sections:\n{filtered}")
# Generate initial level (level/title)
raw_structure = [x.get("title", "") for x in filtered]
# Assign hierarchy levels using LLM
toc_with_levels = assign_toc_levels(raw_structure, chat_mdl, {"temperature": 0.0, "top_p": 0.9})
# Merge structure and content (by index)
prune = len(toc_with_levels) > 512
max_lvl = sorted([t.get("level", "0") for t in toc_with_levels])[-1]
merged = []
for _ , (toc_item, src_item) in enumerate(zip(toc_with_levels, filtered)):
if prune and toc_item.get("level", "0") >= max_lvl:
continue
merged.append({
"level": toc_item.get("level", "0"),
"title": toc_item.get("title", ""),
"chunk_id": src_item.get("chunk_id", ""),
})
return merged
TOC_RELEVANCE_SYSTEM = load_prompt("toc_relevance_system")
TOC_RELEVANCE_USER = load_prompt("toc_relevance_user")
def relevant_chunks_with_toc(query: str, toc:list[dict], chat_mdl, topn: int=6):
import numpy as np
try:
ans = gen_json(
PROMPT_JINJA_ENV.from_string(TOC_RELEVANCE_SYSTEM).render(),
PROMPT_JINJA_ENV.from_string(TOC_RELEVANCE_USER).render(query=query, toc_json="[\n%s\n]\n"%"\n".join([json.dumps({"level": d["level"], "title":d["title"]}, ensure_ascii=False) for d in toc])),
chat_mdl,
gen_conf={"temperature": 0.0, "top_p": 0.9}
)
print(ans, "::::::::::::::::::::::::::::::::::::", flush=True)
id2score = {}
for ti, sc in zip(toc, ans):
if not isinstance(sc, dict) or sc.get("score", -1) < 1:
continue
for id in ti.get("ids", []):
if id not in id2score:
id2score[id] = []
id2score[id].append(sc["score"]/5.)
for id in id2score.keys():
id2score[id] = np.mean(id2score[id])
return [(id, sc) for id, sc in list(id2score.items()) if sc>=0.3][:topn]
except Exception as e:
logging.exception(e)
return []

View File

@ -0,0 +1,119 @@
You are a robust Table-of-Contents (TOC) extractor.
GOAL
Given a dictionary of chunks {"<chunk_ID>": chunk_text}, extract TOC-like headings and return a strict JSON array of objects:
[
{"title": "", "chunk_id": ""},
...
]
FIELDS
- "title": the heading text (clean, no page numbers or leader dots).
- If any part of a chunk has no valid heading, output that part as {"title":"-1", ...}.
- "chunk_id": the chunk ID (string).
- One chunk can yield multiple JSON objects in order (unmatched text + one or more headings).
RULES
1) Preserve input chunk order strictly.
2) If a chunk contains multiple headings, expand them in order:
- Pre-heading narrative → {"title":"-1","chunk_id":"<chunk_ID>"}
- Then each heading → {"title":"...","chunk_id":"<chunk_ID>"}
3) Do not merge outputs across chunks; each object refers to exactly one chunk ID.
4) "title" must be non-empty (or exactly "-1"). "chunk_id" must be a string (chunk ID).
5) When ambiguous, prefer "-1" unless the text strongly looks like a heading.
HEADING DETECTION (cues, not hard rules)
- Appears near line start, short isolated phrase, often followed by content.
- May contain separators: — —— - : · •
- Numbering styles:
• 第[一二三四五六七八九十百]+(篇|章|节|条)
• [(]?[一二三四五六七八九十]+[)]?
• [(]?[①②③④⑤⑥⑦⑧⑨⑩][)]?
• ^\d+(\.\d+)*[).]?\s*
• ^[IVXLCDM]+[).]
• ^[A-Z][).]
- Canonical section cues (general only):
Common heading indicators include words such as:
"Overview", "Introduction", "Background", "Purpose", "Scope", "Definition",
"Method", "Procedure", "Result", "Discussion", "Summary", "Conclusion",
"Appendix", "Reference", "Annex", "Acknowledgment", "Disclaimer".
These are soft cues, not strict requirements.
- Length restriction:
• Chinese heading: ≤25 characters
• English heading: ≤80 characters
- Exclude long narrative sentences, continuous prose, or bullet-style lists → output as "-1".
OUTPUT FORMAT
- Return ONLY a valid JSON array of {"title","chunk_id"} objects.
- No reasoning or commentary.
EXAMPLES
Example 1 — No heading
Input:
[{"0": "Copyright page · Publication info (ISBN 123-456). All rights reserved."}, ...]
Output:
[
{"title":"-1","chunk_id":"0"},
...
]
Example 2 — One heading
Input:
[{"1": "Chapter 1: General Provisions This chapter defines the overall rules…"}, ...]
Output:
[
{"title":"Chapter 1: General Provisions","chunk_id":"1"},
...
]
Example 3 — Narrative + heading
Input:
[{"2": "This paragraph introduces the background and goals. Section 2: Definitions Key terms are explained…"}, ...]
Output:
[
{"title":"Section 2: Definitions","chunk_id":"2"},
...
]
Example 4 — Multiple headings in one chunk
Input:
[{"3": "Declarations and Commitments (I) Party B commits… (II) Party C commits… Appendix A Data Specification"}, ...]
Output:
[
{"title":"Declarations and Commitments","chunk_id":"3"},
{"title":"(I) Party B commits","chunk_id":"3"},
{"title":"(II) Party C commits","chunk_id":"3"},
{"title":"Appendix A Data Specification","chunk_id":"3"},
...
]
Example 5 — Numbering styles
Input:
[{"4": "1. Scope: Defines boundaries. 2) Definitions: Terms used. III) Methods Overview."}, ...]
Output:
[
{"title":"1. Scope","chunk_id":"4"},
{"title":"2) Definitions","chunk_id":"4"},
{"title":"III) Methods Overview","chunk_id":"4"},
...
]
Example 6 — Long list (NOT headings)
Input:
{"5": "Item list: apples, bananas, strawberries, blueberries, mangos, peaches"}, ...]
Output:
[
{"title":"-1","chunk_id":"5"},
...
]
Example 7 — Mixed Chinese/English
Input:
{"6": "出版信息略This standard follows industry practices. Chapter 1: Overview 摘要… 第2节术语与缩略语"}, ...]
Output:
[
{"title":"Chapter 1: Overview","chunk_id":"6"},
{"title":"第2节术语与缩略语","chunk_id":"6"},
...
]

View File

@ -0,0 +1,8 @@
OUTPUT FORMAT
- Return ONLY the JSON array.
- Use double quotes.
- No extra commentary.
- Keep language of "title" the same as the input.
INPUT
{{text}}

View File

@ -0,0 +1,118 @@
# System Prompt: TOC Relevance Evaluation
You are an expert logical reasoning assistant specializing in hierarchical Table of Contents (TOC) relevance evaluation.
## GOAL
You will receive:
1. A JSON list of TOC items, each with fields:
```json
{
"level": <integer>, // e.g., 1, 2, 3
"title": <string> // section title
}
```
2. A user query (natural language question).
You must assign a **relevance score** (integer) to every TOC entry, based on how related its `title` is to the `query`.
---
## RULES
### Scoring System
- 5 → highly relevant (directly answers or matches the query intent)
- 3 → somewhat related (same topic or partially overlaps)
- 1 → weakly related (vague or tangential)
- 0 → no clear relation
- -1 → explicitly irrelevant or contradictory
### Hierarchy Traversal
- The TOC is hierarchical: smaller `level` = higher layer (e.g., level 1 is top-level, level 2 is a subsection).
- You must traverse in **hierarchical order** — interpret the structure based on levels (1 > 2 > 3).
- If a high-level item (level 1) is strongly related (score 5), its child items (level 2, 3) are likely relevant too.
- If a high-level item is unrelated (-1 or 0), its deeper children are usually less relevant unless the titles clearly match the query.
- Lower (deeper) levels provide more specific content; prefer assigning higher scores if they directly match the query.
### Output Format
Return a **JSON array**, preserving the input order but adding a new key `"score"`:
```json
[
{"level": 1, "title": "Introduction", "score": 0},
{"level": 2, "title": "Definition of Sustainability", "score": 5}
]
```
### Constraints
- Output **only the JSON array** — no explanations or reasoning text.
### EXAMPLES
#### Example 1
Input TOC:
[
{"level": 1, "title": "Machine Learning Overview"},
{"level": 2, "title": "Supervised Learning"},
{"level": 2, "title": "Unsupervised Learning"},
{"level": 3, "title": "Applications of Deep Learning"}
]
Query:
"How is deep learning used in image classification?"
Output:
[
{"level": 1, "title": "Machine Learning Overview", "score": 3},
{"level": 2, "title": "Supervised Learning", "score": 3},
{"level": 2, "title": "Unsupervised Learning", "score": 0},
{"level": 3, "title": "Applications of Deep Learning", "score": 5}
]
---
#### Example 2
Input TOC:
[
{"level": 1, "title": "Marketing Basics"},
{"level": 2, "title": "Consumer Behavior"},
{"level": 2, "title": "Digital Marketing"},
{"level": 3, "title": "Social Media Campaigns"},
{"level": 3, "title": "SEO Optimization"}
]
Query:
"What are the best online marketing methods?"
Output:
[
{"level": 1, "title": "Marketing Basics", "score": 3},
{"level": 2, "title": "Consumer Behavior", "score": 1},
{"level": 2, "title": "Digital Marketing", "score": 5},
{"level": 3, "title": "Social Media Campaigns", "score": 5},
{"level": 3, "title": "SEO Optimization", "score": 5}
]
---
#### Example 3
Input TOC:
[
{"level": 1, "title": "Physics Overview"},
{"level": 2, "title": "Classical Mechanics"},
{"level": 3, "title": "Newtons Laws"},
{"level": 2, "title": "Thermodynamics"},
{"level": 3, "title": "Entropy and Heat Transfer"}
]
Query:
"What is entropy?"
Output:
[
{"level": 1, "title": "Physics Overview", "score": 3},
{"level": 2, "title": "Classical Mechanics", "score": 0},
{"level": 3, "title": "Newtons Laws", "score": -1},
{"level": 2, "title": "Thermodynamics", "score": 5},
{"level": 3, "title": "Entropy and Heat Transfer", "score": 5}
]

View File

@ -0,0 +1,17 @@
# User Prompt: TOC Relevance Evaluation
You will now receive:
1. A JSON list of TOC items (each with `level` and `title`)
2. A user query string.
Traverse the TOC hierarchically based on level numbers and assign scores (5,3,1,0,-1) according to the rules in the system prompt.
Output **only** the JSON array with the added `"score"` field.
---
**Input TOC:**
{{ toc_json }}
**Query:**
{{ query }}

View File

@ -32,7 +32,7 @@ from api.utils.log_utils import init_root_logger, get_project_base_directory
from graphrag.general.index import run_graphrag_for_kb from graphrag.general.index import run_graphrag_for_kb
from graphrag.utils import get_llm_cache, set_llm_cache, get_tags_from_cache, set_tags_to_cache from graphrag.utils import get_llm_cache, set_llm_cache, get_tags_from_cache, set_tags_to_cache
from rag.flow.pipeline import Pipeline from rag.flow.pipeline import Pipeline
from rag.prompts import keyword_extraction, question_proposal, content_tagging from rag.prompts.generator import keyword_extraction, question_proposal, content_tagging, run_toc_from_text
import logging import logging
import os import os
from datetime import datetime from datetime import datetime
@ -370,6 +370,38 @@ async def build_chunks(task, progress_callback):
nursery.start_soon(doc_question_proposal, chat_mdl, d, task["parser_config"]["auto_questions"]) nursery.start_soon(doc_question_proposal, chat_mdl, d, task["parser_config"]["auto_questions"])
progress_callback(msg="Question generation {} chunks completed in {:.2f}s".format(len(docs), timer() - st)) progress_callback(msg="Question generation {} chunks completed in {:.2f}s".format(len(docs), timer() - st))
if task["parser_id"].lower() == "naive" and task["parser_config"].get("toc_extraction", False):
progress_callback(msg="Start to generate table of content ...")
chat_mdl = LLMBundle(task["tenant_id"], LLMType.CHAT, llm_name=task["llm_id"], lang=task["language"])
docs = sorted(docs, key=lambda d:(
d.get("page_num_int", 0)[0] if isinstance(d.get("page_num_int", 0), list) else d.get("page_num_int", 0),
d.get("top_int", 0)[0] if isinstance(d.get("top_int", 0), list) else d.get("top_int", 0)
))
toc: list[dict] = await run_toc_from_text([d["content_with_weight"] for d in docs], chat_mdl, progress_callback)
logging.info("------------ T O C -------------\n"+json.dumps(toc, ensure_ascii=False, indent=' '))
ii = 0
while ii < len(toc):
try:
idx = int(toc[ii]["chunk_id"])
del toc[ii]["chunk_id"]
toc[ii]["ids"] = [docs[idx]["id"]]
if ii == len(toc) -1:
break
for jj in range(idx+1, int(toc[ii+1]["chunk_id"])+1):
toc[ii]["ids"].append(docs[jj]["id"])
except Exception as e:
logging.exception(e)
ii += 1
if toc:
d = copy.deepcopy(docs[-1])
d["content_with_weight"] = json.dumps(toc, ensure_ascii=False)
d["toc_kwd"] = "toc"
d["available_int"] = 0
d["page_num_int"] = 100000000
d["id"] = xxhash.xxh64((d["content_with_weight"] + str(d["doc_id"])).encode("utf-8", "surrogatepass")).hexdigest()
docs.append(d)
if task["kb_parser_config"].get("tag_kb_ids", []): if task["kb_parser_config"].get("tag_kb_ids", []):
progress_callback(msg="Start to tag for every chunk ...") progress_callback(msg="Start to tag for every chunk ...")
kb_ids = task["kb_parser_config"]["tag_kb_ids"] kb_ids = task["kb_parser_config"]["tag_kb_ids"]
@ -380,7 +412,7 @@ async def build_chunks(task, progress_callback):
examples = [] examples = []
all_tags = get_tags_from_cache(kb_ids) all_tags = get_tags_from_cache(kb_ids)
if not all_tags: if not all_tags:
all_tags = settings.retrievaler.all_tags_in_portion(tenant_id, kb_ids, S) all_tags = settings.retriever.all_tags_in_portion(tenant_id, kb_ids, S)
set_tags_to_cache(kb_ids, all_tags) set_tags_to_cache(kb_ids, all_tags)
else: else:
all_tags = json.loads(all_tags) all_tags = json.loads(all_tags)
@ -393,7 +425,7 @@ async def build_chunks(task, progress_callback):
if task_canceled: if task_canceled:
progress_callback(-1, msg="Task has been canceled.") progress_callback(-1, msg="Task has been canceled.")
return return
if settings.retrievaler.tag_content(tenant_id, kb_ids, d, all_tags, topn_tags=topn_tags, S=S) and len(d[TAG_FLD]) > 0: if settings.retriever.tag_content(tenant_id, kb_ids, d, all_tags, topn_tags=topn_tags, S=S) and len(d[TAG_FLD]) > 0:
examples.append({"content": d["content_with_weight"], TAG_FLD: d[TAG_FLD]}) examples.append({"content": d["content_with_weight"], TAG_FLD: d[TAG_FLD]})
else: else:
docs_to_tag.append(d) docs_to_tag.append(d)
@ -645,7 +677,7 @@ async def run_raptor_for_kb(row, kb_parser_config, chat_mdl, embd_mdl, vector_si
chunks = [] chunks = []
vctr_nm = "q_%d_vec"%vector_size vctr_nm = "q_%d_vec"%vector_size
for doc_id in doc_ids: for doc_id in doc_ids:
for d in settings.retrievaler.chunk_list(doc_id, row["tenant_id"], [str(row["kb_id"])], for d in settings.retriever.chunk_list(doc_id, row["tenant_id"], [str(row["kb_id"])],
fields=["content_with_weight", vctr_nm], fields=["content_with_weight", vctr_nm],
sort_by_position=True): sort_by_position=True):
chunks.append((d["content_with_weight"], np.array(d[vctr_nm]))) chunks.append((d["content_with_weight"], np.array(d[vctr_nm])))
@ -659,7 +691,7 @@ async def run_raptor_for_kb(row, kb_parser_config, chat_mdl, embd_mdl, vector_si
raptor_config["threshold"], raptor_config["threshold"],
) )
original_length = len(chunks) original_length = len(chunks)
chunks = await raptor(chunks, row["kb_parser_config"]["raptor"]["random_seed"], callback) chunks = await raptor(chunks, kb_parser_config["raptor"]["random_seed"], callback)
doc = { doc = {
"doc_id": fake_doc_id, "doc_id": fake_doc_id,
"kb_id": [str(row["kb_id"])], "kb_id": [str(row["kb_id"])],
@ -782,8 +814,22 @@ async def do_handle_task(task):
kb_parser_config = kb.parser_config kb_parser_config = kb.parser_config
if not kb_parser_config.get("raptor", {}).get("use_raptor", False): if not kb_parser_config.get("raptor", {}).get("use_raptor", False):
progress_callback(prog=-1.0, msg="Internal error: Invalid RAPTOR configuration") kb_parser_config.update(
return {
"raptor": {
"use_raptor": True,
"prompt": "Please summarize the following paragraphs. Be careful with the numbers, do not make things up. Paragraphs as following:\n {cluster_content}\nThe above is the content you need to summarize.",
"max_token": 256,
"threshold": 0.1,
"max_cluster": 64,
"random_seed": 0,
},
}
)
if not KnowledgebaseService.update_by_id(kb.id, {"parser_config":kb_parser_config}):
progress_callback(prog=-1.0, msg="Internal error: Invalid RAPTOR configuration")
return
# bind LLM for raptor # bind LLM for raptor
chat_model = LLMBundle(task_tenant_id, LLMType.CHAT, llm_name=task_llm_id, lang=task_language) chat_model = LLMBundle(task_tenant_id, LLMType.CHAT, llm_name=task_llm_id, lang=task_language)
# run RAPTOR # run RAPTOR
@ -806,8 +852,25 @@ async def do_handle_task(task):
kb_parser_config = kb.parser_config kb_parser_config = kb.parser_config
if not kb_parser_config.get("graphrag", {}).get("use_graphrag", False): if not kb_parser_config.get("graphrag", {}).get("use_graphrag", False):
progress_callback(prog=-1.0, msg="Internal error: Invalid GraphRAG configuration") kb_parser_config.update(
return {
"graphrag": {
"use_graphrag": True,
"entity_types": [
"organization",
"person",
"geo",
"event",
"category",
],
"method": "light",
}
}
)
if not KnowledgebaseService.update_by_id(kb.id, {"parser_config":kb_parser_config}):
progress_callback(prog=-1.0, msg="Internal error: Invalid GraphRAG configuration")
return
graphrag_conf = kb_parser_config.get("graphrag", {}) graphrag_conf = kb_parser_config.get("graphrag", {})
start_ts = timer() start_ts = timer()

View File

@ -95,6 +95,12 @@ def total_token_count_from_response(resp):
except Exception: except Exception:
pass pass
if hasattr(resp, "usage_metadata") and hasattr(resp.usage_metadata, "total_tokens"):
try:
return resp.usage_metadata.total_tokens
except Exception:
pass
if 'usage' in resp and 'total_tokens' in resp['usage']: if 'usage' in resp and 'total_tokens' in resp['usage']:
try: try:
return resp["usage"]["total_tokens"] return resp["usage"]["total_tokens"]

View File

@ -28,6 +28,7 @@ from rag import settings
from rag.settings import TAG_FLD, PAGERANK_FLD from rag.settings import TAG_FLD, PAGERANK_FLD
from rag.utils import singleton, get_float from rag.utils import singleton, get_float
from api.utils.file_utils import get_project_base_directory from api.utils.file_utils import get_project_base_directory
from api.utils.common import convert_bytes
from rag.utils.doc_store_conn import DocStoreConnection, MatchExpr, OrderByExpr, MatchTextExpr, MatchDenseExpr, \ from rag.utils.doc_store_conn import DocStoreConnection, MatchExpr, OrderByExpr, MatchTextExpr, MatchDenseExpr, \
FusionExpr FusionExpr
from rag.nlp import is_english, rag_tokenizer from rag.nlp import is_english, rag_tokenizer
@ -579,3 +580,52 @@ class ESConnection(DocStoreConnection):
break break
logger.error(f"ESConnection.sql timeout for {ATTEMPT_TIME} times!") logger.error(f"ESConnection.sql timeout for {ATTEMPT_TIME} times!")
return None return None
def get_cluster_stats(self):
"""
curl -XGET "http://{es_host}/_cluster/stats" -H "kbn-xsrf: reporting" to view raw stats.
"""
raw_stats = self.es.cluster.stats()
logger.debug(f"ESConnection.get_cluster_stats: {raw_stats}")
try:
res = {
'cluster_name': raw_stats['cluster_name'],
'status': raw_stats['status']
}
indices_status = raw_stats['indices']
res.update({
'indices': indices_status['count'],
'indices_shards': indices_status['shards']['total']
})
doc_info = indices_status['docs']
res.update({
'docs': doc_info['count'],
'docs_deleted': doc_info['deleted']
})
store_info = indices_status['store']
res.update({
'store_size': convert_bytes(store_info['size_in_bytes']),
'total_dataset_size': convert_bytes(store_info['total_data_set_size_in_bytes'])
})
mappings_info = indices_status['mappings']
res.update({
'mappings_fields': mappings_info['total_field_count'],
'mappings_deduplicated_fields': mappings_info['total_deduplicated_field_count'],
'mappings_deduplicated_size': convert_bytes(mappings_info['total_deduplicated_mapping_size_in_bytes'])
})
node_info = raw_stats['nodes']
res.update({
'nodes': node_info['count']['total'],
'nodes_version': node_info['versions'],
'os_mem': convert_bytes(node_info['os']['mem']['total_in_bytes']),
'os_mem_used': convert_bytes(node_info['os']['mem']['used_in_bytes']),
'os_mem_used_percent': node_info['os']['mem']['used_percent'],
'jvm_versions': node_info['jvm']['versions'][0]['vm_version'],
'jvm_heap_used': convert_bytes(node_info['jvm']['mem']['heap_used_in_bytes']),
'jvm_heap_max': convert_bytes(node_info['jvm']['mem']['heap_max_in_bytes'])
})
return res
except Exception as e:
logger.exception(f"ESConnection.get_cluster_stats: {e}")
return None
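
Editor's note: `convert_bytes` is imported from `api.utils.common` but its body is not part of this diff. A plausible sketch of such a helper, plus a hypothetical use of the new method (both are assumptions, not the actual implementation):

def convert_bytes(num_bytes: int) -> str:
    # Assumed behaviour: render a byte count with a human-readable unit.
    units = ["B", "KB", "MB", "GB", "TB", "PB"]
    size = float(num_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.2f} {unit}"
        size /= 1024

print(convert_bytes(123456789))  # e.g. "117.74 MB"

# Hypothetical usage of the new method on an ESConnection instance:
# stats = docStoreConn.get_cluster_stats()
# if stats:
#     print(stats["status"], stats["store_size"], stats["jvm_heap_used"])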

View File

@ -30,6 +30,7 @@ from rag.settings import PAGERANK_FLD, TAG_FLD
from rag.utils import singleton from rag.utils import singleton
import pandas as pd import pandas as pd
from api.utils.file_utils import get_project_base_directory from api.utils.file_utils import get_project_base_directory
from rag.nlp import is_english
from rag.utils.doc_store_conn import ( from rag.utils.doc_store_conn import (
DocStoreConnection, DocStoreConnection,
@ -40,13 +41,15 @@ from rag.utils.doc_store_conn import (
OrderByExpr, OrderByExpr,
) )
logger = logging.getLogger('ragflow.infinity_conn') logger = logging.getLogger("ragflow.infinity_conn")
def field_keyword(field_name: str): def field_keyword(field_name: str):
# The "docnm_kwd" field is always a string, not list. # The "docnm_kwd" field is always a string, not list.
if field_name == "source_id" or (field_name.endswith("_kwd") and field_name != "docnm_kwd" and field_name != "knowledge_graph_kwd"): if field_name == "source_id" or (field_name.endswith("_kwd") and field_name != "docnm_kwd" and field_name != "knowledge_graph_kwd"):
return True return True
return False return False
def equivalent_condition_to_str(condition: dict, table_instance=None) -> str | None: def equivalent_condition_to_str(condition: dict, table_instance=None) -> str | None:
assert "_id" not in condition assert "_id" not in condition
@ -74,7 +77,7 @@ def equivalent_condition_to_str(condition: dict, table_instance=None) -> str | N
inCond = list() inCond = list()
for item in v: for item in v:
if isinstance(item, str): if isinstance(item, str):
item = item.replace("'","''") item = item.replace("'", "''")
inCond.append(f"filter_fulltext('{k}', '{item}')") inCond.append(f"filter_fulltext('{k}', '{item}')")
if inCond: if inCond:
strInCond = " or ".join(inCond) strInCond = " or ".join(inCond)
@ -86,7 +89,7 @@ def equivalent_condition_to_str(condition: dict, table_instance=None) -> str | N
inCond = list() inCond = list()
for item in v: for item in v:
if isinstance(item, str): if isinstance(item, str):
item = item.replace("'","''") item = item.replace("'", "''")
inCond.append(f"'{item}'") inCond.append(f"'{item}'")
else: else:
inCond.append(str(item)) inCond.append(str(item))
@ -112,13 +115,13 @@ def concat_dataframes(df_list: list[pd.DataFrame], selectFields: list[str]) -> p
df_list2 = [df for df in df_list if not df.empty] df_list2 = [df for df in df_list if not df.empty]
if df_list2: if df_list2:
return pd.concat(df_list2, axis=0).reset_index(drop=True) return pd.concat(df_list2, axis=0).reset_index(drop=True)
schema = [] schema = []
for field_name in selectFields: for field_name in selectFields:
if field_name == 'score()': # Workaround: fix schema is changed to score() if field_name == "score()": # Workaround: fix schema is changed to score()
schema.append('SCORE') schema.append("SCORE")
elif field_name == 'similarity()': # Workaround: fix schema is changed to similarity() elif field_name == "similarity()": # Workaround: fix schema is changed to similarity()
schema.append('SIMILARITY') schema.append("SIMILARITY")
else: else:
schema.append(field_name) schema.append(field_name)
return pd.DataFrame(columns=schema) return pd.DataFrame(columns=schema)
@ -158,9 +161,7 @@ class InfinityConnection(DocStoreConnection):
def _migrate_db(self, inf_conn): def _migrate_db(self, inf_conn):
inf_db = inf_conn.create_database(self.dbName, ConflictType.Ignore) inf_db = inf_conn.create_database(self.dbName, ConflictType.Ignore)
fp_mapping = os.path.join( fp_mapping = os.path.join(get_project_base_directory(), "conf", "infinity_mapping.json")
get_project_base_directory(), "conf", "infinity_mapping.json"
)
if not os.path.exists(fp_mapping): if not os.path.exists(fp_mapping):
raise Exception(f"Mapping file not found at {fp_mapping}") raise Exception(f"Mapping file not found at {fp_mapping}")
schema = json.load(open(fp_mapping)) schema = json.load(open(fp_mapping))
@ -178,16 +179,12 @@ class InfinityConnection(DocStoreConnection):
continue continue
res = inf_table.add_columns({field_name: field_info}) res = inf_table.add_columns({field_name: field_info})
assert res.error_code == infinity.ErrorCode.OK assert res.error_code == infinity.ErrorCode.OK
logger.info( logger.info(f"INFINITY added following column to table {table_name}: {field_name} {field_info}")
f"INFINITY added following column to table {table_name}: {field_name} {field_info}"
)
if field_info["type"] != "varchar" or "analyzer" not in field_info: if field_info["type"] != "varchar" or "analyzer" not in field_info:
continue continue
inf_table.create_index( inf_table.create_index(
f"text_idx_{field_name}", f"text_idx_{field_name}",
IndexInfo( IndexInfo(field_name, IndexType.FullText, {"ANALYZER": field_info["analyzer"]}),
field_name, IndexType.FullText, {"ANALYZER": field_info["analyzer"]}
),
ConflictType.Ignore, ConflictType.Ignore,
) )
@ -221,9 +218,7 @@ class InfinityConnection(DocStoreConnection):
inf_conn = self.connPool.get_conn() inf_conn = self.connPool.get_conn()
inf_db = inf_conn.create_database(self.dbName, ConflictType.Ignore) inf_db = inf_conn.create_database(self.dbName, ConflictType.Ignore)
fp_mapping = os.path.join( fp_mapping = os.path.join(get_project_base_directory(), "conf", "infinity_mapping.json")
get_project_base_directory(), "conf", "infinity_mapping.json"
)
if not os.path.exists(fp_mapping): if not os.path.exists(fp_mapping):
raise Exception(f"Mapping file not found at {fp_mapping}") raise Exception(f"Mapping file not found at {fp_mapping}")
schema = json.load(open(fp_mapping)) schema = json.load(open(fp_mapping))
@ -253,15 +248,11 @@ class InfinityConnection(DocStoreConnection):
continue continue
inf_table.create_index( inf_table.create_index(
f"text_idx_{field_name}", f"text_idx_{field_name}",
IndexInfo( IndexInfo(field_name, IndexType.FullText, {"ANALYZER": field_info["analyzer"]}),
field_name, IndexType.FullText, {"ANALYZER": field_info["analyzer"]}
),
ConflictType.Ignore, ConflictType.Ignore,
) )
self.connPool.release_conn(inf_conn) self.connPool.release_conn(inf_conn)
logger.info( logger.info(f"INFINITY created table {table_name}, vector size {vectorSize}")
f"INFINITY created table {table_name}, vector size {vectorSize}"
)
def deleteIdx(self, indexName: str, knowledgebaseId: str): def deleteIdx(self, indexName: str, knowledgebaseId: str):
table_name = f"{indexName}_{knowledgebaseId}" table_name = f"{indexName}_{knowledgebaseId}"
@ -288,20 +279,21 @@ class InfinityConnection(DocStoreConnection):
""" """
def search( def search(
self, selectFields: list[str], self,
highlightFields: list[str], selectFields: list[str],
condition: dict, highlightFields: list[str],
matchExprs: list[MatchExpr], condition: dict,
orderBy: OrderByExpr, matchExprs: list[MatchExpr],
offset: int, orderBy: OrderByExpr,
limit: int, offset: int,
indexNames: str | list[str], limit: int,
knowledgebaseIds: list[str], indexNames: str | list[str],
aggFields: list[str] = [], knowledgebaseIds: list[str],
rank_feature: dict | None = None aggFields: list[str] = [],
rank_feature: dict | None = None,
) -> tuple[pd.DataFrame, int]: ) -> tuple[pd.DataFrame, int]:
""" """
TODO: Infinity doesn't provide highlight BUG: Infinity returns empty for a highlight field if the query string doesn't use that field.
""" """
if isinstance(indexNames, str): if isinstance(indexNames, str):
indexNames = indexNames.split(",") indexNames = indexNames.split(",")
@ -438,9 +430,7 @@ class InfinityConnection(DocStoreConnection):
matchExpr.extra_options.copy(), matchExpr.extra_options.copy(),
) )
elif isinstance(matchExpr, FusionExpr): elif isinstance(matchExpr, FusionExpr):
builder = builder.fusion( builder = builder.fusion(matchExpr.method, matchExpr.topn, matchExpr.fusion_params)
matchExpr.method, matchExpr.topn, matchExpr.fusion_params
)
else: else:
if filter_cond and len(filter_cond) > 0: if filter_cond and len(filter_cond) > 0:
builder.filter(filter_cond) builder.filter(filter_cond)
@ -455,15 +445,13 @@ class InfinityConnection(DocStoreConnection):
self.connPool.release_conn(inf_conn) self.connPool.release_conn(inf_conn)
res = concat_dataframes(df_list, output) res = concat_dataframes(df_list, output)
if matchExprs: if matchExprs:
res['Sum'] = res[score_column] + res[PAGERANK_FLD] res["Sum"] = res[score_column] + res[PAGERANK_FLD]
res = res.sort_values(by='Sum', ascending=False).reset_index(drop=True).drop(columns=['Sum']) res = res.sort_values(by="Sum", ascending=False).reset_index(drop=True).drop(columns=["Sum"])
res = res.head(limit) res = res.head(limit)
logger.debug(f"INFINITY search final result: {str(res)}") logger.debug(f"INFINITY search final result: {str(res)}")
return res, total_hits_count return res, total_hits_count
def get( def get(self, chunkId: str, indexName: str, knowledgebaseIds: list[str]) -> dict | None:
self, chunkId: str, indexName: str, knowledgebaseIds: list[str]
) -> dict | None:
inf_conn = self.connPool.get_conn() inf_conn = self.connPool.get_conn()
db_instance = inf_conn.get_database(self.dbName) db_instance = inf_conn.get_database(self.dbName)
df_list = list() df_list = list()
@ -476,8 +464,7 @@ class InfinityConnection(DocStoreConnection):
try: try:
table_instance = db_instance.get_table(table_name) table_instance = db_instance.get_table(table_name)
except Exception: except Exception:
logger.warning( logger.warning(f"Table not found: {table_name}, this knowledge base isn't created in Infinity. Maybe it is created in other document engine.")
f"Table not found: {table_name}, this knowledge base isn't created in Infinity. Maybe it is created in other document engine.")
continue continue
kb_res, _ = table_instance.output(["*"]).filter(f"id = '{chunkId}'").to_df() kb_res, _ = table_instance.output(["*"]).filter(f"id = '{chunkId}'").to_df()
logger.debug(f"INFINITY get table: {str(table_list)}, result: {str(kb_res)}") logger.debug(f"INFINITY get table: {str(table_list)}, result: {str(kb_res)}")
@ -487,9 +474,7 @@ class InfinityConnection(DocStoreConnection):
res_fields = self.getFields(res, res.columns.tolist()) res_fields = self.getFields(res, res.columns.tolist())
return res_fields.get(chunkId, None) return res_fields.get(chunkId, None)
def insert( def insert(self, documents: list[dict], indexName: str, knowledgebaseId: str = None) -> list[str]:
self, documents: list[dict], indexName: str, knowledgebaseId: str = None
) -> list[str]:
inf_conn = self.connPool.get_conn() inf_conn = self.connPool.get_conn()
db_instance = inf_conn.get_database(self.dbName) db_instance = inf_conn.get_database(self.dbName)
table_name = f"{indexName}_{knowledgebaseId}" table_name = f"{indexName}_{knowledgebaseId}"
@ -532,7 +517,7 @@ class InfinityConnection(DocStoreConnection):
d[k] = v d[k] = v
elif re.search(r"_feas$", k): elif re.search(r"_feas$", k):
d[k] = json.dumps(v) d[k] = json.dumps(v)
elif k == 'kb_id': elif k == "kb_id":
if isinstance(d[k], list): if isinstance(d[k], list):
d[k] = d[k][0] # since d[k] is a list, but we need a str d[k] = d[k][0] # since d[k] is a list, but we need a str
elif k == "position_int": elif k == "position_int":
@ -561,18 +546,16 @@ class InfinityConnection(DocStoreConnection):
logger.debug(f"INFINITY inserted into {table_name} {str_ids}.") logger.debug(f"INFINITY inserted into {table_name} {str_ids}.")
return [] return []
def update( def update(self, condition: dict, newValue: dict, indexName: str, knowledgebaseId: str) -> bool:
self, condition: dict, newValue: dict, indexName: str, knowledgebaseId: str
) -> bool:
# if 'position_int' in newValue: # if 'position_int' in newValue:
# logger.info(f"update position_int: {newValue['position_int']}") # logger.info(f"update position_int: {newValue['position_int']}")
inf_conn = self.connPool.get_conn() inf_conn = self.connPool.get_conn()
db_instance = inf_conn.get_database(self.dbName) db_instance = inf_conn.get_database(self.dbName)
table_name = f"{indexName}_{knowledgebaseId}" table_name = f"{indexName}_{knowledgebaseId}"
table_instance = db_instance.get_table(table_name) table_instance = db_instance.get_table(table_name)
#if "exists" in condition: # if "exists" in condition:
# del condition["exists"] # del condition["exists"]
clmns = {} clmns = {}
if table_instance: if table_instance:
for n, ty, de, _ in table_instance.show_columns().rows(): for n, ty, de, _ in table_instance.show_columns().rows():
@ -587,7 +570,7 @@ class InfinityConnection(DocStoreConnection):
newValue[k] = v newValue[k] = v
elif re.search(r"_feas$", k): elif re.search(r"_feas$", k):
newValue[k] = json.dumps(v) newValue[k] = json.dumps(v)
elif k == 'kb_id': elif k == "kb_id":
if isinstance(newValue[k], list): if isinstance(newValue[k], list):
newValue[k] = newValue[k][0] # since d[k] is a list, but we need a str newValue[k] = newValue[k][0] # since d[k] is a list, but we need a str
elif k == "position_int": elif k == "position_int":
@ -611,11 +594,11 @@ class InfinityConnection(DocStoreConnection):
del newValue[k] del newValue[k]
else: else:
newValue[k] = v newValue[k] = v
remove_opt = {} # "[k,new_value]": [id_to_update, ...] remove_opt = {} # "[k,new_value]": [id_to_update, ...]
if removeValue: if removeValue:
col_to_remove = list(removeValue.keys()) col_to_remove = list(removeValue.keys())
row_to_opt = table_instance.output(col_to_remove + ['id']).filter(filter).to_df() row_to_opt = table_instance.output(col_to_remove + ["id"]).filter(filter).to_df()
logger.debug(f"INFINITY search table {str(table_name)}, filter {filter}, result: {str(row_to_opt[0])}") logger.debug(f"INFINITY search table {str(table_name)}, filter {filter}, result: {str(row_to_opt[0])}")
row_to_opt = self.getFields(row_to_opt, col_to_remove) row_to_opt = self.getFields(row_to_opt, col_to_remove)
for id, old_v in row_to_opt.items(): for id, old_v in row_to_opt.items():
@ -632,8 +615,8 @@ class InfinityConnection(DocStoreConnection):
logger.debug(f"INFINITY update table {table_name}, filter {filter}, newValue {newValue}.") logger.debug(f"INFINITY update table {table_name}, filter {filter}, newValue {newValue}.")
for update_kv, ids in remove_opt.items(): for update_kv, ids in remove_opt.items():
k, v = json.loads(update_kv) k, v = json.loads(update_kv)
table_instance.update(filter + " AND id in ({0})".format(",".join([f"'{id}'" for id in ids])), {k:"###".join(v)}) table_instance.update(filter + " AND id in ({0})".format(",".join([f"'{id}'" for id in ids])), {k: "###".join(v)})
table_instance.update(filter, newValue) table_instance.update(filter, newValue)
self.connPool.release_conn(inf_conn) self.connPool.release_conn(inf_conn)
return True return True
@ -645,9 +628,7 @@ class InfinityConnection(DocStoreConnection):
try: try:
table_instance = db_instance.get_table(table_name) table_instance = db_instance.get_table(table_name)
except Exception: except Exception:
logger.warning( logger.warning(f"Skipped deleting from table {table_name} since the table doesn't exist.")
f"Skipped deleting from table {table_name} since the table doesn't exist."
)
return 0 return 0
filter = equivalent_condition_to_str(condition, table_instance) filter = equivalent_condition_to_str(condition, table_instance)
logger.debug(f"INFINITY delete table {table_name}, filter {filter}.") logger.debug(f"INFINITY delete table {table_name}, filter {filter}.")
@ -675,37 +656,39 @@ class InfinityConnection(DocStoreConnection):
if not fields: if not fields:
return {} return {}
fieldsAll = fields.copy() fieldsAll = fields.copy()
fieldsAll.append('id') fieldsAll.append("id")
column_map = {col.lower(): col for col in res.columns} column_map = {col.lower(): col for col in res.columns}
matched_columns = {column_map[col.lower()]:col for col in set(fieldsAll) if col.lower() in column_map} matched_columns = {column_map[col.lower()]: col for col in set(fieldsAll) if col.lower() in column_map}
none_columns = [col for col in set(fieldsAll) if col.lower() not in column_map] none_columns = [col for col in set(fieldsAll) if col.lower() not in column_map]
res2 = res[matched_columns.keys()] res2 = res[matched_columns.keys()]
res2 = res2.rename(columns=matched_columns) res2 = res2.rename(columns=matched_columns)
res2.drop_duplicates(subset=['id'], inplace=True) res2.drop_duplicates(subset=["id"], inplace=True)
for column in res2.columns: for column in res2.columns:
k = column.lower() k = column.lower()
if field_keyword(k): if field_keyword(k):
res2[column] = res2[column].apply(lambda v:[kwd for kwd in v.split("###") if kwd]) res2[column] = res2[column].apply(lambda v: [kwd for kwd in v.split("###") if kwd])
elif re.search(r"_feas$", k): elif re.search(r"_feas$", k):
res2[column] = res2[column].apply(lambda v: json.loads(v) if v else {}) res2[column] = res2[column].apply(lambda v: json.loads(v) if v else {})
elif k == "position_int": elif k == "position_int":
def to_position_int(v): def to_position_int(v):
if v: if v:
arr = [int(hex_val, 16) for hex_val in v.split('_')] arr = [int(hex_val, 16) for hex_val in v.split("_")]
v = [arr[i:i + 5] for i in range(0, len(arr), 5)] v = [arr[i : i + 5] for i in range(0, len(arr), 5)]
else: else:
v = [] v = []
return v return v
res2[column] = res2[column].apply(to_position_int) res2[column] = res2[column].apply(to_position_int)
elif k in ["page_num_int", "top_int"]: elif k in ["page_num_int", "top_int"]:
res2[column] = res2[column].apply(lambda v:[int(hex_val, 16) for hex_val in v.split('_')] if v else []) res2[column] = res2[column].apply(lambda v: [int(hex_val, 16) for hex_val in v.split("_")] if v else [])
else: else:
pass pass
for column in none_columns: for column in none_columns:
res2[column] = None res2[column] = None
return res2.set_index("id").to_dict(orient="index") return res2.set_index("id").to_dict(orient="index")
def getHighlight(self, res: tuple[pd.DataFrame, int] | pd.DataFrame, keywords: list[str], fieldnm: str): def getHighlight(self, res: tuple[pd.DataFrame, int] | pd.DataFrame, keywords: list[str], fieldnm: str):
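
Editor's note: `position_int` values are stored as an underscore-joined string of hex fields and decoded back into groups of five integers (the meaning of the five fields is not shown in this diff). A small worked example of the decoding above, illustrative only:

encoded = "1_a_2_3_4_1_b_2_3_4"            # two positions, five hex fields each
arr = [int(h, 16) for h in encoded.split("_")]
positions = [arr[i:i + 5] for i in range(0, len(arr), 5)]
print(positions)                            # [[1, 10, 2, 3, 4], [1, 11, 2, 3, 4]]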
@ -719,23 +702,35 @@ class InfinityConnection(DocStoreConnection):
for i in range(num_rows): for i in range(num_rows):
id = column_id[i] id = column_id[i]
txt = res[fieldnm][i] txt = res[fieldnm][i]
if re.search(r"<em>[^<>]+</em>", txt, flags=re.IGNORECASE | re.MULTILINE):
ans[id] = txt
continue
txt = re.sub(r"[\r\n]", " ", txt, flags=re.IGNORECASE | re.MULTILINE) txt = re.sub(r"[\r\n]", " ", txt, flags=re.IGNORECASE | re.MULTILINE)
txts = [] txts = []
for t in re.split(r"[.?!;\n]", txt): for t in re.split(r"[.?!;\n]", txt):
for w in keywords: if is_english([t]):
t = re.sub( for w in keywords:
r"(^|[ .?/'\"\(\)!,:;-])(%s)([ .?/'\"\(\)!,:;-])" t = re.sub(
% re.escape(w), r"(^|[ .?/'\"\(\)!,:;-])(%s)([ .?/'\"\(\)!,:;-])" % re.escape(w),
r"\1<em>\2</em>\3", r"\1<em>\2</em>\3",
t, t,
flags=re.IGNORECASE | re.MULTILINE, flags=re.IGNORECASE | re.MULTILINE,
) )
if not re.search( else:
r"<em>[^<>]+</em>", t, flags=re.IGNORECASE | re.MULTILINE for w in sorted(keywords, key=len, reverse=True):
): t = re.sub(
re.escape(w),
f"<em>{w}</em>",
t,
flags=re.IGNORECASE | re.MULTILINE,
)
if not re.search(r"<em>[^<>]+</em>", t, flags=re.IGNORECASE | re.MULTILINE):
continue continue
txts.append(t) txts.append(t)
ans[id] = "...".join(txts) if txts:
ans[id] = "...".join(txts)
else:
ans[id] = txt
return ans return ans
def getAggregation(self, res: tuple[pd.DataFrame, int] | pd.DataFrame, fieldnm: str): def getAggregation(self, res: tuple[pd.DataFrame, int] | pd.DataFrame, fieldnm: str):
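
Editor's note: the rewritten highlighter wraps keywords in `<em>` tags, using boundary-aware substitution for English sentences and longest-keyword-first literal substitution otherwise. A condensed, standalone sketch of the two branches (the `english` flag stands in for the `is_english` check):

import re

def highlight(sentence: str, keywords: list[str], english: bool) -> str:
    if english:
        for w in keywords:
            sentence = re.sub(
                r"(^|[ .?/'\"\(\)!,:;-])(%s)([ .?/'\"\(\)!,:;-])" % re.escape(w),
                r"\1<em>\2</em>\3",
                sentence,
                flags=re.IGNORECASE,
            )
    else:
        for w in sorted(keywords, key=len, reverse=True):
            sentence = re.sub(re.escape(w), f"<em>{w}</em>", sentence, flags=re.IGNORECASE)
    return sentence

print(highlight("vector search is fast", ["search"], True))   # vector <em>search</em> is fast
print(highlight("向量检索很快", ["检索"], False))               # 向量<em>检索</em>很快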

View File

@ -91,6 +91,20 @@ class RedisDB:
if self.REDIS.get(a) == b: if self.REDIS.get(a) == b:
return True return True
def info(self):
info = self.REDIS.info()
return {
'redis_version': info["redis_version"],
'server_mode': info["server_mode"],
'used_memory': info["used_memory_human"],
'total_system_memory': info["total_system_memory_human"],
'mem_fragmentation_ratio': info["mem_fragmentation_ratio"],
'connected_clients': info["connected_clients"],
'blocked_clients': info["blocked_clients"],
'instantaneous_ops_per_sec': info["instantaneous_ops_per_sec"],
'total_commands_processed': info["total_commands_processed"]
}
def is_alive(self): def is_alive(self):
return self.REDIS is not None return self.REDIS is not None
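
Editor's note: a hedged sketch of consuming the new `info()` helper; the keys mirror the dict built in the hunk above, and the sample values are made up for illustration:

def summarize_redis_info(info: dict) -> str:
    # Formats the dict returned by the new RedisDB.info() helper (keys as in the diff above).
    return (
        f"Redis {info['redis_version']} ({info.get('server_mode', 'standalone')}): "
        f"{info['used_memory']} used, {info['connected_clients']} clients, "
        f"{info['instantaneous_ops_per_sec']} ops/s"
    )

print(summarize_redis_info({
    "redis_version": "7.2.4", "server_mode": "standalone", "used_memory": "1.10M",
    "total_system_memory": "16.00G", "mem_fragmentation_ratio": 1.2,
    "connected_clients": 3, "blocked_clients": 0,
    "instantaneous_ops_per_sec": 42, "total_commands_processed": 123456,
}))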

View File

@ -33,35 +33,52 @@ class Session(Base):
self.__session_type = "agent" self.__session_type = "agent"
super().__init__(rag, res_dict) super().__init__(rag, res_dict)
def ask(self, question="", stream=True, **kwargs):
def ask(self, question="", stream=False, **kwargs):
"""
Ask a question to the session. If stream=True, yields Message objects as they arrive (SSE streaming).
If stream=False, returns a single Message object for the final answer.
"""
if self.__session_type == "agent": if self.__session_type == "agent":
res = self._ask_agent(question, stream) res = self._ask_agent(question, stream)
elif self.__session_type == "chat": elif self.__session_type == "chat":
res = self._ask_chat(question, stream, **kwargs) res = self._ask_chat(question, stream, **kwargs)
else:
raise Exception(f"Unknown session type: {self.__session_type}")
if stream: if stream:
for line in res.iter_lines(): for line in res.iter_lines(decode_unicode=True):
line = line.decode("utf-8") if not line:
if line.startswith("{"): continue # Skip empty lines
json_data = json.loads(line) line = line.strip()
raise Exception(json_data["message"])
if not line.startswith("data:"): if line.startswith("data:"):
continue content = line[len("data:"):].strip()
json_data = json.loads(line[5:]) if content == "[DONE]":
if json_data["data"] is True or json_data["data"].get("running_status"): break # End of stream
continue else:
message = self._structure_answer(json_data) content = line
yield message
try:
json_data = json.loads(content)
except json.JSONDecodeError:
continue # Skip lines that are not valid JSON
event = json_data.get("event")
if event == "message":
yield self._structure_answer(json_data)
elif event == "message_end":
return # End of message stream
else: else:
try: try:
json_data = json.loads(res.text) json_data = res.json()
except ValueError: except ValueError:
raise Exception(f"Invalid response {res}") raise Exception(f"Invalid response {res}")
return self._structure_answer(json_data) yield self._structure_answer(json_data["data"])
def _structure_answer(self, json_data): def _structure_answer(self, json_data):
answer = json_data["data"]["answer"] answer = json_data["data"]["content"]
reference = json_data["data"].get("reference", {}) reference = json_data["data"].get("reference", {})
temp_dict = { temp_dict = {
"content": answer, "content": answer,

uv.lock (generated, 7041 lines changed): file diff suppressed because it is too large.

View File

@ -3,6 +3,7 @@ import path from 'path';
const config: StorybookConfig = { const config: StorybookConfig = {
stories: ['../src/**/*.mdx', '../src/**/*.stories.@(js|jsx|mjs|ts|tsx)'], stories: ['../src/**/*.mdx', '../src/**/*.stories.@(js|jsx|mjs|ts|tsx)'],
staticDirs: ['../public'],
addons: [ addons: [
'@storybook/addon-webpack5-compiler-swc', '@storybook/addon-webpack5-compiler-swc',
'@storybook/addon-docs', '@storybook/addon-docs',

View File

@ -1,6 +1,7 @@
import '@/locales/config'; import '@/locales/config';
import type { Preview } from '@storybook/react-webpack5'; import type { Preview } from '@storybook/react-webpack5';
import { createElement } from 'react'; import { createElement } from 'react';
import '../public/iconfont.js';
import { TooltipProvider } from '../src/components/ui/tooltip'; import { TooltipProvider } from '../src/components/ui/tooltip';
import '../tailwind.css'; import '../tailwind.css';

View File

@ -59,6 +59,10 @@
`<symbol id="icon-GitHub" viewBox="0 0 1024 1024"> `<symbol id="icon-GitHub" viewBox="0 0 1024 1024">
<path d="M512 42.666667C252.714667 42.666667 42.666667 252.714667 42.666667 512c0 207.658667 134.357333 383.104 320.896 445.269333 23.466667 4.096 32.256-9.941333 32.256-22.272 0-11.178667-0.554667-48.128-0.554667-87.424-117.930667 21.717333-148.437333-28.757333-157.824-55.125333-5.290667-13.525333-28.16-55.168-48.085333-66.304-16.426667-8.832-39.936-30.506667-0.597334-31.104 36.949333-0.597333 63.36 34.005333 72.149334 48.128 42.24 70.954667 109.696 51.029333 136.704 38.698667 4.096-30.506667 16.426667-51.029333 29.909333-62.762667-104.448-11.733333-213.546667-52.224-213.546667-231.765333 0-51.029333 18.176-93.269333 48.128-126.122667-4.736-11.733333-21.162667-59.818667 4.693334-124.373333 0 0 39.296-12.288 129.024 48.128a434.901333 434.901333 0 0 1 117.333333-15.829334c39.936 0 79.829333 5.248 117.333333 15.829334 89.770667-61.013333 129.066667-48.128 129.066667-48.128 25.813333 64.554667 9.429333 112.64 4.736 124.373333 29.909333 32.853333 48.085333 74.538667 48.085333 126.122667 0 180.138667-109.696 220.032-214.144 231.765333 17.024 14.677333 31.701333 42.837333 31.701334 86.826667 0 62.762667-0.597333 113.237333-0.597334 129.066666 0 12.330667 8.789333 26.965333 32.256 22.272C846.976 895.104 981.333333 719.104 981.333333 512c0-259.285333-210.005333-469.333333-469.333333-469.333333z"></path> <path d="M512 42.666667C252.714667 42.666667 42.666667 252.714667 42.666667 512c0 207.658667 134.357333 383.104 320.896 445.269333 23.466667 4.096 32.256-9.941333 32.256-22.272 0-11.178667-0.554667-48.128-0.554667-87.424-117.930667 21.717333-148.437333-28.757333-157.824-55.125333-5.290667-13.525333-28.16-55.168-48.085333-66.304-16.426667-8.832-39.936-30.506667-0.597334-31.104 36.949333-0.597333 63.36 34.005333 72.149334 48.128 42.24 70.954667 109.696 51.029333 136.704 38.698667 4.096-30.506667 16.426667-51.029333 29.909333-62.762667-104.448-11.733333-213.546667-52.224-213.546667-231.765333 0-51.029333 18.176-93.269333 48.128-126.122667-4.736-11.733333-21.162667-59.818667 4.693334-124.373333 0 0 39.296-12.288 129.024 48.128a434.901333 434.901333 0 0 1 117.333333-15.829334c39.936 0 79.829333 5.248 117.333333 15.829334 89.770667-61.013333 129.066667-48.128 129.066667-48.128 25.813333 64.554667 9.429333 112.64 4.736 124.373333 29.909333 32.853333 48.085333 74.538667 48.085333 126.122667 0 180.138667-109.696 220.032-214.144 231.765333 17.024 14.677333 31.701333 42.837333 31.701334 86.826667 0 62.762667-0.597333 113.237333-0.597334 129.066666 0 12.330667 8.789333 26.965333 32.256 22.272C846.976 895.104 981.333333 719.104 981.333333 512c0-259.285333-210.005333-469.333333-469.333333-469.333333z"></path>
</symbol>` + </symbol>` +
`<symbol id="icon-more" viewBox="0 0 1024 1024">
<path d="M0 0h1024v1024H0z" opacity=".01"></path>
<path d="M867.072 141.184H156.032a32 32 0 0 0 0 64h711.04a32 32 0 0 0 0-64z m0.832 226.368H403.2a32 32 0 0 0 0 64h464.704a32 32 0 0 0 0-64zM403.2 573.888h464.704a32 32 0 0 1 0 64H403.2a32 32 0 0 1 0-64z m464.704 226.368H156.864a32 32 0 0 0 0 64h711.04a32 32 0 0 0 0-64zM137.472 367.552v270.336l174.528-122.24-174.528-148.096z" ></path>
</symbol>` +
'</svg>'), '</svg>'),
((h) => { ((h) => {
var a = (l = (l = document.getElementsByTagName('script'))[ var a = (l = (l = document.getElementsByTagName('script'))[

Some files were not shown because too many files have changed in this diff.