Compare commits


959 Commits

Author SHA1 Message Date
448fa1c4d4 Robust handling of abnormal responses from LLMs. (#4747)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-06 17:34:53 +08:00
e786f596e2 Updated template description (#4744)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-02-06 17:14:13 +08:00
fe9e9a644f Preparation for release. (#4739)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-02-06 15:15:13 +08:00
d2961b2d25 Feat: Supports docx in the MANUAL chunk method and docx, markdown, and PDF in the Q&A chunk method #3957 (#4741)
### What problem does this PR solve?

Feat: Supports docx in the MANUAL chunk method and docx, markdown, and
PDF in the Q&A chunk method #3957
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-02-06 14:41:46 +08:00
a73e1750b6 Resume content flow in ExecSQL (#4738)
Resume the content flow instead of closing with errors.
2025-02-06 12:09:47 +08:00
c1d71e9a3f Fix: New user can't accept invite without configuring LLM API #4379 (#4736)
### What problem does this PR solve?

Fix: New user can't accept invite without configuring LLM API #4379

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-06 11:41:02 +08:00
2a07eb69a7 Fix too long context issue. (#4735)
### What problem does this PR solve?

#4728

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-06 11:37:23 +08:00
a3a70431f3 Avoid LLMs misunderstanding numeric values (#4724)
Avoid LLMs misunderstanding numeric values.
2025-02-06 10:17:56 +08:00
6f2c3a3c3c Fix too long query exception. (#4729)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-06 10:11:52 +08:00
54803c4ef2 Fix: pt_br translation and add pt to knowledge-add (#4674)
### What problem does this PR solve?

Fix the Portuguese (Brazil) translation.
Add Portuguese to the knowledge-adding settings.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
- [X] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
2025-02-05 18:20:24 +08:00
efbaa484d7 Fix: Chat Assistant page goes blank #4566 (#4727)
### What problem does this PR solve?

Fix: Chat Assistant page goes blank #4566

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-05 18:19:38 +08:00
3411d0a2ce Added cuda_is_available (#4725)
### What problem does this PR solve?

Added cuda_is_available

### Type of change

- [x] Refactoring
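As a rough illustration of what a `cuda_is_available` helper buys (this is a hypothetical sketch, not the actual RAGFlow code, and it assumes PyTorch is the CUDA-aware dependency):

```python
# Hypothetical sketch: guard GPU usage behind a safe capability check.
# Falls back to CPU when torch is absent or the CUDA runtime is unusable.
def cuda_is_available() -> bool:
    try:
        import torch
        return torch.cuda.is_available()
    except Exception:
        # torch not installed, or a broken CUDA setup on a CPU-only host
        return False

device = "cuda" if cuda_is_available() else "cpu"
```

Centralizing the check means downstream code asks one question instead of probing the GPU stack in several places.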
2025-02-05 18:01:23 +08:00
283d036cba Fitin for infinity. (#4722)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2025-02-05 16:47:05 +08:00
307717b045 Fix ExeSQL re-generate SQL issue. (#4717)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-05 16:23:48 +08:00
8e74bc8e42 Update readme. (#4715)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-02-05 14:24:30 +08:00
4b9c4c0705 Update deepseek model provider info. (#4714)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-02-05 13:43:40 +08:00
b2bb560007 Import akshare lazily. (#4708)
### What problem does this PR solve?

#4696

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
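A lazy import defers loading a heavy optional dependency until first use. A minimal sketch of the pattern follows; `LazyModule` is a hypothetical helper, and the stdlib `json` module stands in for `akshare` so the example is self-contained:

```python
import importlib


class LazyModule:
    """Defer importing a heavy dependency until its first attribute access."""

    def __init__(self, name: str):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Only reached when normal attribute lookup fails, i.e. on module use.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)


# In the real change the target would be "akshare"; json keeps this runnable.
json_mod = LazyModule("json")
```

With this in place, processes that never touch the dependency never pay its import cost (or crash on its absence).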
2025-02-05 12:04:11 +08:00
e1526846da Fixed GPU detection on CPU only environment (#4711)
### What problem does this PR solve?

Fixed GPU detection on CPU only environment. Close #4692

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-05 12:02:43 +08:00
7a7f98b1a9 Reverted build slim image documentation (#4687)
### Fixed documentation for building docker image

I'm assuming that this was a mistake (but I could be missing something)
[here](https://github.com/infiniflow/ragflow/pull/4658/files#:~:text=docker%20build%20%2D%2Dbuild%2Darg%20LIGHTEN%3D1%20%2Df%20Dockerfile%20%2Dt%20infiniflow/ragflow%3Anightly%2Dslim%20.)
where the Docker build instructions for the slim image were changed
from an actual `docker build` command to the macOS `docker compose up`
command. This just reverts that change.

### Type of change

- [X] Documentation Update
2025-02-05 10:35:08 +08:00
036f37a627 fix: err object has no attribute 'iter_lines' (#4686)
### What problem does this PR solve?

ERROR: 'Stream' object has no attribute 'iter_lines' with reference to
Claude/Anthropic chat streams

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kyle Olmstead <k.olmstead@offensive-security.com>
2025-02-01 22:39:30 +08:00
191587346c Fix macOS startup (#4658)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/4319

This pull request includes several changes to improve the Docker setup
and documentation for the project. The most important changes include
updating the Dockerfile to support modern versions of Rust, adding a new
Docker Compose configuration for macOS, and updating the build
instructions in the documentation.

Improvements to Docker setup:

*
[`Dockerfile`](diffhunk://#diff-dd2c0eb6ea5cfc6c4bd4eac30934e2d5746747af48fef6da689e85b752f39557L80-R107):
Added installation steps for a modern version of Rust and updated the
logic for installing the correct ODBC driver based on the architecture.
*
[`docker/docker-compose-macos.yml`](diffhunk://#diff-8e8587143bb2442c02f6dff4caa217ebbe3ba4ec8e7c23b2e568886a67b00eafR1-R56):
Added a new Docker Compose configuration file specifically for macOS,
including service dependencies, environment variables, and volume
mappings.

Updates to documentation:

*
[`docs/guides/develop/build_docker_image.mdx`](diffhunk://#diff-d6136bb897f7245aae33b0accbcf7c508ceaef005c545f9f09cad3cada840a19L44-R44):
Updated the build instructions to use the new Docker Compose
configuration for macOS instead of the previous Docker build command.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

---------

Signed-off-by: Samuel Giffard <samuel.giffard@mytomorrows.com>
2025-01-28 16:51:16 +08:00
50055c47ec Refine Infinity mapping. (#4665)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2025-01-27 18:53:49 +08:00
6f30397bb5 Adapt Infinity to GraphRAG. (#4663)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-27 18:35:18 +08:00
d970d0ef39 Fix typos (#4662)
### What problem does this PR solve?

Fix typos in the documents

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-01-27 15:45:16 +08:00
ce8658aa84 Update FAQ (#4661)
### What problem does this PR solve?

Update FAQ

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-01-27 12:56:48 +08:00
bc6a768b90 Refactor the delimiter name (#4659)
### What problem does this PR solve?

Rename from 'Diagonal' to 'Forward slash'
Rename from 'Minus' to 'Dash'

issue: #4655 

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-01-27 11:04:43 +08:00
656a2fab41 Refresh deepseek models. (#4660)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-27 11:01:39 +08:00
47b28a27a6 Added description of the Iteration component (#4656)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-27 10:12:23 +08:00
c354239b79 Make Infinity adapt to existence conditions. (#4657)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 18:45:36 +08:00
b4303f6010 Update README. (#4654)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-01-26 14:14:58 +08:00
4776fa5e4e Refactor for total_tokens. (#4652)
### What problem does this PR solve?

#4567
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 13:54:26 +08:00
c24137bd11 Fix too long integer for Table. (#4651)
### What problem does this PR solve?

#4594

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
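One plausible shape of an over-long-integer fix is to store values outside the 64-bit range as strings, since search backends typically index integers as signed int64. This is a hypothetical sketch, not the PR's actual code:

```python
# int64 bounds that a typical search backend (e.g. Elasticsearch "long") can index.
INT64_MAX = 2**63 - 1
INT64_MIN = -(2**63)


def normalize_cell(value):
    """Store integers that would overflow the backend's range as strings."""
    if isinstance(value, int) and not (INT64_MIN <= value <= INT64_MAX):
        return str(value)
    return value


row = [42, 10**25, -7]
safe_row = [normalize_cell(v) for v in row]
```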
2025-01-26 12:54:58 +08:00
4011c8f68c Fix potential error. (#4650)
### What problem does this PR solve?
#4622

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 12:38:32 +08:00
2cb8edc42c Added GPUStack (#4649)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-26 12:25:02 +08:00
284b4d4430 Align table heading with 'System Model Settings' (#4646)

### What problem does this PR solve?

### Type of change


- [x] Documentation Update
2025-01-26 11:12:38 +08:00
42f7261509 Fix param error. (#4645)
### What problem does this PR solve?

#4633

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 10:40:02 +08:00
f33415b751 refactor: better vertical alignment for icon and text in some setting buttons (#4615)
### What problem does this PR solve?

Fixed vertical alignment issues between icons and text in API-Key and
System Model Settings buttons. This improves visual consistency across
the settings interface.

### Type of change

- [x] Refactoring

Before: Icons and text were slightly misaligned vertically
<img width="635" alt="Screenshot 2025-01-23 at 20 22 46"
src="https://github.com/user-attachments/assets/28f15637-d3fd-45a2-aae8-ca72fb12a88e"
/>

After: Icons and text are now properly centered with consistent spacing
<img width="540" alt="Screenshot 2025-01-23 at 20 23 02"
src="https://github.com/user-attachments/assets/98bb0ca5-6995-42d8-bd23-8a8f44ec0209"
/>
2025-01-26 10:36:03 +08:00
530b0dab17 Make Infinity able to calculate embedding similarity only. (#4644)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 10:29:52 +08:00
c4b1c4e6f4 Fix onnxruntime-gpu marks (#4643)
### What problem does this PR solve?

Fix onnxruntime-gpu marks

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 09:37:59 +08:00
3c2c8942d5 Removed onnxruntime (#4632)
### What problem does this PR solve?

Removed onnxruntime. It conflicts with the onnxruntime-gpu.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-24 23:41:52 +08:00
71c132f76d Make infinity adapt (#4635)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-24 17:45:04 +08:00
9d717f0b6e Fix csv reader exception. (#4628)
### What problem does this PR solve?

#4552
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
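CSV ingestion commonly fails on non-UTF-8 files; a typical fix is to try a short list of encodings before parsing. The sketch below is illustrative only (the encodings chosen are assumptions, not the PR's actual list):

```python
import csv
import io


def read_csv_rows(data: bytes):
    """Decode CSV bytes with a fallback chain instead of raising on the
    first UnicodeDecodeError; latin-1 at the end always succeeds."""
    for enc in ("utf-8", "gb18030", "latin-1"):
        try:
            text = data.decode(enc)
            break
        except UnicodeDecodeError:
            continue
    return list(csv.reader(io.StringIO(text)))


rows = read_csv_rows("q,a\nhi,there\n".encode("utf-8"))
```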
2025-01-24 14:47:19 +08:00
8b49734241 Added onnxruntime-gpu (#4631)
### What problem does this PR solve?

Added onnxruntime-gpu

### Type of change

- [x] Refactoring
2025-01-24 14:33:21 +08:00
898ae7fa80 Fix misplaced vector similarity weight and token similarity weight. (#4627)
### What problem does this PR solve?

#4610

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-24 12:41:06 +08:00
fa4277225d Added document: Accelerate document indexing and retrieval (#4600)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-24 11:58:15 +08:00
1bff6b7333 Fix t_ocr.py for PNG image. (#4625)
### What problem does this PR solve?
#4586

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-24 11:47:27 +08:00
e9ccba0395 Add timestamp to messages (#4624)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
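Attaching a timestamp at message-creation time might look like the following; the `make_message` helper and the `created_at` field name are hypothetical, used only to illustrate the feature:

```python
from datetime import datetime, timezone


def make_message(role: str, content: str) -> dict:
    """Build a chat message dict with a UTC creation timestamp,
    so the UI can show when each turn happened."""
    return {
        "role": role,
        "content": content,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


msg = make_message("user", "hello")
```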
2025-01-24 11:07:55 +08:00
f1d9f4290e Fix TogetherAIEmbed. (#4623)
### What problem does this PR solve?

#4567

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-24 10:29:30 +08:00
4230402fbb deepdoc use GPU if possible (#4618)
### What problem does this PR solve?

deepdoc use GPU if possible

### Type of change

- [x] Refactoring
2025-01-24 09:48:02 +08:00
e14d6ae441 Refactor. (#4612)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2025-01-23 18:56:02 +08:00
55f2b7c4d5 Code format. (#4611)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-01-23 18:43:32 +08:00
07b3e55903 Feat: Set the style of the header tag #3221 (#4608)
### What problem does this PR solve?

Feat: Set the style of the header tag #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-23 18:16:52 +08:00
86892959a0 Rebuild graph when it's out of date. (#4607)
### What problem does this PR solve?

#4543

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-01-23 17:26:20 +08:00
bbc1d02c96 Template conversion adds Jinja2 syntax support (#4545)
### What problem does this PR solve?

Template conversion adds Jinja2 syntax support

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: wangrui <wangrui@haima.me>
Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
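As a rough illustration of what Jinja2-style templating adds, here is a minimal stand-in renderer for `{{ name }}` placeholders; the real change wires in the `jinja2` package, which also supports loops, filters, and conditionals:

```python
import re


def render(template: str, context: dict) -> str:
    """Substitute {{ name }} placeholders from a context dict.
    Unknown names are left untouched rather than raising."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        template,
    )


greeting = render("Hello, {{ name }}!", {"name": "RAGFlow"})
```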
2025-01-23 17:11:14 +08:00
b23a4a8fea Feat: Add keyword item to AssistantSetting #4543 (#4603)
### What problem does this PR solve?

Feat: Add keyword item to AssistantSetting #4543

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-23 15:58:22 +08:00
240e7d7c22 Unified user_service.py (#4606)
### What problem does this PR solve?

Unified user_service.py

### Type of change

- [x] Refactoring
2025-01-23 15:49:21 +08:00
52fa8bdcf3 fix bug KGSearch.search() got an unexpected keyword argument 'rank_feature' (#4601)
The `search` method in `graphrag/search.py` was missing the `rank_feature` parameter.

Co-authored-by: xubh <xubh@wikiflyer.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-01-23 14:03:18 +08:00
13f04b7cca Fix pdf applying Q&A issue. (#4599)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-23 12:30:46 +08:00
c4b9e903c8 Fix index not found for new user. (#4597)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-23 11:45:22 +08:00
15f9406e7b Fix: Capture the problem that the knowledge graph interface returns null and causes page errors #4543 (#4598)
### What problem does this PR solve?

Fix: Capture the problem that the knowledge graph interface returns null
and causes page errors #4543

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-23 11:32:49 +08:00
c5c0dd2da0 Feat: Display the knowledge graph on the knowledge base page #4543 (#4587)
### What problem does this PR solve?

Feat: Display the knowledge graph on the knowledge base page #4543

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-22 19:43:27 +08:00
dd0ebbea35 Light GraphRAG (#4585)
### What problem does this PR solve?

#4543

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-22 19:43:14 +08:00
1a367664f1 Remove usage of eval() from postprocess.py (#4571)
Remove usage of `eval()` from postprocess.py

### What problem does this PR solve?

The use of `eval()` is a potential security risk. While the use of
`eval()` is guarded and thus not a security risk normally, `assert`s
aren't run if `-O` or `-OO` is passed to the interpreter, and as such
then the guard would not apply. In any case there is no reason to use
`eval()` here at all.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Other (please describe):

Potential security fix if somehow the passed `modul_name` could be user
controlled.
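The safe replacement described above is an explicit attribute lookup instead of `eval()`, so unexpected input raises rather than executing code (and no `assert` guard is needed, which matters because `python -O` strips asserts). In this sketch the stdlib `operator` module stands in for the repo's actual module:

```python
import operator


def get_op(op_name: str):
    """Resolve an operator by name via getattr; a bad name raises
    ValueError instead of being passed to eval()."""
    fn = getattr(operator, op_name, None)
    if fn is None:
        raise ValueError(f"unknown operator: {op_name}")
    return fn


add = get_op("add")
```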
2025-01-22 19:37:24 +08:00
336e5fb37f Renamed entrypoint_task_executor.sh to entrypoint-parser.sh (#4583)
### What problem does this PR solve?

Renamed entrypoint_task_executor.sh to entrypoint-parser.sh

### Type of change

- [x] Refactoring
2025-01-22 18:21:51 +08:00
598e142c85 Re-fix docs (#4584)
Re-fix the docs.
2025-01-22 18:13:26 +08:00
cbc3c5297e Revert the chat history for rewrite. (#4579)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-22 16:55:11 +08:00
4b82275ae5 Fix Latest Release button on PT README (#4572)
### What problem does this PR solve?

The Latest Release button on the Portuguese README does not look as it should.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-01-22 11:46:55 +08:00
3894de895b Update comments (#4569)
### What problem does this PR solve?

Add license statement.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-01-21 20:52:28 +08:00
583050a876 minor (#4568)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-21 20:02:04 +08:00
a2946b0fb0 Added descriptions of the Note and Template components (#4560)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-21 19:51:30 +08:00
21052b2972 Feat: Support for Portuguese language #4557 (#4558)
### What problem does this PR solve?

Feat: Support for Portuguese language #4557

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-21 11:32:47 +08:00
5632613eb5 Add language Portuguese (pt-BR) (#4550)
### What problem does this PR solve?

Add Portuguese (Brazil) language

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-01-21 11:22:29 +08:00
fc35821f81 Feat: Make the scroll bar of the DatasetSettings page appear inside #3221 (#4548)
### What problem does this PR solve?

Feat: Make the scroll bar of the DatasetSettings page appear inside
#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-21 10:19:38 +08:00
db80376427 Added entrypoint for task executor (#4551)
### What problem does this PR solve?

Added entrypoint for task executor

### Type of change

- [x] Refactoring
2025-01-20 22:49:46 +08:00
99430a7db7 Added description of the Concentrator component (#4549)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-20 19:54:02 +08:00
a3391c4d55 Feat: Rename document name #3221 (#4544)
### What problem does this PR solve?

Feat: Rename document name #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-20 16:49:45 +08:00
e0f52eebc6 Added descriptions of Rewrite and Switch components. To be continued (#4526)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-01-20 15:56:21 +08:00
367babda2f Make Categorize see more chat history. (#4538)
### What problem does this PR solve?

#4521

### Type of change
- [x] Performance Improvement
2025-01-20 11:57:56 +08:00
2962284c79 Bump akshare (#4536)
### What problem does this PR solve?

Bump akshare. Close #4525 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-20 11:17:59 +08:00
75e1981e13 Remove use of eval() from recognizer.py (#4480)
`eval(op_type)` -> `getattr(operators, op_type)`

### What problem does this PR solve?

Using `eval()` can lead to code injections and is entirely unnecessary
here.

### Type of change

- [x] Other (please describe):

Best practice code improvement, preventing the possibility of code
injection.
2025-01-20 09:52:47 +08:00
4f9f9405b8 Remove use of eval() from ocr.py (#4481)
`eval(op_name)` -> `getattr(operators, op_name)`

### What problem does this PR solve?

Using `eval()` can lead to code injections and is entirely unnecessary
here.

### Type of change

- [x] Other (please describe):

Best practice code improvement, preventing the possibility of code
injection.
2025-01-20 09:52:30 +08:00
938492cbae Fix: Rename segmented.tsx #3221 (#4522)
### What problem does this PR solve?

Fix: Rename segmented.tsx #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-17 19:01:09 +08:00
f4d084bcf1 Fix doc progress issue. (#4520)
### What problem does this PR solve?

#4516
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-17 18:28:15 +08:00
69984554a5 Fix: Translate the operator options of the Switch operator #1739 (#4519)
### What problem does this PR solve?

Fix: Translate the operator options of the Switch operator #1739

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-17 18:23:08 +08:00
03d7a51d49 add file README_tzh.md (#4513)
### What problem does this PR solve?


### Type of change

- [x] Other (please describe):
2025-01-17 18:22:02 +08:00
0efe7a544b Change index url per NEED_MIRROR (#4515)
### What problem does this PR solve?

Change index url per NEED_MIRROR. Close #4507

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
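Selecting the package index from a `NEED_MIRROR` flag might be sketched as below; the mirror URL is illustrative, and the helper is hypothetical rather than the PR's actual build logic:

```python
DEFAULT_INDEX = "https://pypi.org/simple"
# Illustrative mirror choice; the real build may use a different one.
MIRROR_INDEX = "https://pypi.tuna.tsinghua.edu.cn/simple"


def pip_index_url(env: dict) -> str:
    """Pick the package index based on a NEED_MIRROR flag in the environment."""
    return MIRROR_INDEX if env.get("NEED_MIRROR") == "1" else DEFAULT_INDEX


url = pip_index_url({"NEED_MIRROR": "1"})
```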
2025-01-17 12:01:04 +08:00
c0799c53b3 Added descriptions of Message and Keyword agent components (#4512)
### What problem does this PR solve?

### Type of change


- [x] Documentation Update
2025-01-16 19:47:15 +08:00
4946e43941 Feat: Make the category operator form displayed in collapsed mode by default #4505 (#4510)
### What problem does this PR solve?

Feat: Make the category operator form displayed in collapsed mode by
default #4505

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-16 17:16:58 +08:00
37235315e1 Fix: Fixed an issue where math formulas could not be displayed correctly #4405 (#4506)
### What problem does this PR solve?

[remarkjs/react-markdown/issues/785](https://github.com/remarkjs/react-markdown/issues/785)
Fix: Fixed an issue where math formulas could not be displayed correctly
#4405

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-16 15:13:40 +08:00
39be08c83d Feat: Add the MessageHistoryWindowSizeItem to RewriteQuestionForm #1739 (#4502)
### What problem does this PR solve?

Feat: Add the MessageHistoryWindowSizeItem to RewriteQuestionForm #1739
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-16 15:01:15 +08:00
3805621564 Fix xinference rerank issue. (#4499)
### What problem does this PR solve?
#4495
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-16 11:35:51 +08:00
a75cda4957 Feat: Add LinkToDatasetDialog #3221 (#4500)
### What problem does this PR solve?

Feat: Add LinkToDatasetDialog #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-16 11:35:39 +08:00
961e8c4980 Fix: the display when the knowledge base is empty. (#4496)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-16 10:27:45 +08:00
57b4e0c464 Bump infinity to v0.6.0-dev2 (#4497)
### What problem does this PR solve?

Bump infinity to v0.6.0-dev2. Close #4477 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-16 10:07:17 +08:00
c852a6dfbf Accelerate titles' embeddings. (#4492)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-01-15 15:20:29 +08:00
b4614e9517 Feat: Add FilesTable #3221 (#4491)
### What problem does this PR solve?

Feat: Add FilesTable #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-15 14:39:33 +08:00
be5f830878 Truncate text for zhipu embedding. (#4490)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
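Truncating before the embedding call avoids provider-side rejections of over-long input. A minimal sketch follows; the 3072-character budget is an illustrative assumption, not zhipu's documented limit:

```python
def truncate_for_embedding(text: str, max_chars: int = 3072) -> str:
    """Clip input to the embedding model's limit before calling the API,
    rather than letting the provider reject the request."""
    return text if len(text) <= max_chars else text[:max_chars]


clipped = truncate_for_embedding("x" * 10000)
```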
2025-01-15 14:36:27 +08:00
7944aacafa Feat: add gpustack model provider (#4469)
### What problem does this PR solve?

Add GPUStack as a new model provider.
[GPUStack](https://github.com/gpustack/gpustack) is an open-source GPU
cluster manager for running LLMs. Currently, locally deployed models in
GPUStack cannot integrate well with RAGFlow. GPUStack provides both
OpenAI compatible APIs (Models / Chat Completions / Embeddings /
Speech2Text / TTS) and other APIs like Rerank. We would like to use
GPUStack as a model provider in ragflow.

[GPUStack Docs](https://docs.gpustack.ai/latest/quickstart/)

Related issue: https://github.com/infiniflow/ragflow/issues/4064.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)



### Testing Instructions
1. Install GPUStack and deploy the `llama-3.2-1b-instruct` llm, `bge-m3`
text embedding model, `bge-reranker-v2-m3` rerank model,
`faster-whisper-medium` Speech-to-Text model, `cosyvoice-300m-sft` in
GPUStack.
2. Add provider in ragflow settings.
3. Testing in ragflow.
2025-01-15 14:15:58 +08:00
e478586a8e Refactor. (#4487)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2025-01-15 14:06:46 +08:00
713f38090b Sync prerequisites with Helm Charts (#4483)
### What problem does this PR solve?

Address #4391 

### Type of change

- [x] Other (please describe):
2025-01-15 11:57:47 +08:00
8f7ecde908 Update description (#4468)
### What problem does this PR solve?

Update description

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-01-14 18:35:06 +08:00
23ad459136 Feat: Add background to next login page #3221 (#4474)
### What problem does this PR solve?

Feat: Add background to next login page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-14 13:43:18 +08:00
f556f0239c Fix dify retrieval issue. (#4473)
### What problem does this PR solve?

#4464
#4469 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-14 13:16:05 +08:00
f318342c8e Recalling the file uploaded while chatting. (#4472)
### What problem does this PR solve?

#4445

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-14 12:05:20 +08:00
d3c07794b5 Replace poetry with uv (#4471)
### What problem does this PR solve?

Replace poetry with uv

### Type of change

- [x] Refactoring
2025-01-14 11:49:43 +08:00
fd0bf3adf0 Format: dos2unix (#4467)
### What problem does this PR solve?

Format the code

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-01-13 18:19:01 +08:00
c08382099a Check meta data format in json map (#4461)
### What problem does this PR solve?

#3690

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
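Checking that metadata is a flat JSON map before accepting it could look like the sketch below; `check_meta` and the accepted value types are assumptions for illustration:

```python
def check_meta(meta) -> dict:
    """Validate that document metadata is a flat JSON map whose values
    are strings or numbers, so it can be injected into prompts safely."""
    if not isinstance(meta, dict):
        raise ValueError("meta_fields must be a JSON map")
    for k, v in meta.items():
        if not isinstance(v, (str, int, float)):
            raise ValueError(f"meta_fields['{k}'] must be a string or number")
    return meta


ok = check_meta({"author": "hu", "year": 2025})
```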
2025-01-13 17:34:50 +08:00
d8346cb7a6 Feat: Metadata in documents to improve the prompt #3690 (#4462)
### What problem does this PR solve?

Feat: Metadata in documents to improve the prompt #3690

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-13 17:13:37 +08:00
46c52d65b7 Add meta data while chatting. (#4455)
### What problem does this PR solve?

#3690

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-13 14:35:24 +08:00
e098fcf6ad Fix csv for TAG. (#4454)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-13 12:03:18 +08:00
ecdb2a88bd Fix: Can not select GPT-4o / 4o mini as Chat Model #4421 (#4453)
### What problem does this PR solve?

Fix: Can not select GPT-4o / 4o mini as Chat Model #4421

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-13 11:54:16 +08:00
2c7ba90cb4 Fix: In order to distinguish the keys of a pair of messages, add a prefix to the id when rendering the message. #4409 (#4451)
### What problem does this PR solve?

Fix: In order to distinguish the keys of a pair of messages, add a
prefix to the id when rendering the message. #4409

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-13 10:51:59 +08:00
95261f17f6 Bump infinity to v0.6.0-dev1 (#4448)
### What problem does this PR solve?

Bump infinity to v0.6.0-dev1 and poetry to 2.0.1

### Type of change

- [x] Refactoring
2025-01-12 19:54:33 +08:00
7d909d4d1b Add doc meta data. (#4442)
### What problem does this PR solve?

#3690

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-10 19:06:59 +08:00
4dde73f897 Error message: Infinity does not support the table parsing method (#4439)
### What problem does this PR solve?

Specific error message.

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-01-10 16:39:13 +08:00
93b30b2fb5 Update Vietnamese resources (#4437)
Fix typos in the Vietnamese resources
2025-01-10 11:35:58 +08:00
06c54367fa Feat: Display tag word cloud on recall test page #4368 (#4438)
### What problem does this PR solve?

Feat: Display tag word cloud on recall test page #4368

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-10 11:35:07 +08:00
6acbd374d8 Fix duckduckgo search subsection error (#4430)
### What problem does this PR solve?

duckduckgo-search 6.3.0 still errors occasionally and needs to be updated
to 7.2.0; after the update it works correctly. This PR fixes
https://github.com/infiniflow/ragflow/issues/4396

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: xiaohzho <xiaohzho@cisco.com>
2025-01-10 09:34:20 +08:00
48bca0ca01 Fix: Modify the text of the category operator form #4412 (#4433)
### What problem does this PR solve?

Fix: Modify the text of the category operator form #4412

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-09 19:17:22 +08:00
300d8ecf51 Feat: Add TagFeatureItem #4368 (#4432)
### What problem does this PR solve?

Feat: Add TagFeatureItem #4368

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-09 18:24:27 +08:00
dac54ded96 Added a description of the Categorize agent component (#4428)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-01-09 18:10:18 +08:00
c5da3cdd97 Tagging (#4426)
### What problem does this PR solve?

#4367

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-09 17:07:21 +08:00
f892d7d426 Let the agent talk when there are pre-set parameters. (#4423)
### What problem does this PR solve?

#4385

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-09 14:07:57 +08:00
f86d8906e7 Fixed code error when MSSQL returns multiple columns (#4420)
Fixed a code error when MSSQL returns multiple columns
2025-01-09 11:55:18 +08:00
bc681e2ee9 Remove redundant param of rewrite component. (#4422)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-09 11:54:31 +08:00
b6c71c1e01 Fix typo in helm charts (#4419)
### What problem does this PR solve?

Fix typo in helm chart

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-09 10:20:27 +08:00
7bebf4b7bf Added descriptions of the retrieval agent component (#4416)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2025-01-09 10:11:05 +08:00
d64df4de9c Update error message (#4417)
### What problem does this PR solve?

1. Update error message
2. Remove space characters

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-01-08 20:18:27 +08:00
af43cb04e8 Feat: Add tag_kwd parameter to chunk configuration modal #4368 (#4414)
### What problem does this PR solve?

Feat: Add tag_kwd parameter to chunk configuration modal  #4368

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-08 19:45:34 +08:00
3d66d78304 Fix API retrieval error. (#4408)
### What problem does this PR solve?

#4403

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-08 11:27:46 +08:00
b7ce4e7e62 fix:t_recognizer TypeError: 'super' object is not callable (#4404)
### What problem does this PR solve?

[Bug]: layout recognizer failed for wrong boxes class type #4230
(https://github.com/infiniflow/ragflow/issues/4230)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: youzhiqiang <zhiqiang.you@aminer.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-01-08 10:59:35 +08:00
5e64d79587 Added generate component description (#4399)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-01-07 19:56:11 +08:00
49cebd9fec Feat: Add description for tag parsing method #4368 (#4402)
### What problem does this PR solve?

Feat: Add description for tag parsing method #4368

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-07 19:33:53 +08:00
d9a4e4cc3b Fix page size error. (#4401)
### What problem does this PR solve?

#4400

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-07 19:06:31 +08:00
ac89a2dda1 [Fix] fix duckduck go search 202 ratelimit failed (#4398)
this PR is going to fix issue
https://github.com/infiniflow/ragflow/issues/4396

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: xiaohzho <xiaohzho@cisco.com>
2025-01-07 18:40:46 +08:00
01a122dc9d fix bug, agent invoke can not get params from begin (#4390)
### What problem does this PR solve?

Fix bug: agent invoke cannot get params from Begin

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: wangrui <wangrui@haima.me>
2025-01-07 18:40:27 +08:00
8ec392adb0 Feat: Add TagWorkCloud #4368 (#4393)
### What problem does this PR solve?

Feat: Add TagWorkCloud #4368

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-07 18:03:41 +08:00
de822a108f Refine variable display name. (#4397)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-01-07 18:02:33 +08:00
2e40c2a6f6 Fix t_recognizer issue. (#4387)
### What problem does this PR solve?

#4230

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-07 13:17:46 +08:00
d088a34fe2 Feat: Add LoadingButton #4368 (#4384)
### What problem does this PR solve?

Feat: Add LoadingButton #4368

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-07 12:53:06 +08:00
16e1681fa4 Refine DB assistant template. (#4383)
### What problem does this PR solve?

#4326

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-07 11:30:15 +08:00
bb24e5f739 Added instructions on embedding agent or assistant into a third-party webpage (#4369)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-01-06 20:25:47 +08:00
1d93eb81ae Feat: Add TagTable #4367 (#4368)
### What problem does this PR solve?

Feat: Add TagTable #4367

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-06 18:58:42 +08:00
439d20e41f Use LTS polars to resolve some machines don't support AVX CPU flag (#4364)
### What problem does this PR solve?

Some older machines and virtual machines don't support the AVX CPU
flag. This PR uses the LTS polars module to avoid this fault.

fix issue: #4349

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
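The decision above can be reproduced in deployment tooling; a minimal sketch (the helper names are illustrative, not part of this PR) that checks the flag list from `/proc/cpuinfo` and picks the package accordingly:

```python
def cpu_supports_avx(cpuinfo_text: str) -> bool:
    """Return True if the 'avx' flag is listed in /proc/cpuinfo content (Linux)."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return "avx" in line.lower().split()
    return False


def pick_polars_package(cpuinfo_text: str) -> str:
    # polars-lts-cpu is the build without AVX requirements; plain polars
    # assumes modern x86_64 CPU features.
    return "polars" if cpu_supports_avx(cpuinfo_text) else "polars-lts-cpu"
```

On a real host, the text would come from `open("/proc/cpuinfo").read()`.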

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-01-06 18:52:17 +08:00
45619702ff Updated outdated descriptions and added multi-turn optimization (#4362)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-01-06 16:54:22 +08:00
b93c136797 Fix gemini embedding error. (#4356)
### What problem does this PR solve?

#4314

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-06 14:41:29 +08:00
983ec0666c Fix param error. (#4355)
### What problem does this PR solve?

#4230

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-06 13:54:17 +08:00
a9ba051582 Adds a research report generator. (#4354)
### What problem does this PR solve?

#4242

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-06 13:28:54 +08:00
bad764bcda Improve storage engine (#4341)
### What problem does this PR solve?

- Bring `STORAGE_IMPL` back in `rag/svr/cache_file_svr.py`
- Simplify storage connection when working with AWS S3

### Type of change

- [x] Refactoring
2025-01-06 12:06:24 +08:00
9c6cf12137 Refactor model list. (#4346)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2025-01-03 19:55:42 +08:00
6288b6d02b chrome extensions (#4308)
Build chrome extensions that allow interaction with browser content
2025-01-03 10:15:36 +08:00
52c20033d7 Fix total number error. (#4339)
### What problem does this PR solve?

#4328

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-03 10:02:30 +08:00
5dad15600c Feat: Add FileUploadDialog #3221 (#4327) (#4335)
### What problem does this PR solve?

Feat: Add ConfirmDeleteDialog #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-03 09:47:05 +08:00
8674156d1c Fix potential SSRF attack vulnerability (#4334)
### What problem does this PR solve?

Fix potential SSRF attack vulnerability

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
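The patch itself isn't shown in this PR body; as a sketch of the usual guard for this class of bug (the `is_safe_url` helper is hypothetical, not the actual fix), outbound URLs can be resolved and rejected when they point at loopback, private, or link-local addresses:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback, link-local,
    or reserved address -- the classic SSRF targets."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```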

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2025-01-02 18:45:45 +08:00
5083d92998 Feat: Add model id to ExeSql operator form. #1739 (#4333)
### What problem does this PR solve?

Feat: Add model id to ExeSql operator form. #1739

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-02 17:38:01 +08:00
59a78408be Fix t_recognizer.py after model updating. (#4330)
### What problem does this PR solve?

#4230

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-02 17:00:11 +08:00
df22ead841 Fix agent_completion bug (#4329)
### What problem does this PR solve?

Fix agent_completion bug  #4320

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-01-02 16:59:54 +08:00
5883493c7d Feat: Add FileUploadDialog #3221 (#4327)
### What problem does this PR solve?

Feat: Add FileUploadDialog #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-01-02 16:10:41 +08:00
50f209204e Synchronize with enterprise version (#4325)
### Type of change

- [x] Refactoring
2025-01-02 13:44:44 +08:00
564277736a Update exesql component for agent (#4307)
### What problem does this PR solve?

Update exesql component for agent

### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-31 19:58:56 +08:00
061a22588a Feat: Add DatasetCreatingDialog #3221 (#4313)
### What problem does this PR solve?

Feat: Add DatasetCreatingDialog #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-31 19:17:09 +08:00
5071df9de1 Fix parameter name error. (#4312)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-31 17:38:01 +08:00
419b546f03 Update displayed_name to display_name (#4311)
### What problem does this PR solve?

Update displayed_name to display_name

### Type of change

- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-31 17:25:24 +08:00
e5b1511c66 Fix: Fixed the issue that the graph could not display the grouping #4180 (#4306)
### What problem does this PR solve?

Fix: Fixed the issue that the graph could not display the grouping #4180

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-31 15:32:14 +08:00
0e5124ec99 Show the errors out. (#4305)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-12-31 15:32:02 +08:00
7c7b7d2689 Update error message for agent name conflict (#4299)
### What problem does this PR solve?

Update error message for agent name conflict

### Type of change

- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-31 14:36:23 +08:00
accd3a6c7e Support OpenAI gpt-4o and gpt-4o-mini for img2text (#4300)
### What problem does this PR solve?

OpenAI has deprecated the gpt-4-vision-preview model. This PR adds
support for the newer gpt-4o and gpt-4o-mini models in the img2text
feature.


![image](https://github.com/user-attachments/assets/6dddf2dc-1b9e-4e94-bf07-6bf77d39122b)

This PR adds additional 4o/4o-mini entries for img2text besides the original
ones. It uses [alias](https://platform.openai.com/docs/models#gpt-4o)
model names (e.g., gpt-4o-2024-08-06) because the database schema uses
the model name as the primary key.



- [x] Other (please describe): model update
2024-12-31 14:36:06 +08:00
4ba4f622a5 Refactor (#4303)
### What problem does this PR solve?

### Type of change
- [x] Refactoring
2024-12-31 14:31:31 +08:00
b52b0f68fc Add top_k for create_chat and update_chat api (#4294)
### What problem does this PR solve?

Add top_k for create_chat and update_chat api #4157

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-30 19:22:57 +08:00
d42e78bce2 Fix bugs in chunk api (#4293)
### What problem does this PR solve?

Fix bugs in chunk api #4149

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-30 19:01:44 +08:00
8fb18f37f6 Code refactor. (#4291)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-12-30 18:38:51 +08:00
f619d5a9b6 Fix: After executing npm i --force locally, the login page cannot be opened #4290 (#4292)
### What problem does this PR solve?

Fix: After executing npm i --force locally, the login page cannot be
opened #4290

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-30 18:19:58 +08:00
d1971e988a Feat: The Begin and IterationStart operators cannot be deleted using shortcut keys #4287 (#4288)
### What problem does this PR solve?

Feat: The Begin and IterationStart operators cannot be deleted using
shortcut keys #4287

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-30 17:47:47 +08:00
54908ebd30 Fix the bug in create_dataset function (#4284)
### What problem does this PR solve?

Fix the bug in create_dataset function #4136

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-30 13:04:51 +08:00
3ba2b8d80f Fix agent session list by user_id. (#4285)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-30 12:18:16 +08:00
713b837276 Feat: Translate the system prompt of the generate operator #3993 (#4283)
### What problem does this PR solve?

 Feat: Translate the system prompt of the generate operator #3993

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-30 12:07:06 +08:00
8feb4c1a99 Fix BaiduFanyi TestRun parameter validation and debug method missing … (#4275)
### What problem does this PR solve?

Fix BaiduFanyi TestRun parameter validation and debug method missing
errors


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: wangrui <wangrui@haima.me>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-30 10:34:57 +08:00
dd13a5d05c Fix some bugs in text2sql.(#4279)(#4281) (#4280)
Fix some bugs in text2sql.(#4279)(#4281)

### What problem does this PR solve?
- Incorrect results when parsing CSV files of the QA knowledge base in
the text2sql scenario: process CSV files using the csv library and decouple
CSV parsing from TXT parsing.
- Most LLMs return results in markdown format (```sql query ```): fix the
execution error caused by the LLM outputting SQL in markdown format.

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
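For the markdown-format point above, a minimal sketch of stripping the fence before execution (the function name is illustrative, not the actual patch):

```python
import re


def strip_sql_fence(llm_output: str) -> str:
    """Extract bare SQL from a ```sql ... ``` markdown fence, since most
    LLMs wrap generated queries this way; pass plain output through."""
    match = re.search(r"```sql\s*(.*?)```", llm_output, re.IGNORECASE | re.DOTALL)
    if match:
        return match.group(1).strip()
    return llm_output.strip().strip("`").strip()
```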
2024-12-30 10:32:19 +08:00
8cdf10148d Initial draft of the begin component reference (#4272)
### What problem does this PR solve?

### Type of change


- [x] Documentation Update
2024-12-27 17:07:31 +08:00
7773afa561 Update text2sql agent manual document (#4226) (#4271)
### What problem does this PR solve?
- Create a text2sql agent document: recipe, procedure, debug (including
step run), run.

### Type of change
- [x] Documentation Update

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-27 16:48:17 +08:00
798eb3647c Fix chat listing error. (#4270)
### What problem does this PR solve?

#4220
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-27 16:03:59 +08:00
2d17e5aa04 Feat: Delete useless code #4242 (#4267)
### What problem does this PR solve?

Feat: Delete useless code #4242

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-27 15:09:03 +08:00
c75aa11ae6 Fix: The edit box for the headers parameter of the invoke operator is always loading. #4265 (#4266)
### What problem does this PR solve?

Fix: The edit box for the headers parameter of the invoke operator is
always loading. #4265

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-27 15:03:42 +08:00
c7818770f4 Fix Python SDK example error. (#4263)
### What problem does this PR solve?

#4253
### Type of change

- [x] Documentation Update
2024-12-27 14:49:43 +08:00
6f6303d017 Fix Python SDK example error. (#4262)
### What problem does this PR solve?

#4252

### Type of change

- [x] Documentation Update
2024-12-27 14:40:00 +08:00
146e8bb793 Feat: Limit the iteration start node to only be the source node #4242 (#4260)
### What problem does this PR solve?

Feat: Limit the iteration start node to only be the source node #4242

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-27 14:25:15 +08:00
f948c0d9f1 Clean query. (#4259)
### What problem does this PR solve?

#4239

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-27 14:25:03 +08:00
c3e3f0fbb4 Add iteration for agent. (#4258)
### What problem does this PR solve?

#4242
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-27 11:38:33 +08:00
a1a825c830 Feat: Add the iteration Node #4242 (#4247)
### What problem does this PR solve?

Feat: Add the iteration Node #4242

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-27 11:24:17 +08:00
a6f4153775 Update UI text (#4248)
### What problem does this PR solve?

Update UI text

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-27 10:48:11 +08:00
097aab09a2 Replace image2text model check with internal image. (#4250)
### What problem does this PR solve?

#4243

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-26 19:46:42 +08:00
600f435d27 Fix text (#4244)
### What problem does this PR solve?

Fix frontend text

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-26 17:03:48 +08:00
722545e5e0 Fix bugs (#4241)
### What problem does this PR solve?

1. Refactor error message
2. Fix: knowledge bases created on ES can't be found in Infinity, and the
document chunk fetch error.

### Type of change

- [x] Fix bug
- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-26 16:08:17 +08:00
9fa73771ee Fixed invoke component parameters #4236 (#4237)
### What problem does this PR solve?

Fixes issue

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-26 16:06:19 +08:00
fe279754ac Update version info (#4232)
### What problem does this PR solve?

Update version info to 0.15.1

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-26 12:15:28 +08:00
7e063283ba Removing invisible chars before tokenization. (#4233)
### What problem does this PR solve?

#4223

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
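A sketch of the idea (not the exact patch): drop Unicode format and control characters while keeping ordinary whitespace, so the tokenizer only sees visible text:

```python
import unicodedata


def remove_invisible(text: str) -> str:
    """Strip Unicode format (Cf) and control (Cc) characters, e.g. zero-width
    spaces and BOMs, but keep the whitespace the tokenizer expects."""
    keep = {" ", "\t", "\n", "\r"}
    return "".join(
        ch for ch in text
        if ch in keep or unicodedata.category(ch) not in ("Cf", "Cc")
    )
```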
2024-12-26 11:48:16 +08:00
28eeb29b88 Fix component input error. (#4231)
### What problem does this PR solve?

#4108

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-26 10:13:47 +08:00
85511cb1fd Miscellaneous updates (#4228)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-25 20:21:38 +08:00
a3eeb5de32 Fix: Q&A chunk modification (#4227)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-25 19:11:16 +08:00
1160b58b6e Update pyproject.toml to 0.15.1 (#4222)
### What problem does this PR solve?

Missing update information

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-25 14:06:00 +08:00
61790ebe15 Fix: Rename chat name, missing field 'avatar' #4125 (#4221)
### What problem does this PR solve?

Fix: Rename chat name, missing field 'avatar' #4125

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-25 11:25:50 +08:00
bc3288d390 Fix misspell. (#4219)
### What problem does this PR solve?
#4216

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-25 10:48:59 +08:00
4e5f92f01b Fix interface as input variable for component. (#4212)
### What problem does this PR solve?

#4108

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-24 17:58:11 +08:00
7d8e0602aa Remove session owner check. (#4211)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-24 17:40:31 +08:00
03cbbf7784 Add user_id for third-party system to record sessions. (#4206)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-24 15:59:11 +08:00
b7a7413419 Bump infinity to 0.5.2 (#4207)
### What problem does this PR solve?

Bump infinity to 0.5.2

### Type of change

- [x] Refactoring
2024-12-24 15:17:37 +08:00
321e9f3719 fix: stop rerank by model when search result is empty (#4203)
### What problem does this PR solve?


Stop reranking by model when the search result is empty; otherwise the
rerank may raise an error (qwen).

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
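The guard amounts to a short-circuit before the model call; a minimal sketch (names are illustrative, not the actual patch):

```python
def rerank(results, rerank_fn):
    """Skip the rerank-model call when retrieval returned nothing; some
    providers (e.g. qwen) raise on an empty document list."""
    if not results:
        return []
    return rerank_fn(results)
```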

Co-authored-by: 刘博 <liubo@ynby.cn>
2024-12-24 14:33:46 +08:00
76cd23eecf Catch the exception while parsing pptx. (#4202)
### What problem does this PR solve?
#4189

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-24 10:49:28 +08:00
d030b4a680 Update progress time info (#4193)
### What problem does this PR solve?

Ignore the millisecond and microsecond value.

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 21:04:44 +08:00
a9fd6066d2 Fix score() issue (#4194)
### What problem does this PR solve?

as title

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 21:01:20 +08:00
c373dba0bc Fix raptor bug. (#4192)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 18:59:48 +08:00
cf62230548 Fix: Fixed the issue that the page crashed when the node ID was the same as the combo ID #4180 (#4191)
### What problem does this PR solve?

Fix: Fixed the issue that the page crashed when the node ID was the same
as the combo ID #4180

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 18:13:56 +08:00
8d73cf6f02 Added time to progress message (#4185)
### What problem does this PR solve?

Added time to progress message

### Type of change

- [x] Refactoring
2024-12-23 17:25:55 +08:00
b635002666 Fix duplicated communitiy (#4187)
### What problem does this PR solve?

#4180

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 17:07:12 +08:00
4abc144d3d Fix error of changing embedding model (#4184)
### What problem does this PR solve?

1. Changing the embedding model of a knowledge base won't change the
default embedding model.
2. Fix a retrieval test bug.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 16:23:54 +08:00
a4bccc1ae7 Feat: If there is no result in the recall test, an empty data image will be displayed. #4182 (#4183)
### What problem does this PR solve?

Feat: If there is no result in the recall test, an empty data image will
be displayed. #4182
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-23 15:17:56 +08:00
8f070c3d56 Fix 'SCORE' not found bug (#4178)
### What problem does this PR solve?

As title

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 14:50:12 +08:00
31d67c850e Fetch chunk by batches. (#4177)
### What problem does this PR solve?

#4173

### Type of change

- [x] Performance Improvement
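The change boils down to slicing the id list; a minimal sketch of such batching (sizes and names are illustrative, `fetch_fn` stands in for the real storage call):

```python
def iter_batches(ids, batch_size=64):
    """Yield successive fixed-size slices so one huge fetch becomes
    several small ones."""
    for start in range(0, len(ids), batch_size):
        yield ids[start:start + batch_size]


def fetch_chunks(ids, fetch_fn, batch_size=64):
    # Accumulate results batch by batch instead of issuing one giant query.
    chunks = []
    for batch in iter_batches(ids, batch_size):
        chunks.extend(fetch_fn(batch))
    return chunks
```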
2024-12-23 12:12:15 +08:00
2cbe064080 Add Llama3.3 (#4174)
### What problem does this PR solve?

#4168

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-23 11:18:01 +08:00
cac7851fc5 Update jin10.svg (#4159)
Allows customizing color and size instead of using base64
2024-12-23 10:34:13 +08:00
96da618b6a Fix bug over chunks classification by document in the prompt (#4156)
The refactor in 0.15 contains a small bug that eliminates the
classification.

### What problem does this PR solve?

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 10:22:57 +08:00
f13f503952 Use s3 configuration from settings module (#4167)
### What problem does this PR solve?

Fix the issue by retrieving AWS credentials from the S3 configuration
in the settings module instead of from environment
variables.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-23 10:22:45 +08:00
cb45431412 Fix Voyage re-rank model. Limit file name length. (#4171)
### What problem does this PR solve?

#4152 
#4154

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 10:03:50 +08:00
85083ad400 Validate returned chunk at list_chunks and add_chunk (#4153)
### What problem does this PR solve?

Validate returned chunk at list_chunks and add_chunk

### Type of change

- [x] Refactoring
2024-12-20 22:55:45 +08:00
35580af875 Update the component of the agent API with parameters. (#4131)
### What problem does this PR solve?

Update the  component of the agent API with parameters.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-20 17:34:16 +08:00
a0dc9e1bdf Fix position_int on infinity (#4144)
### What problem does this PR solve?

Fix position_int on infinity

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-20 11:30:33 +08:00
6379a934ff Fix redis get error. (#4140)
### What problem does this PR solve?

#4126
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-20 10:39:50 +08:00
10a62115c7 Fix example in doc (#4133)
### What problem does this PR solve?

Fix example in doc

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-19 20:08:22 +08:00
e38e3bcc3b Mask password in log (#4129)
### What problem does this PR solve?

Mask password in log

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
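A sketch of the kind of masking this implies (the regex and key list are assumptions, not the actual patch): replace the value of password-like keys before the string reaches the log.

```python
import re

# Hypothetical pattern: match "password=...", "pwd: ..." style pairs and
# replace the value with a fixed mask before logging.
_SECRET_RE = re.compile(r"(password|passwd|pwd)(\s*[=:]\s*)(\S+)", re.IGNORECASE)


def mask_secrets(message: str) -> str:
    return _SECRET_RE.sub(lambda m: m.group(1) + m.group(2) + "******", message)
```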
2024-12-19 18:37:01 +08:00
8dcf99611b Corrections. (#4127)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-19 18:19:56 +08:00
213218a094 Refactor ask decorator (#4116)
### What problem does this PR solve?

Refactor ask decorator

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-19 18:13:33 +08:00
478da3118c add gemini 2.0 (#4115)
add gemini 2.0
2024-12-19 17:30:45 +08:00
101b8ff813 fix chunk method "Table" losing content when the Excel file has multi… (#4123)
…ple sheets

### What problem does this PR solve?
discussed in https://github.com/infiniflow/ragflow/pull/4102
- In excel_parser.py, `total` means the total number of rows in the Excel
file, but it was returned in the first iteration, which led to a wrong `to_page`.
- In table.py, when an Excel file has multiple sheets it is divided into
multiple parts of size 3000 each; `data` may be empty because it was
already recorded in the last iteration.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
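A sketch of the fixed flow, under the assumption that sheets are flattened before paging (names and sizes are illustrative, not the actual patch): compute the grand total once across all sheets, then split into parts and drop the empty ones.

```python
def chunk_rows(sheets, part_size=3000):
    """`sheets` maps sheet name -> list of rows. Return the grand row total
    (across all sheets, not just the first) and the non-empty parts."""
    all_rows = [row for rows in sheets.values() for row in rows]
    total = len(all_rows)
    parts = [all_rows[i:i + part_size] for i in range(0, total, part_size)]
    return total, [p for p in parts if p]
```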
2024-12-19 17:30:26 +08:00
d8fca43017 Make fast embed and default embed mutually exclusive. (#4121)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-12-19 17:27:09 +08:00
b35e811fe7 Add parameters for ask_chat and fix bugs in list_sessions (#4119)
### What problem does this PR solve?

Add parameters for ask_chat and fix bugs in list_sessions
#4105
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-19 17:24:26 +08:00
7474348394 Fix fastembed reloading issue. (#4117)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-19 16:18:18 +08:00
8939206531 Separated list_agents() from session management (#4111)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-19 14:36:51 +08:00
57c99dd811 Fixed infinity exception SCORE() / SCORE_FACTORS() requires Fusion or MATCH TEXT or MATCH TENSOR (#4110)
### What problem does this PR solve?

Fixed infinity exception SCORE() / SCORE_FACTORS() requires Fusion or
MATCH TEXT or MATCH TENSOR. Close #4109

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-19 13:49:36 +08:00
561eeabfa4 add typo locale (#4099)
Add typo fixes for the vi locale
2024-12-19 13:34:11 +08:00
5fb9136251 Miscellaneous updates to session APIs (#4097)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-18 19:01:05 +08:00
044bb0334b Fix release.yml (#4100)
### What problem does this PR solve?

Fix release.yml

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 17:23:29 +08:00
a5cf6fc546 Feat: Translate the previous run into parsing #4094 (#4095)
### What problem does this PR solve?

Feat: Translate the previous run into parsing #4094

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-18 16:16:57 +08:00
57fe5d0864 Add latest updates. (#4093)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-18 16:00:24 +08:00
bfdc4944a3 Added release notes for v0.15.0 (#4056)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-12-18 15:46:31 +08:00
a45ba3a91e Prepare docs for v0.15.0 release (#4077)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-18 15:32:15 +08:00
e513ad2f16 Feat: Add MultiSelect #3221 (#4090)
### What problem does this PR solve?

Feat: Add MultiSelect #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-18 14:51:24 +08:00
1fdad50dac Fix raptor (#4089)
### What problem does this PR solve?

Fix raptor

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 14:42:33 +08:00
4764ca5ef7 Bump infinity to 0.5.0 (#4088)
### What problem does this PR solve?

Bump infinity to 0.5.0

### Type of change

- [x] Refactoring
2024-12-18 14:39:09 +08:00
85f3d92816 Update team invite message (#4085)
### What problem does this PR solve?

Refactor inviting team member message.

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-18 14:20:09 +08:00
742eef028f Add huqie trie to docker image. (#4084)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-12-18 14:19:43 +08:00
dfbdeaddaf Fix: Fixed the issue where the required information in the input box was incorrect when inviting users #2834 (#4086)
### What problem does this PR solve?

Fix: Fixed the issue where the required information in the input box was
incorrect when inviting users #2834

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 14:07:56 +08:00
50c2b9d562 Refactor trie load and construct (#4083)
### What problem does this PR solve?

1. Fix initial build and load trie
2. Update comment

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-18 12:52:56 +08:00
f8cef73244 Fix abnormal user invitation message. (#4081)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 12:45:24 +08:00
f8c9ec4d56 Fix arm doc (#4080)
### What problem does this PR solve?

Fix arm doc

### Type of change

- [x] Documentation Update
2024-12-18 11:51:03 +08:00
db74a3ef34 Fix conversation bug in agent session (#4078)
### What problem does this PR solve?

Fix conversation bug in agent session

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-18 11:49:25 +08:00
00f99ecbd5 Fix: Fixed the issue with external chat box reporting errors #3909 (#4079)
### What problem does this PR solve?

Fix:  Fixed the issue with external chat box reporting errors #3909

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 11:16:52 +08:00
0a3c6fff7c update chinese model warning message (#4075)
### What problem does this PR solve?

Fix the Chinese warning message about updating the model

### Type of change

- [x] Other (please describe): see the message

---------

Signed-off-by: Hui Peng <benquike@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-18 11:16:31 +08:00
79e435fc2e Fix: The cursor is lost after entering a character in the operator form #4072 (#4073)
### What problem does this PR solve?

Fix: The cursor is lost after entering a character in the operator form
#4072

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 18:07:22 +08:00
163c2a70fc Feat: Add AdvancedSettingForm #3221 (#4071)
### What problem does this PR solve?

Feat: Add AdvancedSettingForm #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 17:58:03 +08:00
bedc09f69c Add Architecture-Specific Logic for msodbcsql in Dockerfile #4036 (#4049)
### What problem does this PR solve?

For the new feature Add mssql support in the Dockerfile, I suggest
including support for msodbcsql18 for ARM64.
Based on current testing results (on macOS ARM64 environment),
msodbcsql18 needs to be installed.
I hope future Dockerfiles can incorporate a conditional check for this.
Specifically:

When $ARCH=arm64 (macOS ARM64 environment), install msodbcsql18.
In other cases (general x86_64 environment), install msodbcsql17.
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Mage Lu <magelu@MagedeMac-mini.local>
2024-12-17 17:44:51 +08:00
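The architecture-specific conditional described above could look roughly like this in the Dockerfile (a sketch under the PR's assumptions; the package names msodbcsql17/msodbcsql18 come from the description, while the exact install commands are illustrative):

```dockerfile
# Hypothetical sketch: pick the msodbcsql version by target architecture
ARG ARCH
RUN if [ "$ARCH" = "arm64" ]; then \
        ACCEPT_EULA=Y apt-get install -y msodbcsql18; \
    else \
        ACCEPT_EULA=Y apt-get install -y msodbcsql17; \
    fi
```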
251592eeeb show avatar dialog instead of default (#4033)
show avatar dialog instead of default

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-17 17:29:35 +08:00
09436f6c60 Elasticsearch disk-based shard allocator use absolute byte values instead of ratio (#4069)
### What problem does this PR solve?

Make the Elasticsearch disk-based shard allocator use absolute byte
values instead of ratios. Close #4018

### Type of change

- [ ] Documentation Update
- [x] Refactoring
2024-12-17 16:48:40 +08:00
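For reference, Elasticsearch lets the disk watermarks be expressed as absolute byte values rather than percentage ratios; a hypothetical elasticsearch.yml fragment (the setting names are the standard Elasticsearch ones, the values here are purely illustrative):

```yaml
# Absolute byte values instead of percentage ratios (illustrative values)
cluster.routing.allocation.disk.watermark.low: 5gb
cluster.routing.allocation.disk.watermark.high: 3gb
cluster.routing.allocation.disk.watermark.flood_stage: 1gb
```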
e8b4e8b3d7 Feat: Bind event to the theme Switch #3221 (#4067)
### What problem does this PR solve?

Feat: Bind event to  the theme Switch #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 16:32:17 +08:00
000cd6d615 Fix position lost issue. (#4068)
### What problem does this PR solve?

#4040

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 16:31:58 +08:00
1d65299791 Fix rerank_model bug in chat and markdown bug (#4061)
### What problem does this PR solve?

Fix rerank_model bug in chat and markdown bug
#4000
#3992
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-17 16:03:37 +08:00
bcccaccc2b Added pagerank support to infinity (#4059)
### What problem does this PR solve?

Added pagerank support to infinity

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 15:45:01 +08:00
fddac1345d Fix raptor reusable issue. (#4063)
### What problem does this PR solve?

#4045

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 15:28:35 +08:00
4a95349492 Feat: Modify the link address of the agent id #3909 (#4062)
### What problem does this PR solve?

Feat: Modify the link address of the agent id #3909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 15:13:22 +08:00
0fcb564261 Feat: Modify the text of the embedded website button #3909 (#4057)
### What problem does this PR solve?

Feat: Modify the text of the embedded website button #3909

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 14:46:54 +08:00
96667696d2 Compatible with former API keys. (#4055)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 13:58:26 +08:00
ce1e855328 Upgrades Document Layout Analysis model. (#4054)
### What problem does this PR solve?

#4052

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 11:27:19 +08:00
b5e4a5563c Feat: Set the color of the canvas's control button #3851 (#4053)
### What problem does this PR solve?

Feat: Set the color of the canvas's control button #3851

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 11:23:00 +08:00
1053ef5551 Fix: Hide the upload button in the external chat box #2242 (#4048)
### What problem does this PR solve?

Fix: Hide the upload button in the external chat box  #2242

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 10:32:52 +08:00
cb6e9ce164 Cache the result from llm for graphrag and raptor (#4051)
### What problem does this PR solve?

#4045

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 09:48:03 +08:00
8ea631a2a0 Fix: Every time you switch the page number of a chunk, the PDF document will be reloaded. #4046 (#4047)
### What problem does this PR solve?

Fix: Every time you switch the page number of a chunk, the PDF document
will be reloaded. #4046

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-16 18:51:45 +08:00
7fb67c4f67 Fix chunk number error after re-parsing. (#4043)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-16 15:23:49 +08:00
44ac87aef4 Remove Redundant None Check for vector_similarity_weight (#4037)
### What problem does this PR solve?
The removed if statement is unnecessary and adds cognitive load for
readers.
The original code:
```
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
if vector_similarity_weight is None:
    vector_similarity_weight = 0.3
```
has been simplified to:
```
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
```

### Type of change
- [x] Refactoring
2024-12-16 14:35:21 +08:00
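The simplification relies on `dict.get` returning its default only when the key is absent; a quick self-contained illustration (the `req` payload here is hypothetical):

```python
req = {"similarity_threshold": 0.2}  # hypothetical payload without the weight key
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
assert vector_similarity_weight == 0.3

# Caveat: a key explicitly present with value None still returns None,
# which is the case the removed check used to guard against.
assert {"vector_similarity_weight": None}.get("vector_similarity_weight", 0.3) is None
```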
7ddccbb952 extraction sqlquery (#4027)
Clone of https://github.com/infiniflow/ragflow/pull/4023.
Improves the information extraction: most LLMs return the SQL wrapped in
a markdown fence (```sql ... ```).
2024-12-16 09:46:59 +08:00
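A minimal sketch of stripping such a markdown ```sql fence from an LLM reply (the regex and function name are hypothetical, not the PR's actual code):

```python
import re

# Markdown fence delimiter, built programmatically so this example does not
# nest a literal fence inside the surrounding code block.
FENCE = chr(96) * 3  # three backticks

def extract_sql(text: str) -> str:
    """Extract SQL wrapped in a markdown sql fence; fall back to the raw text."""
    match = re.search(FENCE + r"sql\s*(.*?)" + FENCE, text, re.DOTALL | re.IGNORECASE)
    return (match.group(1) if match else text).strip()

reply = "Here is the query:\n" + FENCE + "sql\nSELECT id FROM users;\n" + FENCE
print(extract_sql(reply))  # SELECT id FROM users;
```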
4a7bc4df92 Updated configurations (#4032)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-12-13 19:45:54 +08:00
3b7d182720 Fixed a display issue (#4030)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-12-13 19:11:52 +08:00
78527acd88 Update launch_ragflow_from_source.md (#4005)
Update the instructions to install Poetry with the full package set

- [x] Documentation Update
2024-12-13 18:58:01 +08:00
e5c3083826 Updated RAGFlow Agent UI (#4029)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-13 18:57:22 +08:00
9b9039de92 Fix connection error for adding visual llm. (#4028)
### What problem does this PR solve?

#3897

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-13 18:54:51 +08:00
9b2ef62aee Fix xinfo_groups returns unexpected result (#4026)
### What problem does this PR solve?

Fix xinfo_groups returns unexpected result. Close #3545 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-13 17:31:15 +08:00
86507af770 Set task progress on exception (#4025)
### What problem does this PR solve?

Set task progress on exception

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-13 17:15:08 +08:00
93635674c3 Feat: Reparse a file shall reuse existing chunks if possible #3793 (#4021)
### What problem does this PR solve?

Feat: Reparse a file shall reuse existing chunks if possible #3793

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 16:55:13 +08:00
1defe0b19b Feat: Supports to debug single component in Agent. #3993 (#4007)
### What problem does this PR solve?

Feat: Supports to debug single component in Agent. #3993
Fix: The github button on the login page is displayed incorrectly  #4002

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 14:43:24 +08:00
0bca46ac3a Migrate infinity at startup (#3858)
### What problem does this PR solve?

Migrate infinity at startup

#3809
https://github.com/infiniflow/infinity/issues/2321

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 13:43:56 +08:00
1ecb687c51 Fix bugs in agent api and update api document (#3996)
### What problem does this PR solve?

Fix bugs in agent api and update api document

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-13 10:25:52 +08:00
68d46b2a1e Fix bug in hierarchical_merge function (#4006)
### What problem does this PR solve?

Fix the hierarchical_merge function: compare actual value vs. actual
value, instead of index vs. actual value.
Related issue #4003

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: luopan <luopan@example.com>
2024-12-13 08:50:58 +08:00
7559bbd46d Component debugging functionality. (#4012)
### What problem does this PR solve?

#3993
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 08:50:32 +08:00
275b5d14f2 Fix json file parse (#4004)
### What problem does this PR solve?

Fix json file parsing

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-12 20:34:46 +08:00
9ae81b42a3 Updated UI (#4011)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-12-12 19:46:53 +08:00
d6c74ff131 Add mssql support (#3985)
Several changes:
- ExecSQL: add MSSQL connection support
- Fix the duckduckgo-search rate-limit bug
- Fix typos in the Vietnamese resources

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-12 19:26:44 +08:00
e8d74108a5 Fix: Completion AttributeError: 'list' object has no attribute 'get' (#3999)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: lizheng@ssc-hn.com <lizheng@ssc-hn.com>
2024-12-12 19:00:34 +08:00
c8b1a564aa Replaced md5 with xxhash64 for chunk id (#4009)
### What problem does this PR solve?

Replaced md5 with xxhash64 for chunk id

### Type of change

- [x] Refactoring
2024-12-12 17:47:39 +08:00
301f95837c Try to reuse existing chunks (#3983)
### What problem does this PR solve?

Try to reuse existing chunks. Close #3793
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-12 16:38:03 +08:00
835fd7abcd Updated RAGFlow edition descriptions (#4001)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-12 11:45:59 +08:00
bb8f97c9cd UI updates + RAGFlow image description (#3995)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-12-12 09:57:52 +08:00
6d19294ddc Support debug components. (#3994)
### What problem does this PR solve?

#3993

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 19:23:59 +08:00
f61c276f74 Update comment (#3981)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-11 18:39:09 +08:00
409acf0d9f Fix: Fixed the issue where two consecutive indexes were displayed incorrectly #3839 (#3988)
### What problem does this PR solve?

Fix: Fixed the issue where two consecutive indexes were displayed
incorrectly #3839

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 16:29:17 +08:00
74c6b21f3b Update api documents (#3979)
### What problem does this PR solve?

Update api documents

### Type of change

- [x] Documentation Update

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-11 12:38:57 +08:00
beeacd3e3f Fix exec sql exception issue. (#3982)
### What problem does this PR solve?
#3978

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 11:44:59 +08:00
95259af68f update typo vietnamese (#3973)
Fix typos in the Vietnamese translation

### Type of change
- [x] Refactoring

---------

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: bill <yibie_jingnian@163.com>
2024-12-11 11:12:57 +08:00
855455006b Disable SQL DB binlog in Helm chart (#3976)
### What problem does this PR solve?

The initial Helm chart implementation added in #3815 suffers from an
issue where the 5GB data volume for the SQL DB is filled up with
[binlog](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) files
after just a few days. Since the app uses a non-replicated SQL DB config
I think it makes sense to disable the binlog in the SQL DB container.
This is achieved by simply adding the required argument to the container
startup command.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 11:10:33 +08:00
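The fix amounts to one extra argument on the SQL DB container's startup command; a hypothetical compose-style fragment (service name and image are illustrative, not the chart's actual values, though `--skip-log-bin` is the standard MySQL 8 option for this):

```yaml
# Disable the binary log for a non-replicated MySQL instance
mysql:
  image: mysql:8.0
  command: ["mysqld", "--skip-log-bin"]
```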
b844ad6e06 Added release notes (#3969)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-12-10 19:38:27 +08:00
e0533f19e9 Fix: Answers with links to information not parsing #3839 (#3968)
### What problem does this PR solve?

Fix: Answers with links to information not parsing #3839

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 18:58:06 +08:00
9a6d976252 Add back beartype (#3967)
### What problem does this PR solve?

Add back beartype

### Type of change

- [x] Refactoring
2024-12-10 18:43:43 +08:00
3d76f10a91 Fixed retrieval TypeError: unhashable type: 'list' (#3966)
### What problem does this PR solve?

Fixed retrieval TypeError: unhashable type: 'list'

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 18:28:56 +08:00
e9b8c30a38 Support iframe chatbot. (#3961)
### What problem does this PR solve?

#3909

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 17:03:24 +08:00
601d74160b Feat: Exclude reference from the data returned by the conversation/get interface #3909 (#3962)
### What problem does this PR solve?

Feat: Exclude reference from the data returned by the conversation/get
interface #3909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 16:46:47 +08:00
fc4e644e5f Feat: Modify the data structure of the chunk in the conversation #3909 (#3955)
### What problem does this PR solve?

Feat: Modify the data structure of the chunk in the conversation #3909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 16:36:16 +08:00
03f00c9e6f Rename page_num_list, top_list, position_list (#3940)
### What problem does this PR solve?

Rename page_num_list, top_list, position_list to page_num_int, top_int,
position_int

### Type of change

- [x] Refactoring
2024-12-10 16:32:58 +08:00
87e46b4425 Fixed README (#3956)
### What problem does this PR solve?

Fixed README

### Type of change

- [x] Documentation Update
2024-12-10 12:11:39 +08:00
d5a322a352 Theme switch support (#3568)
### What problem does this PR solve?
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-12-10 11:42:04 +08:00
7d4f1c0645 Case insensitive when set doc engine (#3954)
### What problem does this PR solve?

DOC_ENGINE="INFINITY", "Infinity", or "Elasticsearch" now all work

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-10 11:26:10 +08:00
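A minimal sketch of the case-insensitive normalization (the function name and accepted values are hypothetical, not the PR's actual code):

```python
import os

def resolve_doc_engine(value: str) -> str:
    """Normalize the DOC_ENGINE setting case-insensitively."""
    engine = value.strip().lower()
    if engine not in ("infinity", "elasticsearch"):
        raise ValueError(f"Unsupported DOC_ENGINE: {value}")
    return engine

# "INFINITY", "Infinity", and "infinity" all resolve to the same backend
print(resolve_doc_engine(os.environ.get("DOC_ENGINE", "Elasticsearch")))
```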
927873bfa6 Fix syn error. (#3953)
### What problem does this PR solve?

Close #3696
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 10:54:54 +08:00
5fe0791684 Allows quick entry of entities separated by commas (#3914)
Allows quick entry of entities separated by commas

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 10:29:27 +08:00
3e134ac0ad Miscellaneous updates (#3942)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-12-10 10:19:50 +08:00
7a6bf4326e Fixed log not displaying (#3946)
### What problem does this PR solve?

Fixed log not displaying

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 09:36:59 +08:00
41a0601735 organize chunks by document in the prompt (#3925)
### What problem does this PR solve?

This PR organizes the chunks in the prompt by document and indicates
the name of each document, like this:

```
Document: {doc_name} \nContains the following relevant fragments:
chunk1
chunk2
chunk3

Document: {doc_name} \nContains the following relevant fragments:
chunk4
chunk5
```

This could serve as a baseline for adding metadata to the documents.

In my case, this improves the LLM's context about the origin of the
information.


### Type of change

- [X] New Feature (non-breaking change which adds functionality)

Co-authored-by: Miguel <your-noreply-github-email>
2024-12-10 09:06:52 +08:00
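The grouping described above can be sketched as follows (the chunk structure and function name are hypothetical, not the PR's actual code):

```python
from collections import defaultdict

def format_chunks_by_document(chunks):
    """Group retrieved chunks under their source document name in the prompt."""
    by_doc = defaultdict(list)
    for chunk in chunks:  # each chunk: {"doc_name": ..., "content": ...}
        by_doc[chunk["doc_name"]].append(chunk["content"])
    sections = []
    for doc_name, fragments in by_doc.items():
        sections.append(
            f"Document: {doc_name} \nContains the following relevant fragments:\n"
            + "\n".join(fragments)
        )
    return "\n\n".join(sections)

chunks = [
    {"doc_name": "a.pdf", "content": "chunk1"},
    {"doc_name": "a.pdf", "content": "chunk2"},
    {"doc_name": "b.pdf", "content": "chunk3"},
]
print(format_chunks_by_document(chunks))
```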
60486ecde5 api http return error (#3941)
API HTTP: return an error when the content is not found


- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-10 09:04:24 +08:00
255f4ccffc Fix session API issues. (#3939)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 17:37:36 +08:00
afe82feb57 Fix error message for image access. (#3936)
### What problem does this PR solve?

#3883

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 15:24:58 +08:00
044afa83d1 Fix transformers dependencies for slim. (#3934)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 14:21:37 +08:00
4b00be4173 Fixed tmp in Dockerfile (#3933)
### What problem does this PR solve?

Fixed tmp in Dockerfile

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 14:20:18 +08:00
215e9361ea Fix field missing issue. (#3931)
### What problem does this PR solve?

#3905
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 13:20:58 +08:00
aaec630759 Obsoleted dev and dev-slim (#3930)
### What problem does this PR solve?

Obsoleted dev and dev-slim
### Type of change

- [x] Documentation Update
2024-12-09 12:44:57 +08:00
3d735dca87 Add support to iframe chatbot (#3929)
### What problem does this PR solve?

#3909

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-09 12:38:04 +08:00
dcedfc5ec8 Add Japanese support (#3906)
### What problem does this PR solve?

Add native Japanese 🇯🇵 translations to the locales to support a new
language



### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Hiroshi Kameya <kameya_h@sunflare.co.jp>
2024-12-09 11:38:06 +08:00
1254ecf445 Added static check at PR CI (#3921)
### What problem does this PR solve?

Added static check at PR CI

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2024-12-08 21:23:51 +08:00
0d68a6cd1b Fix errors detected by Ruff (#3918)
### What problem does this PR solve?

Fix errors detected by Ruff

### Type of change

- [x] Refactoring
2024-12-08 14:21:12 +08:00
e267a026f3 Fix VERSION 2024-12-07 16:56:34 +08:00
44d4686b20 Fix release.yml 2024-12-07 15:21:24 +08:00
95614175e6 Update oc9 docker compose file (#3913)
### What problem does this PR solve?

Update OC9 docker compose files

issue: #3901

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-07 11:04:45 +08:00
c817ff184b Refactor UI text (#3911)
### What problem does this PR solve?

Refactor UI text

### Type of change

- [x] Documentation Update
- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-07 11:04:36 +08:00
f284578cea Fix release.yml 2024-12-06 23:00:29 +08:00
e69e6b2274 Fix VERSION 2024-12-06 22:51:31 +08:00
8cdb805c0b Fix release.py 2024-12-06 22:31:27 +08:00
885418f3b0 Fix release.yml 2024-12-06 21:10:06 +08:00
b44321f9c3 Introduced NEED_MIRROR (#3907)
### What problem does this PR solve?

Introduced NEED_MIRROR

### Type of change

- [x] Refactoring
2024-12-06 20:47:22 +08:00
f54a8d7748 Remove vector stored in component output. (#3908)
### What problem does this PR solve?

Close #3805

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 18:32:57 +08:00
311a475b6f Fix: Fixed the issue that the agent list page failed to load #3827 (#3902)
### What problem does this PR solve?

Fix: Fixed the issue that the agent list page failed to load #3827

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 17:05:40 +08:00
655b01a0a4 Remove token check while adding model. (#3903)
### What problem does this PR solve?

#3820

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 17:01:19 +08:00
d4ee082735 fix 2024-12-06 15:50:58 +08:00
1f5a7c4b12 Fix release.yml 2024-12-06 15:24:53 +08:00
dab58b9311 Fix release.yml 2024-12-06 14:41:43 +08:00
e56a60b316 Fixed release.yml 2024-12-06 14:37:41 +08:00
f189452446 Fix: An error occurred when deleting a new conversation #3898 (#3900)
### What problem does this PR solve?

Fix: An error occurred when deleting a new conversation #3898

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 14:24:41 +08:00
f576c555e4 Fixed release.yml 2024-12-06 14:18:42 +08:00
d8eea624e2 release with CI (#3891)
### What problem does this PR solve?

Refactor Dockerfile files.
Release with CI.

### Type of change

- [x] Refactoring
2024-12-06 14:05:30 +08:00
e64c7dfdf6 Feat: Import & export the agents. #3851 (#3894)
### What problem does this PR solve?

Feat: Import & export the agents. #3851

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-06 13:43:17 +08:00
c76e7b1e28 Fix typos (#3887)
### What problem does this PR solve?

Fix lots of typos; these also need to be merged into DEMO

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-06 09:47:56 +08:00
0d5486aa57 Add a dependency: blinker. (#3882)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 18:14:39 +08:00
3a0e9f9263 Feat: Add tooltip to question item of ChunkCreatingModal #3873 (#3880)
### What problem does this PR solve?

Feat: Add tooltip to question item of ChunkCreatingModal  #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 17:03:20 +08:00
1f0a153d0e Revert "Feat: Add tooltip to question item of ChunkCreatingModal #3873" (#3879)
Reverts infiniflow/ragflow#3878
2024-12-05 16:51:33 +08:00
8bdf1d98a3 Feat: Add tooltip to question item of ChunkCreatingModal #3873 (#3878)
### What problem does this PR solve?

Feat: Add tooltip to question item of ChunkCreatingModal  #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 16:49:55 +08:00
8037dc7b76 Fix chunk available issue (#3877)
### What problem does this PR solve?

Close #3873

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 16:49:43 +08:00
56f473b680 Feat: Add question parameter to edit chunk modal (#3875)
### What problem does this PR solve?

Close #3873

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 14:51:19 +08:00
b502dc7399 Feat: Adjust the input box width of EditTag to 100% #3873 (#3876)
### What problem does this PR solve?

Feat: Adjust the input box width of EditTag to 100% #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 14:29:49 +08:00
cfe23badb0 Feat: Add question parameter to edit chunk modal #3873 (#3874)
### What problem does this PR solve?

Feat: Add question parameter to edit chunk modal #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 13:58:33 +08:00
593ffc4067 Fix HuggingFace model error. (#3870)
### What problem does this PR solve?

#3865

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 13:28:42 +08:00
a88a1848ff Fix: The right coordinates of Categorize and Switch operators are misplaced #3868 (#3869)
### What problem does this PR solve?

Fix: The right coordinates of Categorize and Switch operators are
misplaced #3868

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 10:53:44 +08:00
5ae33184d5 Fix chunk position issue. (#3867)
### What problem does this PR solve?

Close #3864

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 10:53:26 +08:00
78601ee1bd Fix open AI compatible rerank issue. (#3866)
### What problem does this PR solve?
#3700
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 10:26:21 +08:00
84afb4259c Feat: Bind the route to the navigation bar in the head #3221 (#3863)
### What problem does this PR solve?
Feat: Bind the route to the navigation bar in the head #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-04 19:10:08 +08:00
1b817a5b4c Refine synonym query. (#3855)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2024-12-04 17:20:12 +08:00
1b589609a4 Add sdk for list agents and sessions (#3848)
### What problem does this PR solve?

Add sdk for list agents and sessions

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-04 16:23:22 +08:00
289f4f1916 Fix: Hide the download button if it is an empty file #3762 (#3853)
### What problem does this PR solve?

Fix: Hide the download button if it is an empty file #3762

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 15:09:41 +08:00
cf37e2ef1a Fix: Delete unused code #3651 (#3852)
### What problem does this PR solve?

Fix: Delete unused code #3651

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 14:15:17 +08:00
41e2dadea7 Feat: Fixed the problem of not finding EmailForm #3837 (#3847)
### What problem does this PR solve?

Feat: Fixed the problem of not finding EmailForm #3837

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-04 14:05:07 +08:00
f3318b2e49 Update docs. (#3849)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-04 12:23:31 +08:00
3f3469130b Fix preview issue in file manager. (#3846)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 11:53:23 +08:00
fc38afcec4 Web: Fixed the download and preview file not authorized. (#3652)
https://github.com/infiniflow/ragflow/issues/3651

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 11:48:06 +08:00
efae7afd62 Email sending tool (#3837)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._
Added the ability to send emails through SMTP.
Instructions for use:
- The corresponding SMTP parameters need to be configured.
- The upstream component needs to output in a fixed format.

![image](https://github.com/user-attachments/assets/93bc1af7-6d4f-4406-bd1d-bc042535dd82)


### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-04 11:21:17 +08:00
285bc58364 Update QRcode (#3844)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-04 10:11:20 +08:00
6657ca7cde Change default error message to English (#3838)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-04 09:34:49 +08:00
87455d79e4 Add api for list agents and agent sessions (#3835)
### What problem does this PR solve?

Add API for listing agents and agent sessions

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-03 19:03:16 +08:00
821fdf02b4 Fix parsing JSON file error (#3829)
### What problem does this PR solve?

Close issue: #3828

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-03 19:02:03 +08:00
54980337e4 Feat: Modify the style of the home page in bright mode #3221 (#3832)
### What problem does this PR solve?

Feat: Modify the style of the home page in bright mode #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 18:59:11 +08:00
92ab7ef659 Refactor embedding batch_size (#3825)
### What problem does this PR solve?

Refactor embedding batch_size. Close #3657

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2024-12-03 16:22:39 +08:00
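As a sketch of the batching pattern behind an embedding refactor like this (the embedder call and batch size are hypothetical, not the PR's actual code):

```python
def embed_in_batches(texts, embed_fn, batch_size=16):
    """Call the embedding model in fixed-size batches and concatenate results."""
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embed_fn(texts[i:i + batch_size]))
    return vectors

# Toy "embedder": each vector just encodes the text length
fake_embed = lambda batch: [[float(len(t))] for t in batch]
print(embed_in_batches(["a", "bb", "ccc"], fake_embed, batch_size=2))  # [[1.0], [2.0], [3.0]]
```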
934dbc2e2b Add more mistral models. (#3826)
### What problem does this PR solve?

#3647

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 15:18:38 +08:00
95da6de9e1 Fix the agent reference bug and the session prologue (#3823)
### What problem does this PR solve?

Fix the agent reference bug and the session prologue
#3285 #3819
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-03 14:49:26 +08:00
ccdeeda9cc Add Helm chart deployment method (#3815)
### What problem does this PR solve?

Adds a Helm chart for deploying RAGFlow on Kubernetes.

Closes #864. 

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-12-03 14:48:36 +08:00
74b28ef1b0 Add pagerank to KB. (#3809)
### What problem does this PR solve?

#3794

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 14:30:35 +08:00
7543047de3 Fix @ in model name issue. (#3821)
### What problem does this PR solve?

#3814

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-03 12:41:39 +08:00
e66addc82d Feat: Store the pagerank field at the outermost #3794 (#3822)
### What problem does this PR solve?

Feat: Store the pagerank field at the outermost #3794

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 11:55:21 +08:00
7b6a5ffaff Fix: page_chars attribute does not exist in some formats of PDF (#3796)
### What problem does this PR solve?

In #3335 it was suggested to upgrade to pdfplumber==0.11.1, but that
didn't solve the issue. It is actually the special formatting in some of
the PDFs that triggers the problem.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-03 11:08:06 +08:00
19545282aa Web API test cases (#3812)
### What problem does this PR solve?

1. Failed update dataset case
2. Upload and parse text file

### Type of change

- [x] Other (please describe): test cases

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-03 10:33:35 +08:00
6a0583f5ad Fix voyage embedding. (#3818)
### What problem does this PR solve?

#3816 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-03 09:33:54 +08:00
ed7e46b6ca Feat: Add TestingForm #3221 (#3810)
### What problem does this PR solve?

Feat: Add TestingForm #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-02 19:33:20 +08:00
9654e64a0a Fix the bug that the agent could not find the context (#3795)
### What problem does this PR solve?

Fix the bug that the agent could not find the context
#3682
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-02 19:05:18 +08:00
8b650fc9ef Feat: Supports page rank score for different knowledge bases. #3794 (#3800)
### What problem does this PR solve?

Feat: Supports page rank score for different knowledge bases. #3794

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-02 19:00:11 +08:00
69fb323581 [DOC] We have no plan to maintain Docker images for ARM. Please build your … (#3808)
…Docker image

### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-02 18:56:55 +08:00
9d093547e8 Improved ollama doc (#3787)
### What problem does this PR solve?

Improved ollama doc. Close #3723

### Type of change

- [x] Documentation Update
2024-12-02 17:28:30 +08:00
c5f13629af Set Log level by env (#3798)
### What problem does this PR solve?

Set Log level by env

### Type of change

- [x] Refactoring
2024-12-02 17:24:39 +08:00
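The env-driven log level above can be sketched with the standard `logging` module; the variable name `LOG_LEVEL` is an assumption for illustration and may differ from what RAGFlow actually reads:

```python
import logging
import os

# Read the desired level from the environment (variable name LOG_LEVEL is
# an assumption), falling back to INFO for unset or unknown values.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)
logging.basicConfig(level=level)
```

Mapping through `getattr` means `LOG_LEVEL=debug` and `LOG_LEVEL=DEBUG` both work, while a typo silently degrades to INFO instead of crashing at startup.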
c4b6df350a More dataset test cases (#3802)
### What problem does this PR solve?

1. Test not allowed fields of dataset

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Other (please describe): Test cases

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-02 17:15:19 +08:00
976d112280 Feat: Quit from jointed team #3759 (#3797)
### What problem does this PR solve?

Feat: Quit from jointed team #3759

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-02 15:31:41 +08:00
8fba5c4179 Update document: how to upload a file larger than 128MB (#3791)
### What problem does this PR solve?

As title.

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-02 14:37:37 +08:00
d19f059f34 Detect invalid response from api.siliconflow.cn (#3792)
### What problem does this PR solve?

Detect invalid response from api.siliconflow.cn. Close #2643

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-02 12:55:05 +08:00
deca6c1b72 Fix service_conf for oc9 docker compose file. (#3790)
### What problem does this PR solve?

#3678

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-02 12:00:58 +08:00
3ee9ca749d Fix the bug that prevented modifying dataset_ids (#3784)
### What problem does this PR solve?

Fix the bug that prevented modifying `dataset_ids`.
 #3772

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-02 11:40:20 +08:00
7058ac0041 Fix out of boundary. (#3786)
### What problem does this PR solve?

#3769
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-02 11:38:53 +08:00
a7efd3cac5 Removed obsolete instructions (#3785)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-02 10:51:48 +08:00
59a5813f1b Add new Jina models to the Jina connector (#3770)
### What problem does this PR solve?

Add new models to the Jina connector, allowing the use of models that
support multilingual embeddings.

### Type of change

- [X] Other (please describe): new connectors no breaking change
2024-12-02 10:06:39 +08:00
08c1a5e1e8 Refactor parse progress (#3781)
### What problem does this PR solve?

Refactor parse file progress

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 22:28:00 +08:00
ea84cc2e33 Update file parsing progress info (#3780)
### What problem does this PR solve?

Refine the file parsing progress info

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 17:03:00 +08:00
b5f643681f Update README (#3777)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 14:53:51 +08:00
5497ea34b9 Update QR code of wechat (#3776)
### What problem does this PR solve?

Update wechat group QR code.

### Type of change

- [x] Documentation Update

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 12:47:53 +08:00
e079656473 Update progress info and start welcome info (#3768)
### What problem does this PR solve?


### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-30 18:48:06 +08:00
d00297a763 Fix chunk creation using Infinity (#3763)
### What problem does this PR solve?

1. Store the error type in Infinity.
2. The position-list value read from Infinity wasn't correct.

Fix issue: #3729

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-30 00:10:14 +08:00
a19210daf1 Feat: Add tooltip to delimiter field #1909 (#3758)
### What problem does this PR solve?

Feat: Add tooltip to the delimiter field #1909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 18:13:59 +08:00
b2abc36baa Optimize the knowledge base card UI (#3756)
### What problem does this PR solve?

Limit the title to two lines and the description to three lines, showing
an ellipsis for the overflow.

![image](https://github.com/user-attachments/assets/4b01cac5-b7a3-40cc-8965-42aeba9ea233)

![image](https://github.com/user-attachments/assets/aa4230eb-100f-4905-9cf0-75ce9813a332)


### Type of change

- [x] Other (please describe):
2024-11-29 18:02:17 +08:00
fadbe23bfe Feat: Translate comments of file-util.ts to English #3749 (#3757)
### What problem does this PR solve?

Feat: Translate comments of file-util.ts to English #3749

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 18:01:14 +08:00
ea8a59d0b0 Image compression (#3749)
### What problem does this PR solve?

The uploaded avatar has been compressed to preserve transparency while
meeting the length requirements for the 'text' type. The current
compressed size is 100x100 pixels.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 17:15:46 +08:00
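A plausible sketch of that compression step with Pillow; the helper name and exact target size are illustrative (the PR only states 100x100 with transparency preserved), not the repository's actual implementation:

```python
from io import BytesIO
from PIL import Image

def compress_avatar(data: bytes, size=(100, 100)) -> bytes:
    """Downscale an uploaded avatar, keeping the alpha channel (hypothetical helper)."""
    img = Image.open(BytesIO(data)).convert("RGBA")  # RGBA keeps transparency
    img.thumbnail(size)                              # in-place, preserves aspect ratio
    buf = BytesIO()
    img.save(buf, format="PNG")                      # PNG can store the alpha channel
    return buf.getvalue()
```

PNG is chosen here because JPEG would discard transparency; `thumbnail` never upscales, so small avatars pass through unchanged.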
381219aa41 Fixed increase_usage for builtin models (#3748)
### What problem does this PR solve?

Fixed increase_usage for builtin models. Close #1803

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 17:02:49 +08:00
0f08b0f053 Weight up title and keywords for chunks in terms of retrieval (#3750)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-29 16:39:55 +08:00
0dafce31c4 Feat: Support for formulas #1954 (#3747)
### What problem does this PR solve?

Feat: Support for formulas #1954

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 16:34:30 +08:00
c93e0355c3 Feat: Add DatasetTable #3221 (#3743)
### What problem does this PR solve?

Feat: Add DatasetTable #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 16:05:46 +08:00
1e0fc76efa Added release notes v0.11.0 (#3745)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-29 16:00:42 +08:00
d94386e00a Pass top_p to ollama (#3744)
### What problem does this PR solve?

Pass top_p to ollama. Close #1769

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 14:52:27 +08:00
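For context, Ollama's REST API accepts sampling parameters inside an `options` object of the request body. A hedged sketch of building such a payload (the helper name is hypothetical; the bug was that `top_p` was not being forwarded):

```python
def build_ollama_payload(model: str, messages: list,
                         temperature=None, top_p=None) -> dict:
    """Build an /api/chat-style request body; sampling knobs go under "options"."""
    options = {}
    if temperature is not None:
        options["temperature"] = temperature
    if top_p is not None:
        options["top_p"] = top_p  # previously dropped, now forwarded
    return {"model": model, "messages": messages,
            "options": options, "stream": False}
```

Keeping unset knobs out of `options` lets the server apply its own defaults instead of overriding them with nulls.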
0a62dd7a7e Update document (#3746)
### What problem does this PR solve?

Fix description on local LLM deployment case

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-11-29 14:50:45 +08:00
06a21d2031 Change Traditional Chinese to Simplified Chinese (#3742)
### What problem does this PR solve?

Change Traditional Chinese to Simplified Chinese

### Type of change

- [x] Other (please describe):
2024-11-29 13:45:31 +08:00
9a3febb7c5 Refactor dockerfile (#3741)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-29 13:37:50 +08:00
27cd765d6f Fix raptor issue (#3737)
### What problem does this PR solve?

#3732

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 11:55:41 +08:00
a0c0a957b4 Fix GPU docker compose file (#3736)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-11-29 10:49:15 +08:00
b89f7c69ad Fix image_id absence issue (#3735)
### What problem does this PR solve?

#3731

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 10:37:09 +08:00
fcdc6ad085 Fix the issue where the agent interface cannot call the context (#3725)
### What problem does this PR solve?

Fix the context used by agent interface calls during web testing,
switching it to the user chat's context record. Also remove duplicate
messages before appending; the result can be viewed in the messages
field of the api_4_conversation table.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-11-29 10:36:48 +08:00
834c4d81f3 Update version info to v0.14.1 (#3720)
### What problem does this PR solve?

Update version info to v0.14.1

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-28 20:09:20 +08:00
a3e0ac9c0b Fix: Clicking the checkbox of the pop-up window for editing chunk is invalid #3726 (#3727)
### What problem does this PR solve?

Fix: Clicking the checkbox of the pop-up window for editing chunk is
invalid #3726

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-28 20:08:06 +08:00
80af3cc2d4 Don't log exception if object doesn't exist (#3724)
### What problem does this PR solve?

Don't log exception if object doesn't exist. Close #1483

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-28 19:37:01 +08:00
966bcda6b9 Updated descriptions for the Agent components (#3728)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-28 19:32:50 +08:00
112ef42a19 Ensure thumbnail be smaller than 64K (#3722)
### What problem does this PR solve?

Ensure the thumbnail is smaller than 64K. Close #1443
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-28 19:15:31 +08:00
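One way to enforce such a size cap is to shrink and re-encode until the payload fits; a minimal sketch with Pillow, assuming the constraint is the storage column size (the loop bounds and JPEG settings are illustrative, not the repository's code):

```python
from io import BytesIO
from PIL import Image

LIMIT = 64 * 1024  # 64K cap from the PR

def thumbnail_under_limit(img: Image.Image) -> bytes:
    """Shrink and re-encode until the thumbnail fits under LIMIT (illustrative)."""
    side = 128
    while True:
        copy = img.copy()
        copy.thumbnail((side, side))          # never upscales, keeps aspect ratio
        buf = BytesIO()
        copy.convert("RGB").save(buf, format="JPEG", quality=70)
        data = buf.getvalue()
        if len(data) <= LIMIT or side <= 16:  # give up at 16px to avoid looping
            return data
        side //= 2
```

Halving the edge length each round converges quickly: a 128px JPEG at quality 70 is almost always a few kilobytes, so the loop rarely runs more than once.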
91f1814a87 Fix error response (#3719)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-11-28 18:56:10 +08:00
4e8e4fe53f Feat: Add Dataset page #3221 (#3721)
### What problem does this PR solve?

Feat: Add Dataset page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-28 18:44:36 +08:00
cdae8d28fe Fix test cases (#3718)
### What problem does this PR solve?

Fix test cases

### Type of change

- [x] Other (please describe): Fix error cases

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-28 17:37:46 +08:00
964a6f4ec4 Added an infinity configuration file to easily customize the settings of Infinity (#3715)
### What problem does this PR solve?

Added an infinity configuration file to easily customize the settings of
Infinity

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-28 15:59:00 +08:00
9fcad0500d Add more web test cases (#3702)
### What problem does this PR solve?

Test cases about dataset

### Type of change

- [x] Other (please describe): test cases

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-28 15:46:35 +08:00
ec560cc99d Feat: Scrolling knowledge base list and set the number of entries per page to 30 #3695 (#3712)
### What problem does this PR solve?

Feat: Scrolling knowledge base list and set the number of entries per
page to 30 #3695

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-28 15:25:38 +08:00
7ae8828e61 Added release notes v0.12.0 (#3711)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-28 14:57:50 +08:00
43e367f2ea Detect shape error of embedding (#3710)
### What problem does this PR solve?

Detect shape error of embedding. Close #2997

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-28 14:10:22 +08:00
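The idea behind such a check can be sketched with NumPy; the function name and error message are illustrative, not the fix's actual code:

```python
import numpy as np

def check_embedding_shape(vectors, expected_dim: int) -> np.ndarray:
    """Fail fast when an embedding batch has the wrong rank or dimension."""
    arr = np.asarray(vectors, dtype=float)
    if arr.ndim != 2 or arr.shape[1] != expected_dim:
        raise ValueError(f"embedding shape {arr.shape} != (*, {expected_dim})")
    return arr
```

Raising at ingestion time turns a confusing downstream indexing failure into an immediate, descriptive error.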
e678819f70 Fix RGBA error (#3707)
### What problem does this PR solve?

**The image passed to cv_mdl.describe() was not converted to RGB**

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-28 13:09:02 +08:00
bc701d7b4c Edit chunk shall update instead of insert it (#3709)
### What problem does this PR solve?

Editing a chunk should update it rather than insert a new one. Close #3679

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-28 13:00:38 +08:00
9f57534843 Revert "Feat: Scrolling knowledge base list #3695" (#3708)
Reverts infiniflow/ragflow#3703
2024-11-28 11:44:23 +08:00
52b3492b18 Feat: Scrolling knowledge base list #3695 (#3703)
### What problem does this PR solve?

Feat: Scrolling knowledge base list #3695

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-28 10:51:30 +08:00
2229431803 Added release notes for v0.13.0 (#3691)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-27 19:26:03 +08:00
57208d8e53 Fix batch size issue. (#3675)
### What problem does this PR solve?

#3657

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-27 18:06:43 +08:00
535b15ace9 Feat: Add dataset sidebar #3221 (#3683)
### What problem does this PR solve?

Feat: Add dataset sidebar #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-27 18:06:05 +08:00
2249d5d413 Always open text file for write with UTF-8 (#3688)
### What problem does this PR solve?

Always open text file for write with UTF-8. Close #932 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-27 16:24:16 +08:00
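The fix pattern here is simply passing an explicit encoding so behavior doesn't depend on the platform default; a self-contained sketch:

```python
import os
import tempfile

# Without encoding="utf-8", open() uses the platform default (e.g. the ANSI
# code page on Windows), which can corrupt or reject non-ASCII text.
path = os.path.join(tempfile.gettempdir(), "utf8_demo.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("知识库 demo")
with open(path, encoding="utf-8") as f:
    content = f.read()
os.remove(path)
```

The same rule applies on read: the encoding must match on both sides, which is why fixing only the writes is not enough.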
6fb1a181aa Added aspose on macosx/arm64 (#3686)
### What problem does this PR solve?

Added aspose on macosx/arm64. Close #3666 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-27 15:00:07 +08:00
90ffcb4ddb Fix graphrag + infinity bugs (#3681)
### What problem does this PR solve?

Fix graphrag + infinity bugs

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-27 12:45:43 +08:00
7f48acb3fd Fix enable/disable bug (#3662)
### What problem does this PR solve?

Fix enable/disable bug   #3628

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-27 09:37:11 +08:00
d61bbe6750 Use polars-lts-cpu on arm64 (#3667)
### What problem does this PR solve?

Use polars-lts-cpu on arm64

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-27 09:32:41 +08:00
ee37ee3d28 Feat: Add Datasets page #3221 (#3661)
### What problem does this PR solve?

Feat: Add Datasets page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-27 09:31:08 +08:00
8b35776916 Fix a bug in VolcEngine (#3658)
### What problem does this PR solve?

Fix a bug in VolcEngine  #3553

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-27 09:30:49 +08:00
b6f3f15f0b Fix KB list bugs and add web api test (#3649)
### What problem does this PR solve?

1. The KB-list path parameters page_number and page_size were not typed
as int.
2. Add cases for create / list / delete datasets.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Test cases

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-26 18:21:15 +08:00
fa8e2c1678 Added release notes (#3660)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-26 18:11:39 +08:00
7669fc8f52 Fix es get NotFoundError (#3659)
### What problem does this PR solve?

Fix es get NotFoundError

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-26 18:07:07 +08:00
98cf1c2a9d Feat: add PromptManagement page #3221 (#3650)
### What problem does this PR solve?

Feat: add PromptManagement page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-26 16:55:44 +08:00
5337cad7e4 Check model id when set dialog. Close #849 (#3655)
### What problem does this PR solve?

Check model id when set dialog. Close #849

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-26 16:32:46 +08:00
0891a393d7 Let ThreadPool exit gracefully. (#3653)
### What problem does this PR solve?

#3646

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-26 16:31:07 +08:00
5c59651bda Fix the bug causing garbled text (#3640)
### What problem does this PR solve?

Fix the bug causing garbled text #3613

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-26 12:06:56 +08:00
f6c3d7ccf6 Fixed es mapping (#3643)
### What problem does this PR solve?

Fixed es mapping

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-26 12:00:19 +08:00
3df1663e4f For security. (#3642)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-26 09:34:34 +08:00
32cf566a08 Feat: Add ModelManagement page #3221 (#3638)
### What problem does this PR solve?

Feat: Add ModelManagement page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-26 09:10:48 +08:00
769c67a470 Updated UI (#3639)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-25 19:32:25 +08:00
49494d4e3c Update version number for poetry (#3637)
### What problem does this PR solve?



### Type of change

- [x] Refactoring
2024-11-25 17:57:08 +08:00
3839d8abc7 Updated FAQ and Upgrade guide (#3636)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-25 17:29:56 +08:00
d8b150a34c Add support for folder deletion (#3635)
### What problem does this PR solve?

Add support for folder deletion.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-25 16:59:43 +08:00
4454b33e51 Feat: Add PricingCard component #3221 (#3634)
### What problem does this PR solve?

Feat: Add PricingCard component #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-25 16:21:47 +08:00
ce6b4c0e05 Fix a bug in completions (#3632)
### What problem does this PR solve?

Fix a bug in completions

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-25 16:21:28 +08:00
ddf01e0450 Fix missing info in README and config error (#3633)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-25 15:53:29 +08:00
86e48179a1 Fix bugs in api (#3624)
### What problem does this PR solve?

#3488

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-25 15:25:48 +08:00
b2c33b4df7 DOC: Added health check and team management (#3630)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-25 14:31:01 +08:00
9348616659 Handle infinity empty response (#3627)
### What problem does this PR solve?

Handle infinity empty response. Close #3623
Show version in docker build log

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-25 14:09:42 +08:00
a0e9b62de5 Feat: Add TeamManagement component #3221 (#3626)
### What problem does this PR solve?

Feat: Add TeamManagement component #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-25 12:21:37 +08:00
7874aaaf60 doc:rewrite some confusing statement (#3607)
Rewrite some confusing statements to improve the user experience.
2024-11-25 11:57:31 +08:00
08ead81dde Bump infinity to v0.5.0-dev5 (#3520)
### What problem does this PR solve?

Bump infinity to v0.5.0-dev5

### Type of change

- [x] Refactoring
2024-11-25 11:53:58 +08:00
e5af18d5ea Update docs for v0.14.0 (#3625)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-25 11:37:56 +08:00
609236f5c1 Let 'One' applicable for tables in docx (#3619)
### What problem does this PR solve?

#3598

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Performance Improvement
2024-11-25 09:57:54 +08:00
6a3f9bc32a Restore version info to v0.13.0 (#3605)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-11-23 17:06:22 +08:00
934d6d9ad1 Update quickstart.mdx (#3603)
### What problem does this PR solve?

Fixed typo from RAGFlow_IMAGE to RAGFlOW_IMAGE

### Type of change

- [X] Documentation Update
2024-11-23 15:27:13 +08:00
875096384b when qwen rerank model not return ok, raise exception to notice user (#3593)
### What problem does this PR solve?

When calling the Qwen rerank model, if the model does not return
correctly, an exception should be raised to notify the user, rather than
simply returning a value of 0, as this would be confusing to the user.
### Type of change          

- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 22:34:34 +08:00
a10c2f2eff Fix: Solve the problem of model files in the image being soft links pointing to a non-existent address. #3584 (#3586)
### What problem does this PR solve?

Fix: Solve the problem of model files in the image being soft links
pointing to a non-existent address. #3584

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-22 21:54:39 +08:00
646ac1f2b4 Improved image build instructions (#3580)
### What problem does this PR solve?

Improved arm64 image build instructions

### Type of change

- [x] Documentation Update
- [x] Refactoring
2024-11-22 20:24:32 +08:00
8872aed512 Feat: Modify the prompt text for deleting team members #2834 (#3599)
### What problem does this PR solve?
Feat: Modify the prompt text for deleting team members #2834

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 18:50:53 +08:00
55692e4da6 Feat: Capitalize the first letter of the team's role #2834 (#3597)
### What problem does this PR solve?

Feat: Capitalize the first letter of the team's role #2834

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 18:19:34 +08:00
6314d3c727 Updates for readme (#3595)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-22 16:47:45 +08:00
06b9256972 Feat: remove useSetLlmSetting from GenerateForm #3591 (#3592)
### What problem does this PR solve?

Feat: remove useSetLlmSetting from GenerateForm #3591

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 16:31:44 +08:00
cc219ff648 Fix agent session API (#3589)
### What problem does this PR solve?

#3585
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 16:19:00 +08:00
ee33bf71eb Feat: Add ProfileSetting page #3221 (#3588)
### What problem does this PR solve?

Feat: Add ProfileSetting page  #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 15:11:38 +08:00
ee7fd71fdc Add an agent template: SEO bloger (#3582)
### What problem does this PR solve?

#3560

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 13:12:05 +08:00
d56f52eef8 Fix agent api invokation issue (#3581)
### What problem does this PR solve?

#3570 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 12:37:27 +08:00
9f3141804f Fix chunk enable/disable issue (#3579)
### What problem does this PR solve?

#3576

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 12:25:42 +08:00
60a3e1a8dc Update Docker .env (#3576)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-22 12:03:46 +08:00
9541d7e7bc Added TRACE_MALLOC_DELTA and TRACE_MALLOC_FULL (#3555)
### What problem does this PR solve?

Added TRACE_MALLOC_DELTA and TRACE_MALLOC_FULL to debug task_executor.py
heap. Relates to #3518

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 12:00:25 +08:00
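The stdlib mechanism behind this kind of heap debugging is `tracemalloc`; a sketch of the snapshot-delta idea (the actual env-var wiring inside task_executor.py is not shown here):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()
leak = [bytearray(1024) for _ in range(1000)]  # simulated growing allocation
after = tracemalloc.take_snapshot()
# Statistics sorted by growth; the top entry points at the allocating line.
stats = after.compare_to(before, "lineno")
top_growth = stats[0].size_diff
tracemalloc.stop()
```

Taking deltas between periodic snapshots (rather than one full snapshot) is what makes slow leaks stand out from steady-state allocations.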
811c49d7a2 Updated obsolete faqs (#3575)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-11-22 11:11:06 +08:00
482c1b59c8 Check tika.parser return result (#3564)
### What problem does this PR solve?

Check tika.parser return result. Close #3229

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
2024-11-22 11:05:06 +08:00
691ea287c2 Added RAGFlow image version into bug report template (#3561)
### What problem does this PR solve?

Add RAGFlow image version into bug report template

### Type of change

- [x] Other (please describe):
2024-11-22 10:50:13 +08:00
b87d14492f Revert "Updated obsolete faqs (#3554)" (#3573)
This reverts commit 13ff463845.

### What problem does this PR solve?


### Type of change

- [x] Other (please describe):
2024-11-22 10:42:10 +08:00
cc5960b88e Vietnamese language support on web (#3549)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-11-21 18:43:08 +08:00
ee50f78d99 Add component 'Template' (#3562)
### What problem does this PR solve?

#3560

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 18:26:22 +08:00
193b08a3ed Fix: Limit node version #3547 (#3563)
### What problem does this PR solve?

Fix: Limit node version #3547

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-21 18:14:22 +08:00
3a3e23d8d9 Feat: Add Template operator #3556 (#3559)
### What problem does this PR solve?

Feat: Add Template operator #3560

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 18:05:31 +08:00
30f111edb3 Fixs for translation agent (#3557)
### What problem does this PR solve?

#3556 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 16:22:25 +08:00
d47ee88454 Feat: When saving the canvas, other dsl parameters passed from the backend are spliced into the dsl parameters #3355 (#3558)
### What problem does this PR solve?

Feat: When saving the canvas, other dsl parameters passed from the
backend are spliced into the dsl parameters #3355
#3556

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 16:21:54 +08:00
13ff463845 Updated obsolete faqs (#3554)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-21 16:15:37 +08:00
bf9ebda3c8 Add tests for frontend API (#3552)
### What problem does this PR solve?

Add tests for frontend API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-21 15:39:25 +08:00
85dd9fde43 Fix a bug in agent api (#3551)
### What problem does this PR solve?

Fix a bug in agent api
#3539

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-21 12:45:22 +08:00
c7c8b3812f Add test for document (#3548)
### What problem does this PR solve?

Add test for document

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-21 12:02:26 +08:00
0ac6dc8f8c Cut down the attempt times of ES (#3550)
### What problem does this PR solve?

#3541
### Type of change


- [x] Refactoring
- [x] Performance Improvement
2024-11-21 11:37:45 +08:00
58a2200b80 Fix: search issue for graphrag (#3546)
### What problem does this PR solve?

#3538

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-21 11:13:29 +08:00
e10b0e6b60 Fix: Inaccurate description (#3540)
### What problem does this PR solve?

Correction of inaccurate front-end description.

![image](https://github.com/user-attachments/assets/4a7330b5-1384-4092-825d-56675fb5799c)

![image](https://github.com/user-attachments/assets/b3737912-4f5a-4928-a629-404a4fa4da6e)

### Type of change

- [x] Refactoring
2024-11-21 10:09:20 +08:00
d9c882399d Ensure LIGHTEN work (#3542)
### What problem does this PR solve?
#3531

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-21 09:56:28 +08:00
8930bfcff8 Fix bugs (#3535)
### What problem does this PR solve?

1. System monitor icon and text missing
2. Team knowledge base can't be searched

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-20 20:55:42 +08:00
9b9afa9d6e Feat: Display the input parameters of begin in the output result table #3355 (#3534)
### What problem does this PR solve?

Feat: Display the input parameters of begin in the output result table
#3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:55:31 +08:00
362db857d0 New: a new interpreter based on Andrew Ng's theory. (#3532)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:55:22 +08:00
541272eb99 Fix: authorization issue (#3530)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:55:10 +08:00
5b44b99cfd Removed beartype (#3528)
### What problem does this PR solve?

The beartype configuration on
main (64f50992e0) is:
```
from beartype import BeartypeConf
from beartype.claw import beartype_all  # <-- you didn't sign up for this
beartype_all(conf=BeartypeConf(violation_type=UserWarning))    # <-- emit warnings from all code
```

ragflow_server failed at a third-party package:

```
(ragflow-py3.10) zhichyu@iris:~/github.com/infiniflow/ragflow$ rm -rf logs/* && bash docker/launch_backend_service.sh 
Starting task_executor.py for task 0 (Attempt 1)
Starting ragflow_server.py (Attempt 1)
Traceback (most recent call last):
  File "/home/zhichyu/github.com/infiniflow/ragflow/api/ragflow_server.py", line 22, in <module>
    from api.utils.log_utils import initRootLogger
  File "/home/zhichyu/github.com/infiniflow/ragflow/api/utils/__init__.py", line 25, in <module>
    import requests
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/__init__.py", line 15, in <module>
    from ._base_connection import _TYPE_BODY
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/_base_connection.py", line 5, in <module>
    from .util.connection import _TYPE_SOCKET_OPTIONS
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/__init__.py", line 4, in <module>
    from .connection import is_connection_dropped
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 7, in <module>
    from .timeout import _DEFAULT_TIMEOUT, _TYPE_TIMEOUT
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/timeout.py", line 20, in <module>
    _DEFAULT_TIMEOUT: Final[_TYPE_DEFAULT] = _TYPE_DEFAULT.token
NameError: name 'Final' is not defined
Traceback (most recent call last):
  File "/home/zhichyu/github.com/infiniflow/ragflow/rag/svr/task_executor.py", line 22, in <module>
    from api.utils.log_utils import initRootLogger
  File "/home/zhichyu/github.com/infiniflow/ragflow/api/utils/__init__.py", line 25, in <module>
    import requests
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/__init__.py", line 15, in <module>
    from ._base_connection import _TYPE_BODY
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/_base_connection.py", line 5, in <module>
    from .util.connection import _TYPE_SOCKET_OPTIONS
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/__init__.py", line 4, in <module>
    from .connection import is_connection_dropped
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 7, in <module>
    from .timeout import _DEFAULT_TIMEOUT, _TYPE_TIMEOUT
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/timeout.py", line 20, in <module>
    _DEFAULT_TIMEOUT: Final[_TYPE_DEFAULT] = _TYPE_DEFAULT.token
NameError: name 'Final' is not defined
```

This third-party package is out of our control, so I have to remove
beartype entirely.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:54:57 +08:00
6be7901df2 Warning instead of exception on type mismatch (#3523)
### What problem does this PR solve?

Warning instead of exception on type mismatch.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:54:42 +08:00
4d42bcd517 Feat: merge the begin operator's url and file into one field #3355 (#3521)
### What problem does this PR solve?

Feat: merge the begin operator's url and file into one field #3355
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:54:27 +08:00
9b4c2868bd Fix set_output type hint (#3516)
### What problem does this PR solve?

Fix set_output type hint

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:54:09 +08:00
d02a2b131a Fix: potential risk (#3515)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-20 20:53:58 +08:00
81c7b6afc5 Make spark model robuster to model name (#3514)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:53:44 +08:00
cad341e794 Added kb_id filter to knn. Fix #3458 (#3513)
### What problem does this PR solve?

Added kb_id filter to knn. Fix #3458

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:53:30 +08:00
e559cebcdc fix: keyerror issue (#3512)
### What problem does this PR solve?

#3511

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:53:20 +08:00
8b4407a68c feat: Add Datasets component to home page #3221 (#3508)
### What problem does this PR solve?

feat: Add Datasets component to home page #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:53:07 +08:00
289034f36e smooth term weight (#3510)
### What problem does this PR solve?

#3499

### Type of change

- [x] Performance Improvement
2024-11-20 20:52:51 +08:00
17a7ea42eb fix synonym bug (#3506)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:52:36 +08:00
2044bb0039 Fix bugs (#3502)
### What problem does this PR solve?

1. Remove unused code
2. Fix type mismatch in the nlp search and infinity search interfaces
3. Fix chunk list to get all chunks of this user.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-20 20:52:23 +08:00
c4f2464935 fix: laws.py added missing import logging (#3501)
### What problem does this PR solve?

_Choosing Laws Chunk Method results in an error when parsing a document.
The error is caused by a missing import in the `laws.py` file._

```
Traceback (most recent call last):
  File "/ragflow/rag/svr/task_executor.py", line 445, in handle_task
    do_handle_task(task)
  File "/ragflow/rag/svr/task_executor.py", line 384, in do_handle_task
    cks = build(r)
          ^^^^^^^^
  File "/ragflow/rag/svr/task_executor.py", line 196, in build
    cks = chunker.chunk(row["name"], binary=binary, from_page=row["from_page"],
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ragflow/rag/app/laws.py", line 161, in chunk
    for txt, poss in pdf_parser(filename if not binary else binary,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ragflow/rag/app/laws.py", line 124, in __call__
    logging.debug("layouts:".format(
    ^^^^^^^
NameError: name 'logging' is not defined. Did you forget to import 'logging'

```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Michal Masrna <m.marna1@gmail.com>
2024-11-20 20:52:05 +08:00
bcb6f7168f [bug] import typing.cast for leiden.py (#3493)
### What problem does this PR solve?

The leiden algorithm throws an exception because the `cast` function is not imported

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-19 18:42:30 +08:00
361cff34fc add input variables to begin component (#3498)
### What problem does this PR solve?

#3355 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 18:41:48 +08:00
0cd5b64c3b Changed requirement to python 3.10 (#3496)
### What problem does this PR solve?

Changed requirement to python 3.10.
Changed image base to Ubuntu 22.04 since it contains python 3.10.

### Type of change

- [x] Refactoring
2024-11-19 18:25:04 +08:00
16fbe9920d feat: Stream the greetings of the agent dialogue #3355 (#3490)
### What problem does this PR solve?

feat: Stream the greetings of the agent dialogue #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 15:14:35 +08:00
e4280be5e5 minor (#3483)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-19 14:59:21 +08:00
d42362deb6 Add api for sessions and add max_tokens for tenant_llm (#3472)
### What problem does this PR solve?

Add api for sessions and add max_tokens for tenant_llm

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-19 14:51:33 +08:00
883fafde72 Fix elasticsearch status display (#3487)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-19 14:40:58 +08:00
568322aeaf fix(rag): fix error in viewing document chunk and cannot start task_executor server (#3481)
### What problem does this PR solve?

1. Fix error in viewing document chunk

<img width="1677" alt="Pasted Graphic"
src="https://github.com/user-attachments/assets/acd84cde-f38c-4190-b135-5e5139ae2613">

Viewing document chunk details results in a BeartypeCallHintParamViolation
error.

Traceback (most recent call last):
File "ragflow/.venv/lib/python3.12/site-packages/flask/app.py", line
880, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
File "ragflow/.venv/lib/python3.12/site-packages/flask/app.py", line
865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
# type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ragflow/.venv/lib/python3.12/site-packages/flask_login/utils.py",
line 290, in decorated_view
    return current_app.ensure_sync(func)(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ragflow/api/apps/chunk_app.py", line 311, in knowledge_graph
sres = settings.retrievaler.search(req, search.index_name(tenant_id),
kb_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<@beartype(rag.nlp.search.Dealer.search) at 0x3381fd800>", line
39, in search
beartype.roar.BeartypeCallHintParamViolation: Method
rag.nlp.search.Dealer.search() parameter
idx_names='ragflow_0e1e67f431d711ef98fc00155d29195d' violates type hint
list[str], as str 'ragflow_0e1e67f431d711ef98fc00155d29195d' not
instance of list.
2024-11-19 11:30:29,817 ERROR 91013 Method
rag.nlp.search.Dealer.search() parameter
idx_names='ragflow_0e1e67f431d711ef98fc00155d29195d' violates type hint
list[str], as str 'ragflow_0e1e67f431d711ef98fc00155d29195d' not
instance of list.
Traceback (most recent call last):
  File "ragflow/api/apps/chunk_app.py", line 60, in list_chunk
sres = settings.retrievaler.search(query, search.index_name(tenant_id),
kb_ids, highlight=True)

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<@beartype(rag.nlp.search.Dealer.search) at 0x3381fd800>", line
39, in search
beartype.roar.BeartypeCallHintParamViolation: Method
rag.nlp.search.Dealer.search() parameter
idx_names='ragflow_0e1e67f431d711ef98fc00155d29195d' violates type hint
list[str], as str 'ragflow_0e1e67f431d711ef98fc00155d29195d' not
instance of list.


because in nlp/search.py, the idx_names is only a list

<img width="1098" alt="Pasted Graphic 2"
src="https://github.com/user-attachments/assets/4998cb1e-94bc-470b-b2f4-41ecb5b08f8a">

but the DocStoreConnection.search method accepts list or str
<img width="1175" alt="Pasted Graphic 3"
src="https://github.com/user-attachments/assets/ee918b4a-87a5-42c9-a6d2-d0db0884b875">


and its implementations also accept list and str
es_conn.py

<img width="1121" alt="Pasted Graphic 4"
src="https://github.com/user-attachments/assets/3e6dc030-0a0d-416c-8fd4-0b4cfd576f8c">

infinity_conn.py

<img width="1221" alt="Pasted Graphic 5"
src="https://github.com/user-attachments/assets/44edac2b-6b81-45b0-a3fc-cb1c63219015">

2. Fix: cannot start task_executor server due to unresolved reference
'Mapping'
<img width="1283" alt="Pasted Graphic 6"
src="https://github.com/user-attachments/assets/421f17b8-d0a5-46d3-bc4d-d05dc9dfc934">

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-11-19 14:36:10 +08:00
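The violation boils down to passing a bare `str` where the hint demands `list[str]`. A minimal sketch of the normalization that resolves it (the helper name is illustrative, not the actual fix in the PR):

```python
def as_index_list(idx_names):
    """Normalize a single index name (str) into the list[str] shape that
    rag.nlp.search.Dealer.search's type hint expects."""
    if isinstance(idx_names, str):
        return [idx_names]
    return list(idx_names)

# a bare index name becomes a one-element list; lists pass through unchanged
single = as_index_list("ragflow_0e1e67f431d711ef98fc00155d29195d")
```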
31decadd8e feat: Let the top navigation bar open in a tab when you right click on it #3018 (#3486)
### What problem does this PR solve?

feat: Let the top navigation bar open in a tab when you right click on
it #3018

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 14:20:00 +08:00
dec9b3e540 Fix logs. Use dict.pop instead of del. Close #3473 (#3484)
### What problem does this PR solve?

Fix logs. Use dict.pop instead of del.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-19 14:15:25 +08:00
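The swap this commit describes can be shown in a few lines (the dictionary contents are illustrative):

```python
d = {"task_id": 1}

# del raises KeyError when the key is absent:
try:
    del d["missing"]
except KeyError:
    pass

# dict.pop with a default never raises, which is why the commit prefers it:
value = d.pop("missing", None)
```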
d0f94a42ff Refactor ragflow server (#3482)
### What problem does this PR solve?

1. Fix document error
2. Move start server out of if __main__ branch.

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-19 12:31:20 +08:00
ed0d47fc8a feat: "Open link in new tab" doesn't work for profile image #3018 (#3480)
### What problem does this PR solve?

feat:  "Open link in new tab" doesn't work for profile image #3018

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 11:40:59 +08:00
aa9a16e073 Fix document error (#3456)
### What problem does this PR solve?

1. Update README.md
2. Fix error description in docker/.env

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-19 11:31:11 +08:00
eef84a86bf feat: Disable clicking the Next button while uploading files in RunDrawer #3355 (#3477)
### What problem does this PR solve?

feat: Disable clicking the Next button while uploading files in
RunDrawer #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 11:03:39 +08:00
ed72d1100b minor format updates (#3471)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-18 19:19:28 +08:00
f4e9dae33a Updated upgrade guide to avoid confusion (#3469)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-18 19:07:05 +08:00
50f7b7e0a3 feat: Capture task executor null value exception #3409 (#3468)
### What problem does this PR solve?

feat: Capture task executor null value exception #3409

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 19:06:52 +08:00
01c2712941 feat: Display task executor tooltip with json-view #3409 (#3467)
### What problem does this PR solve?

feat: Display task executor  tooltip with json-view #3409

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 18:36:18 +08:00
4413683898 Introduced beartype (#3460)
### What problem does this PR solve?

Introduced [beartype](https://github.com/beartype/beartype) for runtime
type-checking.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 17:38:17 +08:00
3824c1fec0 feat: Show task_executor heartbeat #3409 (#3461)
### What problem does this PR solve?

feat: Show task_executor heartbeat #3409
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 17:23:49 +08:00
4b3eeaa6ef Added LocalAI support for rerank models (#3446)
### What problem does this PR solve?

Hi there!
LocalAI added support for rerank models:
https://localai.io/features/reranker/

I've implemented the LocalAIRerank class (largely copied from the
OpenAI_APIRerank class).
Also, the LocalAI model responds with a 500 error code if the length of
"documents" is less than 2 in the similarity check, so I've added a second
"document" to the RERANK model connection check in `api/apps/llm_app.py`.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-18 12:05:52 +08:00
70cd5c1599 Remove unused code (#3448)
### What problem does this PR solve?

1. Remove unused code.
2. Move some codes from settings to constants

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-18 12:05:38 +08:00
f9643adc43 Adjusted heartbeat format (#3459)
### What problem does this PR solve?

Adjusted heartbeat format

### Type of change

- [x] Refactoring
2024-11-18 12:03:28 +08:00
7b9e0723d6 build(deps): bump cross-spawn from 7.0.3 to 7.0.5 in /web (#3455)
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from
7.0.3 to 7.0.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md">cross-spawn's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.4...v7.0.5">7.0.5</a>
(2024-11-07)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>fix escaping bug introduced by backtracking (<a
href="640d391fde">640d391</a>)</li>
</ul>
<h3><a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.4">7.0.4</a>
(2024-11-07)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>disable regexp backtracking (<a
href="https://redirect.github.com/moxystudio/node-cross-spawn/issues/160">#160</a>)
(<a
href="5ff3a07d9a">5ff3a07</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="085268352d"><code>0852683</code></a>
chore(release): 7.0.5</li>
<li><a
href="640d391fde"><code>640d391</code></a>
fix: fix escaping bug introduced by backtracking</li>
<li><a
href="bff0c87c8b"><code>bff0c87</code></a>
chore: remove codecov</li>
<li><a
href="a7c6abc6fe"><code>a7c6abc</code></a>
chore: replace travis with github workflows</li>
<li><a
href="9b9246e096"><code>9b9246e</code></a>
chore(release): 7.0.4</li>
<li><a
href="5ff3a07d9a"><code>5ff3a07</code></a>
fix: disable regexp backtracking (<a
href="https://redirect.github.com/moxystudio/node-cross-spawn/issues/160">#160</a>)</li>
<li><a
href="9521e2da94"><code>9521e2d</code></a>
chore: fix tests in recent node js versions</li>
<li><a
href="97ded399e9"><code>97ded39</code></a>
chore: convert package lock</li>
<li><a
href="d52b6b9da4"><code>d52b6b9</code></a>
chore: remove unused argument (<a
href="https://redirect.github.com/moxystudio/node-cross-spawn/issues/156">#156</a>)</li>
<li><a
href="5d843849e1"><code>5d84384</code></a>
chore: add travis jobs on ppc64le (<a
href="https://redirect.github.com/moxystudio/node-cross-spawn/issues/142">#142</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cross-spawn&package-manager=npm_and_yarn&previous-version=7.0.3&new-version=7.0.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 11:19:44 +08:00
a1d01a1b2f enlarge the default token length of RAPTOR summarization (#3454)
### What problem does this PR solve?

#3426

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 10:15:26 +08:00
dc05f43eee Minor: Fixed a broken link (#3451)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-16 20:22:53 +08:00
77bdeb32bd Added current task into task executor's heartbeat (#3444)
### What problem does this PR solve?

Added current task into task executor's heartbeat

### Type of change

- [x] Refactoring
2024-11-15 22:55:41 +08:00
af18217d78 feat: Add banner #3221 (#3442)
### What problem does this PR solve?

feat: Add banner #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-15 19:18:23 +08:00
4ed5ca2666 handle_task catch all exception (#3441)
### What problem does this PR solve?

handle_task catch all exception
Report heartbeats

### Type of change

- [x] Refactoring
2024-11-15 18:51:09 +08:00
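A minimal sketch of the catch-all pattern the commit describes, with a deliberately failing task; function names are illustrative, not RAGFlow's actual code:

```python
import logging

def handle_task(task):
    # stand-in for the real task handler; always fails to exercise the guard
    raise RuntimeError("boom")

def handle_task_safely(task):
    """Swallow any exception so one bad task cannot kill the executor loop."""
    try:
        handle_task(task)
        return True
    except Exception:
        logging.exception("handle_task failed for %s", task)
        return False

ok = handle_task_safely("task-1")
```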
1e90a1bf36 Move settings initialization after module init phase (#3438)
### What problem does this PR solve?

1. Module init won't connect to the database any more.
2. Configs in settings need to be accessed as settings.CONFIG_NAME

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-15 17:30:56 +08:00
ac033b62cf fix tika-server issue (#3439)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 16:57:01 +08:00
cb3b9d7ada refine the message of queuing a task (#3437)
### What problem does this PR solve?



### Type of change

- [x] Refactoring
2024-11-15 15:59:54 +08:00
ca9e97d2f2 Enlarge the term weight difference (#3435)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-15 15:41:50 +08:00
6d451dbe06 Updated retrieval testing UI (#3433)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-15 15:17:34 +08:00
e0659a4f0e feat: Add RunDrawer #3355 (#3434)
### What problem does this PR solve?

feat: Translation test run form #3355
feat: Wrap QueryTable with Collapse #3355
feat: If the required fields are not filled in, the submit button will
be grayed out. #3355
feat: Add RunDrawer #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-15 15:17:23 +08:00
a854bc22d1 Rework task executor heartbeat (#3430)
### What problem does this PR solve?

Rework task executor heartbeat, and print in console.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-15 14:43:55 +08:00
48e060aa53 rm es query escape chars (#3428)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 13:19:07 +08:00
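A hedged sketch of what removing escape characters from ES queries can look like; the exact character set this commit strips is an assumption based on the Lucene/Elasticsearch query_string reserved characters:

```python
import re

# reserved characters in the ES query_string syntax (assumed set)
_RESERVED = re.compile(r'[+\-=&|><!(){}\[\]^"~*?:\\/]')

def strip_reserved(query: str) -> str:
    """Drop characters that would otherwise need escaping in an ES query."""
    return _RESERVED.sub(" ", query).strip()

cleaned = strip_reserved('what is "RAG"?')
```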
47abfc32d4 Remove unused settings (#3427)
### What problem does this PR solve?


### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-15 13:18:16 +08:00
a1ba228bc2 fix: empty token bug (#3424)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 10:33:03 +08:00
996c94a8e7 Move cl100k_base tokenizer to docker image (#3411)
### What problem does this PR solve?

Move tiktoken's cl100k_base encoding into the docker image

issue: #3338 

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-15 10:18:40 +08:00
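A hedged sketch of how the encoding files can be baked into an image at build time, assuming tiktoken's `TIKTOKEN_CACHE_DIR` environment variable; the path is illustrative and not necessarily what this PR does:

```dockerfile
# Pre-fetch the cl100k_base BPE files at build time so containers
# never download them from the network at runtime.
ENV TIKTOKEN_CACHE_DIR=/ragflow/.tiktoken
RUN python -c "import tiktoken; tiktoken.get_encoding('cl100k_base')"
```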
220aaddc62 fix: synonym bug (#3423)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 10:14:51 +08:00
6878d23a57 Print configs at RAGFlow server startup (#3414)
### What problem does this PR solve?

Print configs at the RAGFlow startup phase.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

```
2024-11-14 21:27:53,090 INFO     962231 Current configs, from /home/weilongma/Documents/development/ragflow/conf/service_conf.yaml:
2024-11-14 21:27:53,090 INFO     962231 ragflow: {'host': '0.0.0.0', 'http_port': 9380}
2024-11-14 21:27:53,090 INFO     962231 mysql: {'name': 'rag_flow', 'user': 'root', 'password': 'infini_rag_flow', 'host': 'mysql', 'port': 5455, 'max_connections': 100, 'stale_timeout': 30}
2024-11-14 21:27:53,090 INFO     962231 minio: {'user': 'rag_flow', 'password': 'infini_rag_flow', 'host': 'minio:9000'}
2024-11-14 21:27:53,090 INFO     962231 es: {'hosts': 'http://es01:1200', 'username': 'elastic', 'password': 'infini_rag_flow'}
2024-11-14 21:27:53,090 INFO     962231 redis: {'db': 1, 'password': 'infini_rag_flow', 'host': 'redis:6379'}
```

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-15 09:29:40 +08:00
df9d054551 Updated descriptions of agent APIs (#3407)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-14 18:44:37 +08:00
30c1f7ee29 make variable access more robust (#3406)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-14 18:28:41 +08:00
e4c4fdabbd Update version display on web UI (#3405)
### What problem does this PR solve?


### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-14 17:51:21 +08:00
30f6421760 Use consistent log file names, introduced initLogger (#3403)
### What problem does this PR solve?

Use consistent log file names, introduced initLogger

### Type of change

- [x] Refactoring
2024-11-14 17:13:48 +08:00
ab4384e011 Updates on parsing progress, including more detailed time cost inform… (#3402)
### What problem does this PR solve?

#3401 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-14 16:28:10 +08:00
201bbef7c0 Print version at RAGFlow server startup (#3393)
### What problem does this PR solve?

Print version at RAGFlow server startup

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-14 15:51:30 +08:00
95d21e5d9f fix: standardize property name from 'chat' to 'chat_id' (#3383)
### What problem does this PR solve?

This PR addresses the inconsistency in property naming within the
codebase by renaming the 'chat' property to 'chat_id' in the session.py
file. This change aims to align the naming convention with other parts
of the application that refer to chat identifiers as 'chat_id', thereby
improving code clarity and maintainability.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 15:49:03 +08:00
c5368c7745 resolve halt while starting up (#3397)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 13:20:17 +08:00
0657a09e2c Update llm_factories.json (#3396)
### What problem does this PR solve?
Added: Claude-3-5-sonnet-20241022 version. 

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2024-11-14 13:00:16 +08:00
4caf932808 fix bug about fetching knowledge graph (#3394)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 12:29:15 +08:00
400fc3f5e9 Add environment validation at server startup phase (#3388)
### What problem does this PR solve?

1. Validate that the Python version is >= 3.11
2. Download nltk data

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-11-14 11:01:08 +08:00
e44e3a67b0 adapt to lower case cohere (#3392)
### What problem does this PR solve?

#3384

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 10:18:25 +08:00
9d395ab74e Added doc for switching elasticsearch to infinity (#3370)
### What problem does this PR solve?

Added doc for switching elasticsearch to infinity

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2024-11-14 00:08:55 +08:00
83c6b1f308 set DLA active for KG (#3386)
### What problem does this PR solve?

### Type of change


- [x] Refactoring
2024-11-13 16:59:19 +08:00
7ab9715b0e refine the error message (#3382)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-13 16:20:59 +08:00
632b23486f Fix the value issue of anthropic (#3351)
### What problem does this PR solve?

This pull request fixes the issue mentioned in
https://github.com/infiniflow/ragflow/issues/3263.

1. The response should be parsed as a dict, to prevent the following code
from failing to read values:
ans = response["content"][0]["text"]
2. The API model ```claude-instant-1.2``` has been retired (see
[model-deprecations](https://docs.anthropic.com/en/docs/resources/model-deprecations))
and triggers errors in the code, so I deleted it from the
conf/llm_factories.json file and added the latest API model,
```claude-3-5-sonnet-20241022```
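The first fix can be sketched like this (`extract_answer` is a hypothetical helper for illustration, not the project's code):

```python
import json

def extract_answer(response):
    # Parse raw JSON payloads into a dict before indexing, so the
    # lookup below cannot fail on a string response.
    if isinstance(response, (str, bytes)):
        response = json.loads(response)
    return response["content"][0]["text"]

print(extract_answer('{"content": [{"text": "hello"}]}'))
```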



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chenhaodong <chenhaodong@ctrlvideo.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-13 16:13:52 +08:00
ccf189cb7f mv service_conf.yaml to conf/ and fix: add 'answer' as a parameter to 'generate' (#3379)
### What problem does this PR solve?
#3373

### Type of change

- [x] Refactoring
- [x] Bug fix
2024-11-13 15:56:40 +08:00
1fe9a2e6fd feat: Add input parameter to begin operator #3355 (#3375)
### What problem does this PR solve?

feat: Add input parameter to begin operator #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-13 14:54:10 +08:00
9fc092a911 Updated RAGFlow's dataset configuration UI (#3376)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-13 14:45:55 +08:00
fa54cd5f5c Extract model dir from model's full name (#3368)
### What problem does this PR solve?

When the model's group name contains the digits 0-9, we can't find the
downloaded model, because we do not correctly extract the model dir's
name from the model's full name.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 王志鹏 <zhipeng3.wang@midea.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-13 14:10:16 +08:00
667d0e5537 Turn the relative date in question to absolute date (#3372)
### What problem does this PR solve?
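Rewriting relative date words in a user's question into absolute dates before retrieval can be sketched as follows (illustrative only; the project's actual rules are richer and language-aware):

```python
import datetime as dt
import re

def absolutize(question, today=None):
    """Replace relative date words with absolute ISO dates so the
    retriever can match dated documents."""
    today = today or dt.date.today()
    repl = {
        "today": today,
        "yesterday": today - dt.timedelta(days=1),
        "tomorrow": today + dt.timedelta(days=1),
    }
    for word, day in repl.items():
        question = re.sub(rf"\b{word}\b", day.isoformat(), question)
    return question

print(absolutize("sales since yesterday", today=dt.date(2024, 11, 13)))
```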


### Type of change

- [x] Performance Improvement
2024-11-13 13:49:18 +08:00
91332fa0f8 Refine english synonym (#3371)
### What problem does this PR solve?

#3361

### Type of change

- [x] Performance Improvement
2024-11-13 12:58:37 +08:00
0c95a3382b Dynamically create the service_conf.yaml file by replacing environment variables from .env (#3341)
### What problem does this PR solve?

This pull request implements the feature mentioned in #3322. 

Instead of manually having to edit the `service_conf.yaml` file when
changes have been made to `.env` and mapping it into the docker
container at runtime, a template file is used and the values replaced by
the environment variables from the `.env` file when the container is
started.
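A minimal sketch of that substitution step (the template contents and variable name here are illustrative, not the project's actual files):

```python
import os
from string import Template

# Simulate a variable that docker compose would load from .env
os.environ["MYSQL_PASSWORD"] = "secret"

# A tiny stand-in for service_conf.yaml.template
template_text = "mysql:\n  password: '${MYSQL_PASSWORD}'\n"

# Render the template with the current environment at container start
rendered = Template(template_text).substitute(os.environ)
print(rendered)
```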
 

### Type of change

- [X] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-11-12 22:56:53 +08:00
7274420ecd Updated RAGFlow UI (#3362)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-12 19:56:56 +08:00
a2a5631da4 Rework logging (#3358)
Unified all log files into one.

### What problem does this PR solve?

Unified all log files into one.

### Type of change

- [x] Refactoring
2024-11-12 17:35:13 +08:00
567a7563e7 Fix bugs in api and add examples (#3353)
### What problem does this PR solve?

Fix bugs in api.
Add simple examples for api.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-12 17:14:33 +08:00
62a9afd382 Change launch backend script to handle errors gracefully (#3334)
### What problem does this PR solve?

The `launch_backend_service.sh` script enters infinite loops for both
the task executors and the backend server. When an error occurs in any
of these processes, the script continuously restarts them without
properly handling termination signals. This behavior causes the script
to even ignore interrupts, leading to persistent error messages and
making it difficult to exit the script gracefully.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

### Explanation of Modifications

1. **Signal Trapping with `trap`:** 
- The `trap cleanup SIGINT SIGTERM` line ensures that when a `SIGINT` or
`SIGTERM` signal is received, the cleanup function is invoked.
- The `cleanup` function sets the `STOP` flag to `true`, iterates
through all child process IDs stored in the `PIDS` array, and sends a
`kill` signal to each process to terminate them gracefully.
2. **Retry Limits:**
- Introduced a `MAX_RETRIES` variable to limit the number of restart
attempts for both `task_executor.py` and `ragflow_server.py`
- The loops now check if the retry count has reached the maximum limit.
If so, they invoke the `cleanup` function to terminate all processes and
exit the script.
3. **Process Tracking with `PIDS` Array:**
- After launching each background process (`task_exe` and `run_server`),
their Process IDs (PIDs) are stored in the `PIDS` array.
- This allows the `cleanup` function to terminate all child processes
effectively when needed.
4. **Graceful Shutdown:**
- When the `cleanup` function is called, it iterates over all child PIDs
and sends a termination signal (`kill`) to each, ensuring that all
subprocesses are stopped before the script exits.
5. **Logging Enhancements:**
- Added `echo` statements to provide clearer logs about the state of
each process, including attempts, successes, failures, and retries.
6. **Exit on Successful Completion:**
- If `ragflow_server.py` or a `task_executor.py` process exits with a
success code (0), the loop breaks, preventing unnecessary retries.

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-12 15:51:38 +08:00
aa68d3b8db fix: Cannot copy and paste text on agent page #3356 (#3357)
### What problem does this PR solve?

fix: Cannot copy and paste text on agent page #3356

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-12 15:49:45 +08:00
784ae896d1 add dependencies of chrome (#3352)
### What problem does this PR solve?



### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 15:49:33 +08:00
f4c52371ab Integration with Infinity (#2894)
### What problem does this PR solve?

Integration with Infinity

- Replaced ELASTICSEARCH with dataStoreConn
- Renamed deleteByQuery with delete
- Renamed bulk to upsertBulk
- getHighlight, getAggregation
- Fix KGSearch.search
- Moved Dealer.sql_retrieval to es_conn.py


### Type of change

- [x] Refactoring
2024-11-12 14:59:41 +08:00
00b6000b76 feat: Disable automatic saving of agent during running agent #3349 (#3350)
### What problem does this PR solve?

feat: Disable automatic saving of agent during running agent #3349

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 12:47:36 +08:00
db23d62827 feat: Add background colors such as inverse-strong #3221 (#3346)
### What problem does this PR solve?

feat: Add background colors such as inverse-strong #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 11:29:48 +08:00
70ea6661ed add new models for zhipu-ai (#3348)
### What problem does this PR solve?

#3345

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 11:27:43 +08:00
a01fceb328 feat: updates all readme to have a link to redirect to new locale (#3339)
### What problem does this PR solve?

Updates all readme to have a link to redirect to README_id.md for
documentation in Bahasa Indonesia

### Type of change

- [x] Documentation Update
2024-11-12 09:26:14 +08:00
e9e98ea093 feat: documentation updates to support Bahasa Indonesia (#3315)
### What problem does this PR solve?

Add Readme docs in Indonesia's native language (Bahasa Indonesia) for
ragflow

### Type of change

- [x] Documentation Update
2024-11-11 19:33:23 +08:00
528646a958 feat: Add custom background color #3221 (#3336)
### What problem does this PR solve?

feat: Add custom background color #3221

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-11 19:29:56 +08:00
8536335e63 Miscellaneous edits to RAGFlow's UI (#3337)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-11 19:29:34 +08:00
88072b1e90 fix: double brace issue (#3328)
### What problem does this PR solve?

#3299

### Type of change

- [x] Performance Improvement
2024-11-11 12:07:02 +08:00
34d1daac67 fix: Anthropic param error (#3327)
### What problem does this PR solve?

#3263

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-11 11:54:14 +08:00
3faae0b2c2 trivial (#3325)
### What problem does this PR solve?



### Type of change


- [x] Performance Improvement
2024-11-11 10:39:49 +08:00
5e5a35191e fix benchmark issue (#3324)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-11 10:14:30 +08:00
7c486ee3f9 Fix typo (#3319)
### What problem does this PR solve?

Fix typo

### Type of change

- [x] Refactoring
2024-11-11 09:36:39 +08:00
20d686737a feat: Switch the login page to the registration component by changing the routing parameters #3221 (#3307)
### What problem does this PR solve?
feat: Switch the login page to the registration component by changing
the routing parameters #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 19:47:22 +08:00
85047e7e36 Added configuration guideline (#3309)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-08 19:46:18 +08:00
ac64e35a45 feat: Translate autosaved #3301 (#3304)
### What problem does this PR solve?

feat: Translate autosaved #3301

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-08 18:27:42 +08:00
004487cca0 fix term weight issue (#3306)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 18:25:23 +08:00
74d1eeb4d3 feat: Automatically save agent page data #3301 (#3302)
### What problem does this PR solve?

feat: Automatically save agent page data #3301

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 17:28:11 +08:00
464a4d6ead Added env. MACOS (#3297)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-08 16:50:35 +08:00
3d3913419b Updated .env and Docker README (#3295)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-08 16:31:52 +08:00
63f7d3bae2 feat: Support shortcut keys to copy nodes #3283 (#3293)
### What problem does this PR solve?

feat: Support shortcut keys to copy nodes #3283

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 15:50:01 +08:00
8b6e272197 fix: term weight issue (#3294)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 15:49:44 +08:00
5205bdab24 feat: Modify the name of the answer operator #1739 (#3288)
### What problem does this PR solve?
feat: Modify the name of the answer operator #1739

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 13:56:43 +08:00
37d4708880 TS file with the Spanish-language translation (#3274)
### What problem does this PR solve?

This TS file is a translation file used to provide multilingual support
in applications. By adding the Spanish translation, it facilitates
changing the user interface language for Spanish-speaking users.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 13:44:03 +08:00
d88f0d43ea Make language judgment more robust (#3287)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-11-08 12:48:11 +08:00
a2153d61ce feat: Support shortcut keys to delete nodes #3283 (#3284)
### What problem does this PR solve?
feat: Support shortcut keys to delete nodes #3283

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 12:07:26 +08:00
f16ef57979 fix switch bug (#3280)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 10:08:57 +08:00
ff2bbb487f fix index of range (#3279)
### What problem does this PR solve?

#3273

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 09:58:11 +08:00
416efbe7e8 Updated Docker README (#3272)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-08 09:22:40 +08:00
9c6cc20356 Fix:#3230 When parsing a docx file using the Book parsing method, to_page is always -1, resulting in a block count of 0 even if parsing is successful (#3249)
### What problem does this PR solve?

When parsing a docx file using the Book parsing method, to_page is
always -1, resulting in a block count of 0 even if parsing is successful

Fix:#3230
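A hedged sketch of the kind of fix described (the helper below is hypothetical, not the parser's real code): treat `to_page == -1` as "through the end" instead of an empty range.

```python
def slice_pages(blocks, from_page=0, to_page=-1):
    # Treat to_page == -1 as "through the last block" instead of an
    # empty slice, which is what produced the zero block count.
    if to_page < 0:
        to_page = len(blocks)
    return blocks[from_page:to_page]

print(slice_pages(["b1", "b2", "b3"]))
```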

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-08 09:21:42 +08:00
7c0d28b62d Refactored Docker README (#3269)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-07 19:38:50 +08:00
48ab6d7a45 Update authorization for team (#3262)
### What problem does this PR solve?

Update authorization for team.
#3253 #3233
### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-07 19:26:03 +08:00
96b5d2b3a9 fix: The name of the copy operator is displayed the same as before #3265 (#3266)
### What problem does this PR solve?

fix: The name of the copy operator is displayed the same as before
#3265

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-07 17:53:31 +08:00
f45c29360c New locale for Bahasa Indonesia support (#3255)
### What problem does this PR solve?
Add native translation in locales for Bahasa Indonesia to support new
local language

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-07 17:53:19 +08:00
cdcbe6c2b3 Fixed broken links (#3256)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-07 14:12:32 +08:00
5038552ed9 fix: improve embedding model validation logic for dataset operations (#3235)
### What problem does this PR solve?
When creating or updating datasets with custom embedding models (e.g.,
Ollama), the validation logic was too restrictive and prevented valid
models from being used. The previous implementation would reject valid
custom models if they weren't in the predefined list, even when they
existed in TenantLLMService.

Changes:
- Simplify and improve the embedding model validation flow in
create/update endpoints
- Check TenantLLMService for custom models before rejecting
- Make validation logic more consistent between create and update
operations

### What problem does this PR solve?

This fix allows users to successfully create and update datasets with
custom embedding models while maintaining proper validation checks.
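The validation order described above can be sketched like so (names are illustrative, and the `TenantLLMService` lookup is simplified to a set membership test):

```python
# Assumed built-in list for illustration only
BUILTIN_EMBEDDING_MODELS = {"BAAI/bge-large-zh-v1.5"}

def is_valid_embedding_model(name, tenant_models):
    # Accept built-in models first, then fall back to models the tenant
    # has registered (e.g. a custom Ollama model) before rejecting.
    if name in BUILTIN_EMBEDDING_MODELS:
        return True
    return name in tenant_models

print(is_valid_embedding_model("my-ollama-embed", {"my-ollama-embed"}))
```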

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: liuhua <10215101452@stu.ecnu.edu.cn>
2024-11-07 10:36:28 +08:00
1b3e39dd12 fix ci issue (#3245)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-06 19:32:30 +08:00
fbcc0bb408 accelerate tokenize (#3244)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-06 18:54:41 +08:00
d3bb5e9f3d feat: Display input parameters on operator nodes #3240 (#3241)
### What problem does this PR solve?
feat: Display input parameters on operator nodes #3240


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-06 18:48:05 +08:00
4097912d59 add inputs to display to every components (#3242)
### What problem does this PR solve?

#3240

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-06 18:47:53 +08:00
f3aaa0d453 Add sdk for Agent API (#3220)
### What problem does this PR solve?

Add sdk for Agent API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-06 18:03:45 +08:00
0dff64f6ad fix: TypeError: only length-1 arrays can be converted to Python scalars (#3211)
### What problem does this PR solve?
Fix "TypeError: only length-1 arrays can be converted to Python scalars"
while using the cohere embedding model.
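This TypeError typically appears when `float()` (or an implicit scalar conversion) is applied to a multi-element array; a minimal reproduction and the usual remedy, illustrative rather than the project's exact code:

```python
import numpy as np

emb = np.array([0.1, 0.2, 0.3])
try:
    float(emb)  # scalar conversion of a length-3 array
except TypeError as e:
    print(type(e).__name__)  # the error this PR fixes

vec = [float(x) for x in emb]  # convert element-wise instead
print(vec)
```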

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)


![image](https://github.com/user-attachments/assets/2c21a69f-cd76-4d25-b320-058964812db8)
2024-11-06 11:15:00 +08:00
601a128cd3 feat: Add next login page with shadcn/ui #3221 (#3231)
### What problem does this PR solve?

feat: Add next login page with shadcn/ui  #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-06 11:13:04 +08:00
af74bf01c0 Editorial updates to Docker README (#3223)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-06 09:43:54 +08:00
a418a343d1 doc updates (#3216)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-11-05 16:41:43 +08:00
ab6e6019a7 Added a list of supported models (#3214)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-05 15:21:37 +08:00
13053172cb fix: Fixed the issue of api markdown document display being messy #2346 (#3212)
### What problem does this PR solve?

fix: Fixed the issue of api markdown document display being messy #2346

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-05 15:00:07 +08:00
38ebf6b2c0 Remove dead code (#3210)
### What problem does this PR solve?

Remove dead code

### Type of change

- [x] Refactoring
2024-11-05 14:34:49 +08:00
a7bf4ca8fc Fix bugs in API (#3204)
### What problem does this PR solve?

Fix bugs in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-05 14:07:31 +08:00
7e89be5ed1 feat: add qwen 2.5 models for silicon flow (#3203)
### What problem does this PR solve?

add qwen 2.5 models for silicon flow

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-11-05 13:58:29 +08:00
b7b30c4b57 add keyword extraction time elapsed to the little lamp. (#3207)
### What problem does this PR solve?



### Type of change

- [x] Refactoring
2024-11-05 13:39:50 +08:00
55953819c1 accelerate term weight calculation (#3206)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-11-05 13:11:26 +08:00
677f02c2a7 rm unused file (#3205)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-05 11:56:09 +08:00
185c6a0c71 Unified API response json schema (#3170)
### What problem does this PR solve?

Unified API response json schema
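A hypothetical illustration of such a unified envelope; the field names here (`code`, `message`, `data`) are assumptions for the sketch, not necessarily the project's exact schema:

```python
def api_response(data=None, code=0, message="success"):
    # Wrap every endpoint's payload in one consistent envelope.
    return {"code": code, "message": message, "data": data}

print(api_response({"id": 1}))
```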

### Type of change

- [x] Refactoring
2024-11-05 11:02:31 +08:00
339639a9db add assistant to canvas chat history (#3201)
### What problem does this PR solve?

#3185

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-05 10:04:31 +08:00
18ae8a4091 raise exception if embedding model not found (#3199)
### What problem does this PR solve?

#3173 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-05 09:29:01 +08:00
cbca7dfce6 fix bugs in test (#3196)
### What problem does this PR solve?

fix bugs in test

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-04 20:03:14 +08:00
a9344e6838 Docs update for upgrading DLA models (#3198)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-04 20:02:40 +08:00
aa733b1ea4 update poetry.lock (#3197)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-04 19:18:08 +08:00
8305632852 add agent completion API (#3192)
### What problem does this PR solve?

#3105

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-04 17:20:16 +08:00
57f23e0808 feat: Add agent interface document link to agent page #3189 (#3190)
### What problem does this PR solve?

feat: Add agent interface document link to agent page #3189

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-04 17:04:35 +08:00
16b6a78c1e feat: Add tooltip to clean_html item #2908 (#3183)
### What problem does this PR solve?

feat: Add tooltip to clean_html item #2908

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-11-04 15:36:57 +08:00
dd1146ec64 feat: docs for api endpoints to generate openapi specification (#3109)
### What problem does this PR solve?

**Added openapi specification for API routes. This creates swagger UI
similar to FastAPI to better use the API.**
Using python package `flasgger`

### Type of change
- [x] New Feature (non-breaking change which adds functionality)

Not all routes are included since this is a work in progress.

Docs can be accessed on: `{host}:{port}/apidocs`
2024-11-04 15:35:36 +08:00
07c453500b set default LLM to new registered user (#3180)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-04 15:03:07 +08:00
3e4fc12d30 Updated instructions on upgrading to RAGFlow dev (#3175)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-04 13:21:29 +08:00
285fd6ae14 feat: Proxy the SDK address to the backend #3166 (#3171)
### What problem does this PR solve?

feat: Proxy the SDK address to the backend #3166

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-04 11:37:19 +08:00
8d9238db14 fix es search parameter error (#3169)
### What problem does this PR solve?

#3151

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-04 09:53:41 +08:00
c06e765a5b Added tika jar into image to avoid downloading (#3167)
### What problem does this PR solve?

Added tika jar into image to avoid downloading. Close #3017

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-03 00:20:26 +08:00
c7ea7e9974 Added JDK to satisfy tika (#3165)
### What problem does this PR solve?

Added a JDK to satisfy tika (https://pypi.org/project/tika/). The image
size becomes ~400MB bigger. Close #2886

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-02 22:21:17 +08:00
37d71dfa90 Replaced redis with Valkey (#3164)
### What problem does this PR solve?

Replaced redis with Valkey. Close #3070

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-11-02 20:05:12 +08:00
44ad9a6cd7 Add test for API (#3134)
### What problem does this PR solve?

Add test for API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-11-01 22:59:17 +08:00
7eafccf78a Updated descriptions for knowledge graph (#3154)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-01 18:57:58 +08:00
b42d24575c Updated an obsolete response (#3152)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-01 18:00:15 +08:00
3963aaa23e feat: Add DynamicInputVariable to RetrievalForm #1739 (#3112)
### What problem does this PR solve?

feat: Add DynamicInputVariable to RetrievalForm #1739
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-01 13:31:48 +08:00
33e5e5db5b Update gif for readme and add input param to every components (#3145)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2024-11-01 13:31:34 +08:00
039cde7893 Updated obsolete screenshot. (#3141)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-31 19:21:34 +08:00
fa9d76224b Changed version to 0.13.0 (#3140)
### What problem does this PR solve?

Changed version to 0.13.0

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe):
2024-10-31 19:11:09 +08:00
35a451c024 Updated screenshots for Starting an AI chat (#3139)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-31 19:10:49 +08:00
1d0a5606b2 minor (#3137)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-31 18:37:08 +08:00
4ad031e97d Reworded descriptions for development versions and latest version (#3132)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 17:09:52 +08:00
0081d0f05f Moved the upgrade manual out of FAQ (#3131)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-31 16:35:13 +08:00
800c25a6b4 Updated list_documents() (#3126)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 14:10:35 +08:00
9aeb07d830 Add test for CI (#3114)
### What problem does this PR solve?

Add test for CI

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-31 14:07:23 +08:00
5590a823c6 Fixed a Docusaurus display issue. (#3125)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 13:34:04 +08:00
3fa570f49b fix: remove useless test code (#3122)
### What problem does this PR solve?

remove useless test code

### Type of change

- [X] Refactoring
2024-10-31 11:56:46 +08:00
60053e7b02 Fixed a docusaurus display issue. (#3124)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 11:56:30 +08:00
fa1b873280 feat: Delete http_api_reference.md from api folder #1102 (#3121)
### What problem does this PR solve?

feat: Delete http_api_reference.md from  api folder #1102

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-31 11:14:11 +08:00
578f70817e Fixed a docusaurus display issue (#3120)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-31 10:37:13 +08:00
6c6b658ffe add yi-lightning (#3119)
### What problem does this PR solve?

#3111
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-31 10:30:23 +08:00
9a5ff320f3 Manage ragflow-sdk with poetry (#3115)
### What problem does this PR solve?

Manage ragflow-sdk with poetry
### Type of change

- [x] Refactoring
2024-10-30 21:13:59 +08:00
48688afa5e Tried to fix a link issue (#3117)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 20:04:45 +08:00
a2b35098c6 Publish RAGFlow's HTTP and Python API references (#3116)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 19:40:39 +08:00
4d5354387b docs updates for 0.13 release (#3108)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-30 19:29:27 +08:00
c6512e689b Added chunk methods (#3110)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 17:59:23 +08:00
b7aff4f560 Differentiated API key names to avoid confusion. (#3107)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-30 16:56:55 +08:00
18dfa2900c Fix bugs in API (#3103)
### What problem does this PR solve?

Fix bugs in API


- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-30 16:15:42 +08:00
86b546f657 Updated parser_config description (#3104)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 15:33:36 +08:00
3fb2bc7613 Update README (#3092)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-29 21:05:38 +08:00
f4cb939317 Updated HTTP API reference and Python API reference based on test results (#3090)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-29 19:56:46 +08:00
d868c283c4 feat: The order of the category operator form is messed up after refreshing the page #3088 (#3089)
### What problem does this PR solve?

feat: The order of the category operator form is messed up after
refreshing the page #3088

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-29 19:21:52 +08:00
c7dfb0193b feat: Allow the component id drop-down box to select the answer operator #3085 (#3087)
### What problem does this PR solve?

feat: Allow the component id drop-down box to select the answer operator
#3085

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-29 18:01:26 +08:00
f7705d6bc9 let 'Generate' take user's input as parameter (#3086)
### What problem does this PR solve?

#3085

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-29 17:58:47 +08:00
3ed096fd3f feat: Add InvokeNode #1908 (#3081)
### What problem does this PR solve?

feat: Add InvokeNode #1908

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-10-29 16:39:56 +08:00
2d1fbefdb5 search between multiple indices for team function (#3079)
### What problem does this PR solve?

#2834 
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-29 13:19:01 +08:00
c5a3146a8c fix: modify the response of metadata in Dify retrieval api (#3076)
### What problem does this PR solve?

Modify the response of metadata in Dify retrieval api

resolve   #2914

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-29 11:06:02 +08:00
1c364e0e5c feat: If the model supplier is not set, click the OK button to jump directly to the page for setting the model supplier. #3068 (#3069)
### What problem does this PR solve?
feat: If the model supplier is not set, click the OK button to jump
directly to the page for setting the model supplier. #3068

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-29 11:05:31 +08:00
9906526a91 Update 'api key' (#3078)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-29 11:04:27 +08:00
7e0148c058 fix local variable ans (#3077)
### What problem does this PR solve?
#3064

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-29 10:42:45 +08:00
f86826b7a0 refactor error message of qwen (#3074)
### What problem does this PR solve?
#3055

### Type of change
- [x] Refactoring
2024-10-29 10:08:08 +08:00
497bc1438a feat: Add component Invoke #2908 (#3067)
### What problem does this PR solve?

feat: Add component Invoke #2908

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-28 18:56:38 +08:00
d133cc043b remove file size check for sdk API (#3066)
### What problem does this PR solve?

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-28 16:13:40 +08:00
e56bd770ea agent template upgrade (#3060)
### What problem does this PR solve?
#3056

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-28 15:49:14 +08:00
07bb2a6fd6 Turn resource to plural form (#3061)
### What problem does this PR solve?

Turn resource to plural form

### Type of change
- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-28 15:06:18 +08:00
396feadd4b feat: Add hint for operators, round to square, input variable, readable operator ID. #3056 (#3057)
### What problem does this PR solve?

feat: Add hint for operators, round to square, input variable, readable
operator ID. #3056

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-28 14:31:19 +08:00
f93f485696 Turn resource to plural form (#3059)
### What problem does this PR solve?

Turn resource to plural form

### Type of change

- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-28 14:15:36 +08:00
a813736194 Bump werkzeug from 3.0.3 to 3.0.6 (#3026)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.3 to
3.0.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/releases">werkzeug's
releases</a>.</em></p>
<blockquote>
<h2>3.0.6</h2>
<p>This is the Werkzeug 3.0.6 security fix release, which fixes security
issues but does not otherwise change behavior and should not result in
breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.6/">https://pypi.org/project/Werkzeug/3.0.6/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6</a></p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file fields. <a
href="https://github.com/advisories/GHSA-q34m-jh98-gwm2">GHSA-q34m-jh98-gwm2</a></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by <code>ntpath.isabs</code> on Python &lt; 3.11. <a
href="https://github.com/advisories/GHSA-f9vj-2wh5-fj8j">GHSA-f9vj-2wh5-fj8j</a></li>
</ul>
<h2>3.0.5</h2>
<p>This is the Werkzeug 3.0.5 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.5/">https://pypi.org/project/Werkzeug/3.0.5/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/37?closed=1">https://github.com/pallets/werkzeug/milestone/37?closed=1</a></p>
<ul>
<li>The Watchdog reloader ignores file closed no write events. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2945">#2945</a></li>
<li>Logging works with client addresses containing an IPv6 scope. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2952">#2952</a></li>
<li>Ignore invalid authorization parameters. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2955">#2955</a></li>
<li>Improve type annotation fore <code>SharedDataMiddleware</code>. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2958">#2958</a></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current UID does not have an associated name. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2957">#2957</a></li>
</ul>
<h2>3.0.4</h2>
<p>This is the Werkzeug 3.0.4 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.4/">https://pypi.org/project/Werkzeug/3.0.4/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/36?closed=1">https://github.com/pallets/werkzeug/milestone/36?closed=1</a></p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2930">#2930</a></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2904">#2904</a></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2916">#2916</a></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python &lt; 3.13.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2926">#2926</a></li>
<li>Debugger pin auth works when the URL already contains a query
string.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2918">#2918</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's
changelog</a>.</em></p>
<blockquote>
<h2>Version 3.0.6</h2>
<p>Released 2024-10-25</p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file
fields. :ghsa:<code>q34m-jh98-gwm2</code></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by
<code>ntpath.isabs</code> on Python &lt; 3.11.
:ghsa:<code>f9vj-2wh5-fj8j</code></li>
</ul>
<h2>Version 3.0.5</h2>
<p>Released 2024-10-24</p>
<ul>
<li>The Watchdog reloader ignores file closed no write events.
:issue:<code>2945</code></li>
<li>Logging works with client addresses containing an IPv6 scope
:issue:<code>2952</code></li>
<li>Ignore invalid authorization parameters.
:issue:<code>2955</code></li>
<li>Improve type annotation fore <code>SharedDataMiddleware</code>.
:issue:<code>2958</code></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current
UID does not have an associated name. :issue:<code>2957</code></li>
</ul>
<h2>Version 3.0.4</h2>
<p>Released 2024-08-21</p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. :issue:<code>2930</code></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. :issue:<code>2904</code></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. :issue:<code>2916</code></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python &lt; 3.13.
:issue:<code>2926</code></li>
<li>Debugger pin auth works when the URL already contains a query
string.
:issue:<code>2918</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5eaefc3996"><code>5eaefc3</code></a>
release version 3.0.6</li>
<li><a
href="2767bcb10a"><code>2767bcb</code></a>
Merge commit from fork</li>
<li><a
href="87cc78a25f"><code>87cc78a</code></a>
catch special absolute path on Windows Python &lt; 3.11</li>
<li><a
href="50cfeebcb0"><code>50cfeeb</code></a>
Merge commit from fork</li>
<li><a
href="8760275afb"><code>8760275</code></a>
apply max_form_memory_size another level up in the parser</li>
<li><a
href="8d6a12e2af"><code>8d6a12e</code></a>
start version 3.0.6</li>
<li><a
href="a7b121abc7"><code>a7b121a</code></a>
release version 3.0.5 (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2961">#2961</a>)</li>
<li><a
href="9caf72ac06"><code>9caf72a</code></a>
release version 3.0.5</li>
<li><a
href="e28a2451e9"><code>e28a245</code></a>
catch OSError from getpass.getuser (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2960">#2960</a>)</li>
<li><a
href="e6b4cce97e"><code>e6b4cce</code></a>
catch OSError from getpass.getuser</li>
<li>Additional commits viewable in <a
href="https://github.com/pallets/werkzeug/compare/3.0.3...3.0.6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=werkzeug&package-manager=pip&previous-version=3.0.3&new-version=3.0.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-28 11:58:25 +08:00
322bafdf2a fix baidufanyi param error (#3053)
### What problem does this PR solve?

#3052

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-28 11:05:32 +08:00
8257eeb3f2 add model moonshot-v1-auto (#3051)
### What problem does this PR solve?

#3048

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-28 10:37:22 +08:00
00810525d6 Minor editorial updates to the HTTP API reference (#3027)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-27 08:00:51 +08:00
391b950be6 Fix non-null violation when inviting people to team (#3015)
### What problem does this PR solve?

Not sure why MySQL is inserting empty string in this case, but when I
use postgres I got `null value in column "invited_by" of relation
"user_tenant" violates not-null constraint`

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-10-25 18:39:09 +08:00
d78f215caa Final touches to HTTP and Python API references (#3019)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-25 17:11:58 +08:00
9457d20ef1 make gemini robust (#3012)
### What problem does this PR solve?

#3003

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-25 10:50:44 +08:00
648f8e81d1 Fix issues in API (#3008)
### What problem does this PR solve?

Fix issues in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-24 20:10:47 +08:00
161c7a231b Fix some issues in API and test (#3001)
### What problem does this PR solve?

Fix some issues in API and test

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-24 20:05:21 +08:00
e997b42504 DRAFT: miscellaneous updates to HTTP API Reference (#3005)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-24 20:04:50 +08:00
524699da7d Miscellaneous updates to HTTP and PYthon APIs (#3000)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-24 16:14:07 +08:00
765a114be7 minor (#2998)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change


- [x] Documentation Update
2024-10-24 11:02:09 +08:00
c86afff447 feat: Limit the maximum value of auto keywords to 30 #2687 (#2991)
### What problem does this PR solve?

feat: Limit the maximum value of auto keywords to 30 #2687

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-24 09:35:34 +08:00
b73fe0cc3c Integrating RAGFlow API as a Plugin into ChatGPT-on-WeChat (#2988)
### What problem does this PR solve?

This PR introduces the `ragflow_chat` plugin for the ChatGPT-on-WeChat
project, extending its functionality by integrating Retrieval-Augmented
Generation (RAG) capabilities. It allows users to have more contextually
relevant conversations by retrieving information from external knowledge
sources (via the RAGFlow API) and incorporating it into their chat
interactions.

The primary goal of this PR is to enable seamless communication between
ChatGPT-on-WeChat and the RAGFlow server, improving response accuracy by
embedding knowledge-based data into conversational flows. This is
particularly useful for users who need more complex, data-driven
responses.

This PR adds a new plugin that enhances ChatGPT-on-WeChat with Ragflow
capabilities, allowing for a more enriched conversational experience
driven by external knowledge.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-24 09:35:11 +08:00
2a614e0e23 Remove defaults to 'None' (#2996)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-23 22:58:37 +08:00
50b425cf89 Test Cases (#2993)
### What problem does this PR solve?

Test Cases

### Type of change

- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-23 22:58:27 +08:00
2174c350be Updated HTTP API Reference (document, chat assistant, session, chat) (#2994)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-23 20:07:47 +08:00
7f81fc8f9b refactor auto keywords and auto question (#2990)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-10-23 17:00:56 +08:00
f090075cb2 allowing docker container to access service on host (#2895)
### What problem does this PR solve?

1. services (e.g., ollama) running on the host could not be
accessed from docker containers

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-10-23 16:43:21 +08:00
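The host-access fix above is commonly achieved in Docker Compose by mapping `host.docker.internal` to the host gateway; the fragment below is a minimal sketch of that technique (service name, image tag, and port are illustrative, not the exact change in this PR):

```yaml
# Hypothetical compose fragment: lets a container reach services
# (e.g., ollama listening on host port 11434) running on the Docker host.
services:
  ragflow:
    image: infiniflow/ragflow:latest   # illustrative image tag
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolves to the host's IP
# Inside the container, the host service is then reachable at
# http://host.docker.internal:11434
```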
ec6d942d83 feat: Added auto_keywords and auto_questions fields to the parsing configuration page #2687 (#2987)
### What problem does this PR solve?

feat: Added auto_keywords and auto_questions fields to the parsing
configuration page #2687

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-23 15:45:03 +08:00
8714754afc Fix some issues in API (#2982)
### What problem does this PR solve?

Fix some issues in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-23 12:02:18 +08:00
43b959fe58 minor (#2984)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-23 11:00:35 +08:00
320e8f6553 fix generate string join issue (#2983)
### What problem does this PR solve?

#2921

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-23 10:54:04 +08:00
89d5b2414e fix SILICONFLOW rerank error (#2980)
### What problem does this PR solve?

#2977

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-23 10:12:39 +08:00
91ea559f9e DRAFT: Updated python and http api references (#2973)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-10-22 17:10:23 +08:00
445dce4363 [Bug]: unnecessary auto-increment calculations in the tokens statistics of the chat model (#2969)
### What problem does this PR solve?

the details are shown in
https://github.com/infiniflow/ragflow/issues/2968

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-22 16:26:04 +08:00
1fce6caf80 make titles in markdown not be split from the following content (#2971)
### What problem does this PR solve?

#2970 
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-10-22 15:25:23 +08:00
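The heading-aware chunking described above can be sketched as a small helper that starts a new chunk at each markdown title, so a title always stays attached to the content that follows it; this is an illustrative sketch, not the actual parser code in the PR:

```python
import re

def chunk_markdown(text: str) -> list[str]:
    """Split markdown into chunks, keeping each heading attached to
    the content that follows it rather than isolated in its own chunk."""
    chunks, current = [], []
    for line in text.splitlines():
        # An ATX heading (1-6 '#' plus a space) opens a new chunk,
        # but only if we already accumulated some content.
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks
```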
adb0a93d95 add component invoke (#2967)
### What problem does this PR solve?

#2908

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-22 14:16:44 +08:00
226bdd6e99 add auto keywords and auto-question (#2965)
### What problem does this PR solve?

#2687

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-22 13:12:49 +08:00
5aa9d7787e [Bug]: When use OpenAI chat model , raise ERROR: 'CompletionUsage' object has no attribute 'get' #2948 (#2949)
[Bug]: When use OpenAI chat model , raise ERROR: 'CompletionUsage'
object has no attribute 'get' #2948

### What problem does this PR solve?

the details of this PR are shown at
https://github.com/infiniflow/ragflow/issues/2948

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-22 11:40:05 +08:00
b2524eec49 fix sequence2txt error and usage total token issue (#2961)
### What problem does this PR solve?

#1363

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-22 11:38:37 +08:00
6a4858a7ee Fix thumbnail_img NoneType error (#2941)
### What problem does this PR solve?

fix thumbnail_img NoneType error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-22 09:21:05 +08:00
1a623df849 DRAFT: Miscellaneous updates to HTTP API reference (#2923)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-21 19:50:45 +08:00
bfc07fe4f9 bigger resolution for OCR (#2919)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-10-21 16:25:42 +08:00
3e702aa4ac add package crawl4ai (#2918)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-21 15:07:45 +08:00
2ced25c676 fix thumbnail issue (#2917)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-21 14:33:26 +08:00
1935c3be1a Fix some issues in API (#2902)
### What problem does this PR solve?

Fix some issues in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-21 14:29:06 +08:00
609cfa7b5f feat: Replace crawler icon #2915 (#2916)
### What problem does this PR solve?

feat: Replace crawler icon #2915

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-21 14:28:30 +08:00
ac26d09a59 Feature/feat1017 (#2872)
### What problem does this PR solve?

1. fix: mind map display error in knowledge graph, just because the
```@antv/g6``` version changed
2. feat: concurrent threads configuration support in graph extractor
3. fix: used tokens update failed for tenant
4. feat: timeout configuration support for llm
5. fix: regex error in graph extractor
6. feat: qwen rerank(```gte-rerank```) support
7. fix: timeout handling in the knowledge graph indexing process. Chat now
streams output, and it is configurable.
8. feat: ```qwen-long``` model configuration

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-21 12:11:08 +08:00
4bdf3fd48e Add agent component for web crawler (#2878)
### What problem does this PR solve?

Add agent component for  web crawler

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-21 11:38:41 +08:00
c1d0473f49 add zhipu glm-4-9b (#2912)
### What problem does this PR solve?

#2910

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-21 10:30:35 +08:00
e5f7733b31 Resolves #2905 openai compatible model provider add llama.cpp rerank support (#2906)
### What problem does this PR solve?
Resolve #2905 



Due to inconsistent token sizes, the code caps input at 500 tokens,
since there is no config param to control it.

My llama.cpp run sets -ub to 1024:

${llama_path}/bin/llama-server --host 0.0.0.0 --port 9901 -ub 1024 -ngl
99 -m $gguf_file --reranking "$@"





### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Here is my test Ragflow use llama.cpp

```
lot update_slots: id  0 | task 458 | prompt done, n_past = 416, n_tokens = 416
slot      release: id  0 | task 458 | stop processing: n_past = 416, truncated = 0
slot launch_slot_: id  0 | task 459 | processing task
slot update_slots: id  0 | task 459 | tokenizing prompt, len = 2
slot update_slots: id  0 | task 459 | prompt tokenized, n_ctx_slot = 8192, n_keep = 0, n_prompt_tokens = 111
slot update_slots: id  0 | task 459 | kv cache rm [0, end)
slot update_slots: id  0 | task 459 | prompt processing progress, n_past = 111, n_tokens = 111, progress = 1.000000
slot update_slots: id  0 | task 459 | prompt done, n_past = 111, n_tokens = 111
slot      release: id  0 | task 459 | stop processing: n_past = 111, truncated = 0
srv  update_slots: all slots are idle
request: POST /rerank 172.23.0.4 200

```
2024-10-21 10:06:29 +08:00
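The 500-token safety cap mentioned above can be sketched as a helper that clips the query and documents before building the rerank request body; the field names (`query`, `documents`) follow llama.cpp's `/rerank` endpoint, but the whitespace "tokenization" here is purely illustrative and not the actual PR code:

```python
def build_rerank_payload(query: str, documents: list[str],
                         max_tokens: int = 500) -> dict:
    """Build a /rerank request body, clipping every text to a crude
    whitespace-token cap (a stand-in for the 500-token safety limit)."""
    def clip(text: str) -> str:
        words = text.split()
        return " ".join(words[:max_tokens])
    return {
        "query": clip(query),
        "documents": [clip(d) for d in documents],
    }
```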
5aec1e3e17 DRAFT: Miscellaneous updates to HTTP API. Tried to finish off Python API ref… (#2909)
…erence but failed.

### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-21 09:47:59 +08:00
1d6bcf5aa2 add nginx path for sdk handlers (#2899) (#2900)
### What problem does this PR solve?

add the nginx path `/api` for sdk handlers 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-21 09:47:45 +08:00
1e6d44d6ef DRAFT: Miscellaneous proofedits on Python APIs (#2903)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-19 19:46:13 +08:00
cec208051f DRAFT: Updated chunk APIs (#2901)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-10-18 20:56:33 +08:00
526fcbbfde fix: Fixed the issue of error reporting when uploading files in the chat box #2897 (#2898)
### What problem does this PR solve?

fix: Fixed the issue of error reporting when uploading files in the chat
box #2897

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-18 17:21:12 +08:00
c760f058df add owner check for team work (#2892)
### What problem does this PR solve?

#2834

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-18 13:48:57 +08:00
8fdfa0f669 feat: Use Badge.Ribbon to distinguish the teams to which the knowledge base belongs #2846 (#2891)
### What problem does this PR solve?

feat: Use Badge.Ribbon to distinguish the teams to which the knowledge
base belongs #2846

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-18 12:50:06 +08:00
ceecac69e9 Delete useless files (#2889)
### What problem does this PR solve?

Delete useless files

### Type of change


- [x] Other (please describe):
Delete useless files

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-18 11:30:40 +08:00
e0c0bdeb0a add team tag to kb (#2890)
### What problem does this PR solve?
#2834

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-18 11:30:19 +08:00
cf3106040a feat: Bind data to TenantTable #2846 (#2883)
### What problem does this PR solve?

feat: Bind data to TenantTable #2846
feat: Add TenantTable

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-18 09:21:01 +08:00
791afbba15 Miscellaneous minor updates (#2885)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-17 19:52:35 +08:00
8358245f64 Draft: Updated file management-related APIs (#2882)
### What problem does this PR solve?

Updated file management-related APIs

### Type of change

- [x] Documentation Update
2024-10-17 18:19:17 +08:00
396bb4b688 Fixed docker build (#2881)
### What problem does this PR solve?

Fixed docker build

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-17 16:34:21 +08:00
167b4af52b feat: Load markdown file with "asset/source" #1739 (#2880)
### What problem does this PR solve?

feat: Load markdown file with "asset/source" #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-17 16:03:13 +08:00
bedb05012d feat: Configure the root directory alias #1739 (#2875)
### What problem does this PR solve?

feat: Configure the root directory alias #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-17 11:36:01 +08:00
6a60e26020 update dashscope (#2871)
### What problem does this PR solve?
#2857
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-17 09:52:31 +08:00
6496055e23 Updated session APIs (#2868)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-10-16 20:38:19 +08:00
dab92ac1e8 Refactor Chunk API (#2855)
### What problem does this PR solve?

Refactor Chunk API
#2846
### Type of change


- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-16 18:41:24 +08:00
b9fa00f341 add API for tenant function (#2866)
### What problem does this PR solve?

feat: API access key management
https://github.com/infiniflow/ragflow/issues/2846
feat: Render markdown file with remark-loader
https://github.com/infiniflow/ragflow/issues/2846

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-16 16:10:24 +08:00
e5d3ab0332 feat: API access key management #2846 (#2865)
### What problem does this PR solve?

feat: API access key management #2846
feat: Render markdown file with remark-loader #2846

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-16 15:57:39 +08:00
4991107822 Fix keys of Xinference-deployed models, especially those sharing a model name with publicly hosted models. (#2832)
### What problem does this PR solve?

Fix keys of Xinference-deployed models, especially those sharing a model
name with publicly hosted models.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: 0000sir <0000sir@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-16 10:21:08 +08:00
51ecda0ff5 refactor (#2858)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-10-16 10:17:05 +08:00
6850fd69c6 Enhance email validation: Allow top-level domains with 5 letters (#2856)
### What problem does this PR solve?

Currently, signing up to ragflow using a mail address whose associated
top-level domain has more than 4 characters will fail, due to a regex
validation that enforces just this.

In our use case, we'd like to use e-mail addresses with the `.swiss`
top-level domain, which is a valid TLD associated with Switzerland
in the IANA root database.

This change makes the validation accept 5-letter TLDs.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Other (please describe): Making validation more lenient, accepting
more valid input.
2024-10-16 09:34:45 +08:00
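The relaxed TLD check described in this commit can be sketched as follows. The pattern and helper name below are assumptions for illustration — the project's actual regex is not shown in the log; the point is only the `{2,5}` quantifier on the TLD, where the old rule capped it at 4 letters:

```python
import re

# Hypothetical email pattern; the original rule capped TLDs at 4 letters,
# and {2,5} relaxes it so 5-letter TLDs such as ".swiss" are accepted.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(?:\.[\w-]+)*\.[A-Za-z]{2,5}$")

def is_valid_email(address: str) -> bool:
    # match() is anchored at the start; the trailing $ anchors the end.
    return EMAIL_RE.match(address) is not None
```

Note that with this sketch, even longer valid TLDs (e.g. `.museum`, 6 letters) would still be rejected — the commit only extends the limit by one letter.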
e1e5711680 Feat:Compatible with Dify's External Knowledge API (#2848)
### What problem does this PR solve?

Fixes #2731 
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 17:47:24 +08:00
4463128436 add rm token (#2850)
### What problem does this PR solve?

#2846

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 16:48:38 +08:00
c8783672d7 refactor api util (#2849)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-10-15 16:11:26 +08:00
ce495e4e3e refine API token application (#2847)
### What problem does this PR solve?

#2846

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 15:47:40 +08:00
fcabdf7745 feat: Generate operator names in an auto-incremental manner #1739 (#2844)
### What problem does this PR solve?

feat: Generate operator names in an auto-incremental manner #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-15 15:36:09 +08:00
b540d41cdc let presentation do raptor (#2838)
### What problem does this PR solve?

#2837

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 10:11:09 +08:00
260d694bbc Updated chat APIs (#2831)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-10-14 20:48:23 +08:00
6329427ad5 Refactor Document API (#2833)
### What problem does this PR solve?

Refactor Document API

### Type of change


- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-14 20:03:33 +08:00
df223eddf3 feat: Fix translation issue of message_history_window_size #1739 (#2828)
### What problem does this PR solve?

feat: Fix translation issue of message_history_window_size #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-14 15:30:20 +08:00
85b359556e feat: Add message_history_window_size to CategorizeForm #1739 (#2826)
### What problem does this PR solve?

feat: Add message_history_window_size to CategorizeForm #1739
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-14 15:18:45 +08:00
b164116277 refine token similarity (#2824)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-10-14 13:33:18 +08:00
8e5efcc47f Updated dataset APIs (#2820)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-12 20:07:21 +08:00
6eed115723 Refactor API for document and session (#2819)
### What problem does this PR solve?

Refactor API for document and session.

### Type of change


- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-12 19:35:19 +08:00
7d80fc474c refine components retrieval and rewrite (#2818)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-10-12 18:47:25 +08:00
a20b82092f Refactor Chat API (#2804)
### What problem does this PR solve?

Refactor Chat API

### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-12 13:48:43 +08:00
2a86472b88 fix chat and thumbnail bug (#2803)
### What problem does this PR solve?

1. Fix the white-screen issue when rendering chat responses
2. Fix the thumbnail bug for unsupported document types

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
2024-10-11 16:10:27 +08:00
190eea7097 trivial (#2808)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-11 15:33:38 +08:00
2d1c83da59 fix LIGHTEN issue (#2806)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-11 15:01:27 +08:00
3f065c75da support chat model in huggingface (#2802)
### What problem does this PR solve?

#2794

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2024-10-11 14:45:48 +08:00
1bae479b37 Fixed broken links. (#2805)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-11 14:34:31 +08:00
5e7c1fb23a reduce rerank batch size (#2801)
### What problem does this PR solve?

### Type of change


- [x] Performance Improvement
2024-10-11 11:29:19 +08:00
bae30e5cc4 fix: document preview (#2795)
(cherry picked from commit 8d11a070b2fc88285a7cff1ab85ebefee84b6c64)

### What problem does this PR solve?

fix document preview error in file manager.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
2024-10-11 11:26:59 +08:00
18f80743eb Support api-version and change default model when adding Azure OpenAI and OpenAI (#2799)
### What problem does this PR solve?
#2701 #2712 #2749

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-11 11:26:42 +08:00
bfaef2cca6 feat: Arrange the form of generate, categorize, switch, retrieval operators vertically #1739 (#2800)
### What problem does this PR solve?

feat: Arrange the form of generate, categorize, switch, retrieval
operators vertically #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-11 11:06:14 +08:00
cbd7cd7c4d Refactor Dataset API (#2783)
### What problem does this PR solve?

Refactor Dataset API

### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-11 09:55:27 +08:00
a2f9c03a95 fix: Change document status with @tanstack/react-query #13306 (#2788)
### What problem does this PR solve?

fix: Change document status with @tanstack/react-query #13306

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-11 08:45:10 +08:00
2c56d274d8 Updated instructions on downloading RAGFlow Slim and RAGFlow all-in-one. (#2785)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-10-10 19:24:54 +08:00
7742f67481 refine agent (#2787)
### What problem does this PR solve?



### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [x] Performance Improvement
- [ ] Other (please describe):
2024-10-10 17:07:36 +08:00
6af9d4e5f9 Refactor README on different docker version. (#2775)
### What problem does this PR solve?

1. Use two env files for the slim and full docker images.
2. Update README

### Type of change

- [x] Documentation Update
- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-10-10 15:30:32 +08:00
51efecf4b5 trivial (#2779)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-10 11:05:03 +08:00
9dfcae2b5d Fix error commands (#2778)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-10 10:38:57 +08:00
66172cef3e fix: torch dependency start error (#2777)
### What problem does this PR solve?

When using the slim image, remove the ```torch``` dependency.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-10 10:06:03 +08:00
29f022c91c fix bedrock issue (#2776)
### What problem does this PR solve?

#2722

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-10 09:13:35 +08:00
485bfd6c08 fix: Large document thumbnail display failed (#2763)
### What problem does this PR solve?

In MySQL, when a document's base64-encoded thumbnail is relatively large,
displaying the document's thumbnail fails.
The thumbnail is now stored in MinIO instead.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
2024-10-10 09:09:29 +08:00
f7a73c5149 Fix README and some comments (#2774)
### What problem does this PR solve?

1. Fix typo
2. Update comments.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 23:30:00 +08:00
5d966b1120 Fix README description (#2773)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 23:03:18 +08:00
ce79144e75 Use slim image as the default (#2772)
### What problem does this PR solve?

Use the slim docker image as the default.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 22:51:47 +08:00
d8566f0ddf HTTP API documents, part1 (#2713)
### What problem does this PR solve?

1. dataset: create/delete/list/get/update
2. files in dataset: upload/download/list/delete/get_info

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 20:26:51 +08:00
e904c134e7 feat: Add a note node to the agent canvas #2767 (#2768)
### What problem does this PR solve?

feat: Add a note node to the agent canvas #2767

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-09 19:39:04 +08:00
7fc3bb3241 Docs:fix user guide links (#2761)
### What problem does this PR solve?


### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-09 19:38:10 +08:00
20e63f8ec4 Fix docx images (#2756)
### What problem does this PR solve?

#2755 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-09 19:37:32 +08:00
2df15742fc fix xinference add rerank model bug (#2758)
### What problem does this PR solve?

Fix xinference add rerank model bug,
https://github.com/infiniflow/ragflow/issues/2294#issue-2510788135

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-09 19:37:11 +08:00
8f815a6c1e Fix component exesql (#2754)
### What problem does this PR solve?

#2700 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-09 17:53:36 +08:00
8f4bd10b19 Initial draft of Python APIs (#2719)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-09 15:30:22 +08:00
511d272d0d feat: The same query conditions on the search page should not request the interface every time the mind map drawer is opened. #2759 (#2760)
### What problem does this PR solve?

feat: The same query conditions on the search page should not request
the interface every time the mind map drawer is opened. #2759

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-09 15:28:28 +08:00
7f44cf543a move import positions (#2753)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-10-09 10:34:58 +08:00
16472eb3ea solve knowledge graph issue when calling gemini model (#2738)
### What problem does this PR solve?
#2720

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-08 18:27:04 +08:00
d92acdcf1d Update azure_sas_conn.py - fixing container_url typo (#2740)
### What problem does this PR solve?

Fixes #2739

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-08 18:26:30 +08:00
2e33ed3ba0 Modified download_deps.py (#2747)
### What problem does this PR solve?

Modified download_deps.py

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-08 17:40:06 +08:00
04ff9cda7c expand rerank range (#2746)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-10-08 16:34:33 +08:00
5cc9981a4d Fix LIGHTEN. Close #2723 (#2744)
### What problem does this PR solve?

Fix LIGHTEN
#2726 
#2723

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-08 15:58:14 +08:00
5845b2b137 feat: Add component TuShare #1739 (#2745)
### What problem does this PR solve?

feat: Add component TuShare #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-08 15:35:30 +08:00
b3b54680e7 Add query parts to lamp (#2736)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-08 12:53:04 +08:00
a3ab5ba9ac support sequence2txt and tts model in Xinference (#2696)
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-08 10:43:18 +08:00
c552a02e7f chore: update operators.py (#2724)
### What problem does this PR solve?
substract -> subtract
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-08 10:34:52 +08:00
a005be7c74 fix re.escape problem (#2735)
### What problem does this PR solve?

#2716

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-08 10:07:03 +08:00
6f7fcdc897 remove redeclared get_json_result func (#2675)
### What problem does this PR solve?

'get_json_result' was redeclared (line 99 vs. line 218), leaving the
earlier definition unused.

### Type of change

- [x] Other (please describe): remove redeclared func
2024-10-05 16:47:47 +08:00
34761fa4ca Fix/bedrock issues (#2718)
### What problem does this PR solve?

Adding a Bedrock API key for Claude Sonnet was broken. I found the issue
came up when testing the LLM configuration: `system` is a required
parameter in boto3.

There were also problems in the Bedrock embeddings implementation when
encoding queries.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-10-05 16:44:50 +08:00
abe9995a7c build multi-arch image (#2710)
### What problem does this PR solve?
build multi-arch image

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-03 21:00:26 +08:00
7f2ee3bbe9 docs: Migrate README to Compose V2 syntax (#2711)
### What problem does this PR solve?

This PR updates README to reflect the migration from Compose V1
(`docker-compose`) to Compose V2 (`docker compose`):

### Type of change

- [x] Documentation Update

### Source

The migration details and differences between Compose V1 and Compose V2
are based on the official Docker documentation:
[Docker Docs: Migrate to Compose
V2](https://docs.docker.com/compose/releases/migrate/#what-are-the-differences-between-compose-v1-and-compose-v2)
2024-10-03 15:31:04 +08:00
a1ffc7fa2c Fix poetry cache (#2709)
### What problem does this PR solve?

Fix poetry cache

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-02 21:15:30 +08:00
70c6b5a7f9 Fix Dockerfile.slim (#2708)
### What problem does this PR solve?
Fix Dockerfile.slim

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-02 20:02:53 +08:00
1b80a693ba Updated Build Docker Image (#2706)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-02 19:43:22 +08:00
e46a4d1875 Fix Dockerfile for arm64 (#2705)
### What problem does this PR solve?

Fix Dockerfile for arm64

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: Ubuntu <ubuntu@arm-test.us-central1-f.c.ragflow-01.internal>
2024-10-02 19:41:56 +08:00
5f4d2dc4fe Updated Dockerfile to use cache (#2703)
### What problem does this PR solve?

Updated Dockerfile to use cache

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-01 17:41:38 +08:00
62202b7eff fix: Fixed the issue where no error message was displayed when uploading a file that was too large #2258 (#2697)
### What problem does this PR solve?

fix: Fixed the issue where no error message was displayed when uploading
a file that was too large #2258

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-01 16:37:46 +08:00
1518824b0c Updated Dockerfile (#2695)
### What problem does this PR solve?

Updated Dockerfile

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-30 18:13:06 +08:00
0a7654c747 fix error in exception (#2694)
### What problem does this PR solve?
#2670

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 17:54:27 +08:00
d6db805885 Refactoring entity_resolution (#2692)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-09-30 17:18:09 +08:00
570ad420a8 remove unused import (#2679)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-09-30 16:59:39 +08:00
ae5a877ed4 Simplify the usage of dict (#2681)
### What problem does this PR solve?
Simplify the usage of dictionaries

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-30 16:54:25 +08:00
9945988e44 format mind_map_extractor code (#2686)
### What problem does this PR solve?

format mind_map_extractor code

### Type of change

- [x] Refactoring
2024-09-30 16:29:15 +08:00
79b8210498 refine readme (#2689)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-09-30 16:21:56 +08:00
c80d311474 fix tests.yml (#2688)
### What problem does this PR solve?

fix tests.yml

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-09-30 16:05:54 +08:00
64429578da added tests.yml (#2685)
### What problem does this PR solve?

added tests.yml

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-09-30 14:58:34 +08:00
92a4a095c9 fix: Fixed an issue where quotes in messages could not be displayed #2677 (#2683)
### What problem does this PR solve?

fix: Fixed an issue where quotes in messages could not be displayed
#2677

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-30 12:40:12 +08:00
2368d738ab fix: Search page search results are cleared after output #2677 (#2678)
### What problem does this PR solve?

fix: Search page search results are cleared after output #2677

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 11:00:03 +08:00
833e3a08cd update poetry lock (#2676)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 10:59:47 +08:00
7a73fec2e5 upgrade opencv-python-headless (#2674)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 09:28:38 +08:00
2f8e0e66ef change opencv version (#2673)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 09:13:11 +08:00
5b4b252895 Fixed huggingface url (#2667)
### What problem does this PR solve?
Fixed huggingface url. Close #2665

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 20:38:11 +08:00
9081150c2c Translated Korean README (#2666)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 20:03:25 +08:00
cb295ec106 Translated Japanese README (#2664)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 19:27:48 +08:00
4f5210352c added back oc9 (#2663)
### What problem does this PR solve?

added back oc9

### Type of change

- [x] Refactoring
2024-09-29 18:32:48 +08:00
f98ec9034f Fix docker file bugs (#2662)
### What problem does this PR solve?

Fix docker file bugs

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-29 18:24:24 +08:00
4b8ecba32b Updated CN readme (#2661)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 17:27:15 +08:00
892166ec24 document preparation for release (#2660)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-29 16:29:02 +08:00
a411330b09 Add build image and launch from source in README (#2658)
### What problem does this PR solve?

Move the build image and launch from source back to README.

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-09-29 16:28:07 +08:00
5a8ae4a289 fix: Filter the timePeriod options based on the userType parameter #1739 (#2657)
### What problem does this PR solve?

fix: Filter the timePeriod options based on the userType parameter #1739

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-29 15:40:20 +08:00
3f16377412 change url of local llm deploy guide (#2659)
### What problem does this PR solve?


### Type of change

- [x] Other (please describe): I made a mistake with a URL and now I
need to change it
2024-09-29 15:39:05 +08:00
d3b37b0b70 fix: Fixed the issue where the error message was not displayed when uploading a file that was too large #1782 (#2654)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-29 15:22:05 +08:00
01db00b587 Updated component description (#2651)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 14:53:52 +08:00
25f07e8e29 fix template error (#2653)
### What problem does this PR solve?

#2478

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 14:47:06 +08:00
daa65199e8 trivial (#2650)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 13:20:02 +08:00
fc867cb959 rename get_txt to get_text (#2649)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 12:47:09 +08:00
fb694143ee refine general purpose chat bot (#2648)
### What problem does this PR solve?

#2478

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 12:20:44 +08:00
a8280d9fd2 Add doc for dev image (#2641)
Add doc for dev image

### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-29 10:51:46 +08:00
aea553c3a8 Add get_txt function (#2639)
### What problem does this PR solve?

Add get_txt function to reduce duplicate code

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-29 10:29:56 +08:00
57237634f1 Refactoring large integers to improve readability (#2636)
### What problem does this PR solve?

Refactoring large integers

### Type of change

- [x] Refactoring
2024-09-29 10:17:42 +08:00
604061c4a5 Fix mutable default argument (#2635)
### What problem does this PR solve?

The default value of a Python function parameter should not be mutable:
modifying the parameter inside the function permanently changes the
default value.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 10:16:00 +08:00
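The pitfall this commit addresses is a well-known Python gotcha: a mutable default argument is evaluated once, at function definition time, so it is shared and mutated across calls. A minimal illustration (not the project's actual code):

```python
def append_bad(item, bucket=[]):
    # The default list is created once, at function definition time,
    # so every call without an explicit bucket mutates the same list.
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # Idiomatic fix: use None as a sentinel and build a fresh list per call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket
```

With the first form, `append_bad(1)` returns `[1]` but a second call `append_bad(2)` returns `[1, 2]`; the fixed form returns a fresh list each time.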
c103dd2746 change chunk.status to chunk.available (#2646)
### What problem does this PR solve?

#1102

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 10:13:07 +08:00
e82e8fde13 Fix logger error (#2643)
### What problem does this PR solve?

Fix logger error: AttributeError: 'Logger' object has no attribute
'basicConfig'

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 09:49:59 +08:00
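The `AttributeError` described in this commit typically comes from calling `basicConfig` on a `Logger` instance rather than on the `logging` module itself. A minimal sketch of the mistake and the fix (an assumption about the root cause — the project's actual code is not shown in the log):

```python
import logging

logger = logging.getLogger("demo")

# Wrong: basicConfig is a module-level function, not a Logger method.
# logger.basicConfig(level=logging.INFO)  # raises AttributeError

# Right: configure via the logging module, then use the logger instance.
logging.basicConfig(level=logging.INFO)
logger.info("logging configured")
```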
a44ed9626a handle nits in task_executor (#2637)
### What problem does this PR solve?

- fix typo
- fix string format
- format import

### Type of change

- [x] Refactoring
2024-09-29 09:49:45 +08:00
ff9c11c970 fix: Fixed the issue where the conversation list was not displayed on the conversation page #2625 (#2638)
### What problem does this PR solve?

fix: Fixed the issue where the conversation list was not displayed on
the conversation page #2625

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 09:43:23 +08:00
674d342761 refine get_input (#2630)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 20:20:36 +08:00
a246e5644b feat: Add top_n to DeepLForm #1739 (#2629)
### What problem does this PR solve?

feat: Add top_n to DeepLForm #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-27 19:22:33 +08:00
96f56a3c43 add huggingface model (#2624)
### What problem does this PR solve?

#2469

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-27 19:15:38 +08:00
1b2f66fc11 Added doc on dev-slim (#2627)
Added doc on dev-slim

### Type of change

- [x] Documentation Update
- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-27 19:15:27 +08:00
ca2de896c7 fix: Fixed an issue where the first message would be displayed when sending the second message #2625 (#2626)
### What problem does this PR solve?

fix: Fixed an issue where the first message would be displayed when
sending the second message #2625

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-27 18:20:19 +08:00
34abcf7704 style: fix typo and format code (#2618)
### What problem does this PR solve?

- Fix typo
- Remove unused import
- Format code

### Type of change

- [x] Other (please describe): typo and format
2024-09-27 13:17:25 +08:00
4c0b79c4f6 remove repeat func (#2619)
### What problem does this PR solve?

- remove repeat func

### Type of change

- [x] Other (please describe): remove repeat func
2024-09-27 13:15:26 +08:00
e11a74eed5 Update Yichat base_url (#2620)
### What problem does this PR solve?

Update Yichat base_url

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-27 12:55:58 +08:00
297b2d0ac9 force eml file to be parsed by EMAIL (#2615)
### What problem does this PR solve?
#2613
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 10:29:30 +08:00
b16f16e19e Bug fix - email processing could be run now from API (#2613)
### What problem does this PR solve?

If an .eml file is uploaded, the General method is always chosen for
email processing, even if parsing_method is defined in the request. This
change fixes that.
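The fixed selection logic presumably looks something like this sketch (names are illustrative, not RAGFlow's actual code):

```python
def choose_parsing_method(filename, requested=None):
    # Honour an explicitly requested parsing method first; only fall
    # back to extension-based detection when none was given.
    if requested:
        return requested
    if filename.lower().endswith(".eml"):
        return "email"
    return "general"

assert choose_parsing_method("mail.eml") == "email"
assert choose_parsing_method("mail.eml", requested="general") == "general"
assert choose_parsing_method("report.pdf") == "general"
```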

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Adam Kobus <adam.kobus@gitlab.eleader.biz>
2024-09-27 10:24:46 +08:00
35598c04ce fix generate bug (#2614)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 10:22:13 +08:00
09d1f7f333 Support agent for aibot (#2609)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 18:06:56 +08:00
240450ea52 Remove WenCai imageurl and update investment_advisor prompt (#2606)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-09-26 17:27:53 +08:00
1de3032650 fix AzureOpenAI issue (#2608)
### What problem does this PR solve?

#1599

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-26 17:25:16 +08:00
41548bf019 Added two developer guide and removed from README ' builder docker image' and 'launch service from source' (#2590)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-26 16:15:57 +08:00
b68d349bd6 Fix: rerank_model and pdf_parser bugs | Update: session API (#2601)
### What problem does this PR solve?

Fix: rerank_model and pdf_parser bugs | Update: session API
#2575
#2559
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-26 16:05:25 +08:00
f6bfe4d970 feat: Add component Concentrator #1739 (#2604)
### What problem does this PR solve?

feat: Add component Concentrator #1739
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 14:47:28 +08:00
cb2ae708f3 Fix soft link. Close #2587 (#2602)
Fix soft link

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-26 14:33:38 +08:00
d7f26786d4 Update dsl_examples and Fix component concentrator (#2597)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 11:58:50 +08:00
b05fab14f7 Add component Concentrator (#2586)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-25 18:44:31 +08:00
e6da0c7c7b deprecate init a super user (#2589)
### What problem does this PR solve?
#2295

### Type of change

- [x] Refactoring
2024-09-25 18:30:27 +08:00
ef89e3ebea remove onnx copy command from dockerfile (#2584)
### What problem does this PR solve?

#2564

### Type of change

- [x] Refactoring
2024-09-25 17:14:59 +08:00
8ede1c7bf5 trivial (#2582)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-25 16:26:44 +08:00
6363d58e98 Add template investment_advisor (#2580)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-25 16:22:06 +08:00
c262011393 revert error in Dockerfile (#2581)
### What problem does this PR solve?
#2295

### Type of change


- [x] Refactoring
2024-09-25 16:10:29 +08:00
dda1367ab2 make it lighten (#2577)
### What problem does this PR solve?

#2295

### Type of change

- [x] Refactoring
2024-09-25 13:38:40 +08:00
e4c9cf2264 feat: If the model is not set, a pop-up window will remind the user #2295 (#2574)
### What problem does this PR solve?

feat: If the model is not set, a pop-up window will remind the user
#2295

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-25 11:16:00 +08:00
e3b3ec3f79 multi-arch-build (#2571)
### What problem does this PR solve?

Build multi-arch docker image `infiniflow/ragflow:poetry` on
`linux/amd64` and `linux/arm64`.

### Type of change

- [x] Refactoring
2024-09-25 10:37:20 +08:00
08d5637770 Fix tokenizer bug (#2573)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-25 10:30:49 +08:00
7bb28ca2bd add lighten control (#2567)
### What problem does this PR solve?

#2295

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 19:22:01 +08:00
9251fb39af feat: Delete Model Provider #2376 (#2565)
### What problem does this PR solve?

feat: Delete Model Provider #2376

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-24 19:10:06 +08:00
91dbce30bd feat: Add component Jin10 #1739 (#2563)
### What problem does this PR solve?

feat: Add component Jin10  #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 18:54:09 +08:00
949a999478 feat: Add component YahooFinance #1739 (#2561)
### What problem does this PR solve?

feat: Add component YahooFinance #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 16:46:41 +08:00
d40041cc82 refine multi-turn chat in agent (#2560)
### What problem does this PR solve?

#2484

### Type of change

- [x] Performance Improvement
- [ ] Other (please describe):
2024-09-24 16:20:19 +08:00
832c90ac3e fix: Web code build fails on ARM machines #2554 (#2557)
### What problem does this PR solve?

fix: Web code build fails on ARM machines #2554

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-24 15:27:26 +08:00
7b3099b1a1 add an API of delete llm supplier (#2556)
### What problem does this PR solve?

#1853

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-24 15:24:15 +08:00
4681638974 Streaming output is supported, dialogue share is not (#2555)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-24 15:14:44 +08:00
ecf441c830 refine using rerank model (#2553)
### What problem does this PR solve?

#2552

### Type of change

- [x] Performance Improvement
2024-09-24 12:38:18 +08:00
d9c2a128a5 SparkTTS (#2535)
### What problem does this PR solve?

SparkTTS

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-24 12:15:12 +08:00
38e3475714 refine markdown prompt (#2551)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-24 12:04:16 +08:00
90644246d6 Updated README on debugging web and python (#2544)
### What problem does this PR solve?

Updated README on debugging web and python

### Type of change

- [x] Documentation Update
2024-09-24 11:46:03 +08:00
100c60017f fix component rewrite bug (#2549)
### What problem does this PR solve?

#2545

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-24 11:31:42 +08:00
51dd6d1f90 fix: Initial language is English, but the UI is in Chinese #2514 (#2541)
### What problem does this PR solve?

fix: Initial language is English, but the UI is in Chinese #2514

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 16:28:27 +08:00
521ea6afcb feat: Refine retrieval of multi-turn conversation #2362 (#2539)
### What problem does this PR solve?

feat: Refine retrieval of multi-turn conversation #2362

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-23 15:26:11 +08:00
dd019e7ba1 feat: Configurable for excel, html table or row based text #2516 (#2538)
### What problem does this PR solve?

feat: Configurable for excel, html table or row based text #2516
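The configurable behaviour could be sketched as a single switch in the parser (illustrative names, not the actual RAGFlow implementation):

```python
def render_excel_rows(rows, mode="html"):
    # mode="html": one HTML table chunk; mode="text": one line per row.
    if mode == "html":
        body = "".join(
            "<tr>" + "".join(f"<td>{c}</td>" for c in row) + "</tr>"
            for row in rows
        )
        return f"<table>{body}</table>"
    return ["; ".join(str(c) for c in row) for row in rows]

rows = [["name", "price"], ["pen", 2]]
assert render_excel_rows(rows, "text") == ["name; price", "pen; 2"]
assert render_excel_rows(rows).startswith("<table><tr><td>name</td>")
```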

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 14:58:51 +08:00
db1be22a2f fix: Merge models of the same category #2479 (#2536)
### What problem does this PR solve?

fix: Merge models of the same category #2479

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-23 14:07:00 +08:00
139268de6f Reverted replacing npm with yarn (#2531)
Reverted replacing npm with yarn

### Type of change

- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 11:08:31 +08:00
f6ceb43e36 fix: Add model by ollama in model provider page, user can't choose the model in chat window. #2479 (#2529)
### What problem does this PR solve?

fix: After adding a model via Ollama on the model provider page, the user
can't choose the model in the chat window. #2479

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-23 10:53:18 +08:00
d8a43416f5 Rework Dockerfile.scratch (#2525)
### What problem does this PR solve?

Rework Dockerfile.scratch
- Multiple stage Dockerfile
- Removed conda
- Replaced pip with poetry
- Added missing dependencies and fixed package version conflicts
- Added deepdoc models

### Type of change

- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 10:00:44 +08:00
4a6a2a0f1b refine xinference (#2521)
### What problem does this PR solve?

#1588

### Type of change

- [x] Refactoring
2024-09-20 18:37:01 +08:00
9bbef8216d refine retrieval of multi-turn conversation (#2520)
### What problem does this PR solve?

#2362 #2484

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Performance Improvement
2024-09-20 17:25:55 +08:00
78856703c4 make excel parsing configurable (#2517)
### What problem does this PR solve?

#2516

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-20 15:33:38 +08:00
099c37ba95 rm key set in xinference (#2511)
### What problem does this PR solve?

#2492

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-20 10:55:52 +08:00
a44f1f735d fix self deployed llm lost (#2510)
### What problem does this PR solve?

#2509 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-20 10:41:25 +08:00
ae6f68e625 Update README_zh.md (#2491)
The core image swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:dev is
19.1 GB in size, not 9 GB.

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-20 10:22:47 +08:00
5dd19c6a57 remove setting-system/index.tsx error import (#2507)
### What problem does this PR solve?


Regarding the code merge #ca0c22f3184b9229e7e86de699842bb3b0e502c2, the
ragflow/web code will not run. This commit solves this problem.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-20 10:21:48 +08:00
5968f148bc refactor add LLM (#2508)
### What problem does this PR solve?

#2487

### Type of change

- [x] Refactoring
2024-09-20 10:20:35 +08:00
4f962d6bff BugFix: Fixed api_key generation error for VolcEngine (#2502)
BugFix: Fixed an api_key generation error for VolcEngine caused by
Python's f-string syntax

### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 海贼宅 <stu_xyx@163.com>
2024-09-20 10:03:43 +08:00
ddb8be9219 Web: Display the icon of the currently used storage. (#2504)
https://github.com/infiniflow/ragflow/issues/2503


### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Before:

<img width="611" alt="image"
src="https://github.com/user-attachments/assets/02a3a1ee-7bfb-4fe0-9b15-11ced69cc8a3">

After:

<img width="796" alt="image"
src="https://github.com/user-attachments/assets/371136af-8d16-47aa-909b-26609d3ad60e">

<img width="557" alt="image"
src="https://github.com/user-attachments/assets/9268362f-2b6a-4ea1-9fe7-659f7292e5e1">
2024-09-20 09:49:16 +08:00
422c229e52 Storage: Rename all the variables about get file to storage from minio. (#2497)
https://github.com/infiniflow/ragflow/issues/2496

### What problem does this PR solve?


### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-19 19:19:27 +08:00
b5d1d2fec4 refine TTS (#2500)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-19 19:15:16 +08:00
d545633a6c OpenAITTS (#2493)
### What problem does this PR solve?

OpenAITTS

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-19 16:55:18 +08:00
af0b4b0828 fix(Add model api): Add VolcEngine to create api_key format error (#2490)
### What problem does this PR solve?


Adding VolcEngine to create an api_key raised a format error: when
constructing the JSON string there was an extra "," at the end, which
caused a formatting error. This commit fixes the problem.
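This class of bug is easy to reproduce; building the string with `json.dumps` instead of an f-string avoids it entirely (the key names and values below are hypothetical placeholders, not RAGFlow's actual fields):

```python
import json

ak, sk = "my-ak", "my-sk"  # hypothetical credential placeholders

# Hand-built f-string with a trailing comma -> invalid JSON.
broken = f'{{"volc_ak": "{ak}", "volc_sk": "{sk}",}}'
try:
    json.loads(broken)
    raise AssertionError("should not parse")
except json.JSONDecodeError:
    pass  # this is the kind of format error the fix removes

# Letting the serializer build the string cannot produce this error.
fixed = json.dumps({"volc_ak": ak, "volc_sk": sk})
assert json.loads(fixed) == {"volc_ak": "my-ak", "volc_sk": "my-sk"}
```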


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-19 15:10:49 +08:00
6c6380d27a update document sdk (#2485)
### Type of change
#2485
- [x] Performance Improvement
2024-09-19 12:52:35 +08:00
2324b88579 fix parser for pptx of which files are from filemanager (#2482)
### What problem does this PR solve?

#2467

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 19:13:37 +08:00
2b0dc01a88 rename some attributes in document sdk (#2481)
### What problem does this PR solve?

#1102

### Type of change

- [x] Performance Improvement

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-18 18:46:37 +08:00
01acc3fd5a fix duplicated llm name betweeen different suppliers (#2477)
### What problem does this PR solve?

#2465

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 16:09:22 +08:00
2484e26cb5 fix superuser password not base64 encoded (#2475)
### What problem does this PR solve?

Fixes the _superuser_ `admin@ragflow.io` not being accessible due to how
entered passwords are used. Unless this is expected behavior?
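A sketch of the mismatch, assuming the web client base64-encodes the password before sending it, so that seeded credentials must be stored the same way (function name and values are illustrative):

```python
import base64

def encode_password(plain):
    # Mirror of what the web front end does before submitting login.
    return base64.b64encode(plain.encode("utf-8")).decode("utf-8")

submitted = encode_password("admin")   # what the login request carries
seeded_old = "admin"                   # plain-text seed: superuser unusable
seeded_new = encode_password("admin")  # encoded seed: login works

assert submitted != seeded_old
assert submitted == seeded_new
```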

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 14:30:45 +08:00
7195742ca5 rename create_timestamp_flt to create_timestamp_float (#2473)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-18 12:50:05 +08:00
62cb5f1bac update document sdk (#2445)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-18 11:08:19 +08:00
e7dd487779 fix ppt file from filemanager error (#2470)
### What problem does this PR solve?

#2467

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 09:22:14 +08:00
e41268efc6 Add Multi-Language Descriptions for 'Switch' Component and Update Message Assistant Placeholder (#2450)
### What problem does this PR solve?

_This PR addresses the need to describe the "Switch" component across
different languages and corrects a misleading description for a
placeholder message not exclusively tied to a specific assistant type.
By providing clearer and more accurate descriptions, this PR aims to
improve user understanding and usability of the Switch component and the
"Message Resume Assistant..." placeholder in a multilingual context._

### Explanation of Changes

1. **Added Descriptions for "Switch" Component**: 
- Descriptions were added for the "Switch" component in three different
locales:
- **English (EN)**: Provides a concise description of what the "Switch"
component does, focusing on its ability to evaluate conditions and
direct the flow of execution.
- **Simplified Chinese (ZH)**: Translated the English description into
Simplified Chinese to cater to users who prefer this locale.
- **Traditional Chinese (ZH-Traditional)**: Added a Traditional Chinese
version of the description to support users in regions that use
Traditional Chinese.
   
2. **Corrected "Message Resume Assistant..." to "Message the
Assistant..."**:
- Updated the description from "Message Resume Assistant..." to "Message
the Assistant..." in the English locale. This correction makes the
description more generic and accurate, reflecting the placeholder's
broader functionality, which is not limited to Resume Assistants. It now
clearly communicates that the placeholder can be used with various types
of assistants, not just those related to resumes.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-15 16:16:10 +08:00
884 changed files with 84895 additions and 27014 deletions


@@ -15,16 +15,16 @@ body:
       value: "Please provide the following information to help us understand the issue."
   - type: input
     attributes:
-      label: Branch name
-      description: Enter the name of the branch where you encountered the issue.
-      placeholder: e.g., main
+      label: RAGFlow workspace code commit ID
+      description: Enter the commit ID associated with the issue.
+      placeholder: e.g., 26d3480e
     validations:
       required: true
   - type: input
     attributes:
-      label: Commit ID
-      description: Enter the commit ID associated with the issue.
-      placeholder: e.g., c3b2a1
+      label: RAGFlow image version
+      description: Enter the image version(shown in RAGFlow UI, `System` page) associated with the issue.
+      placeholder: e.g., 26d3480e(v0.13.0~174)
     validations:
       required: true
   - type: textarea

.github/workflows/release.yml (new file, 124 lines)

@@ -0,0 +1,124 @@
name: release
on:
schedule:
- cron: '0 13 * * *' # This schedule runs every 13:00:00Z(21:00:00+08:00)
# The "create tags" trigger is specifically focused on the creation of new tags, while the "push tags" trigger is activated when tags are pushed, including both new tag creations and updates to existing tags.
create:
tags:
- "v*.*.*" # normal release
- "nightly" # the only one mutable tag
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
release:
runs-on: [ "self-hosted", "overseas" ]
steps:
- name: Ensure workspace ownership
run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
# https://github.com/actions/checkout/blob/v3/README.md
- name: Check out code
uses: actions/checkout@v4
with:
token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
fetch-depth: 0
fetch-tags: true
- name: Prepare release body
run: |
if [[ $GITHUB_EVENT_NAME == 'create' ]]; then
RELEASE_TAG=${GITHUB_REF#refs/tags/}
if [[ $RELEASE_TAG == 'nightly' ]]; then
PRERELEASE=true
else
PRERELEASE=false
fi
echo "Workflow triggered by create tag: $RELEASE_TAG"
else
RELEASE_TAG=nightly
PRERELEASE=true
echo "Workflow triggered by schedule"
fi
echo "RELEASE_TAG=$RELEASE_TAG" >> $GITHUB_ENV
echo "PRERELEASE=$PRERELEASE" >> $GITHUB_ENV
RELEASE_DATETIME=$(date --rfc-3339=seconds)
echo Release $RELEASE_TAG created from $GITHUB_SHA at $RELEASE_DATETIME > release_body.md
- name: Move the existing mutable tag
# https://github.com/softprops/action-gh-release/issues/171
run: |
git fetch --tags
if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
# Determine if a given tag exists and matches a specific Git commit.
# actions/checkout@v4 fetch-tags doesn't work when triggered by schedule
if [ "$(git rev-parse -q --verify "refs/tags/$RELEASE_TAG")" = "$GITHUB_SHA" ]; then
echo "mutable tag $RELEASE_TAG exists and matches $GITHUB_SHA"
else
git tag -f $RELEASE_TAG $GITHUB_SHA
git push -f origin $RELEASE_TAG:refs/tags/$RELEASE_TAG
echo "created/moved mutable tag $RELEASE_TAG to $GITHUB_SHA"
fi
fi
- name: Create or overwrite a release
# https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release
uses: softprops/action-gh-release@v2
with:
token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
prerelease: ${{ env.PRERELEASE }}
tag_name: ${{ env.RELEASE_TAG }}
# The body field does not support environment variable substitution directly.
body_path: release_body.md
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# https://github.com/marketplace/actions/docker-login
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: infiniflow
password: ${{ secrets.DOCKERHUB_TOKEN }}
# https://github.com/marketplace/actions/build-and-push-docker-images
- name: Build and push full image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: infiniflow/ragflow:${{ env.RELEASE_TAG }}
file: Dockerfile
platforms: linux/amd64
# https://github.com/marketplace/actions/build-and-push-docker-images
- name: Build and push slim image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: infiniflow/ragflow:${{ env.RELEASE_TAG }}-slim
file: Dockerfile
build-args: LIGHTEN=1
platforms: linux/amd64
- name: Build ragflow-sdk
if: startsWith(github.ref, 'refs/tags/v')
run: |
cd sdk/python && \
uv build
- name: Publish package distributions to PyPI
if: startsWith(github.ref, 'refs/tags/v')
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: sdk/python/dist/
password: ${{ secrets.PYPI_API_TOKEN }}
verbose: true

.github/workflows/tests.yml (new file, 137 lines)

@@ -0,0 +1,137 @@
name: tests
on:
push:
branches:
- 'main'
- '*.*.*'
paths-ignore:
- 'docs/**'
- '*.md'
- '*.mdx'
pull_request:
types: [ opened, synchronize, reopened, labeled ]
paths-ignore:
- 'docs/**'
- '*.md'
- '*.mdx'
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
ragflow_tests:
name: ragflow_tests
# https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution
# https://github.com/orgs/community/discussions/26261
if: ${{ github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci') }}
runs-on: [ "self-hosted", "debug" ]
steps:
# https://github.com/hmarr/debug-action
#- uses: hmarr/debug-action@v2
- name: Show PR labels
run: |
echo "Workflow triggered by ${{ github.event_name }}"
if [[ ${{ github.event_name }} == 'pull_request' ]]; then
echo "PR labels: ${{ join(github.event.pull_request.labels.*.name, ', ') }}"
fi
- name: Ensure workspace ownership
run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
# https://github.com/actions/checkout/issues/1781
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
# https://github.com/astral-sh/ruff-action
- name: Static check with Ruff
uses: astral-sh/ruff-action@v2
with:
version: ">=0.8.2"
args: "check --ignore E402"
- name: Build ragflow:nightly-slim
run: |
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
sudo docker pull ubuntu:22.04
sudo docker build --progress=plain --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
- name: Build ragflow:nightly
run: |
sudo docker build --progress=plain --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
- name: Start ragflow:nightly-slim
run: |
echo "RAGFLOW_IMAGE=infiniflow/ragflow:nightly-slim" >> docker/.env
sudo docker compose -f docker/docker-compose.yml up -d
- name: Stop ragflow:nightly-slim
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml down -v
- name: Start ragflow:nightly
run: |
echo "RAGFLOW_IMAGE=infiniflow/ragflow:nightly" >> docker/.env
sudo docker compose -f docker/docker-compose.yml up -d
- name: Run sdk tests against Elasticsearch
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
cd sdk/python && uv sync --python 3.10 --frozen && uv pip install . && source .venv/bin/activate && cd test/test_sdk_api && pytest -s --tb=short get_email.py t_dataset.py t_chat.py t_session.py t_document.py t_chunk.py
- name: Run frontend api tests against Elasticsearch
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
cd sdk/python && uv sync --python 3.10 --frozen && uv pip install . && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml down -v
- name: Start ragflow:nightly
run: |
sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml up -d
- name: Run sdk tests against Infinity
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
cd sdk/python && uv sync --python 3.10 --frozen && uv pip install . && source .venv/bin/activate && cd test/test_sdk_api && pytest -s --tb=short get_email.py t_dataset.py t_chat.py t_session.py t_document.py t_chunk.py
- name: Run frontend api tests against Infinity
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
cd sdk/python && uv sync --python 3.10 --frozen && uv pip install . && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml down -v

.gitignore

@@ -35,4 +35,6 @@ rag/res/deepdoc
 sdk/python/ragflow.egg-info/
 sdk/python/build/
 sdk/python/dist/
 sdk/python/ragflow_sdk.egg-info/
+huggingface.co/
+nltk_data/


@@ -1,16 +1,10 @@
----
-sidebar_position: 0
-slug: /contribution_guidelines
----
# Contribution guidelines
-Thanks for wanting to contribute to RAGFlow. This document offers guidlines and major considerations for submitting your contributions.
+This document offers guidelines and major considerations for submitting your contributions to RAGFlow.
- To report a bug, file a [GitHub issue](https://github.com/infiniflow/ragflow/issues/new/choose) with us.
- For further questions, you can explore existing discussions or initiate a new one in [Discussions](https://github.com/orgs/infiniflow/discussions).
## What you can contribute
The list below mentions some contributions you can make, but it is not a complete list.
@@ -27,7 +21,7 @@ The list below mentions some contributions you can make, but it is not a complete list.
### General workflow
1. Fork our GitHub repository.
2. Clone your fork to your local machine:
   `git clone git@github.com:<yourname>/ragflow.git`
3. Create a local branch:
   `git checkout -b my-branch`
@@ -39,14 +33,16 @@ The list below mentions some contributions you can make, but it is not a complete list.
### Before filing a PR
- Consider splitting a large PR into multiple smaller, standalone PRs to keep a traceable development history.
- Ensure that your PR addresses just one issue, or keep any unrelated changes small.
- Add test cases when contributing new features. They demonstrate that your code functions correctly and protect against potential issues from future changes.
### Describing your PR
- Ensure that your PR title is concise and clear, providing all the required information.
- Refer to a corresponding GitHub issue in your PR description if applicable.
- Include sufficient design details for *breaking changes* or *API changes* in your description.
### Reviewing & merging a PR
-Ensure that your PR passes all Continuous Integration (CI) tests before merging it.
+- Ensure that your PR passes all Continuous Integration (CI) tests before merging it.


@@ -1,23 +1,209 @@
-FROM infiniflow/ragflow-base:v2.0
-USER root
+# base stage
+FROM ubuntu:22.04 AS base
+USER root
SHELL ["/bin/bash", "-c"]
ARG NEED_MIRROR=0
ARG LIGHTEN=0
ENV LIGHTEN=${LIGHTEN}
WORKDIR /ragflow
-ADD ./web ./web
-RUN cd ./web && npm i --force && npm run build
+# Copy models downloaded via download_deps.py
+RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
cp /huggingface.co/InfiniFlow/huqie/huqie.txt.trie /ragflow/rag/res/ && \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
if [ "$LIGHTEN" != "1" ]; then \
(tar -cf - \
/huggingface.co/BAAI/bge-large-zh-v1.5 \
/huggingface.co/BAAI/bge-reranker-v2-m3 \
/huggingface.co/maidalun1020/bce-embedding-base_v1 \
/huggingface.co/maidalun1020/bce-reranker-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow) \
fi
-ADD ./api ./api
-ADD ./conf ./conf
-ADD ./deepdoc ./deepdoc
-ADD ./rag ./rag
-ADD ./agent ./agent
-ADD ./graphrag ./graphrag
+# https://github.com/chrismattmann/tika-python
+# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
+RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
+    cp -r /deps/nltk_data /root/ && \
+    cp /deps/tika-server-standard-3.0.0.jar /deps/tika-server-standard-3.0.0.jar.md5 /ragflow/ && \
+    cp /deps/cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.0.0.jar"
ENV DEBIAN_FRONTEND=noninteractive
# Setup apt
# Python package and implicit dependencies:
# opencv-python: libglib2.0-0 libglx-mesa0 libgl1
# aspose-slides: pkg-config libicu-dev libgdiplus libssl1.1_1.1.1f-1ubuntu2_amd64.deb
# python-pptx: default-jdk tika-server-standard-3.0.0.jar
# selenium: libatk-bridge2.0-0 chrome-linux64-121-0-6167-85
# Building C extensions: libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
if [ "$NEED_MIRROR" == "1" ]; then \
sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list; \
fi; \
rm -f /etc/apt/apt.conf.d/docker-clean && \
echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache && \
chmod 1777 /tmp && \
apt update && \
apt --no-install-recommends install -y ca-certificates && \
apt update && \
apt install -y libglib2.0-0 libglx-mesa0 libgl1 && \
apt install -y pkg-config libicu-dev libgdiplus && \
apt install -y default-jdk && \
apt install -y libatk-bridge2.0-0 && \
apt install -y libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev && \
apt install -y python3-pip pipx nginx unzip curl wget git vim less
RUN if [ "$NEED_MIRROR" == "1" ]; then \
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && \
pip3 config set global.trusted-host pypi.tuna.tsinghua.edu.cn; \
mkdir -p /etc/uv && \
echo "[[index]]" > /etc/uv/uv.toml && \
echo 'url = "https://pypi.tuna.tsinghua.edu.cn/simple"' >> /etc/uv/uv.toml && \
echo "default = true" >> /etc/uv/uv.toml; \
fi; \
pipx install uv
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
ENV PATH=/root/.local/bin:$PATH
# nodejs 12.22 on Ubuntu 22.04 is too old
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt purge -y nodejs npm cargo && \
apt autoremove -y && \
apt update && \
apt install -y nodejs
# A modern version of cargo is needed for the latest version of the Rust compiler.
RUN apt update && apt install -y curl build-essential \
&& if [ "$NEED_MIRROR" == "1" ]; then \
# Use TUNA mirrors for rustup/rust dist files
export RUSTUP_DIST_SERVER="https://mirrors.tuna.tsinghua.edu.cn/rustup"; \
export RUSTUP_UPDATE_ROOT="https://mirrors.tuna.tsinghua.edu.cn/rustup/rustup"; \
echo "Using TUNA mirrors for Rustup."; \
fi; \
# Force curl to use HTTP/1.1
curl --proto '=https' --tlsv1.2 --http1.1 -sSf https://sh.rustup.rs | bash -s -- -y --profile minimal \
&& echo 'export PATH="/root/.cargo/bin:${PATH}"' >> /root/.bashrc
ENV PATH="/root/.cargo/bin:${PATH}"
RUN cargo --version && rustc --version
# Add the mssql ODBC driver
# On macOS/ARM64 environments, install msodbcsql18.
# On general x86_64 environments, install msodbcsql17.
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/ubuntu/22.04/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt update && \
arch="$(uname -m)"; \
if [ "$arch" = "arm64" ] || [ "$arch" = "aarch64" ]; then \
# ARM64 (macOS/Apple Silicon or Linux aarch64)
ACCEPT_EULA=Y apt install -y unixodbc-dev msodbcsql18; \
else \
# x86_64 or others
ACCEPT_EULA=Y apt install -y unixodbc-dev msodbcsql17; \
fi || \
{ echo "Failed to install ODBC driver"; exit 1; }
# Add dependencies of selenium
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/chrome-linux64-121-0-6167-85,target=/chrome-linux64.zip \
unzip /chrome-linux64.zip && \
mv chrome-linux64 /opt/chrome && \
ln -s /opt/chrome/chrome /usr/local/bin/
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/chromedriver-linux64-121-0-6167-85,target=/chromedriver-linux64.zip \
unzip -j /chromedriver-linux64.zip chromedriver-linux64/chromedriver && \
mv chromedriver /usr/local/bin/ && \
rm -f /usr/bin/google-chrome
# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
# aspose-slides on linux/arm64 is unavailable
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
if [ "$(uname -m)" = "x86_64" ]; then \
dpkg -i /deps/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
elif [ "$(uname -m)" = "aarch64" ]; then \
dpkg -i /deps/libssl1.1_1.1.1f-1ubuntu2_arm64.deb; \
fi
# builder stage
FROM base AS builder
USER root
WORKDIR /ragflow
# install dependencies from uv.lock file
COPY pyproject.toml uv.lock ./
# https://github.com/astral-sh/uv/issues/10462
# uv records index url into uv.lock but doesn't failover among multiple indexes
RUN --mount=type=cache,id=ragflow_uv,target=/root/.cache/uv,sharing=locked \
if [ "$NEED_MIRROR" == "1" ]; then \
sed -i 's|pypi.org|pypi.tuna.tsinghua.edu.cn|g' uv.lock; \
else \
sed -i 's|pypi.tuna.tsinghua.edu.cn|pypi.org|g' uv.lock; \
fi; \
if [ "$LIGHTEN" == "1" ]; then \
uv sync --python 3.10 --frozen; \
else \
uv sync --python 3.10 --frozen --all-extras; \
fi
COPY web web
COPY docs docs
RUN --mount=type=cache,id=ragflow_npm,target=/root/.npm,sharing=locked \
cd web && npm install && npm run build
COPY .git /ragflow/.git
RUN version_info=$(git describe --tags --match=v* --first-parent --always); \
if [ "$LIGHTEN" == "1" ]; then \
version_info="$version_info slim"; \
else \
version_info="$version_info full"; \
fi; \
echo "RAGFlow version: $version_info"; \
echo $version_info > /ragflow/VERSION
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
-ADD docker/entrypoint.sh ./entrypoint.sh
-ADD docker/.env ./
-RUN chmod +x ./entrypoint.sh
+COPY web web
+COPY api api
+COPY conf conf
COPY deepdoc deepdoc
COPY rag rag
COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml uv.lock ./
-ENTRYPOINT ["./entrypoint.sh"]
+COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
COPY docker/entrypoint.sh docker/entrypoint-parser.sh ./
RUN chmod +x ./entrypoint*.sh
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
COPY --from=builder /ragflow/VERSION /ragflow/VERSION
ENTRYPOINT ["./entrypoint.sh"]
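The architecture dispatch used in this Dockerfile for the ODBC driver and the libssl `.deb` boils down to one branch on `uname -m`. A sketch with the architecture passed as a parameter so the branch can be exercised on any machine (the helper name is ours, not from the repository):

```shell
# Mirrors the Dockerfile branch: ARM64 gets msodbcsql18, everything else msodbcsql17.
pick_msodbcsql() {
  arch="$1"
  if [ "$arch" = "arm64" ] || [ "$arch" = "aarch64" ]; then
    echo "msodbcsql18"
  else
    echo "msodbcsql17"
  fi
}

pick_msodbcsql aarch64   # prints msodbcsql18
pick_msodbcsql x86_64    # prints msodbcsql17
```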


@@ -1,43 +0,0 @@
FROM python:3.11
USER root
WORKDIR /ragflow
COPY requirements_arm.txt /ragflow/requirements.txt
RUN pip install nltk --default-timeout=10000
RUN pip install -i https://mirrors.aliyun.com/pypi/simple/ --default-timeout=1000 -r requirements.txt &&\
python -c "import nltk;nltk.download('punkt');nltk.download('wordnet')"
RUN apt-get update && \
apt-get install -y curl gnupg && \
rm -rf /var/lib/apt/lists/*
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash - && \
apt-get install -y --fix-missing nodejs nginx ffmpeg libsm6 libxext6 libgl1
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN pip install graspologic
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
ADD docker/.env ./
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]


@@ -1,27 +0,0 @@
FROM infiniflow/ragflow-base:v2.0
USER root
WORKDIR /ragflow
## for cuda > 12.0
RUN pip uninstall -y onnxruntime-gpu
RUN pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

Dockerfile.deps (new file)

@@ -0,0 +1,10 @@
# This builds an image that contains the resources needed by Dockerfile
#
FROM scratch
# Copy resources downloaded via download_deps.py
COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.0.0.jar tika-server-standard-3.0.0.jar.md5 libssl*.deb /
COPY nltk_data /nltk_data
COPY huggingface.co /huggingface.co


@@ -1,56 +0,0 @@
FROM ubuntu:22.04
USER root
WORKDIR /ragflow
RUN apt-get update && apt-get install -y wget curl build-essential libopenmpi-dev
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
bash ~/miniconda.sh -b -p /root/miniconda3 && \
rm ~/miniconda.sh && ln -s /root/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /root/miniconda3/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
ENV PATH /root/miniconda3/bin:$PATH
RUN conda create -y --name py11 python=3.11
ENV CONDA_DEFAULT_ENV py11
ENV CONDA_PREFIX /root/miniconda3/envs/py11
ENV PATH $CONDA_PREFIX/bin:$PATH
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN apt-get install -y nginx
ADD ./web ./web
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./requirements.txt ./requirements.txt
ADD ./agent ./agent
ADD ./graphrag ./graphrag
RUN apt install openmpi-bin openmpi-common libopenmpi-dev
ENV LD_LIBRARY_PATH /usr/lib/x86_64-linux-gnu/openmpi/lib:$LD_LIBRARY_PATH
RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
RUN cd ./web && npm i --force && npm run build
RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ -r ./requirements.txt
RUN apt-get update && \
apt-get install -y libglib2.0-0 libgl1-mesa-glx && \
rm -rf /var/lib/apt/lists/*
RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ ollama
RUN conda run -n py11 python -m nltk.downloader punkt
RUN conda run -n py11 python -m nltk.downloader wordnet
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]


@@ -26,6 +26,7 @@ RUN dnf install -y nginx
ADD ./web ./web
ADD ./api ./api
+ADD ./docs ./docs
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
@@ -37,7 +38,7 @@ RUN dnf install -y openmpi openmpi-devel python3-openmpi
ENV C_INCLUDE_PATH /usr/include/openmpi-x86_64:$C_INCLUDE_PATH
ENV LD_LIBRARY_PATH /usr/lib64/openmpi/lib:$LD_LIBRARY_PATH
RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
-RUN cd ./web && npm i --force && npm run build
+RUN cd ./web && npm i && npm run build
RUN conda run -n py11 pip install $(grep -ivE "mpi4py" ./requirements.txt) # without mpi4py==3.1.5
RUN conda run -n py11 pip install redis
@@ -52,6 +53,7 @@ RUN conda run -n py11 python -m nltk.downloader wordnet
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
+COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh

README.md

@@ -7,34 +7,42 @@
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
+<a href="./README_tzh.md">繁体中文</a> |
<a href="./README_ja.md">日本語</a> |
-<a href="./README_ko.md">한국어</a>
+<a href="./README_ko.md">한국어</a> |
+<a href="./README_id.md">Bahasa Indonesia</a> |
+<a href="/README_pt_br.md">Português (Brasil)</a>
</p>
<p align="center">
+<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
+<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
+</a>
+<a href="https://demo.ragflow.io" target="_blank">
+<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
+</a>
+<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
+<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
+</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
-<a href="https://demo.ragflow.io" target="_blank">
-<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
-<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.11.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.11.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
-<a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
+<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/4XxujFgUN7">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>
<details open>
-<summary></b>📕 Table of Contents</b></summary>
+<summary><b>📕 Table of Contents</b></summary>
- 💡 [What is RAGFlow?](#-what-is-ragflow)
- 🎮 [Demo](#-demo)
- 📌 [Latest Updates](#-latest-updates)
@@ -42,8 +50,9 @@
- 🔎 [System Architecture](#-system-architecture)
- 🎬 [Get Started](#-get-started)
- 🔧 [Configurations](#-configurations)
-- 🛠️ [Build from source](#-build-from-source)
-- 🛠️ [Launch service from source](#-launch-service-from-source)
+- 🔧 [Build a docker image without embedding models](#-build-a-docker-image-without-embedding-models)
+- 🔧 [Build a docker image including embedding models](#-build-a-docker-image-including-embedding-models)
+- 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
- 📚 [Documentation](#-documentation)
- 📜 [Roadmap](#-roadmap)
- 🏄 [Community](#-community)
@@ -53,34 +62,45 @@
## 💡 What is RAGFlow?
[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
## 🎮 Demo
Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
-<img src="https://github.com/infiniflow/ragflow/assets/12318111/b083d173-dadc-4ea9-bdeb-180d7df514eb" width="1200"/>
+<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Latest Updates
-- 2024-09-13 Adds search mode for knowledge base Q&A.
-- 2024-09-09 Adds a medical consultant agent template.
+- 2025-02-05 Updates the model list of 'SILICONFLOW' and adds support for Deepseek-R1/DeepSeek-V3.
+- 2025-01-26 Optimizes knowledge graph extraction and application, offering various configuration options.
+- 2024-12-18 Upgrades Document Layout Analysis model in Deepdoc.
+- 2024-12-04 Adds support for pagerank score in knowledge base.
+- 2024-11-22 Adds more variables to Agent.
+- 2024-11-01 Adds keyword extraction and related question generation to the parsed chunks to improve the accuracy of retrieval.
- 2024-08-22 Support text to SQL statements through RAG.
-- 2024-08-02 Supports GraphRAG inspired by [graphrag](https://github.com/microsoft/graphrag) and mind map.
-- 2024-07-23 Supports audio file parsing.
-- 2024-07-08 Supports workflow based on [Graph](./agent/README.md).
-- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, extracting images from Docx files, extracting tables from Markdown files.
-- 2024-05-23 Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
+## 🎉 Stay Tuned
+⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟
+<div align="center" style="margin-top:20px;margin-bottom:20px;">
+<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
+</div>
## 🌟 Key Features
### 🍭 **"Quality in, quality out"**
- [Deep document understanding](./deepdoc/README.md)-based knowledge extraction from unstructured data with complicated formats.
- Finds "needle in a data haystack" of literally unlimited tokens.
### 🍱 **Template-based chunking**
@@ -118,7 +138,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
> If you have not installed Docker on your local machine (Windows, Mac, or Linux), see [Install Docker Engine](https://docs.docker.com/engine/install/).
### 🚀 Start up the server
@@ -137,7 +158,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
>
> ```bash
> vm.max_map_count=262144
@@ -149,18 +171,21 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
   $ git clone https://github.com/infiniflow/ragflow.git
   ```
-3. Build the pre-built Docker images and start up the server:
-   > Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.11.0`, before running the following commands.
+3. Start up the server using the pre-built Docker images:
+   > The command below downloads the `v0.16.0-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.16.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0` for the full edition `v0.16.0`.
   ```bash
-   $ cd ragflow/docker
-   $ chmod +x ./entrypoint.sh
-   $ docker compose up -d
+   $ cd ragflow
+   $ docker compose -f docker/docker-compose.yml up -d
   ```
-   > The core image is about 9 GB in size and may take a while to load.
+   | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?                  |
+   |-------------------|-----------------|-----------------------|--------------------------|
+   | v0.16.0           | &approx;9       | :heavy_check_mark:    | Stable release           |
+   | v0.16.0-slim      | &approx;2       | ❌                     | Stable release           |
+   | nightly           | &approx;9       | :heavy_check_mark:    | _Unstable_ nightly build |
+   | nightly-slim      | &approx;2       | ❌                     | _Unstable_ nightly build |
4. Check the server status after having the server up and running:
@@ -171,165 +196,161 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
_The following output confirms a successful launch of the system:_
```bash
     ____   ___    ______ ______ __
    / __ \ /   |  / ____// ____// /____  _      __
   / /_/ // /| | / / __ / /_   / // __ \| | /| / /
  / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
 /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
   > With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default HTTP serving port `80` can be omitted when using the default configurations.
-6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.
+6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.
   > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
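Step 6 amounts to editing a block like the following in the template (a hypothetical excerpt; the factory name and key are placeholders, not values taken from this repository):

```yaml
user_default_llm:
  factory: 'Tongyi-Qianwen'      # the LLM provider to use by default
  api_key: 'your-api-key-here'   # the key issued by that provider
```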
_The show is on!_
## 🔧 Configurations ## 🔧 Configurations
When it comes to system configurations, you will need to manage the following files:

- [.env](./docker/.env): Keeps the fundamental setups for the system, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): Configures the back-end services. The environment variables in this file are automatically populated when the Docker container starts. Any environment variables set within the Docker container are available for use, allowing you to customize service behavior based on the deployment environment.
- [docker-compose.yml](./docker/docker-compose.yml): The system relies on [docker-compose.yml](./docker/docker-compose.yml) to start up.

> The [./docker/README](./docker/README.md) file provides a detailed description of the environment settings and service configurations which can be used as `${ENV_VARS}` in the [service_conf.yaml.template](./docker/service_conf.yaml.template) file.

To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80` to `<YOUR_SERVING_PORT>:80`.

Updates to the above configurations require a reboot of all containers to take effect:

> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
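The `${ENV_VARS}` in [service_conf.yaml.template](./docker/service_conf.yaml.template) are filled in from the container's environment at start-up. Below is a minimal sketch of that kind of substitution, for illustration only — the project's real entrypoint may use a different mechanism, and the `MYSQL_PASSWORD` value here is invented:

```shell
# Hypothetical demo of ${VAR} template population; not the actual entrypoint.
export MYSQL_PASSWORD=infini_rag_flow        # invented example value
template='password: ${MYSQL_PASSWORD}'       # one line of a template file
# Expand the placeholder from the environment. Note: eval is acceptable for a
# trusted template like this sketch, but unsafe on untrusted input.
rendered=$(eval "echo \"$template\"")
echo "$rendered"
```

This is why setting a variable in **docker/.env** is enough to change the rendered back-end configuration without editing the template itself.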
### Switch doc engine from Elasticsearch to Infinity

RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:

1. Stop all running containers:

   ```bash
   $ docker compose -f docker/docker-compose.yml down -v
   ```

2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:

   ```bash
   $ docker compose -f docker/docker-compose.yml up -d
   ```

> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
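Before restarting the stack, it is worth confirming which engine `DOC_ENGINE` now selects. A sketch of that check, assuming **docker/.env** uses plain `KEY=value` lines (the demo parses a stand-in file rather than your real `.env`):

```shell
# Stand-in for docker/.env so the sketch is self-contained.
printf 'DOC_ENGINE=infinity\n' > /tmp/ragflow_env_demo
# Pull the value of DOC_ENGINE out of the env file.
doc_engine=$(grep -E '^DOC_ENGINE=' /tmp/ragflow_env_demo | cut -d= -f2)
echo "doc engine: $doc_engine"
```

Run the same `grep` against your actual **docker/.env** before `docker compose up` to avoid booting the wrong engine.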
## 🔧 Build a Docker image without embedding models
This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker image including embedding models

This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Launch service from source for development
1. Install uv, or skip this step if it is already installed:
   ```bash
   pipx install uv
   ```

2. Clone the source code and install Python dependencies:

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
   ```

3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

   ```bash
   docker compose -f docker/docker-compose-base.yml up -d
   ```

   Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/.env** to `127.0.0.1`:

   ```
   127.0.0.1 es01 infinity mysql minio redis
   ```
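The hosts entry above can be sanity-checked before launching the backend. This sketch only parses a hosts-format line mirroring the suggested entry, so it runs without touching your real `/etc/hosts`:

```shell
# The line we expect to find in /etc/hosts (mirrors the snippet above).
hosts_line='127.0.0.1 es01 infinity mysql minio redis'
missing=""
for h in es01 infinity mysql minio redis; do
  case " $hosts_line " in
    *" $h "*) ;;                      # service name present on the line
    *) missing="$missing $h" ;;       # collect anything that is absent
  esac
done
echo "missing:${missing:- none}"
```

If any name is reported missing, the backend will fail to reach that service under its container hostname.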
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
   ```bash
   export HF_ENDPOINT=https://hf-mirror.com
   ```
5. Launch the backend service:

   ```bash
   source .venv/bin/activate
   export PYTHONPATH=$(pwd)
   bash docker/launch_backend_service.sh
   ```

6. Install frontend dependencies:

   ```bash
   cd web
   npm install
   ```

7. Launch the frontend service:

   ```bash
   npm run dev
   ```

_The following output confirms a successful launch of the system:_

![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation

- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)

## 📜 Roadmap

See the [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214)

## 🏄 Community
@ -339,4 +360,5 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)
## 🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.
README_id.md Normal file
@ -0,0 +1,333 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="Logo ragflow">
</a>
</div>
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_tzh.md">繁体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a> |
<a href="/README_pt_br.md">Português (Brasil)</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="Ikuti di X (Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
</a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/Lisensi-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="Lisensi">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Dokumentasi</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Peta Jalan</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/4XxujFgUN7">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>
<details open>
<summary><b>📕 Daftar Isi</b></summary>
- 💡 [Apa Itu RAGFlow?](#-apa-itu-ragflow)
- 🎮 [Demo](#-demo)
- 📌 [Pembaruan Terbaru](#-pembaruan-terbaru)
- 🌟 [Fitur Utama](#-fitur-utama)
- 🔎 [Arsitektur Sistem](#-arsitektur-sistem)
- 🎬 [Mulai](#-mulai)
- 🔧 [Konfigurasi](#-konfigurasi)
- 🔧 [Membangun Image Docker tanpa Model Embedding](#-membangun-image-docker-tanpa-model-embedding)
- 🔧 [Membangun Image Docker dengan Model Embedding](#-membangun-image-docker-dengan-model-embedding)
- 🔨 [Meluncurkan aplikasi dari Sumber untuk Pengembangan](#-meluncurkan-aplikasi-dari-sumber-untuk-pengembangan)
- 📚 [Dokumentasi](#-dokumentasi)
- 📜 [Peta Jalan](#-peta-jalan)
- 🏄 [Komunitas](#-komunitas)
- 🙌 [Kontribusi](#-kontribusi)
</details>
## 💡 Apa Itu RAGFlow?
[RAGFlow](https://ragflow.io/) adalah mesin RAG (Retrieval-Augmented Generation) open-source berbasis pemahaman dokumen yang mendalam. Platform ini menyediakan alur kerja RAG yang efisien untuk bisnis dengan berbagai skala, menggabungkan LLM (Large Language Models) untuk menyediakan kemampuan tanya-jawab yang benar dan didukung oleh referensi dari data terstruktur kompleks.
## 🎮 Demo
Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Pembaruan Terbaru
- 2025-02-05 Memperbarui daftar model 'SILICONFLOW' dan menambahkan dukungan untuk Deepseek-R1/DeepSeek-V3.
- 2025-01-26 Optimalkan ekstraksi dan penerapan grafik pengetahuan dan sediakan berbagai opsi konfigurasi.
- 2024-12-18 Meningkatkan model Analisis Tata Letak Dokumen di Deepdoc.
- 2024-12-04 Mendukung skor pagerank ke basis pengetahuan.
- 2024-11-22 Peningkatan definisi dan penggunaan variabel di Agen.
- 2024-11-01 Penambahan ekstraksi kata kunci dan pembuatan pertanyaan terkait untuk meningkatkan akurasi pengambilan.
- 2024-08-22 Dukungan untuk teks ke pernyataan SQL melalui RAG.
## 🎉 Tetap Terkini
⭐️ Star repositori kami untuk tetap mendapat informasi tentang fitur baru dan peningkatan menarik! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Fitur Utama
### 🍭 **"Kualitas Masuk, Kualitas Keluar"**
- Ekstraksi pengetahuan berbasis pemahaman dokumen mendalam dari data tidak terstruktur dengan format yang rumit.
- Menemukan "jarum di tumpukan data" dengan token yang hampir tidak terbatas.
### 🍱 **Pemotongan Berbasis Template**
- Cerdas dan dapat dijelaskan.
- Banyak pilihan template yang tersedia.
### 🌱 **Referensi yang Didasarkan pada Data untuk Mengurangi Hallusinasi**
- Visualisasi pemotongan teks memungkinkan intervensi manusia.
- Tampilan cepat referensi kunci dan referensi yang dapat dilacak untuk mendukung jawaban yang didasarkan pada fakta.
### 🍔 **Kompatibilitas dengan Sumber Data Heterogen**
- Mendukung Word, slide, excel, txt, gambar, salinan hasil scan, data terstruktur, halaman web, dan banyak lagi.
### 🛀 **Alur Kerja RAG yang Otomatis dan Mudah**
- Orkestrasi RAG yang ramping untuk bisnis kecil dan besar.
- LLM yang dapat dikonfigurasi serta model embedding.
- Peringkat ulang berpasangan dengan beberapa pengambilan ulang.
- API intuitif untuk integrasi yang mudah dengan bisnis.
## 🔎 Arsitektur Sistem
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div>
## 🎬 Mulai
### 📝 Prasyarat
- CPU >= 4 inti
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
### 🚀 Menjalankan Server
1. Pastikan `vm.max_map_count` >= 262144:
> Untuk memeriksa nilai `vm.max_map_count`:
>
> ```bash
> $ sysctl vm.max_map_count
> ```
>
> Jika nilainya kurang dari 262144, setel ulang `vm.max_map_count` ke setidaknya 262144:
>
> ```bash
> # Dalam contoh ini, kita atur menjadi 262144:
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> Perubahan ini akan hilang setelah sistem direboot. Untuk membuat perubahan ini permanen, tambahkan atau perbarui nilai
> `vm.max_map_count` di **/etc/sysctl.conf**:
>
> ```bash
> vm.max_map_count=262144
> ```
2. Clone repositori:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Bangun image Docker pre-built dan jalankan server:
> Perintah di bawah ini mengunduh edisi v0.16.0-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.16.0-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0 untuk edisi lengkap v0.16.0.
```bash
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.16.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.16.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
4. Periksa status server setelah server aktif dan berjalan:
```bash
$ docker logs -f ragflow-server
```
_Output berikut menandakan bahwa sistem berhasil diluncurkan:_
```bash
    ____   ___    ______ ______ __
   / __ \ /   |  / ____// ____// /____  _      __
  / /_/ // /| | / / __ / /_   / // __ \| | /| / /
 / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
/_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> Jika Anda melewatkan langkah ini dan langsung login ke RAGFlow, browser Anda mungkin menampilkan error `network anormal`
> karena RAGFlow mungkin belum sepenuhnya siap.
5. Buka browser web Anda, masukkan alamat IP server Anda, dan login ke RAGFlow.
> Dengan pengaturan default, Anda hanya perlu memasukkan `http://IP_DEVICE_ANDA` (**tanpa** nomor port) karena
> port HTTP default `80` bisa dihilangkan saat menggunakan konfigurasi default.
6. Dalam [service_conf.yaml.template](./docker/service_conf.yaml.template), pilih LLM factory yang diinginkan di `user_default_llm` dan perbarui
bidang `API_KEY` dengan kunci API yang sesuai.
> Lihat [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) untuk informasi lebih lanjut.
_Sistem telah siap digunakan!_
## 🔧 Konfigurasi
Untuk konfigurasi sistem, Anda perlu mengelola file-file berikut:
- [.env](./docker/.env): Menyimpan pengaturan dasar sistem, seperti `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, dan
`MINIO_PASSWORD`.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): Mengonfigurasi aplikasi backend.
- [docker-compose.yml](./docker/docker-compose.yml): Sistem ini bergantung pada [docker-compose.yml](./docker/docker-compose.yml) untuk memulai.
Untuk memperbarui port HTTP default (80), buka [docker-compose.yml](./docker/docker-compose.yml) dan ubah `80:80`
menjadi `<YOUR_SERVING_PORT>:80`.
Pembaruan konfigurasi ini memerlukan reboot semua kontainer agar efektif:
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
## 🔧 Membangun Docker Image tanpa Model Embedding
Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Membangun Docker Image Termasuk Model Embedding
Image ini berukuran sekitar 9 GB. Karena sudah termasuk model embedding, ia hanya bergantung pada aplikasi LLM eksternal.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Menjalankan Aplikasi dari Sumber untuk Pengembangan
1. Instal uv, atau lewati langkah ini jika sudah terinstal:
```bash
pipx install uv
```
2. Clone kode sumber dan instal dependensi Python:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
```
3. Jalankan aplikasi yang diperlukan (MinIO, Elasticsearch, Redis, dan MySQL) menggunakan Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Tambahkan baris berikut ke `/etc/hosts` untuk memetakan semua host yang ditentukan di **conf/service_conf.yaml** ke `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
4. Jika Anda tidak dapat mengakses HuggingFace, atur variabel lingkungan `HF_ENDPOINT` untuk menggunakan situs mirror:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Jalankan aplikasi backend:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. Instal dependensi frontend:
```bash
cd web
npm install
```
7. Jalankan aplikasi frontend:
```bash
npm run dev
```
_Output berikut menandakan bahwa sistem berhasil diluncurkan:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Dokumentasi
- [Quickstart](https://ragflow.io/docs/dev/)
- [Panduan Pengguna](https://ragflow.io/docs/dev/category/guides)
- [Referensi](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)
## 📜 Roadmap
Lihat [Roadmap RAGFlow 2025](https://github.com/infiniflow/ragflow/issues/4214)
## 🏄 Komunitas
- [Discord](https://discord.gg/4XxujFgUN7)
- [Twitter](https://twitter.com/infiniflowai)
- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)
## 🙌 Kontribusi
RAGFlow berkembang melalui kolaborasi open-source. Dalam semangat ini, kami menerima kontribusi dari komunitas.
Jika Anda ingin berpartisipasi, tinjau terlebih dahulu [Panduan Kontribusi](./CONTRIBUTING.md).
@ -7,27 +7,34 @@
<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_zh.md">简体中文</a> |
  <a href="./README_tzh.md">繁体中文</a> |
  <a href="./README_ja.md">日本語</a> |
  <a href="./README_ko.md">한국어</a> |
  <a href="./README_id.md">Bahasa Indonesia</a> |
  <a href="/README_pt_br.md">Português (Brasil)</a>
</p>
<p align="center">
  <a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
    <img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
  </a>
  <a href="https://demo.ragflow.io" target="_blank">
    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
  </a>
  <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
  </a>
  <a href="https://github.com/infiniflow/ragflow/releases/latest">
    <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
  </a>
  <a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
  </a>
</p>
<h4 align="center">
  <a href="https://ragflow.io/docs/dev/">Document</a> |
  <a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
  <a href="https://twitter.com/infiniflowai">Twitter</a> |
  <a href="https://discord.gg/4XxujFgUN7">Discord</a> |
  <a href="https://demo.ragflow.io">Demo</a>
@ -40,23 +47,29 @@
## 🎮 Demo

デモをお試しください:[https://demo.ragflow.io](https://demo.ragflow.io)。

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 最新情報

- 2025-02-05 SILICONFLOW のモデルリストを更新し、Deepseek-R1/DeepSeek-V3 のサポートを追加しました。
- 2025-01-26 ナレッジグラフの抽出と適用を最適化し、さまざまな構成オプションを提供します。
- 2024-12-18 Deepdoc のドキュメントレイアウト分析モデルをアップグレードします。
- 2024-12-04 ナレッジベースへのページランクスコアをサポートしました。
- 2024-11-22 エージェントでの変数の定義と使用法を改善しました。
- 2024-11-01 再現の精度を向上させるために、解析されたチャンクにキーワード抽出と関連質問の生成を追加しました。
- 2024-08-22 RAG を介して SQL ステートメントへのテキストをサポートします。

## 🎉 続きを楽しみに

⭐️ リポジトリをスター登録して、エキサイティングな新機能やアップデートを最新の状態に保ちましょう!すべての新しいリリースに関する即時通知を受け取れます! 🌟

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>

## 🌟 主な特徴
@ -133,15 +146,19 @@
3. ビルド済みの Docker イメージをビルドし、サーバーを起動する:

   > 以下のコマンドは、RAGFlow Docker イメージの v0.16.0-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.16.0-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.16.0 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0 と設定します。

   ```bash
   $ cd ragflow
   $ docker compose -f docker/docker-compose.yml up -d
   ```

   | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?                  |
   | ----------------- | --------------- | --------------------- | ------------------------ |
   | v0.16.0           | &approx;9       | :heavy_check_mark:    | Stable release           |
   | v0.16.0-slim      | &approx;2       | ❌                    | Stable release           |
   | nightly           | &approx;9       | :heavy_check_mark:    | _Unstable_ nightly build |
   | nightly-slim      | &approx;2       | ❌                    | _Unstable_ nightly build |
4. サーバーを立ち上げた後、サーバーの状態を確認する:
@ -152,23 +169,23 @@
_以下の出力は、システムが正常に起動したことを確認するものです:_

```bash
    ____   ___    ______ ______ __
   / __ \ /   |  / ____// ____// /____  _      __
  / /_/ // /| | / / __ / /_   / // __ \| | /| / /
 / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
/_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
```

> もし確認ステップをスキップして直接 RAGFlow にログインした場合、その時点で RAGFlow が完全に初期化されていない可能性があるため、ブラウザーがネットワーク異常エラーを表示するかもしれません。

5. ウェブブラウザで、プロンプトに従ってサーバーの IP アドレスを入力し、RAGFlow にログインします。

   > デフォルトの設定を使用する場合、デフォルトの HTTP サービングポート `80` は省略できるので、与えられたシナリオでは、`http://IP_OF_YOUR_MACHINE`(ポート番号は省略)だけを入力すればよい。

6. [service_conf.yaml.template](./docker/service_conf.yaml.template) で、`user_default_llm` で希望の LLM ファクトリを選択し、`API_KEY` フィールドを対応する API キーで更新する。

   > 詳しくは [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) を参照してください。
@ -179,104 +196,125 @@
システムコンフィグに関しては、以下のファイルを管理する必要がある:

- [.env](./docker/.env): `SVR_HTTP_PORT`、`MYSQL_PASSWORD`、`MINIO_PASSWORD` などのシステムの基本設定を保持する。
- [service_conf.yaml.template](./docker/service_conf.yaml.template): バックエンドのサービスを設定します。
- [docker-compose.yml](./docker/docker-compose.yml): システムの起動は [docker-compose.yml](./docker/docker-compose.yml) に依存している。

[.env](./docker/.env) ファイルの変更が [service_conf.yaml.template](./docker/service_conf.yaml.template) ファイルの内容と一致していることを確認する必要があります。

> [./docker/README](./docker/README.md) ファイルには、[service_conf.yaml.template](./docker/service_conf.yaml.template) ファイルで `${ENV_VARS}` として使用できる環境設定とサービス構成の詳細な説明が含まれています。

デフォルトの HTTP サービングポート(80)を更新するには、[docker-compose.yml](./docker/docker-compose.yml) にアクセスして、`80:80` を `<YOUR_SERVING_PORT>:80` に変更します。

> すべてのシステム設定のアップデートを有効にするには、システムの再起動が必要です:
>
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
### Elasticsearch から Infinity にドキュメントエンジンを切り替えます

RAGFlow はデフォルトで Elasticsearch を使用して全文とベクトルを保存します。[Infinity](https://github.com/infiniflow/infinity/) に切り替えるには、次の手順に従います。

1. 実行中のすべてのコンテナを停止するには:

   ```bash
   $ docker compose -f docker/docker-compose.yml down -v
   ```

2. **docker/.env** の `DOC_ENGINE` を `infinity` に設定します。
3. コンテナを起動します:

   ```bash
   $ docker compose -f docker/docker-compose.yml up -d
   ```

> [!WARNING]
> Linux/arm64 マシンでの Infinity への切り替えは正式にサポートされていません。

## 🔧 ソースコードで Docker イメージを作成(埋め込みモデルなし)

この Docker イメージのサイズは約 2 GB で、外部の LLM サービスと埋め込みサービスに依存しています。

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 ソースコードをコンパイルした Docker イメージ(埋め込みモデルを含む)

この Docker のサイズは約 9GB で、埋め込みモデルを含むため、外部の大モデルサービスのみが必要です。

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```

## 🔨 ソースコードからサービスを起動する方法

1. uv をインストールする。すでにインストールされている場合は、このステップをスキップしてください:

   ```bash
   pipx install uv
   ```

2. ソースコードをクローンし、Python の依存関係をインストールする:

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
   ```

3. Docker Compose を使用して依存サービス(MinIO、Elasticsearch、Redis、MySQL)を起動する:

   ```bash
   docker compose -f docker/docker-compose-base.yml up -d
   ```

   `/etc/hosts` に以下の行を追加して、**conf/service_conf.yaml** に指定されたすべてのホストを `127.0.0.1` に解決します:

   ```
   127.0.0.1 es01 infinity mysql minio redis
   ```

4. HuggingFace にアクセスできない場合は、`HF_ENDPOINT` 環境変数を設定してミラーサイトを使用してください:

   ```bash
   export HF_ENDPOINT=https://hf-mirror.com
   ```

5. バックエンドサービスを起動する:

   ```bash
   source .venv/bin/activate
   export PYTHONPATH=$(pwd)
   bash docker/launch_backend_service.sh
   ```

6. フロントエンドの依存関係をインストールする:

   ```bash
   cd web
   npm install
   ```

7. フロントエンドサービスを起動する:

   ```bash
   npm run dev
   ```

_以下の画面で、システムが正常に起動したことを示します:_

![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 ドキュメンテーション

- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)

## 📜 ロードマップ

[RAGFlow ロードマップ 2025](https://github.com/infiniflow/ragflow/issues/4214) を参照

## 🏄 コミュニティ
## 🙌 コントリビュート

RAGFlow はオープンソースのコラボレーションによって発展してきました。この精神に基づき、私たちはコミュニティからの多様なコントリビュートを受け入れています。参加を希望される方は、まず [コントリビューションガイド](./CONTRIBUTING.md)をご覧ください。

<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_zh.md">简体中文</a> |
  <a href="./README_tzh.md">繁体中文</a> |
  <a href="./README_ja.md">日本語</a> |
  <a href="./README_ko.md">한국어</a> |
  <a href="./README_id.md">Bahasa Indonesia</a> |
  <a href="/README_pt_br.md">Português (Brasil)</a>
</p>

<p align="center">
  <a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
    <img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
  </a>
  <a href="https://demo.ragflow.io" target="_blank">
    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
  </a>
  <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
  </a>
  <a href="https://github.com/infiniflow/ragflow/releases/latest">
    <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
  </a>
  <a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
  </a>
</p>

<h4 align="center">
  <a href="https://ragflow.io/docs/dev/">Document</a> |
  <a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
  <a href="https://twitter.com/infiniflowai">Twitter</a> |
  <a href="https://discord.gg/4XxujFgUN7">Discord</a> |
  <a href="https://demo.ragflow.io">Demo</a>
</h4>
## 💡 RAGFlow란?

[RAGFlow](https://ragflow.io/)는 심층 문서 이해에 기반한 오픈소스 RAG (Retrieval-Augmented Generation) 엔진입니다. 이 엔진은 대규모 언어 모델(LLM)과 결합하여 정확한 질문 응답 기능을 제공하며, 다양한 복잡한 형식의 데이터에서 신뢰할 수 있는 출처를 바탕으로 한 인용을 통해 이를 뒷받침합니다. RAGFlow는 규모에 상관없이 모든 기업에 최적화된 RAG 워크플로우를 제공합니다.

## 🎮 데모

데모를 [https://demo.ragflow.io](https://demo.ragflow.io)에서 실행해 보세요.

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 업데이트

- 2025-02-05 'SILICONFLOW' 모델 목록을 업데이트하고 Deepseek-R1/DeepSeek-V3에 대한 지원을 추가합니다.
- 2025-01-26 지식 그래프 추출 및 적용을 최적화하고 다양한 구성 옵션을 제공합니다.
- 2024-12-18 Deepdoc의 문서 레이아웃 분석 모델 업그레이드.
- 2024-12-04 지식베이스에 대한 페이지랭크 점수를 지원합니다.
- 2024-11-22 에이전트의 변수 정의 및 사용을 개선했습니다.
- 2024-11-01 파싱된 청크에 키워드 추출 및 관련 질문 생성을 추가하여 재현율을 향상시킵니다.
- 2024-08-22 RAG를 통해 SQL 문에 텍스트를 지원합니다.

## 🎉 계속 지켜봐 주세요

⭐️ 우리의 저장소를 즐겨찾기에 등록하여 흥미로운 새로운 기능과 업데이트를 최신 상태로 유지하세요! 모든 새로운 릴리스에 대한 즉시 알림을 받으세요! 🌟

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 주요 기능

### 🍭 **"Quality in, quality out"**

- [심층 문서 이해](./deepdoc/README.md)를 기반으로 복잡한 형식의 비정형 데이터에서 지식을 추출합니다.
- 문자 그대로 무한한 토큰에서 "데이터 속의 바늘"을 찾아냅니다.

### 🍱 **템플릿 기반의 chunking**

- 똑똑하고 설명 가능한 방식.
- 다양한 템플릿 옵션을 제공합니다.

### 🌱 **할루시네이션을 줄인 신뢰할 수 있는 인용**

- 텍스트 청킹을 시각화하여 사용자가 개입할 수 있도록 합니다.
- 중요한 참고 자료와 추적 가능한 인용을 빠르게 확인하여 신뢰할 수 있는 답변을 지원합니다.

### 🍔 **다른 종류의 데이터 소스와의 호환성**

- 워드, 슬라이드, 엑셀, 텍스트 파일, 이미지, 스캔본, 구조화된 데이터, 웹 페이지 등을 지원합니다.

### 🛀 **자동화되고 손쉬운 RAG 워크플로우**

- 개인 및 대규모 비즈니스에 맞춘 효율적인 RAG 오케스트레이션.
- 구성 가능한 LLM 및 임베딩 모델.
- 다중 검색과 결합된 re-ranking.
- 비즈니스와 원활하게 통합할 수 있는 직관적인 API.

## 🔎 시스템 아키텍처

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div>
## 🎬 시작하기

### 📝 사전 준비 사항

- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1

> 로컬 머신(Windows, Mac, Linux)에 Docker가 설치되지 않은 경우, [Docker 엔진 설치](https://docs.docker.com/engine/install/)를 참조하세요.

### 🚀 서버 시작하기

1. `vm.max_map_count`가 262144 이상인지 확인하세요:

   > `vm.max_map_count`의 값을 아래 명령어를 통해 확인하세요:
   >
   > ```bash
   > $ sysctl vm.max_map_count
   > ```
3. 미리 빌드된 Docker 이미지를 생성하고 서버를 시작하세요:

> 아래 명령어는 RAGFlow Docker 이미지의 v0.16.0-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.16.0-slim과 다른 RAGFlow 버전을 다운로드하려면, **docker/.env** 파일에서 `RAGFLOW_IMAGE` 변수를 적절히 업데이트한 후 `docker compose`를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.16.0을 다운로드하려면 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0`로 설정합니다.

```bash
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```

| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?                  |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.16.0           | &approx;9       | :heavy_check_mark:    | Stable release           |
| v0.16.0-slim      | &approx;2       | ❌                    | Stable release           |
| nightly           | &approx;9       | :heavy_check_mark:    | _Unstable_ nightly build |
| nightly-slim      | &approx;2       | ❌                    | _Unstable_ nightly build |
4. 서버가 시작된 후 서버 상태를 확인하세요:
```bash
$ docker logs -f ragflow-server
```

_다음 출력 결과로 시스템이 성공적으로 시작되었음을 확인합니다:_

```bash
     ____   ___    ______ ______ __
    / __ \ /   |  / ____// ____// /____  _      __
   / /_/ // /| | / / __ / /_   / // __ \| | /| / /
  / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
 /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
```

> 만약 확인 단계를 건너뛰고 바로 RAGFlow에 로그인하면, RAGFlow가 완전히 초기화되지 않았기 때문에 브라우저에서 `network abnormal` 오류가 발생할 수 있습니다.
5. 웹 브라우저에 서버의 IP 주소를 입력하고 RAGFlow에 로그인하세요.

   > 기본 설정을 사용할 경우, `http://IP_OF_YOUR_MACHINE`만 입력하면 됩니다 (포트 번호는 제외). 기본 HTTP 서비스 포트 `80`은 기본 구성으로 사용할 때 생략할 수 있습니다.

6. [service_conf.yaml.template](./docker/service_conf.yaml.template) 파일에서 원하는 LLM 팩토리를 `user_default_llm`에 선택하고, `API_KEY` 필드를 해당 API 키로 업데이트하세요.

   > 자세한 내용은 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)를 참조하세요.

_이제 쇼가 시작됩니다!_
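The server-up check in step 4 can also be automated: instead of tailing `docker logs`, poll the backend port until it answers. This is a sketch, assuming `curl` is installed; `wait_for_http` and the retry defaults are illustrative names, and 9380 is the backend port shown in the startup output above.

```shell
#!/usr/bin/env bash
# Sketch: poll an HTTP endpoint until it responds or the retries run out.
# `wait_for_http` is a hypothetical helper; 9380 is the port from the startup log.
wait_for_http() {
    local url="$1" tries="${2:-30}"
    for _ in $(seq "$tries"); do
        # -s: silent, -f: treat HTTP errors as failure
        if curl -sf -o /dev/null "$url"; then
            echo "up"
            return 0
        fi
        sleep 2
    done
    echo "timeout"
    return 1
}

# Real use: wait_for_http http://127.0.0.1:9380
```

The exit status makes the helper usable in CI or provisioning scripts (`wait_for_http … || exit 1`).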
시스템 설정과 관련하여 다음 파일들을 관리해야 합니다:

- [.env](./docker/.env): `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, `MINIO_PASSWORD`와 같은 시스템의 기본 설정을 포함합니다.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): 백엔드 서비스를 구성합니다.
- [docker-compose.yml](./docker/docker-compose.yml): 시스템은 [docker-compose.yml](./docker/docker-compose.yml)을 사용하여 시작됩니다.

[.env](./docker/.env) 파일의 변경 사항이 [service_conf.yaml.template](./docker/service_conf.yaml.template) 파일의 내용과 일치하도록 해야 합니다.

> [./docker/README](./docker/README.md) 파일은 [service_conf.yaml.template](./docker/service_conf.yaml.template) 파일에서 `${ENV_VARS}`로 사용할 수 있는 환경 설정과 서비스 구성에 대한 자세한 설명을 제공합니다.

기본 HTTP 서비스 포트(80)를 업데이트하려면 [docker-compose.yml](./docker/docker-compose.yml) 파일에서 `80:80`을 `<YOUR_SERVING_PORT>:80`으로 변경하세요.

> 모든 시스템 구성 업데이트는 적용되기 위해 시스템 재부팅이 필요합니다.
>
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
### Elasticsearch 에서 Infinity 로 문서 엔진 전환

RAGFlow 는 기본적으로 Elasticsearch 를 사용하여 전체 텍스트 및 벡터를 저장합니다. [Infinity](https://github.com/infiniflow/infinity/)로 전환하려면 다음 절차를 따르십시오:

1. 실행 중인 모든 컨테이너를 중지합니다:

   ```bash
   $ docker compose -f docker/docker-compose.yml down -v
   ```

2. **docker/.env**의 `DOC_ENGINE` 을 `infinity` 로 설정합니다.

3. 컨테이너를 시작합니다:

   ```bash
   $ docker compose -f docker/docker-compose.yml up -d
   ```

> [!WARNING]
> Linux/arm64 시스템에서 Infinity로 전환하는 것은 공식적으로 지원되지 않습니다.

## 🔧 소스 코드로 Docker 이미지를 컴파일합니다(임베딩 모델 포함하지 않음)

이 Docker 이미지의 크기는 약 2GB이며, 외부 대형 모델과 임베딩 서비스에 의존합니다.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```

## 🔧 소스 코드로 Docker 이미지를 컴파일합니다(임베딩 모델 포함)

이 Docker 이미지의 크기는 약 9GB이며, 이미 임베딩 모델을 포함하고 있으므로 외부 대형 모델 서비스에만 의존하면 됩니다.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 소스 코드로 서비스를 시작합니다.

1. uv를 설치하거나 이미 설치된 경우 이 단계를 건너뜁니다:

```bash
pipx install uv
```

2. 소스 코드를 클론하고 Python 의존성을 설치합니다:

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
```

3. Docker Compose를 사용하여 의존 서비스(MinIO, Elasticsearch, Redis 및 MySQL)를 시작합니다:

```bash
docker compose -f docker/docker-compose-base.yml up -d
```

`/etc/hosts` 에 다음 줄을 추가하여 **conf/service_conf.yaml** 에 지정된 모든 호스트를 `127.0.0.1` 로 해결합니다:

```
127.0.0.1 es01 infinity mysql minio redis
```

4. HuggingFace에 접근할 수 없는 경우, `HF_ENDPOINT` 환경 변수를 설정하여 미러 사이트를 사용하세요:

```bash
export HF_ENDPOINT=https://hf-mirror.com
```

5. 백엔드 서비스를 시작합니다:

```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```

6. 프론트엔드 의존성을 설치합니다:

```bash
cd web
npm install
```

7. 프론트엔드 서비스를 시작합니다:

```bash
npm run dev
```

_다음 인터페이스는 시스템이 성공적으로 시작되었음을 나타냅니다:_

![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
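Before launching the backend, it can help to verify that the dependent containers are actually running. A sketch; `all_services_running` is an illustrative helper fed by `docker ps` output, not part of RAGFlow.

```shell
#!/usr/bin/env bash
# Sketch: read "name state" pairs on stdin and succeed only if every
# state is "running". `all_services_running` is a hypothetical helper.
all_services_running() {
    local _name state ok=0
    while read -r _name state; do
        [ "$state" = "running" ] || ok=1
    done
    return "$ok"
}

# Real use:
# docker ps --format '{{.Names}} {{.State}}' | all_services_running && echo "deps up"
```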
## 📚 문서

- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)

## 📜 로드맵

[RAGFlow 로드맵 2025](https://github.com/infiniflow/ragflow/issues/4214)을 확인하세요.

## 🏄 커뮤니티
## 🙌 컨트리뷰션

RAGFlow는 오픈소스 협업을 통해 발전합니다. 이러한 정신을 바탕으로, 우리는 커뮤니티의 다양한 기여를 환영합니다. 참여하고 싶으시다면, 먼저 [가이드라인](./CONTRIBUTING.md)을 검토해 주세요.

README_pt_br.md (new file, 354 lines)
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
</a>
</div>
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_tzh.md">繁体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a> |
<a href="/README_pt_br.md">Português (Brasil)</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="seguir no X(Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Release" alt="Última Versão">
</a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="licença">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Documentação</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/4XxujFgUN7">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>
<details open>
<summary><b>📕 Índice</b></summary>
- 💡 [O que é o RAGFlow?](#-o-que-é-o-ragflow)
- 🎮 [Demo](#-demo)
- 📌 [Últimas Atualizações](#-últimas-atualizações)
- 🌟 [Principais Funcionalidades](#-principais-funcionalidades)
- 🔎 [Arquitetura do Sistema](#-arquitetura-do-sistema)
- 🎬 [Primeiros Passos](#-primeiros-passos)
- 🔧 [Configurações](#-configurações)
- 🔧 [Construir uma imagem docker sem incorporar modelos](#-construir-uma-imagem-docker-sem-incorporar-modelos)
- 🔧 [Construir uma imagem docker incluindo modelos](#-construir-uma-imagem-docker-incluindo-modelos)
- 🔨 [Lançar serviço a partir do código-fonte para desenvolvimento](#-lançar-serviço-a-partir-do-código-fonte-para-desenvolvimento)
- 📚 [Documentação](#-documentação)
- 📜 [Roadmap](#-roadmap)
- 🏄 [Comunidade](#-comunidade)
- 🙌 [Contribuindo](#-contribuindo)
</details>
## 💡 O que é o RAGFlow?
[RAGFlow](https://ragflow.io/) é um mecanismo RAG (Geração Aumentada por Recuperação) de código aberto baseado em entendimento profundo de documentos. Ele oferece um fluxo de trabalho RAG simplificado para empresas de qualquer porte, combinando LLMs (Modelos de Linguagem de Grande Escala) para fornecer capacidades de perguntas e respostas verídicas, respaldadas por citações bem fundamentadas de diversos dados complexos formatados.
## 🎮 Demo
Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Últimas Atualizações
- 05-02-2025 Atualiza a lista de modelos de 'SILICONFLOW' e adiciona suporte para Deepseek-R1/DeepSeek-V3.
- 26-01-2025 Otimiza a extração e aplicação de gráficos de conhecimento e fornece uma variedade de opções de configuração.
- 18-12-2024 Atualiza o modelo de Análise de Layout de Documentos no Deepdoc.
- 04-12-2024 Adiciona suporte para pontuação de pagerank na base de conhecimento.
- 22-11-2024 Adiciona mais variáveis para o Agente.
- 01-11-2024 Adiciona extração de palavras-chave e geração de perguntas relacionadas aos blocos analisados para melhorar a precisão da recuperação.
- 22-08-2024 Suporta conversão de texto para comandos SQL via RAG.
## 🎉 Fique Ligado
⭐️ Dê uma estrela no nosso repositório para se manter atualizado com novas funcionalidades e melhorias empolgantes! Receba notificações instantâneas sobre novos lançamentos! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Principais Funcionalidades
### 🍭 **"Qualidade entra, qualidade sai"**
- Extração de conhecimento baseada em [entendimento profundo de documentos](./deepdoc/README.md) a partir de dados não estruturados com formatos complicados.
- Encontra a "agulha no palheiro de dados" de literalmente tokens ilimitados.
### 🍱 **Fragmentação baseada em templates**
- Inteligente e explicável.
- Muitas opções de templates para escolher.
### 🌱 **Citações fundamentadas com menos alucinações**
- Visualização da fragmentação de texto para permitir intervenção humana.
- Visualização rápida das referências chave e citações rastreáveis para apoiar respostas fundamentadas.
### 🍔 **Compatibilidade com fontes de dados heterogêneas**
- Suporta Word, apresentações, excel, txt, imagens, cópias digitalizadas, dados estruturados, páginas da web e mais.
### 🛀 **Fluxo de trabalho RAG automatizado e sem esforço**
- Orquestração RAG simplificada voltada tanto para negócios pessoais quanto grandes empresas.
- Modelos LLM e de incorporação configuráveis.
- Múltiplas recuperações emparelhadas com reclassificação fundida.
- APIs intuitivas para integração sem problemas com os negócios.
## 🔎 Arquitetura do Sistema
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div>
## 🎬 Primeiros Passos
### 📝 Pré-requisitos
- CPU >= 4 núcleos
- RAM >= 16 GB
- Disco >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
> Se você não instalou o Docker na sua máquina local (Windows, Mac ou Linux), veja [Instalar Docker Engine](https://docs.docker.com/engine/install/).
### 🚀 Iniciar o servidor
1. Certifique-se de que `vm.max_map_count` >= 262144:
> Para verificar o valor de `vm.max_map_count`:
>
> ```bash
> $ sysctl vm.max_map_count
> ```
>
> Se necessário, redefina `vm.max_map_count` para um valor de pelo menos 262144:
>
> ```bash
> # Neste caso, defina para 262144:
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> Essa mudança será resetada após a reinicialização do sistema. Para garantir que a alteração permaneça permanente, adicione ou atualize o valor de `vm.max_map_count` em **/etc/sysctl.conf**:
>
> ```bash
> vm.max_map_count=262144
> ```
2. Clone o repositório:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Inicie o servidor usando as imagens Docker pré-compiladas:
> O comando abaixo baixa a edição `v0.16.0-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.16.0-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0` para a edição completa `v0.16.0`.
```bash
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | ------------------------------- | ------------------------ |
| v0.16.0 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.16.0-slim | ~2 | ❌ | Lançamento estável |
| nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno |
| nightly-slim | ~2 | ❌ | _Instável_ build noturno |
4. Verifique o status do servidor após tê-lo iniciado:
```bash
$ docker logs -f ragflow-server
```
_O seguinte resultado confirma o lançamento bem-sucedido do sistema:_
```bash
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Rodando em todos os endereços (0.0.0.0)
* Rodando em http://127.0.0.1:9380
* Rodando em http://x.x.x.x:9380
INFO:werkzeug:Pressione CTRL+C para sair
```
> Se você pular essa etapa de confirmação e acessar diretamente o RAGFlow, seu navegador pode exibir um erro `network abnormal`, pois, nesse momento, seu RAGFlow pode não estar totalmente inicializado.
5. No seu navegador, insira o endereço IP do seu servidor e faça login no RAGFlow.
> Com as configurações padrão, você só precisa digitar `http://IP_DA_SUA_MAQUINA` (**sem** o número da porta), pois a porta HTTP padrão `80` pode ser omitida ao usar as configurações padrão.
6. Em [service_conf.yaml.template](./docker/service_conf.yaml.template), selecione a fábrica LLM desejada em `user_default_llm` e atualize o campo `API_KEY` com a chave de API correspondente.
> Consulte [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) para mais informações.
_O show está no ar!_
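The `RAGFLOW_IMAGE` edit described in step 3 can also be scripted. A sketch under the assumption of GNU `sed` (on macOS, use `sed -i ''`); `set_ragflow_image` is an illustrative name, not a RAGFlow tool.

```shell
#!/usr/bin/env bash
# Sketch: point docker/.env at a different RAGFlow edition.
# Assumes GNU sed; `set_ragflow_image` is a hypothetical helper.
set_ragflow_image() {
    local env_file="$1" image="$2"
    # Rewrite the RAGFLOW_IMAGE line in place.
    sed -i "s|^RAGFLOW_IMAGE=.*|RAGFLOW_IMAGE=${image}|" "$env_file"
}

# Real use: set_ragflow_image docker/.env infiniflow/ragflow:v0.16.0
```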
## 🔧 Configurações
Quando se trata de configurações do sistema, você precisará gerenciar os seguintes arquivos:
- [.env](./docker/.env): Contém as configurações fundamentais para o sistema, como `SVR_HTTP_PORT`, `MYSQL_PASSWORD` e `MINIO_PASSWORD`.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): Configura os serviços de back-end. As variáveis de ambiente neste arquivo serão automaticamente preenchidas quando o contêiner Docker for iniciado. Quaisquer variáveis de ambiente definidas dentro do contêiner Docker estarão disponíveis para uso, permitindo personalizar o comportamento do serviço com base no ambiente de implantação.
- [docker-compose.yml](./docker/docker-compose.yml): O sistema depende do [docker-compose.yml](./docker/docker-compose.yml) para iniciar.
> O arquivo [./docker/README](./docker/README.md) fornece uma descrição detalhada das configurações do ambiente e dos serviços, que podem ser usadas como `${ENV_VARS}` no arquivo [service_conf.yaml.template](./docker/service_conf.yaml.template).
Para atualizar a porta HTTP de serviço padrão (80), vá até [docker-compose.yml](./docker/docker-compose.yml) e altere `80:80` para `<SUA_PORTA_DE_SERVIÇO>:80`.
Atualizações nas configurações acima exigem um reinício de todos os contêineres para que tenham efeito:
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
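The port change described above (replacing `80:80` in `docker-compose.yml`) can be applied the same way. A sketch assuming GNU `sed`; `set_serving_port` is an illustrative helper name.

```shell
#!/usr/bin/env bash
# Sketch: change the published HTTP port (80 -> custom) in a compose file.
# Assumes GNU sed; `set_serving_port` is a hypothetical helper.
set_serving_port() {
    local compose_file="$1" port="$2"
    sed -i "s|80:80|${port}:80|" "$compose_file"
}

# Real use: set_serving_port docker/docker-compose.yml 8080
```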
### Mudar o mecanismo de documentos de Elasticsearch para Infinity
O RAGFlow usa o Elasticsearch por padrão para armazenar texto completo e vetores. Para mudar para o [Infinity](https://github.com/infiniflow/infinity/), siga estas etapas:
1. Pare todos os contêineres em execução:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
2. Defina `DOC_ENGINE` no **docker/.env** para `infinity`.
3. Inicie os contêineres:
```bash
$ docker compose -f docker/docker-compose.yml up -d
```
> [!WARNING]
> A mudança para o Infinity em uma máquina Linux/arm64 ainda não é oficialmente suportada.
## 🔧 Criar uma imagem Docker sem modelos de incorporação
Esta imagem tem cerca de 2 GB de tamanho e depende de serviços externos de LLM e incorporação.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Criar uma imagem Docker incluindo modelos de incorporação
Esta imagem tem cerca de 9 GB de tamanho. Como inclui modelos de incorporação, depende apenas de serviços externos de LLM.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Lançar o serviço a partir do código-fonte para desenvolvimento
1. Instale o `uv`, ou pule esta etapa se ele já estiver instalado:
```bash
pipx install uv
```
2. Clone o código-fonte e instale as dependências Python:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # instala os módulos Python dependentes do RAGFlow
```
3. Inicie os serviços dependentes (MinIO, Elasticsearch, Redis e MySQL) usando Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Adicione a seguinte linha ao arquivo `/etc/hosts` para resolver todos os hosts especificados em **docker/.env** para `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
4. Se não conseguir acessar o HuggingFace, defina a variável de ambiente `HF_ENDPOINT` para usar um site espelho:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Lance o serviço de back-end:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. Instale as dependências do front-end:
```bash
cd web
npm install
```
7. Lance o serviço de front-end:
```bash
npm run dev
```
_O seguinte resultado confirma o lançamento bem-sucedido do sistema:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentação
- [Início rápido](https://ragflow.io/docs/dev/)
- [Guia do usuário](https://ragflow.io/docs/dev/category/guides)
- [Referências](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)
## 📜 Roadmap
Veja o [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214)
## 🏄 Comunidade
- [Discord](https://discord.gg/4XxujFgUN7)
- [Twitter](https://twitter.com/infiniflowai)
- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)
## 🙌 Contribuindo
O RAGFlow prospera por meio da colaboração de código aberto. Com esse espírito, abraçamos contribuições diversas da comunidade.
Se você deseja fazer parte, primeiro revise nossas [Diretrizes de Contribuição](./CONTRIBUTING.md).

README_tzh.md (new file, 353 lines)
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
</a>
</div>
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a> |
<a href="/README_pt_br.md">Português (Brasil)</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/4XxujFgUN7">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>
## 💡 RAGFlow 是什麼?
[RAGFlow](https://ragflow.io/) 是一款基於深度文件理解所建構的開源 RAG(Retrieval-Augmented Generation)引擎。RAGFlow 可以為各種規模的企業及個人提供一套精簡的 RAG 工作流程,結合大語言模型(LLM)針對用戶各類不同的複雜格式數據,提供可靠的問答以及有理有據的引用。
## 🎮 Demo 試用
請登入網址 [https://demo.ragflow.io](https://demo.ragflow.io) 試用 demo。
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 近期更新
- 2025-02-05 更新「SILICONFLOW」的型號清單並新增 Deepseek-R1/DeepSeek-V3 的支援。
- 2025-01-26 最佳化知識圖譜的擷取與應用,提供了多種配置選擇。
- 2024-12-18 升級了 Deepdoc 的文檔佈局分析模型。
- 2024-12-04 支援知識庫的 Pagerank 分數。
- 2024-11-22 完善了 Agent 中的變數定義和使用。
- 2024-11-01 對解析後的 chunk 加入關鍵字抽取和相關問題產生以提高回想的準確度。
- 2024-08-22 支援用 RAG 技術實現從自然語言到 SQL 語句的轉換。
## 🎉 關注項目
⭐️ 點擊右上角的 Star 追蹤 RAGFlow,可以取得最新發布的即時通知!🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 主要功能
### 🍭 **"Quality in, quality out"**
- 基於[深度文件理解](./deepdoc/README.md),能夠從各類複雜格式的非結構化資料中提取真知灼見。
- 真正在無限上下文(token)的場景下快速完成大海撈針測試。
### 🍱 **基於模板的文字切片**
- 不只是智能,更重要的是可控可解釋。
- 多種文字範本可供選擇
### 🌱 **有理有據、最大程度降低幻覺(hallucination)**
- 文字切片過程視覺化,支援手動調整。
- 有理有據:答案提供關鍵引用的快照並支持追根溯源。
### 🍔 **相容各類異質資料來源**
- 支援豐富的文件類型,包括 Word 文件、PPT、excel 表格、txt 檔案、圖片、PDF、影印件、結構化資料、網頁等。
### 🛀 **全程無憂、自動化的 RAG 工作流程**
- 全面優化的 RAG 工作流程可以支援從個人應用乃至超大型企業的各類生態系統。
- 大語言模型 LLM 以及向量模型皆支援配置。
- 基於多路召回、融合重排序。
- 提供易用的 API,可輕鬆整合到各類企業系統。
## 🔎 系統架構
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div>
## 🎬 快速開始
### 📝 前提條件
- CPU >= 4 核
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
> 如果你並沒有在本機安裝 Docker(Windows、Mac 或 Linux),可以參考文件 [Install Docker Engine](https://docs.docker.com/engine/install/) 自行安裝。
### 🚀 啟動伺服器
1. 確保 `vm.max_map_count` 不小於 262144
> 如需確認 `vm.max_map_count` 的大小:
>
> ```bash
> $ sysctl vm.max_map_count
> ```
>
> 如果 `vm.max_map_count` 的值小於 262144,可以進行重設:
>
> ```bash
> # 這裡我們設為 262144:
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> 你的改動會在下次系統重新啟動時被重置。如果希望做永久改動,還需要在 **/etc/sysctl.conf** 檔案裡把 `vm.max_map_count` 的值再相應更新一遍:
>
> ```bash
> vm.max_map_count=262144
> ```
2. 克隆倉庫:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. 進入 **docker** 資料夾,利用事先編譯好的 Docker 映像啟動伺服器:
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.16.0-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.16.0-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0` 來下載 RAGFlow 鏡像的 `v0.16.0` 完整發行版。
```bash
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.16.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.16.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> [!TIP]
> 如果你遇到 Docker 映像檔拉不下來的問題,可以在 **docker/.env** 檔案內根據變數 `RAGFLOW_IMAGE` 的註解提示選擇華為雲或阿里雲的對應映像。
>
> - 華為雲鏡像名:`swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
> - 阿里雲鏡像名:`registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
4. 伺服器啟動成功後再次確認伺服器狀態:
```bash
$ docker logs -f ragflow-server
```
_出現以下介面提示說明伺服器啟動成功_
```bash
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> 如果您跳過這一步系統確認步驟就登入 RAGFlow,你的瀏覽器有可能會提示 `network abnormal` 或 `網路異常`,因為 RAGFlow 可能並未完全啟動成功。
5. 在你的瀏覽器中輸入你的伺服器對應的 IP 位址並登入 RAGFlow。
> 上面這個範例中,您只需輸入 http://IP_OF_YOUR_MACHINE 即可:未改動過設定則無需輸入連接埠(預設的 HTTP 服務連接埠 80)。
6. 在 [service_conf.yaml.template](./docker/service_conf.yaml.template) 檔案的 `user_default_llm` 欄位設定 LLM factory並在 `API_KEY` 欄填入和你選擇的大模型相對應的 API key。
> 詳見 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)。
_好戲開始,接著奏樂接著舞!_
## 🔧 系統配置
系統配置涉及以下三份文件:
- [.env](./docker/.env):存放一些基本的系統環境變量,例如 `SVR_HTTP_PORT`、`MYSQL_PASSWORD`、`MINIO_PASSWORD` 等。
- [service_conf.yaml.template](./docker/service_conf.yaml.template):設定各類別後台服務。
- [docker-compose.yml](./docker/docker-compose.yml): 系統依賴該檔案完成啟動。
請務必確保 [.env](./docker/.env) 檔案中的變數設定與 [service_conf.yaml.template](./docker/service_conf.yaml.template) 檔案中的設定保持一致!
如果無法存取映像網站 hub.docker.com 或模型網站 huggingface.co請依照 [.env](./docker/.env) 註解修改 `RAGFLOW_IMAGE` 和 `HF_ENDPOINT`。
> [./docker/README](./docker/README.md) 解釋了 [service_conf.yaml.template](./docker/service_conf.yaml.template) 用到的環境變數設定和服務配置。
如需更新預設的 HTTP 服務連接埠(80),可以在 [docker-compose.yml](./docker/docker-compose.yml) 檔案中將配置 `80:80` 改為 `<YOUR_SERVING_PORT>:80`。
> 所有系統配置都需要透過系統重新啟動生效:
>
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
### 把文檔引擎從 Elasticsearch 切換成為 Infinity
RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料。如果要切換為 [Infinity](https://github.com/infiniflow/infinity/),可以按照下面步驟進行:
1. 停止所有容器運作:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
2. 設定 **docker/.env** 目錄中的 `DOC_ENGINE` 為 `infinity`.
3. 啟動容器:
```bash
$ docker compose -f docker/docker-compose.yml up -d
```
> [!WARNING]
> Infinity 目前官方並未正式支援在 Linux/arm64 架構下的機器上運行。
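下列僅為示意(假設 **docker/.env** 內含一行 `DOC_ENGINE=${DOC_ENGINE:-elasticsearch}`;這裡用暫存檔代替實際檔案,變數名取自本倉庫的 docker/.env):

```python
# 示意:把 DOC_ENGINE 從 elasticsearch 改成 infinity(在暫存目錄中的假 .env 上演示)
import pathlib
import re
import tempfile

env = pathlib.Path(tempfile.mkdtemp()) / ".env"
env.write_text("DOC_ENGINE=${DOC_ENGINE:-elasticsearch}\n")

# 只改寫以 DOC_ENGINE= 開頭的那一行,其餘內容保持不變
text = re.sub(r"^DOC_ENGINE=.*$", "DOC_ENGINE=${DOC_ENGINE:-infinity}",
              env.read_text(), flags=re.M)
env.write_text(text)
print(env.read_text().strip())  # DOC_ENGINE=${DOC_ENGINE:-infinity}
```

實際操作時直接編輯 **docker/.env** 再重新 `docker compose up -d` 即可。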
## 🔧 原始碼編譯 Docker 映像(不含 embedding 模型)
本 Docker 映像大小約 2 GB 左右並且依賴外部的大模型和 embedding 服務。
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 原始碼編譯 Docker 映像(包含 embedding 模型)
本 Docker 大小約 9 GB 左右。由於已包含 embedding 模型,所以只需依賴外部的大模型服務即可。
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 以原始碼啟動服務
1. 安裝 uv。如已安裝可跳過此步驟
```bash
pipx install uv
export UV_INDEX=https://pypi.tuna.tsinghua.edu.cn/simple
```
2. 下載原始碼並安裝 Python 依賴:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
```
3. 透過 Docker Compose 啟動依賴的服務(MinIO、Elasticsearch、Redis 和 MySQL):
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
在 `/etc/hosts` 中加入以下程式碼,將 **conf/service_conf.yaml** 檔案中的所有 host 位址都解析為 `127.0.0.1`
```
127.0.0.1 es01 infinity mysql minio redis
```
4. 如果無法存取 HuggingFace可以把環境變數 `HF_ENDPOINT` 設為對應的鏡像網站:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. 啟動後端服務:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. 安裝前端依賴:
```bash
cd web
npm install
```
7. 啟動前端服務:
```bash
npm run dev
```
_以下界面說明系統已成功啟動:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 技術文檔
- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)
## 📜 路線圖
詳見 [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214) 。
## 🏄 開源社群
- [Discord](https://discord.gg/4XxujFgUN7)
- [Twitter](https://twitter.com/infiniflowai)
- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)
## 🙌 貢獻指南
RAGFlow 只有透過開源協作才能蓬勃發展。秉持這項精神,我們歡迎來自社區的各種貢獻。如果您有意參與其中,請查閱我們的 [貢獻者指南](./CONTRIBUTING.md) 。
## 🤝 商務合作
- [預約諮詢](https://aao615odquw.feishu.cn/share/base/form/shrcnjw7QleretCLqh1nuPo1xxh)
## 👥 加入社區
掃二維碼加入 RAGFlow 小助手,進 RAGFlow 交流群。
<p align="center">
<img src="https://github.com/infiniflow/ragflow/assets/7248/bccf284f-46f2-4445-9809-8f1030fb7585" width=50% height=50%>
</p>


@ -7,26 +7,34 @
 <p align="center">
   <a href="./README.md">English</a> |
   <a href="./README_zh.md">简体中文</a> |
+  <a href="./README_tzh.md">繁体中文</a> |
   <a href="./README_ja.md">日本語</a> |
-  <a href="./README_ko.md">한국어</a>
+  <a href="./README_ko.md">한국어</a> |
+  <a href="./README_id.md">Bahasa Indonesia</a> |
+  <a href="/README_pt_br.md">Português (Brasil)</a>
 </p>
 <p align="center">
+  <a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
+    <img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
+  </a>
+  <a href="https://demo.ragflow.io" target="_blank">
+    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
+  </a>
+  <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
+    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.16.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.16.0">
+  </a>
   <a href="https://github.com/infiniflow/ragflow/releases/latest">
     <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
   </a>
-  <a href="https://demo.ragflow.io" target="_blank">
-    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
-  <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.11.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.11.0"></a>
   <a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
     <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
   </a>
 </p>
 <h4 align="center">
   <a href="https://ragflow.io/docs/dev/">Document</a> |
-  <a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
+  <a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
   <a href="https://twitter.com/infiniflowai">Twitter</a> |
   <a href="https://discord.gg/4XxujFgUN7">Discord</a> |
   <a href="https://demo.ragflow.io">Demo</a>
@ -39,22 +47,29 @
 ## 🎮 Demo 试用
 请登录网址 [https://demo.ragflow.io](https://demo.ragflow.io) 试用 demo。
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
   <img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
-  <img src="https://github.com/infiniflow/ragflow/assets/12318111/b083d173-dadc-4ea9-bdeb-180d7df514eb" width="1200"/>
+  <img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
 </div>
 ## 🔥 近期更新
-- 2024-09-13 增加知识库问答搜索模式
-- 2024-09-09 在 Agent 中加入医疗问诊模板
-- 2024-08-22 支持用RAG技术实现从自然语言到SQL语句的转换
-- 2024-08-02 支持 GraphRAG 启发于 [graphrag](https://github.com/microsoft/graphrag) 和思维导图
-- 2024-07-23 支持解析音频文件
-- 2024-07-08 支持 Agentic RAG: 基于 [Graph](./agent/README.md) 的工作流
-- 2024-06-27 Q&A 解析方式支持 Markdown 文件和 Docx 文件,支持提取出 Docx 文件中的图片和 Markdown 文件中的表格
-- 2024-05-23 实现 [RAPTOR](https://arxiv.org/html/2401.18059v1) 提供更好的文本检索。
+- 2025-02-05 更新硅基流动的模型列表,增加了对 Deepseek-R1/DeepSeek-V3 的支持
+- 2025-01-26 优化知识图谱的提取和应用,提供了多种配置选择
+- 2024-12-18 升级了 Deepdoc 的文档布局分析模型
+- 2024-12-04 支持知识库的 Pagerank 分数
+- 2024-11-22 完善了 Agent 中的变量定义和使用
+- 2024-11-01 对解析后的 chunk 加入关键词抽取和相关问题生成以提高召回的准确度
+- 2024-08-22 支持用 RAG 技术实现从自然语言到 SQL 语句的转换
+## 🎉 关注项目
+⭐️ 点击右上角的 Star 关注 RAGFlow,可以获取最新发布的实时通知!🌟
+<div align="center" style="margin-top:20px;margin-bottom:20px;">
+  <img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
+</div>
 ## 🌟 主要功能
@ -131,15 +146,25 @
 3. 进入 **docker** 文件夹,利用提前编译好的 Docker 镜像启动服务器:
+   > 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.16.0-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.16.0-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.16.0` 来下载 RAGFlow 镜像的 `v0.16.0` 完整发行版。
 ```bash
-$ cd ragflow/docker
-$ chmod +x ./entrypoint.sh
-$ docker compose -f docker-compose-CN.yml up -d
+$ cd ragflow
+$ docker compose -f docker/docker-compose.yml up -d
 ```
-> 请注意,运行上述命令会自动下载 RAGFlow 的开发版本 docker 镜像。如果你想下载并运行特定版本的 docker 镜像,请在 docker/.env 文件中找到 RAGFLOW_VERSION 变量,将其改为对应版本。例如 RAGFLOW_VERSION=v0.11.0,然后运行上述命令。
-> 核心镜像文件大约 9 GB,可能需要一定时间拉取。请耐心等待。
+| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?                  |
+| ----------------- | --------------- | --------------------- | ------------------------ |
+| v0.16.0           | &approx;9       | :heavy_check_mark:    | Stable release           |
+| v0.16.0-slim      | &approx;2       | ❌                    | Stable release           |
+| nightly           | &approx;9       | :heavy_check_mark:    | _Unstable_ nightly build |
+| nightly-slim      | &approx;2       | ❌                    | _Unstable_ nightly build |
+> [!TIP]
+> 如果你遇到 Docker 镜像拉不下来的问题,可以在 **docker/.env** 文件内根据变量 `RAGFLOW_IMAGE` 的注释提示选择华为云或者阿里云的相应镜像。
+>
+> - 华为云镜像名:`swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
+> - 阿里云镜像名:`registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
 4. 服务器启动成功后再次确认服务器状态:
@ -150,23 +175,23 @
 _出现以下界面提示说明服务器启动成功:_
 ```bash
-     ____                 ______ __
-    / __ \ ____ _ ____ _ / ____// /____  _      __
-   / /_/ // __ `// __ `// /_   / // __ \| | /| / /
-  / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
- /_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
-               /____/
+      ____   ___    ______ ______ __
+     / __ \ /   |  / ____// ____// /____  _      __
+    / /_/ // /| | / / __ / /_   / // __ \| | /| / /
+   / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
+  /_/ |_|/_/  |_|\____//_/    /_/  \____/ |__/|__/
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
 ```
-> 如果您跳过这一步系统确认步骤就登录 RAGFlow,你的浏览器有可能会提示 `network abnormal` 或 `网络异常`,因为 RAGFlow 可能并未完全启动成功。
+> 如果您跳过这一步系统确认步骤就登录 RAGFlow,你的浏览器有可能会提示 `network anormal` 或 `网络异常`,因为 RAGFlow 可能并未完全启动成功。
 5. 在你的浏览器中输入你的服务器对应的 IP 地址并登录 RAGFlow。
 > 上面这个例子中,您只需输入 http://IP_OF_YOUR_MACHINE 即可:未改动过配置则无需输入端口(默认的 HTTP 服务端口 80)。
-6. 在 [service_conf.yaml](./docker/service_conf.yaml) 文件的 `user_default_llm` 栏配置 LLM factory,并在 `API_KEY` 栏填写和你选择的大模型相对应的 API key。
+6. 在 [service_conf.yaml.template](./docker/service_conf.yaml.template) 文件的 `user_default_llm` 栏配置 LLM factory,并在 `API_KEY` 栏填写和你选择的大模型相对应的 API key。
 > 详见 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)。
@ -177,133 +202,132 @
 系统配置涉及以下三份文件:
 - [.env](./docker/.env):存放一些基本的系统环境变量,比如 `SVR_HTTP_PORT`、`MYSQL_PASSWORD`、`MINIO_PASSWORD` 等。
-- [service_conf.yaml](./docker/service_conf.yaml):配置各类后台服务。
-- [docker-compose-CN.yml](./docker/docker-compose-CN.yml): 系统依赖该文件完成启动。
-请务必确保 [.env](./docker/.env) 文件中的变量设置与 [service_conf.yaml](./docker/service_conf.yaml) 文件中的配置保持一致!
-> [./docker/README](./docker/README.md) 文件提供了环境变量设置和服务配置的详细信息。请**一定要**确保 [./docker/README](./docker/README.md) 文件当中列出来的环境变量的值与 [service_conf.yaml](./docker/service_conf.yaml) 文件当中的系统配置保持一致。
-如需更新默认的 HTTP 服务端口(80), 可以在 [docker-compose-CN.yml](./docker/docker-compose-CN.yml) 文件中将配置 `80:80` 改为 `<YOUR_SERVING_PORT>:80`。
+- [service_conf.yaml.template](./docker/service_conf.yaml.template):配置各类后台服务。
+- [docker-compose.yml](./docker/docker-compose.yml): 系统依赖该文件完成启动。
+请务必确保 [.env](./docker/.env) 文件中的变量设置与 [service_conf.yaml.template](./docker/service_conf.yaml.template) 文件中的配置保持一致!
+如果不能访问镜像站点 hub.docker.com 或者模型站点 huggingface.co,请按照 [.env](./docker/.env) 注释修改 `RAGFLOW_IMAGE` 和 `HF_ENDPOINT`。
+> [./docker/README](./docker/README.md) 解释了 [service_conf.yaml.template](./docker/service_conf.yaml.template) 用到的环境变量设置和服务配置。
+如需更新默认的 HTTP 服务端口(80), 可以在 [docker-compose.yml](./docker/docker-compose.yml) 文件中将配置 `80:80` 改为 `<YOUR_SERVING_PORT>:80`。
 > 所有系统配置都需要通过系统重启生效:
 >
 > ```bash
-> $ docker compose -f docker-compose-CN.yml up -d
+> $ docker compose -f docker/docker-compose.yml up -d
 > ```
-## 🛠️ 源码编译、安装 Docker 镜像
-如需从源码安装 Docker 镜像:
+### 把文档引擎从 Elasticsearch 切换成为 Infinity
+RAGFlow 默认使用 Elasticsearch 存储文本和向量数据. 如果要切换为 [Infinity](https://github.com/infiniflow/infinity/), 可以按照下面步骤进行:
+1. 停止所有容器运行:
+   ```bash
+   $ docker compose -f docker/docker-compose.yml down -v
+   ```
+2. 设置 **docker/.env** 目录中的 `DOC_ENGINE` 为 `infinity`.
+3. 启动容器:
+   ```bash
+   $ docker compose -f docker/docker-compose.yml up -d
+   ```
+> [!WARNING]
+> Infinity 目前官方并未正式支持在 Linux/arm64 架构下的机器上运行.
+## 🔧 源码编译 Docker 镜像(不含 embedding 模型)
+本 Docker 镜像大小约 2 GB 左右并且依赖外部的大模型和 embedding 服务。
 ```bash
-$ git clone https://github.com/infiniflow/ragflow.git
-$ cd ragflow/
-$ docker build -t infiniflow/ragflow:v0.11.0 .
-$ cd ragflow/docker
-$ chmod +x ./entrypoint.sh
-$ docker compose up -d
+git clone https://github.com/infiniflow/ragflow.git
+cd ragflow/
+docker build --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
 ```
-## 🛠️ 源码启动服务
-如需从源码启动服务,请参考以下步骤:
-1. 克隆仓库
+## 🔧 源码编译 Docker 镜像(包含 embedding 模型)
+本 Docker 大小约 9 GB 左右。由于已包含 embedding 模型,所以只需依赖外部的大模型服务即可。
 ```bash
-$ git clone https://github.com/infiniflow/ragflow.git
-$ cd ragflow/
+git clone https://github.com/infiniflow/ragflow.git
+cd ragflow/
+docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
 ```
-2. 创建虚拟环境(确保已安装 Anaconda 或 Miniconda)
-```bash
-$ conda create -n ragflow python=3.11.0
-$ conda activate ragflow
-$ pip install -r requirements.txt
-```
-如果 cuda > 12.0,需额外执行以下命令:
-```bash
-$ pip uninstall -y onnxruntime-gpu
-$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
-```
-3. 拷贝入口脚本并配置环境变量
-```bash
-$ cp docker/entrypoint.sh .
-$ vi entrypoint.sh
-```
-使用以下命令获取python路径及ragflow项目路径:
-```bash
-$ which python
-$ pwd
-```
-将上述 `which python` 的输出作为 `PY` 的值,将 `pwd` 的输出作为 `PYTHONPATH` 的值。
-`LD_LIBRARY_PATH` 如果环境已经配置好,可以注释掉。
-```bash
-# 此处配置需要按照实际情况调整,两个 export 为新增配置
-PY=${PY}
-export PYTHONPATH=${PYTHONPATH}
-# 可选:添加 Hugging Face 镜像
-export HF_ENDPOINT=https://hf-mirror.com
-```
-4. 启动基础服务
-```bash
-$ cd docker
-$ docker compose -f docker-compose-base.yml up -d
-```
-5. 检查配置文件
-确保**docker/.env**中的配置与**conf/service_conf.yaml**中配置一致, **service_conf.yaml**中相关服务的IP地址与端口应该改成本机IP地址及容器映射出来的端口。
-6. 启动服务
-```bash
-$ chmod +x ./entrypoint.sh
-$ bash ./entrypoint.sh
-```
-7. 启动WebUI服务
-```bash
-$ cd web
-$ npm install --registry=https://registry.npmmirror.com --force
-$ vim .umirc.ts
-# 修改proxy.target为http://127.0.0.1:9380
-$ npm run dev
-```
-8. 部署WebUI服务
-```bash
-$ cd web
-$ npm install --registry=https://registry.npmmirror.com --force
-$ umi build
-$ mkdir -p /ragflow/web
-$ cp -r dist /ragflow/web
-$ apt install nginx -y
-$ cp ../docker/nginx/proxy.conf /etc/nginx
-$ cp ../docker/nginx/nginx.conf /etc/nginx
-$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
-$ systemctl start nginx
-```
+## 🔨 以源代码启动服务
+1. 安装 uv。如已经安装,可跳过本步骤:
+   ```bash
+   pipx install uv
+   export UV_INDEX=https://pypi.tuna.tsinghua.edu.cn/simple
+   ```
+2. 下载源代码并安装 Python 依赖:
+   ```bash
+   git clone https://github.com/infiniflow/ragflow.git
+   cd ragflow/
+   uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
+   ```
+3. 通过 Docker Compose 启动依赖的服务(MinIO, Elasticsearch, Redis, and MySQL):
+   ```bash
+   docker compose -f docker/docker-compose-base.yml up -d
+   ```
+   在 `/etc/hosts` 中添加以下代码,将 **conf/service_conf.yaml** 文件中的所有 host 地址都解析为 `127.0.0.1`:
+   ```
+   127.0.0.1       es01 infinity mysql minio redis
+   ```
+4. 如果无法访问 HuggingFace,可以把环境变量 `HF_ENDPOINT` 设成相应的镜像站点:
+   ```bash
+   export HF_ENDPOINT=https://hf-mirror.com
+   ```
+5. 启动后端服务:
+   ```bash
+   source .venv/bin/activate
+   export PYTHONPATH=$(pwd)
+   bash docker/launch_backend_service.sh
+   ```
+6. 安装前端依赖:
+   ```bash
+   cd web
+   npm install
+   ```
+7. 启动前端服务:
+   ```bash
+   npm run dev
+   ```
+_以下界面说明系统已经成功启动:_
+![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
 ## 📚 技术文档
 - [Quickstart](https://ragflow.io/docs/dev/)
-- [User guide](https://ragflow.io/docs/dev/category/user-guides)
+- [User guide](https://ragflow.io/docs/dev/category/guides)
 - [References](https://ragflow.io/docs/dev/category/references)
 - [FAQ](https://ragflow.io/docs/dev/faq)
 ## 📜 路线图
-详见 [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162) 。
+详见 [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214) 。
 ## 🏄 开源社区
@ -313,7 +337,7 @ $ systemctl start nginx
 ## 🙌 贡献指南
-RAGFlow 只有通过开源协作才能蓬勃发展。秉持这一精神,我们欢迎来自社区的各种贡献。如果您有意参与其中,请查阅我们的 [贡献者指南](./docs/references/CONTRIBUTING.md) 。
+RAGFlow 只有通过开源协作才能蓬勃发展。秉持这一精神,我们欢迎来自社区的各种贡献。如果您有意参与其中,请查阅我们的 [贡献者指南](./CONTRIBUTING.md) 。
 ## 🤝 商务合作
@ -326,4 +350,3 @ RAGFlow 只有通过开源协作才能蓬勃发展。秉持这一精神,我们
 <p align="center">
   <img src="https://github.com/infiniflow/ragflow/assets/7248/bccf284f-46f2-4445-9809-8f1030fb7585" width=50% height=50%>
 </p>

View File

@ -10,7 +10,7 @ It is used to compose a complex work flow or agent.
 And this graph is beyond the DAG that we can use circles to describe our agent or work flow.
 Under this folder, we propose a test tool ./test/client.py which can test the DSLs such as json files in folder ./test/dsl_examples.
 Please use this client at the same folder you start RAGFlow. If it's run by Docker, please go into the container before running the client.
-Otherwise, correct configurations in conf/service_conf.yaml is essential.
+Otherwise, correct configurations in service_conf.yaml is essential.
 ```bash
 PYTHONPATH=path/to/ragflow python graph/test/client.py -h


@ -11,7 +11,7 @
 在这个文件夹下,我们提出了一个测试工具 ./test/client.py
 它可以测试像文件夹./test/dsl_examples下一样的DSL文件。
 请在启动 RAGFlow 的同一文件夹中使用此客户端。如果它是通过 Docker 运行的,请在运行客户端之前进入容器。
-否则,正确配置 conf/service_conf.yaml 文件是必不可少的。
+否则,正确配置 service_conf.yaml 文件是必不可少的。
 ```bash
 PYTHONPATH=path/to/ragflow python graph/test/client.py -h


@ -0,0 +1,18 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from beartype.claw import beartype_this_package
beartype_this_package()
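The new package `__init__` above turns on beartype's import-hook (`beartype_this_package()`), which enforces type annotations at runtime for every function in the package. As an illustrative sketch of what runtime enforcement means, here is a toy decorator written from scratch (this is *not* beartype's API, just the idea):

```python
# Toy runtime type-checker in the spirit of beartype: reject arguments whose
# runtime type does not match the annotation. Only handles plain classes.
import inspect

def enforce(fn):
    sig = inspect.signature(fn)
    hints = fn.__annotations__
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, val in bound.arguments.items():
            t = hints.get(name)
            if isinstance(t, type) and not isinstance(val, t):
                raise TypeError(f"{name} must be {t.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@enforce
def add(a: int, b: int) -> int:
    return a + b

print(add(1, 2))       # 3
# add("1", 2) would raise TypeError at call time
```

The real import hook does this transparently for the whole package without decorating each function by hand.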

View File

@ -13,9 +13,8 @
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import importlib
+import logging
 import json
-import traceback
 from abc import ABC
 from copy import deepcopy
 from functools import partial
@ -24,7 +23,6 @ import pandas as pd
 from agent.component import component_class
 from agent.component.base import ComponentBase
-from agent.settings import flow_logger, DEBUG
 class Canvas(ABC):
@ -88,7 +86,8 @
                 }
             },
             "downstream": [],
-            "upstream": []
+            "upstream": [],
+            "parent_id": ""
         }
     },
     "history": [],
@ -139,7 +138,8 @
             "components": {}
         }
         for k in self.dsl.keys():
-            if k in ["components"]:continue
+            if k in ["components"]:
+                continue
             dsl[k] = deepcopy(self.dsl[k])
         for k, cpn in self.components.items():
@ -162,8 +162,13 @
             self.components[k]["obj"].reset()
         self._embed_id = ""
+    def get_compnent_name(self, cid):
+        for n in self.dsl["graph"]["nodes"]:
+            if cid == n["id"]:
+                return n["data"]["name"]
+        return ""
     def run(self, **kwargs):
+        ans = ""
         if self.answer:
             cpn_id = self.answer[0]
             self.answer.pop(0)
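The helper added in this hunk is a plain lookup of a component's display name in the DSL graph. A standalone sketch (spelled `get_component_name` here for clarity; the upstream method name is `get_compnent_name`, and the DSL shape is assumed from the loop above):

```python
# Standalone sketch of the node-name lookup: scan the DSL graph's node list
# for a matching component id and return its display name, else "".
def get_component_name(dsl, cid):
    for n in dsl["graph"]["nodes"]:
        if cid == n["id"]:
            return n["data"]["name"]
    return ""

dsl = {"graph": {"nodes": [
    {"id": "begin", "data": {"name": "Begin"}},
    {"id": "gen:0", "data": {"name": "Generate Answer"}},
]}}
print(get_component_name(dsl, "gen:0"))  # Generate Answer
```

This is what lets the streaming loop yield human-readable "*'…'* is running" messages instead of raw component ids.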
@ -173,71 +178,107 @@ class Canvas(ABC):
ans = ComponentBase.be_output(str(e)) ans = ComponentBase.be_output(str(e))
self.path[-1].append(cpn_id) self.path[-1].append(cpn_id)
if kwargs.get("stream"): if kwargs.get("stream"):
assert isinstance(ans, partial) for an in ans():
return ans yield an
self.history.append(("assistant", ans.to_dict("records"))) else:
return ans yield ans
return
if not self.path: if not self.path:
self.components["begin"]["obj"].run(self.history, **kwargs) self.components["begin"]["obj"].run(self.history, **kwargs)
self.path.append(["begin"]) self.path.append(["begin"])
self.path.append([]) self.path.append([])
ran = -1 ran = -1
waiting = []
without_dependent_checking = []
def prepare2run(cpns): def prepare2run(cpns):
nonlocal ran, ans nonlocal ran, ans
for c in cpns: for c in cpns:
if self.path[-1] and c == self.path[-1][-1]: continue if self.path[-1] and c == self.path[-1][-1]:
continue
cpn = self.components[c]["obj"] cpn = self.components[c]["obj"]
if cpn.component_name == "Answer": if cpn.component_name == "Answer":
self.answer.append(c) self.answer.append(c)
else: else:
if DEBUG: print("RUN: ", c) logging.debug(f"Canvas.prepare2run: {c}")
if cpn.component_name == "Generate": if c not in without_dependent_checking:
cpids = cpn.get_dependent_components() cpids = cpn.get_dependent_components()
if any([c not in self.path[-1] for c in cpids]): if any([cc not in self.path[-1] for cc in cpids]):
if c not in waiting:
waiting.append(c)
continue continue
ans = cpn.run(self.history, **kwargs) yield "*'{}'* is running...🕞".format(self.get_compnent_name(c))
if cpn.component_name.lower() == "iteration":
st_cpn = cpn.get_start()
assert st_cpn, "Start component not found for Iteration."
if not st_cpn["obj"].end():
cpn = st_cpn["obj"]
c = cpn._id
try:
ans = cpn.run(self.history, **kwargs)
except Exception as e:
logging.exception(f"Canvas.run got exception: {e}")
self.path[-1].append(c)
ran += 1
raise e
self.path[-1].append(c) self.path[-1].append(c)
ran += 1 ran += 1
prepare2run(self.components[self.path[-2][-1]]["downstream"]) downstream = self.components[self.path[-2][-1]]["downstream"]
if not downstream and self.components[self.path[-2][-1]].get("parent_id"):
cid = self.path[-2][-1]
pid = self.components[cid]["parent_id"]
o, _ = self.components[cid]["obj"].output(allow_partial=False)
oo, _ = self.components[pid]["obj"].output(allow_partial=False)
self.components[pid]["obj"].set(pd.concat([oo, o], ignore_index=True))
downstream = [pid]
for m in prepare2run(downstream):
yield {"content": m, "running_status": True}
while 0 <= ran < len(self.path[-1]):
    logging.debug(f"Canvas.run: {ran} {self.path}")
    cpn_id = self.path[-1][ran]
    cpn = self.get_component(cpn_id)
    if not any([cpn["downstream"], cpn.get("parent_id"), waiting]):
        break

    loop = self._find_loop()
    if loop:
        raise OverflowError(f"Too much loops: {loop}")

    if cpn["obj"].component_name.lower() in ["switch", "categorize", "relevant"]:
        switch_out = cpn["obj"].output()[1].iloc[0, 0]
        assert switch_out in self.components, \
            "{}'s output: {} not valid.".format(cpn_id, switch_out)
        for m in prepare2run([switch_out]):
            yield {"content": m, "running_status": True}
        continue

    downstream = cpn["downstream"]
    if not downstream and cpn.get("parent_id"):
        pid = cpn["parent_id"]
        _, o = cpn["obj"].output(allow_partial=False)
        _, oo = self.components[pid]["obj"].output(allow_partial=False)
        self.components[pid]["obj"].set_output(pd.concat([oo.dropna(axis=1), o.dropna(axis=1)], ignore_index=True))
        downstream = [pid]

    for m in prepare2run(downstream):
        yield {"content": m, "running_status": True}

    if ran >= len(self.path[-1]) and waiting:
        without_dependent_checking = waiting
        waiting = []
        for m in prepare2run(without_dependent_checking):
            yield {"content": m, "running_status": True}
        without_dependent_checking = []
        ran -= 1
if self.answer:
    cpn_id = self.answer[0]
@ -246,11 +287,13 @@
    self.path[-1].append(cpn_id)
    if kwargs.get("stream"):
        assert isinstance(ans, partial)
        for an in ans():
            yield an
    else:
        yield ans
else:
    raise Exception("The dialog flow has no way to interact with you. Please add an 'Interact' component to the end of the flow.")
def get_component(self, cpn_id):
    return self.components[cpn_id]

@ -260,9 +303,11 @@
def get_history(self, window_size):
    convs = []
    for role, obj in self.history[window_size * -1:]:
        if isinstance(obj, list) and obj and all([isinstance(o, dict) for o in obj]):
            convs.append({"role": role, "content": '\n'.join([str(s.get("content", "")) for s in obj])})
        else:
            convs.append({"role": role, "content": str(obj)})
    return convs
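The reworked `get_history` now normalizes both plain-string turns and component outputs (lists of record dicts). A minimal standalone sketch of that normalization, with a hypothetical `history` list (not the actual Canvas state):

```python
def windowed_history(history, window_size):
    # Keep only the last `window_size` (role, payload) turns.
    convs = []
    for role, obj in history[window_size * -1:]:
        if isinstance(obj, list) and obj and all(isinstance(o, dict) for o in obj):
            # Component outputs arrive as lists of record dicts; join their contents.
            convs.append({"role": role, "content": "\n".join(str(s.get("content", "")) for s in obj)})
        else:
            convs.append({"role": role, "content": str(obj)})
    return convs

history = [("user", "hi"), ("assistant", [{"content": "hello"}, {"content": "there"}])]
print(windowed_history(history, 2))
```

The list branch is what lets `history` hold raw `DataFrame.to_dict("records")` payloads without a separate serialization step.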
def add_user_input(self, question):
@ -276,19 +321,22 @@ class Canvas(ABC):
def _find_loop(self, max_loops=6):
    path = self.path[-1][::-1]
    if len(path) < 2:
        return False

    for i in range(len(path)):
        if path[i].lower().find("answer") == 0 or path[i].lower().find("iterationitem") == 0:
            path = path[:i]
            break

    if len(path) < 2:
        return False

    for loc in range(2, len(path) // 2):
        pat = ",".join(path[0:loc])
        path_str = ",".join(path)
        if len(pat) >= len(path_str):
            return False
        loop = max_loops
        while path_str.find(pat) == 0 and loop >= 0:
            loop -= 1
@ -296,10 +344,23 @@
                return False
            path_str = path_str[len(pat)+1:]
        if loop < 0:
            pat = " => ".join([p.split(":")[0] for p in path[0:loc]])
            return pat + " => " + pat

    return False
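The repetition check in `_find_loop` looks for a component-id prefix that keeps repeating at the head of the path. A simplified standalone sketch of the same idea (hypothetical helper, not the exact Canvas method):

```python
def find_loop(path, max_loops=6):
    """Return the repeating prefix rendered as "a => b" if `path` starts
    with the same component sequence more than `max_loops` times, else False."""
    for width in range(1, len(path) // 2 + 1):
        pat = path[:width]
        repeats = 0
        i = 0
        while path[i:i + width] == pat:
            repeats += 1
            i += width
        if repeats > max_loops:
            return " => ".join(pat)
    return False

print(find_loop(["retrieval:0", "generate:0"] * 10))
```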
def get_prologue(self):
    return self.components["begin"]["obj"]._param.prologue
def set_global_param(self, **kwargs):
for k, v in kwargs.items():
for q in self.components["begin"]["obj"]._param.query:
if k != q["key"]:
continue
q["value"] = v
def get_preset_param(self):
return self.components["begin"]["obj"]._param.query
def get_component_input_elements(self, cpnnm):
return self.components[cpnnm]["obj"].get_input_elements()
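`set_global_param` writes caller-supplied values into the Begin component's `query` slots by key. The key-matching update in isolation, with a hypothetical slot list standing in for `_param.query`:

```python
def set_global_params(query, **kwargs):
    # `query` is a list of {"key": ..., "value": ...} parameter slots,
    # mirroring Begin's `_param.query`; unknown keys are silently ignored.
    for k, v in kwargs.items():
        for q in query:
            if q["key"] != k:
                continue
            q["value"] = v
    return query

slots = [{"key": "lang", "value": None}, {"key": "topic", "value": None}]
set_global_params(slots, lang="en")
```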

View File

@ -1,3 +1,19 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import importlib
from .begin import Begin, BeginParam
from .generate import Generate, GenerateParam
@ -9,6 +25,7 @@ from .relevant import Relevant, RelevantParam
from .message import Message, MessageParam
from .rewrite import RewriteQuestion, RewriteQuestionParam
from .keyword import KeywordExtract, KeywordExtractParam
from .concentrator import Concentrator, ConcentratorParam
from .baidu import Baidu, BaiduParam
from .duckduckgo import DuckDuckGo, DuckDuckGoParam
from .wikipedia import Wikipedia, WikipediaParam
@ -27,9 +44,90 @@ from .wencai import WenCai, WenCaiParam
from .jin10 import Jin10, Jin10Param
from .tushare import TuShare, TuShareParam
from .akshare import AkShare, AkShareParam
from .crawler import Crawler, CrawlerParam
from .invoke import Invoke, InvokeParam
from .template import Template, TemplateParam
from .email import Email, EmailParam
from .iteration import Iteration, IterationParam
from .iterationitem import IterationItem, IterationItemParam
def component_class(class_name):
    m = importlib.import_module("agent.component")
    c = getattr(m, class_name)
    return c
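`component_class` resolves a component class from its string name via `importlib`. The same lookup pattern, shown against the standard library instead of `agent.component` so it runs anywhere:

```python
import importlib

def class_by_name(module_name, class_name):
    # Import the module lazily and fetch the named attribute,
    # as component_class does for "agent.component".
    m = importlib.import_module(module_name)
    return getattr(m, class_name)

OrderedDict = class_by_name("collections", "OrderedDict")
```

Combined with the `__all__` list below, this is what lets canvas DSL JSON reference components purely by `component_name`.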
__all__ = [
"Begin",
"BeginParam",
"Generate",
"GenerateParam",
"Retrieval",
"RetrievalParam",
"Answer",
"AnswerParam",
"Categorize",
"CategorizeParam",
"Switch",
"SwitchParam",
"Relevant",
"RelevantParam",
"Message",
"MessageParam",
"RewriteQuestion",
"RewriteQuestionParam",
"KeywordExtract",
"KeywordExtractParam",
"Concentrator",
"ConcentratorParam",
"Baidu",
"BaiduParam",
"DuckDuckGo",
"DuckDuckGoParam",
"Wikipedia",
"WikipediaParam",
"PubMed",
"PubMedParam",
"ArXiv",
"ArXivParam",
"Google",
"GoogleParam",
"Bing",
"BingParam",
"GoogleScholar",
"GoogleScholarParam",
"DeepL",
"DeepLParam",
"GitHub",
"GitHubParam",
"BaiduFanyi",
"BaiduFanyiParam",
"QWeather",
"QWeatherParam",
"ExeSQL",
"ExeSQLParam",
"YahooFinance",
"YahooFinanceParam",
"WenCai",
"WenCaiParam",
"Jin10",
"Jin10Param",
"TuShare",
"TuShareParam",
"AkShare",
"AkShareParam",
"Crawler",
"CrawlerParam",
"Invoke",
"InvokeParam",
"Iteration",
"IterationParam",
"IterationItem",
"IterationItemParam",
"Template",
"TemplateParam",
"Email",
"EmailParam",
"component_class"
]

View File

@ -1,56 +1,56 @@
#
#  Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
#
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase


class AkShareParam(ComponentParamBase):
    """
    Define the AkShare component parameters.
    """

    def __init__(self):
        super().__init__()
        self.top_n = 10

    def check(self):
        self.check_positive_integer(self.top_n, "Top N")


class AkShare(ComponentBase, ABC):
    component_name = "AkShare"

    def _run(self, history, **kwargs):
        import akshare as ak
        ans = self.get_input()
        ans = ",".join(ans["content"]) if "content" in ans else ""
        if not ans:
            return AkShare.be_output("")

        try:
            ak_res = []
            stock_news_em_df = ak.stock_news_em(symbol=ans)
            stock_news_em_df = stock_news_em_df.head(self._param.top_n)
            ak_res = [{"content": '<a href="' + i["新闻链接"] + '">' + i["新闻标题"] + '</a>\n 新闻内容: ' + i["新闻内容"] + " \n发布时间:" + i["发布时间"] + " \n文章来源: " + i["文章来源"]} for index, i in stock_news_em_df.iterrows()]
        except Exception as e:
            return AkShare.be_output("**ERROR**: " + str(e))

        if not ak_res:
            return AkShare.be_output("")

        return pd.DataFrame(ak_res)
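Note that `import akshare as ak` moved from module scope into `_run`, so the package is only loaded when the component actually executes. The deferred-import pattern in isolation, using a stdlib module as a stand-in for the heavy dependency:

```python
def run_report(data):
    # Heavy/optional dependency imported at call time, not at module load
    # time, so importing this module never fails if the dependency is absent.
    import json
    try:
        return json.dumps(data)
    except (TypeError, ValueError) as e:
        # Mirror the component's convention of returning an error payload
        # rather than raising out of _run.
        return "**ERROR**: " + str(e)

print(run_report({"symbol": "600519"}))
```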

View File

@ -16,6 +16,7 @@
import random
from abc import ABC
from functools import partial
from typing import Tuple, Union

import pandas as pd
@ -76,4 +77,13 @@ class Answer(ComponentBase, ABC):
    def set_exception(self, e):
        self.exception = e

    def output(self, allow_partial=True) -> Tuple[str, Union[pd.DataFrame, partial]]:
        if allow_partial:
            return super().output()

        for r, c in self._canvas.history[::-1]:
            if r == "user":
                return self._param.output_var_name, pd.DataFrame([{"content": c}])

        return self._param.output_var_name, pd.DataFrame([])

View File

@ -13,13 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import arxiv
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase


class ArXivParam(ComponentParamBase):
    """
    Define the ArXiv component parameters.
    """
@ -65,5 +64,5 @@
            return ArXiv.be_output("")

        df = pd.DataFrame(arxiv_res)
        logging.debug(f"df: {str(df)}")
        return df

View File

@ -13,13 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
import requests
import re
from agent.component.base import ComponentBase, ComponentParamBase

@ -46,7 +44,7 @@
            return Baidu.be_output("")

        try:
            url = 'http://www.baidu.com/s?wd=' + ans + '&rn=' + str(self._param.top_n)
            headers = {
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
            response = requests.get(url=url, headers=headers)
@ -64,6 +62,6 @@
            return Baidu.be_output("")

        df = pd.DataFrame(baidu_res)
        logging.debug(f"df: {str(df)}")
        return df

View File

@ -36,13 +36,9 @@ class BaiduFanyiParam(ComponentParamBase):
self.domain = 'finance' self.domain = 'finance'
def check(self): def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_empty(self.appid, "BaiduFanyi APPID") self.check_empty(self.appid, "BaiduFanyi APPID")
self.check_empty(self.secret_key, "BaiduFanyi Secret Key") self.check_empty(self.secret_key, "BaiduFanyi Secret Key")
self.check_valid_value(self.trans_type, "Translate type", ['translate', 'fieldtranslate']) self.check_valid_value(self.trans_type, "Translate type", ['translate', 'fieldtranslate'])
self.check_valid_value(self.trans_type, "Translate domain",
['it', 'finance', 'machinery', 'senimed', 'novel', 'academic', 'aerospace', 'wiki',
'news', 'law', 'contract'])
self.check_valid_value(self.source_lang, "Source language", self.check_valid_value(self.source_lang, "Source language",
['auto', 'zh', 'en', 'yue', 'wyw', 'jp', 'kor', 'fra', 'spa', 'th', 'ara', 'ru', 'pt', ['auto', 'zh', 'en', 'yue', 'wyw', 'jp', 'kor', 'fra', 'spa', 'th', 'ara', 'ru', 'pt',
'de', 'it', 'el', 'nl', 'pl', 'bul', 'est', 'dan', 'fin', 'cs', 'rom', 'slo', 'swe', 'de', 'it', 'el', 'nl', 'pl', 'bul', 'est', 'dan', 'fin', 'cs', 'rom', 'slo', 'swe',
@ -97,3 +93,4 @@ class BaiduFanyi(ComponentBase, ABC):
except Exception as e: except Exception as e:
BaiduFanyi.be_output("**Error**:" + str(e)) BaiduFanyi.be_output("**Error**:" + str(e))

View File

@ -17,14 +17,13 @@ from abc import ABC
import builtins
import json
import os
import logging
from functools import partial
from typing import Tuple, Union

import pandas as pd

from agent import settings

_FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params"
_DEPRECATED_PARAMS = "_deprecated_params"
@ -36,6 +35,9 @@ class ComponentParamBase(ABC):
    def __init__(self):
        self.output_var_name = "output"
        self.message_history_window_size = 22
self.query = []
self.inputs = []
self.debug_inputs = []
    def set_name(self, name: str):
        self._name = name
@ -81,7 +83,6 @@ class ComponentParamBase(ABC):
        return {name: True for name in self.get_feeded_deprecated_params()}

    def __str__(self):
        return json.dumps(self.as_dict(), ensure_ascii=False)

    def as_dict(self):
@ -359,13 +360,13 @@ class ComponentParamBase(ABC):
    def _warn_deprecated_param(self, param_name, descr):
        if self._deprecated_params_set.get(param_name):
            logging.warning(
                f"{descr} {param_name} is deprecated and ignored in this version."
            )

    def _warn_to_deprecate_param(self, param_name, descr, new_param):
        if self._deprecated_params_set.get(param_name):
            logging.warning(
                f"{descr} {param_name} will be deprecated in future release; "
                f"please use {new_param} instead."
            )
@ -385,10 +386,14 @@ class ComponentBase(ABC):
""" """
return """{{ return """{{
"component_name": "{}", "component_name": "{}",
"params": {} "params": {},
"output": {},
"inputs": {}
}}""".format(self.component_name, }}""".format(self.component_name,
self._param self._param,
) json.dumps(json.loads(str(self._param)).get("output", {}), ensure_ascii=False),
json.dumps(json.loads(str(self._param)).get("inputs", []), ensure_ascii=False)
)
    def __init__(self, canvas, id, param: ComponentParamBase):
        self._canvas = canvas
@ -396,9 +401,17 @@ class ComponentBase(ABC):
        self._param = param
        self._param.check()
def get_dependent_components(self):
cpnts = set([para["component_id"].split("@")[0] for para in self._param.query \
if para.get("component_id") \
and para["component_id"].lower().find("answer") < 0 \
and para["component_id"].lower().find("begin") < 0])
return list(cpnts)
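The new `get_dependent_components` collects the component ids referenced by `query`, stripping any `@key` suffix and skipping `begin`/`answer` references. A standalone sketch of that filtering (hypothetical query shape, substring checks equivalent to the `find(...) < 0` tests above):

```python
def dependent_components(query):
    # Each query entry may reference another component's output via
    # "component_id", possibly as "begin@param_key".
    cpnts = set(
        q["component_id"].split("@")[0]
        for q in query
        if q.get("component_id")
        and "answer" not in q["component_id"].lower()
        and "begin" not in q["component_id"].lower()
    )
    return sorted(cpnts)

q = [{"component_id": "retrieval:0"}, {"component_id": "begin@lang"}, {"value": "x"}]
print(dependent_components(q))
```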
    def run(self, history, **kwargs):
        logging.debug("{}, history: {}, kwargs: {}".format(self, json.dumps(history, ensure_ascii=False),
                                                           json.dumps(kwargs, ensure_ascii=False)))
        self._param.debug_inputs = []
        try:
            res = self._run(history, **kwargs)
            self.set_output(res)
@ -413,9 +426,14 @@ class ComponentBase(ABC):
    def output(self, allow_partial=True) -> Tuple[str, Union[pd.DataFrame, partial]]:
        o = getattr(self._param, self._param.output_var_name)
        if not isinstance(o, partial):
            if not isinstance(o, pd.DataFrame):
                if isinstance(o, list):
                    return self._param.output_var_name, pd.DataFrame(o)
                if o is None:
                    return self._param.output_var_name, pd.DataFrame()
                return self._param.output_var_name, pd.DataFrame([{"content": str(o)}])
            return self._param.output_var_name, o

        if allow_partial or not isinstance(o, partial):
            if not isinstance(o, partial) and not isinstance(o, pd.DataFrame):
@ -426,53 +444,121 @@ class ComponentBase(ABC):
            for oo in o():
                if not isinstance(oo, pd.DataFrame):
                    outs = pd.DataFrame(oo if isinstance(oo, list) else [oo])
                else:
                    outs = oo
            return self._param.output_var_name, outs
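The reworked `output` now coerces whatever the component stored — an existing DataFrame, a list of records, `None`, or a bare scalar — into a DataFrame. That normalization as a standalone sketch:

```python
import pandas as pd

def to_frame(o):
    # Mirror the branch order above: DataFrame passes through, lists become
    # rows, None becomes an empty frame, anything else is stringified content.
    if isinstance(o, pd.DataFrame):
        return o
    if isinstance(o, list):
        return pd.DataFrame(o)
    if o is None:
        return pd.DataFrame()
    return pd.DataFrame([{"content": str(o)}])
```

Downstream consumers can then index `df["content"]` without guarding against every producer's return type.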
    def reset(self):
        setattr(self._param, self._param.output_var_name, None)
        self._param.inputs = []

    def set_output(self, v):
        setattr(self._param, self._param.output_var_name, v)
    def get_input(self):
        if self._param.debug_inputs:
            return pd.DataFrame([{"content": v["value"]} for v in self._param.debug_inputs if v.get("value")])

        reversed_cpnts = []
        if len(self._canvas.path) > 1:
            reversed_cpnts.extend(self._canvas.path[-2])
        reversed_cpnts.extend(self._canvas.path[-1])

        if self._param.query:
self._param.inputs = []
outs = []
for q in self._param.query:
if q.get("component_id"):
if q["component_id"].split("@")[0].lower().find("begin") >= 0:
cpn_id, key = q["component_id"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] == key:
outs.append(pd.DataFrame([{"content": p.get("value", "")}]))
self._param.inputs.append({"component_id": q["component_id"],
"content": p.get("value", "")})
break
else:
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
if q["component_id"].lower().find("answer") == 0:
txt = []
for r, c in self._canvas.history[::-1][:self._param.message_history_window_size][::-1]:
txt.append(f"{r.upper()}: {c}")
txt = "\n".join(txt)
self._param.inputs.append({"content": txt, "component_id": q["component_id"]})
outs.append(pd.DataFrame([{"content": txt}]))
continue
outs.append(self._canvas.get_component(q["component_id"])["obj"].output(allow_partial=False)[1])
self._param.inputs.append({"component_id": q["component_id"],
"content": "\n".join(
[str(d["content"]) for d in outs[-1].to_dict('records')])})
elif q.get("value"):
self._param.inputs.append({"component_id": None, "content": q["value"]})
outs.append(pd.DataFrame([{"content": q["value"]}]))
if outs:
df = pd.concat(outs, ignore_index=True)
if "content" in df:
df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
return df
        upstream_outs = []

        for u in reversed_cpnts[::-1]:
            if self.get_component_name(u) in ["switch", "concentrator"]:
                continue
            if self.component_name.lower() == "generate" and self.get_component_name(u) == "retrieval":
                o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
                if o is not None:
                    o["component_id"] = u
                    upstream_outs.append(o)
                continue
            #if self.component_name.lower()!="answer" and u not in self._canvas.get_component(self._id)["upstream"]: continue
            if self.component_name.lower().find("switch") < 0 \
                    and self.get_component_name(u) in ["relevant", "categorize"]:
                continue
            if u.lower().find("answer") >= 0:
                for r, c in self._canvas.history[::-1]:
                    if r == "user":
                        upstream_outs.append(pd.DataFrame([{"content": c, "component_id": u}]))
                        break
                break
            if self.component_name.lower().find("answer") >= 0 and self.get_component_name(u) in ["relevant"]:
                continue
            o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
            if o is not None:
                o["component_id"] = u
                upstream_outs.append(o)
            break

        assert upstream_outs, "Can't infer where this component's input comes from. Please specify whose output is this component's input."

        df = pd.concat(upstream_outs, ignore_index=True)
        if "content" in df:
            df = df.drop_duplicates(subset=['content']).reset_index(drop=True)

        self._param.inputs = []
        for _, r in df.iterrows():
            self._param.inputs.append({"component_id": r["component_id"], "content": r["content"]})

        return df
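Upstream outputs are concatenated and de-duplicated on `content` before being fed downstream, so parallel branches producing the same text collapse to one row. A minimal sketch of that merge:

```python
import pandas as pd

def merge_upstream(frames):
    # Concatenate all upstream outputs, then drop rows repeating the same
    # "content"; drop_duplicates keeps the first occurrence.
    df = pd.concat(frames, ignore_index=True)
    if "content" in df:
        df = df.drop_duplicates(subset=["content"]).reset_index(drop=True)
    return df

a = pd.DataFrame([{"content": "x", "component_id": "retrieval:0"}])
b = pd.DataFrame([{"content": "x", "component_id": "generate:0"}])
```

Because the first occurrence wins, the surviving `component_id` is the one from the frame listed earliest.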
def get_input_elements(self):
assert self._param.query, "Please identify input parameters firstly."
eles = []
for q in self._param.query:
if q.get("component_id"):
cpn_id = q["component_id"]
if cpn_id.split("@")[0].lower().find("begin") >= 0:
cpn_id, key = cpn_id.split("@")
eles.extend(self._canvas.get_component(cpn_id)["obj"]._param.query)
continue
eles.append({"name": self._canvas.get_compnent_name(cpn_id), "key": cpn_id})
else:
eles.append({"key": q["value"], "name": q["value"], "value": q["value"]})
return eles
    def get_stream_input(self):
        reversed_cpnts = []
@ -481,7 +567,8 @@
        reversed_cpnts.extend(self._canvas.path[-1])

        for u in reversed_cpnts[::-1]:
            if self.get_component_name(u) in ["switch", "answer"]:
                continue
            return self._canvas.get_component(u)["obj"].output()[1]
    @staticmethod
@ -490,3 +577,10 @@
    def get_component_name(self, cpn_id):
        return self._canvas.get_component(cpn_id)["obj"].component_name.lower()
def debug(self, **kwargs):
return self._run([], **kwargs)
def get_parent(self):
pid = self._canvas.get_component(self._id)["parent_id"]
return self._canvas.get_component(pid)["obj"]

View File

@ -26,6 +26,7 @@ class BeginParam(ComponentParamBase):
    def __init__(self):
        super().__init__()
        self.prologue = "Hi! I'm your smart assistant. What can I do for you?"
        self.query = []

    def check(self):
        return True
@ -42,7 +43,7 @@ class Begin(ComponentBase):
    def stream_output(self):
        res = {"content": self._param.prologue}
        yield res
        self.set_output(self.be_output(res))

View File

@ -13,13 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import requests
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase


class BingParam(ComponentParamBase):
    """
    Define the Bing component parameters.
    """
@ -81,5 +80,5 @@
            return Bing.be_output("")

        df = pd.DataFrame(bing_res)
        logging.debug(f"df: {str(df)}")
        return df

View File

@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate
class CategorizeParam(GenerateParam):
@ -34,15 +34,18 @@ class CategorizeParam(GenerateParam):
        super().check()
        self.check_empty(self.category_description, "[Categorize] Category examples")
        for k, v in self.category_description.items():
            if not k:
                raise ValueError("[Categorize] Category name can not be empty!")
            if not v.get("to"):
                raise ValueError(f"[Categorize] 'To' of category {k} can not be empty!")

    def get_prompt(self, chat_hist):
        cate_lines = []
        for c, desc in self.category_description.items():
            for line in desc.get("examples", "").split("\n"):
                if not line:
                    continue
                cate_lines.append("USER: {}\nCategory: {}".format(line, c))

        descriptions = []
        for c, desc in self.category_description.items():
            if desc.get("description"):
@ -59,11 +62,15 @@
        {}

        You could learn from the above examples.
        Just mention the category names, no need for any additional words.

        ---- Real Data ----
        {}
        """.format(
            len(self.category_description.keys()),
            "/".join(list(self.category_description.keys())),
            "\n".join(descriptions),
            "- ".join(cate_lines),
            chat_hist
        )
        return self.prompt

@ -73,15 +80,18 @@
    def _run(self, history, **kwargs):
        input = self.get_input()
        input = "Question: " + ("; ".join(input["content"]) if "content" in input else "") + "Category: "
        chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
        ans = chat_mdl.chat(self._param.get_prompt(input), [{"role": "user", "content": "\nCategory: "}],
                            self._param.gen_conf())
        logging.debug(f"input: {input}, answer: {str(ans)}")
        for c in self._param.category_description.keys():
            if ans.lower().find(c.lower()) >= 0:
                return Categorize.be_output(self._param.category_description[c]["to"])

        return Categorize.be_output(list(self._param.category_description.items())[-1][1]["to"])

    def debug(self, **kwargs):
        df = self._run([], **kwargs)
        cpn_id = df.iloc[0, 0]
        return Categorize.be_output(self._canvas.get_compnent_name(cpn_id))
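`Categorize._run` picks the first declared category whose name appears (case-insensitively) in the LLM's answer, falling back to the last declared category. That selection logic in isolation, with a hypothetical category table:

```python
def pick_category(ans, category_description):
    # First category whose name occurs as a substring of the answer wins;
    # dict insertion order decides ties and the fallback.
    for c in category_description.keys():
        if ans.lower().find(c.lower()) >= 0:
            return category_description[c]["to"]
    # No match: route to the last declared category, as the component does.
    return list(category_description.items())[-1][1]["to"]

cats = {"Weather": {"to": "qweather:0"}, "Other": {"to": "generate:0"}}
print(pick_category("This is a WEATHER question", cats))
```

Substring matching keeps the component robust to answers like "Category: Weather." that include extra words around the label.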

View File

@ -1,75 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
from api.db import LLMType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api.settings import retrievaler
from agent.component.base import ComponentBase, ComponentParamBase
class CiteParam(ComponentParamBase):
"""
Define the Retrieval component parameters.
"""
def __init__(self):
super().__init__()
self.cite_sources = []
def check(self):
self.check_empty(self.cite_source, "Please specify where you want to cite from.")
class Cite(ComponentBase, ABC):
component_name = "Cite"
def _run(self, history, **kwargs):
input = "\n- ".join(self.get_input()["content"])
sources = [self._canvas.get_component(cpn_id).output()[1] for cpn_id in self._param.cite_source]
query = []
for role, cnt in history[::-1][:self._param.message_history_window_size]:
if role != "user":continue
query.append(cnt)
query = "\n".join(query)
kbs = KnowledgebaseService.get_by_ids(self._param.kb_ids)
if not kbs:
raise ValueError("Can't find knowledgebases by {}".format(self._param.kb_ids))
embd_nms = list(set([kb.embd_id for kb in kbs]))
assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
embd_mdl = LLMBundle(kbs[0].tenant_id, LLMType.EMBEDDING, embd_nms[0])
rerank_mdl = None
if self._param.rerank_id:
rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id)
kbinfos = retrievaler.retrieval(query, embd_mdl, kbs[0].tenant_id, self._param.kb_ids,
1, self._param.top_n,
self._param.similarity_threshold, 1 - self._param.keywords_similarity_weight,
aggs=False, rerank_mdl=rerank_mdl)
if not kbinfos["chunks"]: return pd.DataFrame()
df = pd.DataFrame(kbinfos["chunks"])
df["content"] = df["content_with_weight"]
del df["content_with_weight"]
return df


@@ -0,0 +1,36 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class ConcentratorParam(ComponentParamBase):
"""
Define the Concentrator component parameters.
"""
def __init__(self):
super().__init__()
def check(self):
return True
class Concentrator(ComponentBase, ABC):
component_name = "Concentrator"
def _run(self, history, **kwargs):
return Concentrator.be_output("")


@@ -0,0 +1,67 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import asyncio
from crawl4ai import AsyncWebCrawler
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.web_utils import is_valid_url
class CrawlerParam(ComponentParamBase):
"""
Define the Crawler component parameters.
"""
def __init__(self):
super().__init__()
self.proxy = None
self.extract_type = "markdown"
def check(self):
self.check_valid_value(self.extract_type, "Type of content from the crawler", ['html', 'markdown', 'content'])
class Crawler(ComponentBase, ABC):
component_name = "Crawler"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not is_valid_url(ans):
return Crawler.be_output("URL not valid")
try:
result = asyncio.run(self.get_web(ans))
return Crawler.be_output(result)
except Exception as e:
return Crawler.be_output(f"An unexpected error occurred: {str(e)}")
async def get_web(self, url):
proxy = self._param.proxy if self._param.proxy else None
async with AsyncWebCrawler(verbose=True, proxy=proxy) as crawler:
result = await crawler.arun(
url=url,
bypass_cache=True
)
if self._param.extract_type == 'html':
return result.cleaned_html
elif self._param.extract_type == 'markdown':
return result.markdown
elif self._param.extract_type == 'content':
result.extracted_content
return result.markdown
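As committed, the `content` branch of `get_web` evaluates `result.extracted_content` without returning it, so every non-`html` request falls through to markdown. The intended dispatch can be sketched standalone (hypothetical `pick_extract` helper, crawler result simulated with `SimpleNamespace`):

```python
from types import SimpleNamespace


def pick_extract(result, extract_type: str) -> str:
    """Select the crawler output field by extract_type, defaulting to markdown."""
    if extract_type == "html":
        return result.cleaned_html
    elif extract_type == "markdown":
        return result.markdown
    elif extract_type == "content":
        return result.extracted_content  # the committed code is missing this return
    return result.markdown


r = SimpleNamespace(cleaned_html="<p>hi</p>", markdown="hi", extracted_content="hi!")
print(pick_extract(r, "html"))     # <p>hi</p>
print(pick_extract(r, "content"))  # hi!
```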

View File

@@ -14,7 +14,6 @@
 # limitations under the License.
 #
 from abc import ABC
-import re
 from agent.component.base import ComponentBase, ComponentParamBase
 import deepl


@@ -13,10 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+import logging
 from abc import ABC
 from duckduckgo_search import DDGS
 import pandas as pd
-from agent.settings import DEBUG
 from agent.component.base import ComponentBase, ComponentParamBase

@@ -62,5 +62,5 @@ class DuckDuckGo(ComponentBase, ABC):
             return DuckDuckGo.be_output("")
         df = pd.DataFrame(duck_res)
-        if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
+        logging.debug("df: {df}")
         return df

agent/component/email.py Normal file

@@ -0,0 +1,138 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import json
import smtplib
import logging
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
from email.utils import formataddr
from agent.component.base import ComponentBase, ComponentParamBase
class EmailParam(ComponentParamBase):
"""
Define the Email component parameters.
"""
def __init__(self):
super().__init__()
# Fixed configuration parameters
self.smtp_server = "" # SMTP server address
self.smtp_port = 465 # SMTP port
self.email = "" # Sender email
self.password = "" # Email authorization code
self.sender_name = "" # Sender name
def check(self):
# Check required parameters
self.check_empty(self.smtp_server, "SMTP Server")
self.check_empty(self.email, "Email")
self.check_empty(self.password, "Password")
self.check_empty(self.sender_name, "Sender Name")
class Email(ComponentBase, ABC):
component_name = "Email"
def _run(self, history, **kwargs):
# Get upstream component output and parse JSON
ans = self.get_input()
content = "".join(ans["content"]) if "content" in ans else ""
if not content:
return Email.be_output("No content to send")
success = False
try:
# Parse JSON string passed from upstream
email_data = json.loads(content)
# Validate required fields
if "to_email" not in email_data:
return Email.be_output("Missing required field: to_email")
# Create email object
msg = MIMEMultipart('alternative')
# Properly handle sender name encoding
msg['From'] = formataddr((str(Header(self._param.sender_name,'utf-8')), self._param.email))
msg['To'] = email_data["to_email"]
if "cc_email" in email_data and email_data["cc_email"]:
msg['Cc'] = email_data["cc_email"]
msg['Subject'] = Header(email_data.get("subject", "No Subject"), 'utf-8').encode()
# Use content from email_data or default content
email_content = email_data.get("content", "No content provided")
# msg.attach(MIMEText(email_content, 'plain', 'utf-8'))
msg.attach(MIMEText(email_content, 'html', 'utf-8'))
# Connect to SMTP server and send
logging.info(f"Connecting to SMTP server {self._param.smtp_server}:{self._param.smtp_port}")
context = smtplib.ssl.create_default_context()
with smtplib.SMTP_SSL(self._param.smtp_server, self._param.smtp_port, context=context) as server:
# Login
logging.info(f"Attempting to login with email: {self._param.email}")
server.login(self._param.email, self._param.password)
# Get all recipient list
recipients = [email_data["to_email"]]
if "cc_email" in email_data and email_data["cc_email"]:
recipients.extend(email_data["cc_email"].split(','))
# Send email
logging.info(f"Sending email to recipients: {recipients}")
try:
server.send_message(msg, self._param.email, recipients)
success = True
except Exception as e:
logging.error(f"Error during send_message: {str(e)}")
# Try alternative method
server.sendmail(self._param.email, recipients, msg.as_string())
success = True
try:
server.quit()
except Exception as e:
# Ignore errors when closing connection
logging.warning(f"Non-fatal error during connection close: {str(e)}")
if success:
return Email.be_output("Email sent successfully")
except json.JSONDecodeError:
error_msg = "Invalid JSON format in input"
logging.error(error_msg)
return Email.be_output(error_msg)
except smtplib.SMTPAuthenticationError:
error_msg = "SMTP Authentication failed. Please check your email and authorization code."
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except smtplib.SMTPConnectError:
error_msg = f"Failed to connect to SMTP server {self._param.smtp_server}:{self._param.smtp_port}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except smtplib.SMTPException as e:
error_msg = f"SMTP error occurred: {str(e)}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except Exception as e:
error_msg = f"Unexpected error: {str(e)}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")


@@ -15,12 +15,17 @@
 #
 from abc import ABC
 import re
+from copy import deepcopy
 import pandas as pd
-from peewee import MySQLDatabase, PostgresqlDatabase
-from agent.component.base import ComponentBase, ComponentParamBase
+import pymysql
+import psycopg2
+from agent.component import GenerateParam, Generate
+import pyodbc
+import logging

-class ExeSQLParam(ComponentParamBase):
+class ExeSQLParam(GenerateParam):
     """
     Define the ExeSQL component parameters.
     """
@@ -37,63 +42,114 @@ class ExeSQLParam(ComponentParamBase):
         self.top_n = 30

     def check(self):
-        self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgresql', 'mariadb'])
+        super().check()
+        self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgresql', 'mariadb', 'mssql'])
         self.check_empty(self.database, "Database name")
         self.check_empty(self.username, "database username")
         self.check_empty(self.host, "IP Address")
         self.check_positive_integer(self.port, "IP Port")
         self.check_empty(self.password, "Database password")
         self.check_positive_integer(self.top_n, "Number of records")
+        if self.database == "rag_flow":
+            if self.host == "ragflow-mysql":
+                raise ValueError("The host is not accessible.")
+            if self.password == "infini_rag_flow":
+                raise ValueError("The host is not accessible.")

-class ExeSQL(ComponentBase, ABC):
+class ExeSQL(Generate, ABC):
     component_name = "ExeSQL"

-    def _run(self, history, **kwargs):
-        if not hasattr(self, "_loop"):
-            setattr(self, "_loop", 0)
-        if self._loop >= self._param.loop:
-            self._loop = 0
-            raise Exception("Maximum loop time exceeds. Can't query the correct data via SQL statement.")
-        self._loop += 1
-
-        ans = self.get_input()
-        ans = "".join(ans["content"]) if "content" in ans else ""
-        ans = re.sub(r'^.*?SELECT ', 'SELECT ', repr(ans), flags=re.IGNORECASE)
+    def _refactor(self, ans):
+        match = re.search(r"```sql\s*(.*?)\s*```", ans, re.DOTALL)
+        if match:
+            ans = match.group(1)  # Query content
+            return ans
+        else:
+            print("no markdown")
+        ans = re.sub(r'^.*?SELECT ', 'SELECT ', (ans), flags=re.IGNORECASE)
         ans = re.sub(r';.*?SELECT ', '; SELECT ', ans, flags=re.IGNORECASE)
         ans = re.sub(r';[^;]*$', r';', ans)
         if not ans:
             raise Exception("SQL statement not found!")
+        return ans
+
+    def _run(self, history, **kwargs):
+        ans = self.get_input()
+        ans = "".join([str(a) for a in ans["content"]]) if "content" in ans else ""
+        ans = self._refactor(ans)
+        logging.info("db_type: ", self._param.db_type)
         if self._param.db_type in ["mysql", "mariadb"]:
-            db = MySQLDatabase(self._param.database, user=self._param.username, host=self._param.host,
-                               port=self._param.port, password=self._param.password)
+            db = pymysql.connect(db=self._param.database, user=self._param.username, host=self._param.host,
+                                 port=self._param.port, password=self._param.password)
         elif self._param.db_type == 'postgresql':
-            db = PostgresqlDatabase(self._param.database, user=self._param.username, host=self._param.host,
-                                    port=self._param.port, password=self._param.password)
+            db = psycopg2.connect(dbname=self._param.database, user=self._param.username, host=self._param.host,
+                                  port=self._param.port, password=self._param.password)
+        elif self._param.db_type == 'mssql':
+            conn_str = (
+                r'DRIVER={ODBC Driver 17 for SQL Server};'
+                r'SERVER=' + self._param.host + ',' + str(self._param.port) + ';'
+                r'DATABASE=' + self._param.database + ';'
+                r'UID=' + self._param.username + ';'
+                r'PWD=' + self._param.password
+            )
+            db = pyodbc.connect(conn_str)
         try:
-            db.connect()
+            cursor = db.cursor()
         except Exception as e:
             raise Exception("Database Connection Failed! \n" + str(e))
+        if not hasattr(self, "_loop"):
+            setattr(self, "_loop", 0)
+        self._loop += 1
+        input_list = re.split(r';', ans.replace(r"\n", " "))
         sql_res = []
-        for single_sql in re.split(r';', ans.replace(r"\n", " ")):
-            if not single_sql:
-                continue
-            try:
-                query = db.execute_sql(single_sql)
-                if query.rowcount == 0:
-                    sql_res.append({"content": "\nTotal: " + str(query.rowcount) + "\n No record in the database!"})
-                    continue
-                single_res = pd.DataFrame([i for i in query.fetchmany(size=self._param.top_n)])
-                single_res.columns = [i[0] for i in query.description]
-                sql_res.append({"content": "\nTotal: " + str(query.rowcount) + "\n" + single_res.to_markdown()})
-            except Exception as e:
-                sql_res.append({"content": "**Error**:" + str(e) + "\nError SQL Statement:" + single_sql})
-                pass
+        for i in range(len(input_list)):
+            single_sql = input_list[i]
+            while self._loop <= self._param.loop:
+                self._loop += 1
+                if not single_sql:
+                    break
+                try:
+                    logging.info("single_sql: ", single_sql)
+                    cursor.execute(single_sql)
+                    if cursor.rowcount == 0:
+                        sql_res.append({"content": "No record in the database!"})
+                        break
+                    if self._param.db_type == 'mssql':
+                        single_res = pd.DataFrame.from_records(cursor.fetchmany(self._param.top_n),
+                                                               columns=[desc[0] for desc in cursor.description])
+                    else:
+                        single_res = pd.DataFrame([i for i in cursor.fetchmany(self._param.top_n)])
+                        single_res.columns = [i[0] for i in cursor.description]
+                    sql_res.append({"content": single_res.to_markdown()})
+                    break
+                except Exception as e:
+                    single_sql = self._regenerate_sql(single_sql, str(e), **kwargs)
+                    single_sql = self._refactor(single_sql)
+            if self._loop > self._param.loop:
+                sql_res.append({"content": "Can't query the correct data via SQL statement."})
+                # raise Exception("Maximum loop time exceeds. Can't query the correct data via SQL statement.")
         db.close()
         if not sql_res:
             return ExeSQL.be_output("")
         return pd.DataFrame(sql_res)
+
+    def _regenerate_sql(self, failed_sql, error_message, **kwargs):
+        prompt = f'''
+        ## You are the Repair SQL Statement Helper, please modify the original SQL statement based on the SQL query error report.
+        ## The original SQL statement is as follows:{failed_sql}.
+        ## The contents of the SQL query error report is as follows:{error_message}.
+        ## Answer only the modified SQL statement. Please do not give any explanation, just answer the code.
+        '''
+        self._param.prompt = prompt
+        kwargs_ = deepcopy(kwargs)
+        kwargs_["stream"] = False
+        response = Generate._run(self, [], **kwargs_)
+        try:
+            regenerated_sql = response.loc[0, "content"]
+            return regenerated_sql
+        except Exception as e:
+            logging.error(f"Failed to regenerate SQL: {e}")
+            return None
+
+    def debug(self, **kwargs):
+        return self._run([], **kwargs)
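The SQL-extraction half of `_refactor` can be tried standalone. A sketch with a hypothetical `extract_sql` helper; the Markdown fence is built from single backticks only so this example can live inside documentation without nesting fences:

```python
import re

FENCE = "`" * 3  # the three-backtick Markdown fence


def extract_sql(ans: str) -> str:
    """Sketch of ExeSQL._refactor: prefer a fenced ``sql`` block in the LLM
    reply; otherwise trim everything before the first SELECT and after the
    last semicolon."""
    match = re.search(FENCE + r"sql\s*(.*?)\s*" + FENCE, ans, re.DOTALL)
    if match:
        return match.group(1)
    ans = re.sub(r'^.*?SELECT ', 'SELECT ', ans, flags=re.IGNORECASE)
    ans = re.sub(r';[^;]*$', r';', ans)
    return ans


fenced = "Sure:\n" + FENCE + "sql\nSELECT 1;\n" + FENCE
print(extract_sql(fenced))                              # SELECT 1;
print(extract_sql("Answer: SELECT id FROM t; thanks"))  # SELECT id FROM t;
```

This is why `_regenerate_sql` can feed a raw chatty LLM answer back through `_refactor` before retrying the query.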


@@ -17,8 +17,10 @@ import re
 from functools import partial
 import pandas as pd
 from api.db import LLMType
+from api.db.services.conversation_service import structure_answer
+from api.db.services.dialog_service import message_fit_in
 from api.db.services.llm_service import LLMBundle
-from api.settings import retrievaler
+from api import settings
 from agent.component.base import ComponentBase, ComponentParamBase
@@ -50,11 +52,16 @@ class GenerateParam(ComponentParamBase):
     def gen_conf(self):
         conf = {}
-        if self.max_tokens > 0: conf["max_tokens"] = self.max_tokens
-        if self.temperature > 0: conf["temperature"] = self.temperature
-        if self.top_p > 0: conf["top_p"] = self.top_p
-        if self.presence_penalty > 0: conf["presence_penalty"] = self.presence_penalty
-        if self.frequency_penalty > 0: conf["frequency_penalty"] = self.frequency_penalty
+        if self.max_tokens > 0:
+            conf["max_tokens"] = self.max_tokens
+        if self.temperature > 0:
+            conf["temperature"] = self.temperature
+        if self.top_p > 0:
+            conf["top_p"] = self.top_p
+        if self.presence_penalty > 0:
+            conf["presence_penalty"] = self.presence_penalty
+        if self.frequency_penalty > 0:
+            conf["frequency_penalty"] = self.frequency_penalty
         return conf
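The reformatted `gen_conf` keeps only positive values in the generation config. A standalone sketch of the same filtering (hypothetical free function, defaults invented):

```python
def gen_conf(max_tokens=0, temperature=0.0, top_p=0.0,
             presence_penalty=0.0, frequency_penalty=0.0) -> dict:
    """Sketch of GenerateParam.gen_conf: only strictly positive values
    make it into the config dict sent to the LLM; zeros mean 'unset'."""
    conf = {}
    if max_tokens > 0:
        conf["max_tokens"] = max_tokens
    if temperature > 0:
        conf["temperature"] = temperature
    if top_p > 0:
        conf["top_p"] = top_p
    if presence_penalty > 0:
        conf["presence_penalty"] = presence_penalty
    if frequency_penalty > 0:
        conf["frequency_penalty"] = frequency_penalty
    return conf


print(gen_conf(max_tokens=512, temperature=0.1))
# {'max_tokens': 512, 'temperature': 0.1}
```

One consequence: a deliberately configured `temperature=0` is indistinguishable from "unset" and is silently dropped.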
@@ -62,23 +69,28 @@ class Generate(ComponentBase):
     component_name = "Generate"

     def get_dependent_components(self):
-        cpnts = [para["component_id"] for para in self._param.parameters]
-        return cpnts
+        cpnts = set([para["component_id"].split("@")[0] for para in self._param.parameters \
+                     if para.get("component_id") \
+                     and para["component_id"].lower().find("answer") < 0 \
+                     and para["component_id"].lower().find("begin") < 0])
+        return list(cpnts)

     def set_cite(self, retrieval_res, answer):
         retrieval_res = retrieval_res.dropna(subset=["vector", "content_ltks"]).reset_index(drop=True)
         if "empty_response" in retrieval_res.columns:
             retrieval_res["empty_response"].fillna("", inplace=True)
-        answer, idx = retrievaler.insert_citations(answer, [ck["content_ltks"] for _, ck in retrieval_res.iterrows()],
-                                                   [ck["vector"] for _, ck in retrieval_res.iterrows()],
-                                                   LLMBundle(self._canvas.get_tenant_id(), LLMType.EMBEDDING,
-                                                             self._canvas.get_embedding_model()), tkweight=0.7,
-                                                   vtweight=0.3)
+        answer, idx = settings.retrievaler.insert_citations(answer,
+                                                            [ck["content_ltks"] for _, ck in retrieval_res.iterrows()],
+                                                            [ck["vector"] for _, ck in retrieval_res.iterrows()],
+                                                            LLMBundle(self._canvas.get_tenant_id(), LLMType.EMBEDDING,
+                                                                      self._canvas.get_embedding_model()), tkweight=0.7,
+                                                            vtweight=0.3)
         doc_ids = set([])
         recall_docs = []
         for i in idx:
             did = retrieval_res.loc[int(i), "doc_id"]
-            if did in doc_ids: continue
+            if did in doc_ids:
+                continue
             doc_ids.add(did)
             recall_docs.append({"doc_id": did, "doc_name": retrieval_res.loc[int(i), "docnm_kwd"]})
@@ -91,28 +103,71 @@ class Generate(ComponentBase):
         }

         if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
-            answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
+            answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
         res = {"content": answer, "reference": reference}
+        res = structure_answer(None, res, "", "")
         return res

+    def get_input_elements(self):
+        if self._param.parameters:
+            return [{"key": "user", "name": "Input your question here:"}, *self._param.parameters]
+        return [{"key": "user", "name": "Input your question here:"}]
     def _run(self, history, **kwargs):
         chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
         prompt = self._param.prompt

-        retrieval_res = self.get_input()
-        input = (" - " + "\n - ".join(retrieval_res["content"])) if "content" in retrieval_res else ""
+        retrieval_res = []
+        self._param.inputs = []
         for para in self._param.parameters:
-            cpn = self._canvas.get_component(para["component_id"])["obj"]
+            if not para.get("component_id"):
+                continue
+            component_id = para["component_id"].split("@")[0]
+            if para["component_id"].lower().find("@") >= 0:
+                cpn_id, key = para["component_id"].split("@")
+                for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
+                    if p["key"] == key:
+                        kwargs[para["key"]] = p.get("value", "")
+                        self._param.inputs.append(
+                            {"component_id": para["component_id"], "content": kwargs[para["key"]]})
+                        break
+                else:
+                    assert False, f"Can't find parameter '{key}' for {cpn_id}"
+                continue
+
+            cpn = self._canvas.get_component(component_id)["obj"]
+            if cpn.component_name.lower() == "answer":
+                hist = self._canvas.get_history(1)
+                if hist:
+                    hist = hist[0]["content"]
+                else:
+                    hist = ""
+                kwargs[para["key"]] = hist
+                continue
+
             _, out = cpn.output(allow_partial=False)
             if "content" not in out.columns:
-                kwargs[para["key"]] = "Nothing"
+                kwargs[para["key"]] = ""
             else:
-                kwargs[para["key"]] = " - " + "\n - ".join(out["content"])
+                if cpn.component_name.lower() == "retrieval":
+                    retrieval_res.append(out)
+                kwargs[para["key"]] = " - " + "\n - ".join([o if isinstance(o, str) else str(o) for o in out["content"]])
+            self._param.inputs.append({"component_id": para["component_id"], "content": kwargs[para["key"]]})
+
+        if retrieval_res:
+            retrieval_res = pd.concat(retrieval_res, ignore_index=True)
+        else:
+            retrieval_res = pd.DataFrame([])

-        kwargs["input"] = input
         for n, v in kwargs.items():
-            prompt = re.sub(r"\{%s\}" % n, re.escape(str(v)), prompt)
+            prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
+
+        if not self._param.inputs and prompt.find("{input}") >= 0:
+            retrieval_res = self.get_input()
+            input = (" - " + "\n - ".join(
+                [c for c in retrieval_res["content"] if isinstance(c, str)])) if "content" in retrieval_res else ""
+            prompt = re.sub(r"\{input\}", re.escape(input), prompt)

         downstreams = self._canvas.get_component(self._id)["downstream"]
         if kwargs.get("stream") and len(downstreams) == 1 and self._canvas.get_component(downstreams[0])[
@@ -122,13 +177,19 @@ class Generate(ComponentBase):
         if "empty_response" in retrieval_res.columns and not "".join(retrieval_res["content"]):
             res = {"content": "\n- ".join(retrieval_res["empty_response"]) if "\n- ".join(
                 retrieval_res["empty_response"]) else "Nothing found in knowledgebase!", "reference": []}
-            return Generate.be_output(res)
+            return pd.DataFrame([res])

-        ans = chat_mdl.chat(prompt, self._canvas.get_history(self._param.message_history_window_size),
-                            self._param.gen_conf())
+        msg = self._canvas.get_history(self._param.message_history_window_size)
+        if len(msg) < 1:
+            msg.append({"role": "user", "content": ""})
+        _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
+        if len(msg) < 2:
+            msg.append({"role": "user", "content": ""})
+        ans = chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf())

         if self._param.cite and "content_ltks" in retrieval_res.columns and "vector" in retrieval_res.columns:
-            df = self.set_cite(retrieval_res, ans)
-            return pd.DataFrame(df)
+            res = self.set_cite(retrieval_res, ans)
+            return pd.DataFrame([res])

         return Generate.be_output(ans)

@@ -141,9 +202,14 @@ class Generate(ComponentBase):
             self.set_output(res)
             return

+        msg = self._canvas.get_history(self._param.message_history_window_size)
+        if len(msg) < 1:
+            msg.append({"role": "user", "content": ""})
+        _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
+        if len(msg) < 2:
+            msg.append({"role": "user", "content": ""})
         answer = ""
-        for ans in chat_mdl.chat_streamly(prompt, self._canvas.get_history(self._param.message_history_window_size),
-                                          self._param.gen_conf()):
+        for ans in chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf()):
             res = {"content": ans, "reference": []}
             answer = ans
             yield res

@@ -152,4 +218,17 @@
         res = self.set_cite(retrieval_res, answer)
         yield res

-        self.set_output(res)
+        self.set_output(Generate.be_output(res))
+
+    def debug(self, **kwargs):
+        chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
+        prompt = self._param.prompt
+
+        for para in self._param.debug_inputs:
+            kwargs[para["key"]] = para.get("value", "")
+
+        for n, v in kwargs.items():
+            prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
+
+        ans = chat_mdl.chat(prompt, [{"role": "user", "content": kwargs.get("user", "")}], self._param.gen_conf())
+        return pd.DataFrame([ans])
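The `{placeholder}` substitution shared by `_run` and `debug` can be tried in isolation. A sketch with a hypothetical `fill_prompt` helper:

```python
import re


def fill_prompt(prompt: str, params: dict) -> str:
    """Sketch of the substitution in Generate._run/debug: the key is
    regex-escaped before building the pattern, and backslashes in values
    are flattened to spaces so they cannot corrupt the re.sub replacement
    string (a lone backslash there would raise or mangle output)."""
    for n, v in params.items():
        prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
    return prompt


print(fill_prompt("Answer using {kb} for user {user}.", {"kb": "docs", "user": "alice"}))
# Answer using docs for user alice.
```

Escaping the key matters because parameter names like `begin@file.name` contain regex metacharacters.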


@@ -13,10 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+import logging
 from abc import ABC
 import pandas as pd
 import requests
-from agent.settings import DEBUG
 from agent.component.base import ComponentBase, ComponentParamBase

@@ -57,5 +57,5 @@ class GitHub(ComponentBase, ABC):
             return GitHub.be_output("")
         df = pd.DataFrame(github_res)
-        if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
+        logging.debug(f"df: {df}")
         return df


@@ -13,10 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+import logging
 from abc import ABC
 from serpapi import GoogleSearch
 import pandas as pd
-from agent.settings import DEBUG
 from agent.component.base import ComponentBase, ComponentParamBase

@@ -85,12 +85,12 @@ class Google(ComponentBase, ABC):
                                    "hl": self._param.language, "num": self._param.top_n})
             google_res = [{"content": '<a href="' + i["link"] + '">' + i["title"] + '</a> ' + i["snippet"]} for i in
                           client.get_dict()["organic_results"]]
-        except Exception as e:
+        except Exception:
             return Google.be_output("**ERROR**: Existing Unavailable Parameters!")

         if not google_res:
             return Google.be_output("")

         df = pd.DataFrame(google_res)
-        if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
+        logging.debug(f"df: {df}")
         return df


@@ -13,9 +13,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+import logging
 from abc import ABC
 import pandas as pd
-from agent.settings import DEBUG
 from agent.component.base import ComponentBase, ComponentParamBase
 from scholarly import scholarly

@@ -58,13 +58,13 @@ class GoogleScholar(ComponentBase, ABC):
                         'pub_url'] + '"></a> ' + "\n author: " + ",".join(pub['bib']['author']) + '\n Abstract: ' + pub[
                         'bib'].get('abstract', 'no abstract')})
-        except StopIteration or Exception as e:
-            print("**ERROR** " + str(e))
+        except StopIteration or Exception:
+            logging.exception("GoogleScholar")
             break

         if not scholar_res:
             return GoogleScholar.be_output("")

         df = pd.DataFrame(scholar_res)
-        if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
+        logging.debug(f"df: {df}")
         return df

agent/component/invoke.py Normal file

@@ -0,0 +1,116 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
from abc import ABC
import requests
from deepdoc.parser import HtmlParser
from agent.component.base import ComponentBase, ComponentParamBase
class InvokeParam(ComponentParamBase):
"""
Define the Crawler component parameters.
"""
def __init__(self):
super().__init__()
self.proxy = None
self.headers = ""
self.method = "get"
self.variables = []
self.url = ""
self.timeout = 60
self.clean_html = False
def check(self):
self.check_valid_value(self.method.lower(), "Type of content from the crawler", ['get', 'post', 'put'])
self.check_empty(self.url, "End point URL")
self.check_positive_integer(self.timeout, "Timeout time in second")
self.check_boolean(self.clean_html, "Clean HTML")
class Invoke(ComponentBase, ABC):
component_name = "Invoke"
def _run(self, history, **kwargs):
args = {}
for para in self._param.variables:
if para.get("component_id"):
if '@' in para["component_id"]:
component = para["component_id"].split('@')[0]
field = para["component_id"].split('@')[1]
cpn = self._canvas.get_component(component)["obj"]
for param in cpn._param.query:
if param["key"] == field:
if "value" in param:
args[para["key"]] = param["value"]
else:
cpn = self._canvas.get_component(para["component_id"])["obj"]
if cpn.component_name.lower() == "answer":
args[para["key"]] = self._canvas.get_history(1)[0]["content"]
continue
_, out = cpn.output(allow_partial=False)
if not out.empty:
args[para["key"]] = "\n".join(out["content"])
else:
args[para["key"]] = para["value"]
url = self._param.url.strip()
if url.find("http") != 0:
url = "http://" + url
method = self._param.method.lower()
headers = {}
if self._param.headers:
headers = json.loads(self._param.headers)
proxies = None
if re.sub(r"https?:?/?/?", "", self._param.proxy):
proxies = {"http": self._param.proxy, "https": self._param.proxy}
if method == 'get':
response = requests.get(url=url,
params=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
if method == 'put':
response = requests.put(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
if method == 'post':
response = requests.post(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
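The three request branches in `Invoke._run` differ only in which keyword argument carries the payload. A condensed sketch of that dispatch (the function names here are illustrative, not part of the component):

```python
def build_request_kwargs(method: str, args: dict) -> dict:
    # Mirrors Invoke._run's convention: GET sends args as query params,
    # PUT as form data, POST as a JSON body.
    payload_key = {"get": "params", "put": "data", "post": "json"}
    method = method.lower()
    if method not in payload_key:
        raise ValueError(f"unsupported method: {method}")
    return {payload_key[method]: args}

def normalize_url(url: str) -> str:
    # Same rule as the component: prepend a scheme when none is given.
    url = url.strip()
    if not url.startswith("http"):
        url = "http://" + url
    return url
```

With these helpers, a single `requests.request(method, normalize_url(url), **build_request_kwargs(method, args), ...)` call could replace the three near-identical branches.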


@@ -0,0 +1,45 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class IterationParam(ComponentParamBase):
"""
Define the Iteration component parameters.
"""
def __init__(self):
super().__init__()
self.delimiter = ","
def check(self):
self.check_empty(self.delimiter, "Delimiter")
class Iteration(ComponentBase, ABC):
component_name = "Iteration"
def get_start(self):
for cid in self._canvas.components.keys():
if self._canvas.get_component(cid)["obj"].component_name.lower() != "iterationitem":
continue
if self._canvas.get_component(cid)["parent_id"] == self._id:
return self._canvas.get_component(cid)
def _run(self, history, **kwargs):
return self.output(allow_partial=False)[1]


@@ -0,0 +1,49 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class IterationItemParam(ComponentParamBase):
"""
Define the IterationItem component parameters.
"""
def check(self):
return True
class IterationItem(ComponentBase, ABC):
component_name = "IterationItem"
def __init__(self, canvas, id, param: ComponentParamBase):
super().__init__(canvas, id, param)
self._idx = 0
def _run(self, history, **kwargs):
parent = self.get_parent()
ans = parent.get_input()
ans = parent._param.delimiter.join(ans["content"]) if "content" in ans else ""
ans = [a.strip() for a in ans.split(parent._param.delimiter)]
df = pd.DataFrame([{"content": ans[self._idx]}])
self._idx += 1
if self._idx >= len(ans):
self._idx = -1
return df
def end(self):
return self._idx == -1
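The IterationItem above walks a delimited string one element per `_run` call, flagging exhaustion by setting its index to -1. The same cursor pattern in isolation (a sketch; `Cursor` is not a class in the repo):

```python
class Cursor:
    def __init__(self, text: str, delimiter: str = ","):
        # Split once up front, as IterationItem does with its
        # parent Iteration's delimiter.
        self.items = [s.strip() for s in text.split(delimiter)]
        self._idx = 0

    def next(self) -> str:
        item = self.items[self._idx]
        self._idx += 1
        if self._idx >= len(self.items):
            self._idx = -1  # sentinel checked by end()
        return item

    def end(self) -> bool:
        return self._idx == -1
```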


@@ -1,130 +1,130 @@
# #
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved. # Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # You may obtain a copy of the License at
# #
# http://www.apache.org/licenses/LICENSE-2.0 # http://www.apache.org/licenses/LICENSE-2.0
# #
# Unless required by applicable law or agreed to in writing, software # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, # distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import json import json
from abc import ABC from abc import ABC
import pandas as pd import pandas as pd
import requests import requests
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
class Jin10Param(ComponentParamBase): class Jin10Param(ComponentParamBase):
""" """
Define the Jin10 component parameters. Define the Jin10 component parameters.
""" """
def __init__(self): def __init__(self):
super().__init__() super().__init__()
self.type = "flash" self.type = "flash"
self.secret_key = "xxx" self.secret_key = "xxx"
self.flash_type = '1' self.flash_type = '1'
self.calendar_type = 'cj' self.calendar_type = 'cj'
self.calendar_datatype = 'data' self.calendar_datatype = 'data'
self.symbols_type = 'GOODS' self.symbols_type = 'GOODS'
self.symbols_datatype = 'symbols' self.symbols_datatype = 'symbols'
self.contain = "" self.contain = ""
self.filter = "" self.filter = ""
def check(self): def check(self):
self.check_valid_value(self.type, "Type", ['flash', 'calendar', 'symbols', 'news']) self.check_valid_value(self.type, "Type", ['flash', 'calendar', 'symbols', 'news'])
self.check_valid_value(self.flash_type, "Flash Type", ['1', '2', '3', '4', '5']) self.check_valid_value(self.flash_type, "Flash Type", ['1', '2', '3', '4', '5'])
self.check_valid_value(self.calendar_type, "Calendar Type", ['cj', 'qh', 'hk', 'us']) self.check_valid_value(self.calendar_type, "Calendar Type", ['cj', 'qh', 'hk', 'us'])
self.check_valid_value(self.calendar_datatype, "Calendar DataType", ['data', 'event', 'holiday']) self.check_valid_value(self.calendar_datatype, "Calendar DataType", ['data', 'event', 'holiday'])
self.check_valid_value(self.symbols_type, "Symbols Type", ['GOODS', 'FOREX', 'FUTURE', 'CRYPTO']) self.check_valid_value(self.symbols_type, "Symbols Type", ['GOODS', 'FOREX', 'FUTURE', 'CRYPTO'])
self.check_valid_value(self.symbols_datatype, 'Symbols DataType', ['symbols', 'quotes']) self.check_valid_value(self.symbols_datatype, 'Symbols DataType', ['symbols', 'quotes'])
class Jin10(ComponentBase, ABC): class Jin10(ComponentBase, ABC):
component_name = "Jin10" component_name = "Jin10"
def _run(self, history, **kwargs): def _run(self, history, **kwargs):
ans = self.get_input() ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else "" ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans: if not ans:
return Jin10.be_output("") return Jin10.be_output("")
jin10_res = [] jin10_res = []
headers = {'secret-key': self._param.secret_key} headers = {'secret-key': self._param.secret_key}
try: try:
if self._param.type == "flash": if self._param.type == "flash":
params = { params = {
'category': self._param.flash_type, 'category': self._param.flash_type,
'contain': self._param.contain, 'contain': self._param.contain,
'filter': self._param.filter 'filter': self._param.filter
} }
response = requests.get( response = requests.get(
url='https://open-data-api.jin10.com/data-api/flash?category=' + self._param.flash_type, url='https://open-data-api.jin10.com/data-api/flash?category=' + self._param.flash_type,
headers=headers, data=json.dumps(params)) headers=headers, data=json.dumps(params))
response = response.json() response = response.json()
for i in response['data']: for i in response['data']:
jin10_res.append({"content": i['data']['content']}) jin10_res.append({"content": i['data']['content']})
if self._param.type == "calendar": if self._param.type == "calendar":
params = { params = {
'category': self._param.calendar_type 'category': self._param.calendar_type
} }
response = requests.get( response = requests.get(
url='https://open-data-api.jin10.com/data-api/calendar/' + self._param.calendar_datatype + '?category=' + self._param.calendar_type, url='https://open-data-api.jin10.com/data-api/calendar/' + self._param.calendar_datatype + '?category=' + self._param.calendar_type,
headers=headers, data=json.dumps(params)) headers=headers, data=json.dumps(params))
response = response.json() response = response.json()
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()}) jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
if self._param.type == "symbols": if self._param.type == "symbols":
params = { params = {
'type': self._param.symbols_type 'type': self._param.symbols_type
} }
if self._param.symbols_datatype == "quotes": if self._param.symbols_datatype == "quotes":
params['codes'] = 'BTCUSD' params['codes'] = 'BTCUSD'
response = requests.get( response = requests.get(
url='https://open-data-api.jin10.com/data-api/' + self._param.symbols_datatype + '?type=' + self._param.symbols_type, url='https://open-data-api.jin10.com/data-api/' + self._param.symbols_datatype + '?type=' + self._param.symbols_type,
headers=headers, data=json.dumps(params)) headers=headers, data=json.dumps(params))
response = response.json() response = response.json()
if self._param.symbols_datatype == "symbols": if self._param.symbols_datatype == "symbols":
for i in response['data']: for i in response['data']:
i['Commodity Code'] = i['c'] i['Commodity Code'] = i['c']
i['Stock Exchange'] = i['e'] i['Stock Exchange'] = i['e']
i['Commodity Name'] = i['n'] i['Commodity Name'] = i['n']
i['Commodity Type'] = i['t'] i['Commodity Type'] = i['t']
del i['c'], i['e'], i['n'], i['t'] del i['c'], i['e'], i['n'], i['t']
if self._param.symbols_datatype == "quotes": if self._param.symbols_datatype == "quotes":
for i in response['data']: for i in response['data']:
i['Selling Price'] = i['a'] i['Selling Price'] = i['a']
i['Buying Price'] = i['b'] i['Buying Price'] = i['b']
i['Commodity Code'] = i['c'] i['Commodity Code'] = i['c']
i['Stock Exchange'] = i['e'] i['Stock Exchange'] = i['e']
i['Highest Price'] = i['h'] i['Highest Price'] = i['h']
i['Yesterdays Closing Price'] = i['hc'] i['Yesterdays Closing Price'] = i['hc']
i['Lowest Price'] = i['l'] i['Lowest Price'] = i['l']
i['Opening Price'] = i['o'] i['Opening Price'] = i['o']
i['Latest Price'] = i['p'] i['Latest Price'] = i['p']
i['Market Quote Time'] = i['t'] i['Market Quote Time'] = i['t']
del i['a'], i['b'], i['c'], i['e'], i['h'], i['hc'], i['l'], i['o'], i['p'], i['t'] del i['a'], i['b'], i['c'], i['e'], i['h'], i['hc'], i['l'], i['o'], i['p'], i['t']
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()}) jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
if self._param.type == "news": if self._param.type == "news":
params = { params = {
'contain': self._param.contain, 'contain': self._param.contain,
'filter': self._param.filter 'filter': self._param.filter
} }
response = requests.get( response = requests.get(
url='https://open-data-api.jin10.com/data-api/news', url='https://open-data-api.jin10.com/data-api/news',
headers=headers, data=json.dumps(params)) headers=headers, data=json.dumps(params))
response = response.json() response = response.json()
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()}) jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
except Exception as e: except Exception as e:
return Jin10.be_output("**ERROR**: " + str(e)) return Jin10.be_output("**ERROR**: " + str(e))
if not jin10_res: if not jin10_res:
return Jin10.be_output("") return Jin10.be_output("")
return pd.DataFrame(jin10_res) return pd.DataFrame(jin10_res)
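The symbols/quotes branches above expand the Jin10 API's terse keys ('a', 'b', 'c', ...) into readable column names by assigning and deleting one field at a time. The same rename as a single mapping pass (a sketch, not repo code):

```python
def rename_fields(record: dict, mapping: dict) -> dict:
    # Keys found in the mapping are renamed; anything else passes through.
    return {mapping.get(k, k): v for k, v in record.items()}

QUOTE_FIELDS = {
    "a": "Selling Price", "b": "Buying Price", "c": "Commodity Code",
    "e": "Stock Exchange", "h": "Highest Price",
    "hc": "Yesterdays Closing Price", "l": "Lowest Price",
    "o": "Opening Price", "p": "Latest Price", "t": "Market Quote Time",
}
```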


@@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import logging
import re import re
from abc import ABC from abc import ABC
from api.db import LLMType from api.db import LLMType
from api.db.services.llm_service import LLMBundle from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate from agent.component import GenerateParam, Generate
from agent.settings import DEBUG
class KeywordExtractParam(GenerateParam): class KeywordExtractParam(GenerateParam):
@@ -50,16 +50,16 @@ class KeywordExtract(Generate, ABC):
component_name = "KeywordExtract" component_name = "KeywordExtract"
def _run(self, history, **kwargs): def _run(self, history, **kwargs):
q = "" query = self.get_input()
for r, c in self._canvas.history[::-1]: query = str(query["content"][0]) if "content" in query else ""
if r == "user":
q += c
break
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id) chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": q}], ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": query}],
self._param.gen_conf()) self._param.gen_conf())
ans = re.sub(r".*keyword:", "", ans).strip() ans = re.sub(r".*keyword:", "", ans).strip()
if DEBUG: print(ans, ":::::::::::::::::::::::::::::::::") logging.debug(f"ans: {ans}")
return KeywordExtract.be_output(ans) return KeywordExtract.be_output(ans)
def debug(self, **kwargs):
return self._run([], **kwargs)
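The regex cleanup in `KeywordExtract._run` is easy to miss: it strips any "keyword:" label the model echoes back before the extracted terms. Isolated for clarity (the function name is illustrative):

```python
import re

def strip_keyword_prefix(ans: str) -> str:
    # Same expression as KeywordExtract._run: everything up to and
    # including a "keyword:" label on the same line is dropped.
    return re.sub(r".*keyword:", "", ans).strip()
```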


@@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import logging
from abc import ABC from abc import ABC
from Bio import Entrez from Bio import Entrez
import re import re
import pandas as pd import pandas as pd
import xml.etree.ElementTree as ET import xml.etree.ElementTree as ET
from agent.settings import DEBUG
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
@@ -65,5 +65,5 @@ class PubMed(ComponentBase, ABC):
return PubMed.be_output("") return PubMed.be_output("")
df = pd.DataFrame(pubmed_res) df = pd.DataFrame(pubmed_res)
if DEBUG: print(df, ":::::::::::::::::::::::::::::::::") logging.debug(f"df: {df}")
return df return df


@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import logging
from abc import ABC from abc import ABC
from api.db import LLMType from api.db import LLMType
from api.db.services.llm_service import LLMBundle from api.db.services.llm_service import LLMBundle
@@ -70,11 +71,13 @@ class Relevant(Generate, ABC):
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": ans}], ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": ans}],
self._param.gen_conf()) self._param.gen_conf())
print(ans, ":::::::::::::::::::::::::::::::::") logging.debug(ans)
if ans.lower().find("yes") >= 0: if ans.lower().find("yes") >= 0:
return Relevant.be_output(self._param.yes) return Relevant.be_output(self._param.yes)
if ans.lower().find("no") >= 0: if ans.lower().find("no") >= 0:
return Relevant.be_output(self._param.no) return Relevant.be_output(self._param.no)
assert False, f"Relevant component got: {ans}" assert False, f"Relevant component got: {ans}"
def debug(self, **kwargs):
return self._run([], **kwargs)


@@ -13,14 +13,16 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import logging
from abc import ABC from abc import ABC
import pandas as pd import pandas as pd
from api.db import LLMType from api.db import LLMType
from api.db.services.dialog_service import label_question
from api.db.services.knowledgebase_service import KnowledgebaseService from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle from api.db.services.llm_service import LLMBundle
from api.settings import retrievaler from api import settings
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
@@ -43,22 +45,19 @@ class RetrievalParam(ComponentParamBase):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold") self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
self.check_decimal_float(self.keywords_similarity_weight, "[Retrieval] Keywords similarity weight") self.check_decimal_float(self.keywords_similarity_weight, "[Retrieval] Keywords similarity weight")
self.check_positive_number(self.top_n, "[Retrieval] Top N") self.check_positive_number(self.top_n, "[Retrieval] Top N")
self.check_empty(self.kb_ids, "[Retrieval] Knowledge bases")
class Retrieval(ComponentBase, ABC): class Retrieval(ComponentBase, ABC):
component_name = "Retrieval" component_name = "Retrieval"
def _run(self, history, **kwargs): def _run(self, history, **kwargs):
query = [] query = self.get_input()
for role, cnt in history[::-1][:self._param.message_history_window_size]: query = str(query["content"][0]) if "content" in query else ""
if role != "user":continue
query.append(cnt)
# query = "\n".join(query)
query = query[0]
kbs = KnowledgebaseService.get_by_ids(self._param.kb_ids) kbs = KnowledgebaseService.get_by_ids(self._param.kb_ids)
if not kbs: if not kbs:
raise ValueError("Can't find knowledgebases by {}".format(self._param.kb_ids)) return Retrieval.be_output("")
embd_nms = list(set([kb.embd_id for kb in kbs])) embd_nms = list(set([kb.embd_id for kb in kbs]))
assert len(embd_nms) == 1, "Knowledge bases use different embedding models." assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
@@ -69,10 +68,11 @@ class Retrieval(ComponentBase, ABC):
if self._param.rerank_id: if self._param.rerank_id:
rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id) rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id)
kbinfos = retrievaler.retrieval(query, embd_mdl, kbs[0].tenant_id, self._param.kb_ids, kbinfos = settings.retrievaler.retrieval(query, embd_mdl, kbs[0].tenant_id, self._param.kb_ids,
1, self._param.top_n, 1, self._param.top_n,
self._param.similarity_threshold, 1 - self._param.keywords_similarity_weight, self._param.similarity_threshold, 1 - self._param.keywords_similarity_weight,
aggs=False, rerank_mdl=rerank_mdl) aggs=False, rerank_mdl=rerank_mdl,
rank_feature=label_question(query, kbs))
if not kbinfos["chunks"]: if not kbinfos["chunks"]:
df = Retrieval.be_output("") df = Retrieval.be_output("")
@@ -83,7 +83,7 @@
df = pd.DataFrame(kbinfos["chunks"]) df = pd.DataFrame(kbinfos["chunks"])
df["content"] = df["content_with_weight"] df["content"] = df["content_with_weight"]
del df["content_with_weight"] del df["content_with_weight"]
print(">>>>>>>>>>>>>>>>>>>>>>>>>>\n", query, df) logging.debug("{} {}".format(query, df))
return df return df
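Note the inversion in the retrieval call above: the component stores a keywords similarity weight, while the retriever expects the vector-side weight, hence `1 - self._param.keywords_similarity_weight`. Assuming a plain linear blend (the retriever's internals are not shown in this diff), the effective score looks like:

```python
def hybrid_score(vector_sim: float, keyword_sim: float,
                 keywords_weight: float) -> float:
    # The component's parameter weights keywords; its complement
    # weights the vector similarity.
    return (1.0 - keywords_weight) * vector_sim + keywords_weight * keyword_sim
```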


@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import logging
from abc import ABC from abc import ABC
from api.db import LLMType from api.db import LLMType
from api.db.services.llm_service import LLMBundle from api.db.services.llm_service import LLMBundle
@@ -28,12 +29,11 @@ class RewriteQuestionParam(GenerateParam):
super().__init__() super().__init__()
self.temperature = 0.9 self.temperature = 0.9
self.prompt = "" self.prompt = ""
self.loop = 1
def check(self): def check(self):
super().check() super().check()
def get_prompt(self): def get_prompt(self, conv):
self.prompt = """ self.prompt = """
You are an expert at query expansion to generate a paraphrasing of a question. You are an expert at query expansion to generate a paraphrasing of a question.
I can't retrieval relevant information from the knowledge base by using user's question directly. I can't retrieval relevant information from the knowledge base by using user's question directly.
@@ -43,6 +43,40 @@ class RewriteQuestionParam(GenerateParam):
And return 5 versions of question and one is from translation. And return 5 versions of question and one is from translation.
Just list the question. No other words are needed. Just list the question. No other words are needed.
""" """
return f"""
Role: A helpful assistant
Task: Generate a full user question that would follow the conversation.
Requirements & Restrictions:
- Text generated MUST be in the same language of the original user's question.
- If the user's latest question is already complete, don't do anything, just return the original question.
- DON'T generate anything except a refined question.
######################
-Examples-
######################
# Example 1
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
###############
Output: What's the name of Donald Trump's mother?
------------
# Example 2
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
ASSISTANT: Mary Trump.
User: What's her full name?
###############
Output: What's the full name of Donald Trump's mother Mary Trump?
######################
# Real Data
## Conversation
{conv}
###############
"""
return self.prompt return self.prompt
@@ -50,23 +84,22 @@ class RewriteQuestion(Generate, ABC):
component_name = "RewriteQuestion" component_name = "RewriteQuestion"
def _run(self, history, **kwargs): def _run(self, history, **kwargs):
if not hasattr(self, "_loop"): hist = self._canvas.get_history(self._param.message_history_window_size)
setattr(self, "_loop", 0) conv = []
if self._loop >= self._param.loop: for m in hist:
self._loop = 0 if m["role"] not in ["user", "assistant"]:
raise Exception("Sorry! Nothing relevant found.") continue
self._loop += 1 conv.append("{}: {}".format(m["role"].upper(), m["content"]))
q = "Question: " conv = "\n".join(conv)
for r, c in self._canvas.history[::-1]:
if r == "user":
q += c
break
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id) chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": q}], ans = chat_mdl.chat(self._param.get_prompt(conv), [{"role": "user", "content": "Output: "}],
self._param.gen_conf()) self._param.gen_conf())
self._canvas.history.pop()
self._canvas.history.append(("user", ans))
print(ans, ":::::::::::::::::::::::::::::::::") logging.debug(ans)
return RewriteQuestion.be_output(ans) return RewriteQuestion.be_output(ans)
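The rewritten `_run` serializes the recent turns into the `{conv}` slot of the prompt. Its flattening step, extracted as a standalone sketch:

```python
def flatten_history(hist, window: int = 6) -> str:
    # Keep only chat roles and render them the way the prompt's
    # examples expect: "USER: ..." / "ASSISTANT: ..." lines.
    lines = []
    for m in hist[-window:]:
        if m["role"] not in ("user", "assistant"):
            continue
        lines.append("{}: {}".format(m["role"].upper(), m["content"]))
    return "\n".join(lines)
```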


@@ -41,42 +41,48 @@ class SwitchParam(ComponentParamBase):
def check(self): def check(self):
self.check_empty(self.conditions, "[Switch] conditions") self.check_empty(self.conditions, "[Switch] conditions")
for cond in self.conditions: for cond in self.conditions:
if not cond["to"]: raise ValueError(f"[Switch] 'To' can not be empty!") if not cond["to"]:
raise ValueError("[Switch] 'To' can not be empty!")
class Switch(ComponentBase, ABC): class Switch(ComponentBase, ABC):
component_name = "Switch" component_name = "Switch"
def get_dependent_components(self):
res = []
for cond in self._param.conditions:
for item in cond["items"]:
if not item["cpn_id"]:
continue
if item["cpn_id"].find("begin") >= 0:
continue
cid = item["cpn_id"].split("@")[0]
res.append(cid)
return list(set(res))
def _run(self, history, **kwargs): def _run(self, history, **kwargs):
for cond in self._param.conditions: for cond in self._param.conditions:
res = []
if len(cond["items"]) == 1:
out = self._canvas.get_component(cond["items"][0]["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if self.process_operator(cpn_input, cond["items"][0]["operator"], cond["items"][0]["value"]):
return Switch.be_output(cond["to"])
continue
if cond["logical_operator"] == "and":
res = True
for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if not self.process_operator(cpn_input, item["operator"], item["value"]):
res = False
break
if res:
return Switch.be_output(cond["to"])
continue
res = False
for item in cond["items"]: for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1] if not item["cpn_id"]:
cpn_input = "" if "content" not in out.columns else " ".join(out["content"]) continue
if self.process_operator(cpn_input, item["operator"], item["value"]): cid = item["cpn_id"].split("@")[0]
res = True if item["cpn_id"].find("@") > 0:
break cpn_id, key = item["cpn_id"].split("@")
if res: for p in self._canvas.get_component(cid)["obj"]._param.query:
if p["key"] == key:
res.append(self.process_operator(p.get("value",""), item["operator"], item.get("value", "")))
break
else:
out = self._canvas.get_component(cid)["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join([str(s) for s in out["content"]])
res.append(self.process_operator(cpn_input, item["operator"], item.get("value", "")))
if cond["logical_operator"] != "and" and any(res):
return Switch.be_output(cond["to"])
if all(res):
return Switch.be_output(cond["to"]) return Switch.be_output(cond["to"])
return Switch.be_output(self._param.end_cpn_id) return Switch.be_output(self._param.end_cpn_id)
@@ -104,22 +110,22 @@ class Switch(ComponentBase, ABC):
elif operator == ">": elif operator == ">":
try: try:
return True if float(input) > float(value) else False return True if float(input) > float(value) else False
except Exception as e: except Exception:
return True if input > value else False return True if input > value else False
elif operator == "<": elif operator == "<":
try: try:
return True if float(input) < float(value) else False return True if float(input) < float(value) else False
except Exception as e: except Exception:
return True if input < value else False return True if input < value else False
elif operator == "≥": elif operator == "≥":
try: try:
return True if float(input) >= float(value) else False return True if float(input) >= float(value) else False
except Exception as e: except Exception:
return True if input >= value else False return True if input >= value else False
elif operator == "≤": elif operator == "≤":
try: try:
return True if float(input) <= float(value) else False return True if float(input) <= float(value) else False
except Exception as e: except Exception:
return True if input <= value else False return True if input <= value else False
raise ValueError('Not supported operator' + operator) raise ValueError('Not supported operator' + operator)
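Two patterns from the rewritten Switch are worth isolating: how per-item results combine under the logical operator (note that an empty result list satisfies `all`), and the numeric-first comparison that falls back to string ordering. Both sketches use illustrative names:

```python
import operator

def condition_met(results, logical_operator: str) -> bool:
    # Non-"and" conditions pass on the first hit; "and" needs every item.
    if logical_operator != "and" and any(results):
        return True
    return all(results)

def compare(a: str, b: str, op) -> bool:
    # Try numeric comparison first, as process_operator does, so that
    # "10" > "9" holds even though it fails lexicographically.
    try:
        return op(float(a), float(b))
    except ValueError:
        return op(a, b)
```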

agent/component/template.py (new file, 123 lines)

@@ -0,0 +1,123 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re

from jinja2 import Template as Jinja2Template

from agent.component.base import ComponentBase, ComponentParamBase


class TemplateParam(ComponentParamBase):
    """
    Define the Template component parameters.
    """

    def __init__(self):
        super().__init__()
        self.content = ""
        self.parameters = []

    def check(self):
        self.check_empty(self.content, "[Template] Content")
        return True


class Template(ComponentBase):
    component_name = "Template"

    def get_dependent_components(self):
        # Every component referenced by a parameter, except Answer and Begin.
        cpnts = set(
            [
                para["component_id"].split("@")[0]
                for para in self._param.parameters
                if para.get("component_id")
                and para["component_id"].lower().find("answer") < 0
                and para["component_id"].lower().find("begin") < 0
            ]
        )
        return list(cpnts)

    def _run(self, history, **kwargs):
        content = self._param.content

        self._param.inputs = []
        for para in self._param.parameters:
            if not para.get("component_id"):
                continue
            component_id = para["component_id"].split("@")[0]
            if para["component_id"].lower().find("@") >= 0:
                # "cpn_id@key" references a single query parameter of a component.
                cpn_id, key = para["component_id"].split("@")
                for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
                    if p["key"] == key:
                        value = p.get("value", "")
                        self.make_kwargs(para, kwargs, value)
                        break
                else:
                    assert False, f"Can't find parameter '{key}' for {cpn_id}"
                continue

            cpn = self._canvas.get_component(component_id)["obj"]
            if cpn.component_name.lower() == "answer":
                # Answer components contribute the latest turn of the dialog.
                hist = self._canvas.get_history(1)
                if hist:
                    hist = hist[0]["content"]
                else:
                    hist = ""
                self.make_kwargs(para, kwargs, hist)
                continue

            _, out = cpn.output(allow_partial=False)
            result = ""
            if "content" in out.columns:
                result = "\n".join(
                    [o if isinstance(o, str) else str(o) for o in out["content"]]
                )
            self.make_kwargs(para, kwargs, result)

        template = Jinja2Template(content)
        try:
            content = template.render(kwargs)
        except Exception:
            # Fall back to the plain "{name}" substitution below.
            pass

        for n, v in kwargs.items():
            try:
                v = json.dumps(v, ensure_ascii=False)
            except Exception:
                pass
            # A callable replacement keeps backslashes in the value literal
            # and tolerates values that json.dumps could not serialize.
            content = re.sub(
                r"\{%s\}" % re.escape(n), lambda _: str(v), content
            )
            # Strip double quotes (escaped or not) introduced by json.dumps.
            content = re.sub(
                r"(\\\"|\")", "", content
            )
            # Surround '#' runs with spaces.
            content = re.sub(
                r"(#+)", r" \1 ", content
            )

        return Template.be_output(content)

    def make_kwargs(self, para, kwargs, value):
        self._param.inputs.append(
            {"component_id": para["component_id"], "content": value}
        )
        try:
            value = json.loads(value)
        except Exception:
            pass
        kwargs[para["key"]] = value
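When Jinja2 rendering fails, the component falls back to a plain regex substitution of `{name}` placeholders with JSON-serialized values and then strips the quotes json.dumps introduced. A minimal stdlib-only sketch of that fallback path (the helper name `fallback_render` is illustrative, not part of the component):

```python
import json
import re


def fallback_render(content: str, params: dict) -> str:
    """Mimic the Template component's regex fallback: replace each {name}
    placeholder with a JSON-serialized value, then strip stray quotes."""
    for name, value in params.items():
        try:
            value = json.dumps(value, ensure_ascii=False)
        except Exception:
            value = str(value)
        # A callable replacement avoids interpreting backslashes in the value.
        content = re.sub(r"\{%s\}" % re.escape(name), lambda _: value, content)
    # Drop double quotes (escaped or not) introduced by json.dumps.
    return re.sub(r"(\\\"|\")", "", content)


print(fallback_render("Hello, {user}! Items: {items}",
                      {"user": "Ann", "items": [1, 2]}))
# → Hello, Ann! Items: [1, 2]
```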


@ -1,72 +1,72 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from abc import ABC
import pandas as pd
import time
import requests
from agent.component.base import ComponentBase, ComponentParamBase


class TuShareParam(ComponentParamBase):
    """
    Define the TuShare component parameters.
    """

    def __init__(self):
        super().__init__()
        self.token = "xxx"
        self.src = "eastmoney"
        self.start_date = "2024-01-01 09:00:00"
        self.end_date = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
        self.keyword = ""

    def check(self):
        self.check_valid_value(self.src, "Quick News Source",
                               ["sina", "wallstreetcn", "10jqka", "eastmoney", "yuncaijing", "fenghuang", "jinrongjie"])


class TuShare(ComponentBase, ABC):
    component_name = "TuShare"

    def _run(self, history, **kwargs):
        ans = self.get_input()
        ans = ",".join(ans["content"]) if "content" in ans else ""
        if not ans:
            return TuShare.be_output("")

        try:
            tus_res = []
            params = {
                "api_name": "news",
                "token": self._param.token,
                "params": {"src": self._param.src, "start_date": self._param.start_date,
                           "end_date": self._param.end_date}
            }
            response = requests.post(url="http://api.tushare.pro", data=json.dumps(params).encode('utf-8'))
            response = response.json()
            if response['code'] != 0:
                return TuShare.be_output(response['msg'])
            df = pd.DataFrame(response['data']['items'])
            df.columns = response['data']['fields']
            tus_res.append({"content": (df[df['content'].str.contains(self._param.keyword, case=False)]).to_markdown()})
        except Exception as e:
            return TuShare.be_output("**ERROR**: " + str(e))

        if not tus_res:
            return TuShare.be_output("")

        return pd.DataFrame(tus_res)
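The TuShare API returns column names and row values separately (`data["fields"]` and `data["items"]`), which the component zips back together via `pd.DataFrame`. A stdlib-only sketch of the same reshaping, using a made-up payload and the hypothetical helper name `rows_from_tushare`:

```python
def rows_from_tushare(data: dict) -> list[dict]:
    """Pair each row in 'items' with the column names in 'fields' --
    the same reshaping the component performs with pd.DataFrame."""
    fields = data["fields"]
    return [dict(zip(fields, item)) for item in data["items"]]


# Illustrative payload shaped like a TuShare news response.
payload = {"fields": ["datetime", "content"],
           "items": [["2024-01-01 09:00:00", "Market opens higher"]]}
print(rows_from_tushare(payload))
# → [{'datetime': '2024-01-01 09:00:00', 'content': 'Market opens higher'}]
```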


@ -1,74 +1,80 @@
 #
 # Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
 # http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
 from abc import ABC
 import pandas as pd
 import pywencai
 from agent.component.base import ComponentBase, ComponentParamBase


 class WenCaiParam(ComponentParamBase):
     """
     Define the WenCai component parameters.
     """

     def __init__(self):
         super().__init__()
         self.top_n = 10
         self.query_type = "stock"

     def check(self):
         self.check_positive_integer(self.top_n, "Top N")
         self.check_valid_value(self.query_type, "Query type",
                                ['stock', 'zhishu', 'fund', 'hkstock', 'usstock', 'threeboard', 'conbond', 'insurance',
                                 'futures', 'lccp',
                                 'foreign_exchange'])


 class WenCai(ComponentBase, ABC):
     component_name = "WenCai"

     def _run(self, history, **kwargs):
         ans = self.get_input()
         ans = ",".join(ans["content"]) if "content" in ans else ""
         if not ans:
             return WenCai.be_output("")

         try:
             wencai_res = []
             res = pywencai.get(query=ans, query_type=self._param.query_type, perpage=self._param.top_n)
             if isinstance(res, pd.DataFrame):
                 wencai_res.append({"content": res.to_markdown()})
             if isinstance(res, dict):
                 for item in res.items():
                     if isinstance(item[1], list):
                         wencai_res.append({"content": item[0] + "\n" + pd.DataFrame(item[1]).to_markdown()})
                         continue
                     if isinstance(item[1], str):
                         wencai_res.append({"content": item[0] + "\n" + item[1]})
                         continue
                     if isinstance(item[1], dict):
                         if "meta" in item[1].keys():
                             continue
                         wencai_res.append({"content": pd.DataFrame.from_dict(item[1], orient='index').to_markdown()})
                         continue
+                    if isinstance(item[1], pd.DataFrame):
+                        if "image_url" in item[1].columns:
+                            continue
+                        wencai_res.append({"content": item[1].to_markdown()})
+                        continue
                     wencai_res.append({"content": item[0] + "\n" + str(item[1])})
         except Exception as e:
             return WenCai.be_output("**ERROR**: " + str(e))

         if not wencai_res:
             return WenCai.be_output("")

         return pd.DataFrame(wencai_res)


@ -13,12 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import random
+import logging
 from abc import ABC
-from functools import partial

 import wikipedia
 import pandas as pd

-from agent.settings import DEBUG
 from agent.component.base import ComponentBase, ComponentParamBase
@ -65,5 +63,5 @@ class Wikipedia(ComponentBase, ABC):
             return Wikipedia.be_output("")

         df = pd.DataFrame(wiki_res)
-        if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
+        logging.debug(f"df: {df}")
         return df


@ -1,83 +1,84 @@
 #
 # Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
 # http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+import logging
 from abc import ABC
 import pandas as pd
 from agent.component.base import ComponentBase, ComponentParamBase
 import yfinance as yf


 class YahooFinanceParam(ComponentParamBase):
     """
     Define the YahooFinance component parameters.
     """

     def __init__(self):
         super().__init__()
         self.info = True
         self.history = False
         self.count = False
         self.financials = False
         self.income_stmt = False
         self.balance_sheet = False
         self.cash_flow_statement = False
         self.news = True

     def check(self):
         self.check_boolean(self.info, "get all stock info")
         self.check_boolean(self.history, "get historical market data")
         self.check_boolean(self.count, "show share count")
         self.check_boolean(self.financials, "show financials")
         self.check_boolean(self.income_stmt, "income statement")
         self.check_boolean(self.balance_sheet, "balance sheet")
         self.check_boolean(self.cash_flow_statement, "cash flow statement")
         self.check_boolean(self.news, "show news")


 class YahooFinance(ComponentBase, ABC):
     component_name = "YahooFinance"

     def _run(self, history, **kwargs):
         ans = self.get_input()
         ans = "".join(ans["content"]) if "content" in ans else ""
         if not ans:
             return YahooFinance.be_output("")

         yohoo_res = []
         try:
             msft = yf.Ticker(ans)
             if self._param.info:
                 yohoo_res.append({"content": "info:\n" + pd.Series(msft.info).to_markdown() + "\n"})
             if self._param.history:
                 yohoo_res.append({"content": "history:\n" + msft.history().to_markdown() + "\n"})
             if self._param.financials:
                 yohoo_res.append({"content": "calendar:\n" + pd.DataFrame(msft.calendar).to_markdown() + "\n"})
             if self._param.balance_sheet:
                 yohoo_res.append({"content": "balance sheet:\n" + msft.balance_sheet.to_markdown() + "\n"})
                 yohoo_res.append(
                     {"content": "quarterly balance sheet:\n" + msft.quarterly_balance_sheet.to_markdown() + "\n"})
             if self._param.cash_flow_statement:
                 yohoo_res.append({"content": "cash flow statement:\n" + msft.cashflow.to_markdown() + "\n"})
                 yohoo_res.append(
                     {"content": "quarterly cash flow statement:\n" + msft.quarterly_cashflow.to_markdown() + "\n"})
             if self._param.news:
                 yohoo_res.append({"content": "news:\n" + pd.DataFrame(msft.news).to_markdown() + "\n"})
-        except Exception as e:
-            print("**ERROR** " + str(e))
+        except Exception:
+            logging.exception("YahooFinance got exception")

         if not yohoo_res:
             return YahooFinance.be_output("")

         return pd.DataFrame(yohoo_res)
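The component's `_run` follows a simple pattern: a set of boolean flags (`info`, `history`, `news`, …) gates which report sections get appended. A stdlib-only sketch of that flag-gated assembly, with hypothetical names (`build_report`, the sample section strings):

```python
def build_report(flags: dict, sections: dict) -> list[str]:
    """Collect only the sections whose flag is on -- the same pattern the
    component applies with self._param.info, .history, .news, etc."""
    return [f"{name}:\n{sections[name]}" for name, on in flags.items() if on]


flags = {"info": True, "history": False, "news": True}
sections = {"info": "ticker info table",
            "history": "price table",
            "news": "news table"}
print(build_report(flags, sections))
# → ['info:\nticker info table', 'news:\nnews table']
```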


@ -1,5 +1,5 @@
 #
-# Copyright 2019 The FATE Authors. All Rights Reserved.
+# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@ -13,22 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Logger
-import os
-from api.utils.file_utils import get_project_base_directory
-from api.utils.log_utils import LoggerFactory, getLogger
-
-DEBUG = 0
-LoggerFactory.set_directory(
-    os.path.join(
-        get_project_base_directory(),
-        "logs",
-        "flow"))
-# {CRITICAL: 50, FATAL:50, ERROR:40, WARNING:30, WARN:30, INFO:20, DEBUG:10, NOTSET:0}
-LoggerFactory.LEVEL = 30
-
-flow_logger = getLogger("flow")
-database_logger = getLogger("database")
-
 FLOAT_ZERO = 1e-8
 PARAM_MAXDEPTH = 5

File diff suppressed because one or more lines are too long (11 files)


@ -43,6 +43,7 @@ if __name__ == '__main__':
     else:
         print(ans["content"])
-        if DEBUG: print(canvas.path)
+        if DEBUG:
+            print(canvas.path)
     question = input("\n==================== User =====================\n> ")
     canvas.add_user_input(question)


@ -26,20 +26,48 @@
             "category_description": {
                 "product_related": {
                     "description": "The question is about the product usage, appearance and how it works.",
-                    "examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?"
+                    "examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
+                    "to": "message:0"
                 },
                 "others": {
                     "description": "The question is not about the product usage, appearance and how it works.",
-                    "examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?"
+                    "examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
+                    "to": "message:1"
                 }
             }
         }
     },
-    "downstream": [],
+    "downstream": ["message:0","message:1"],
     "upstream": ["answer:0"]
+},
+"message:0": {
+    "obj": {
+        "component_name": "Message",
+        "params": {
+            "messages": [
+                "Message 0!!!!!!!"
+            ]
+        }
+    },
+    "downstream": ["answer:0"],
+    "upstream": ["categorize:0"]
+},
+"message:1": {
+    "obj": {
+        "component_name": "Message",
+        "params": {
+            "messages": [
+                "Message 1!!!!!!!"
+            ]
+        }
+    },
+    "downstream": ["answer:0"],
+    "upstream": ["categorize:0"]
 }
 },
 "history": [],
+"messages": [],
 "path": [],
+"reference": [],
 "answer": []
 }
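The `"to"` field added to each category description lets the Categorize component route a predicted label straight to a downstream component id. A stdlib sketch of that lookup, using the category names from the canvas above (the `route` helper and its fallback are illustrative):

```python
# Mirrors the "category_description" mapping from the test canvas above.
category_description = {
    "product_related": {"to": "message:0"},
    "others": {"to": "message:1"},
}


def route(category: str, fallback: str = "message:1") -> str:
    """Resolve an LLM-predicted category label to the component id named
    in its 'to' field; unknown labels fall through to a default."""
    return category_description.get(category, {}).get("to", fallback)


print(route("product_related"))  # → message:0
```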


@ -0,0 +1,113 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["categorize:0"],
"upstream": ["begin"]
},
"categorize:0": {
"obj": {
"component_name": "Categorize",
"params": {
"llm_id": "deepseek-chat",
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
"to": "concentrator:0"
},
"others": {
"description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "concentrator:1"
}
}
}
},
"downstream": ["concentrator:0","concentrator:1"],
"upstream": ["answer:0"]
},
"concentrator:0": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:0"],
"upstream": ["categorize:0"]
},
"concentrator:1": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:1_0","message:1_1","message:1_2"],
"upstream": ["categorize:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 0_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:0"]
},
"message:1_0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_1": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_1!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_2": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_2!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}
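A canvas like this is just a directed graph: each component lists its `downstream` ids, and execution walks those edges. A stdlib sketch of traversing the graph above (the back-edges from the message components to `answer:0` are omitted to keep the walk acyclic; `reachable` is an illustrative helper, not the engine's API):

```python
from collections import deque

# Trimmed-down copy of the canvas graph above: component id -> downstream ids
# (message -> answer:0 back-edges dropped for an acyclic sketch).
graph = {
    "begin": ["answer:0"],
    "answer:0": ["categorize:0"],
    "categorize:0": ["concentrator:0", "concentrator:1"],
    "concentrator:0": ["message:0"],
    "concentrator:1": ["message:1_0", "message:1_1", "message:1_2"],
    "message:0": [], "message:1_0": [], "message:1_1": [], "message:1_2": [],
}


def reachable(start: str) -> list[str]:
    """Breadth-first walk over the 'downstream' edges."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order


print(reachable("begin"))
```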


@ -0,0 +1,18 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from beartype.claw import beartype_this_package
beartype_this_package()
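`beartype_this_package()` installs runtime type-checking for every annotated function in the package. To illustrate the idea without depending on beartype itself, here is a toy stdlib decorator that does (a small fraction of) the same job — checking call arguments against annotations; `check_types` and `chunk` are hypothetical names, not beartype's API:

```python
import functools
import inspect


def check_types(fn):
    """A toy version of what beartype automates: validate arguments
    against the function's annotations at call time."""
    sig = inspect.signature(fn)
    # Only plain classes are checked in this sketch (beartype handles far more).
    hints = {k: v for k, v in fn.__annotations__.items() if isinstance(v, type)}

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if name in hints and not isinstance(value, hints[name]):
                raise TypeError(f"{name} must be {hints[name].__name__}")
        return fn(*args, **kwargs)
    return wrapper


@check_types
def chunk(text: str, size: int) -> list:
    return [text[i:i + size] for i in range(0, len(text), size)]


print(chunk("abcdef", 2))  # → ['ab', 'cd', 'ef']
```

Calling `chunk(123, 2)` raises `TypeError` at the call boundary instead of failing somewhere inside the slicing logic, which is the debugging benefit the package-wide hook buys.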


@ -13,14 +13,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+import logging
 import os
 import sys
-import logging
 from importlib.util import module_from_spec, spec_from_file_location
 from pathlib import Path

 from flask import Blueprint, Flask
 from werkzeug.wrappers.request import Request
 from flask_cors import CORS
+from flasgger import Swagger
+from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer

 from api.db import StatusEnum
 from api.db.db_models import close_connection
@ -29,32 +31,60 @@ from api.utils import CustomJSONEncoder, commands
 from flask_session import Session
 from flask_login import LoginManager
-from api.settings import SECRET_KEY, stat_logger
-from api.settings import API_VERSION, access_logger
+from api import settings
 from api.utils.api_utils import server_error_response
-from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
+from api.constants import API_VERSION

-__all__ = ['app']
+__all__ = ["app"]

-logger = logging.getLogger('flask.app')
-for h in access_logger.handlers:
-    logger.addHandler(h)

 Request.json = property(lambda self: self.get_json(force=True, silent=True))

 app = Flask(__name__)
-CORS(app, supports_credentials=True,max_age=2592000)
+
+# Add this at the beginning of your file to configure Swagger UI
+swagger_config = {
+    "headers": [],
+    "specs": [
+        {
+            "endpoint": "apispec",
+            "route": "/apispec.json",
+            "rule_filter": lambda rule: True,  # Include all endpoints
+            "model_filter": lambda tag: True,  # Include all models
+        }
+    ],
+    "static_url_path": "/flasgger_static",
+    "swagger_ui": True,
+    "specs_route": "/apidocs/",
+}
+
+swagger = Swagger(
+    app,
+    config=swagger_config,
+    template={
+        "swagger": "2.0",
+        "info": {
+            "title": "RAGFlow API",
+            "description": "",
+            "version": "1.0.0",
+        },
+        "securityDefinitions": {
+            "ApiKeyAuth": {"type": "apiKey", "name": "Authorization", "in": "header"}
+        },
+    },
+)
+
+CORS(app, supports_credentials=True, max_age=2592000)
 app.url_map.strict_slashes = False
 app.json_encoder = CustomJSONEncoder
 app.errorhandler(Exception)(server_error_response)

 ## convince for dev and debug
-#app.config["LOGIN_DISABLED"] = True
+# app.config["LOGIN_DISABLED"] = True
 app.config["SESSION_PERMANENT"] = False
 app.config["SESSION_TYPE"] = "filesystem"
-app.config['MAX_CONTENT_LENGTH'] = int(os.environ.get("MAX_CONTENT_LENGTH", 128 * 1024 * 1024))
+app.config["MAX_CONTENT_LENGTH"] = int(
+    os.environ.get("MAX_CONTENT_LENGTH", 128 * 1024 * 1024)
+)

 Session(app)
 login_manager = LoginManager()
@ -64,17 +94,23 @@ commands.register_commands(app)

 def search_pages_path(pages_dir):
-    app_path_list = [path for path in pages_dir.glob('*_app.py') if not path.name.startswith('.')]
-    api_path_list = [path for path in pages_dir.glob('*sdk/*.py') if not path.name.startswith('.')]
+    app_path_list = [
+        path for path in pages_dir.glob("*_app.py") if not path.name.startswith(".")
+    ]
+    api_path_list = [
+        path for path in pages_dir.glob("*sdk/*.py") if not path.name.startswith(".")
+    ]
     app_path_list.extend(api_path_list)
     return app_path_list


 def register_page(page_path):
-    path = f'{page_path}'
+    path = f"{page_path}"

-    page_name = page_path.stem.rstrip('_app')
-    module_name = '.'.join(page_path.parts[page_path.parts.index('api'):-1] + (page_name,))
+    page_name = page_path.stem.rstrip("_app")
+    module_name = ".".join(
+        page_path.parts[page_path.parts.index("api"): -1] + (page_name,)
+    )

     spec = spec_from_file_location(module_name, page_path)
     page = module_from_spec(spec)
@ -82,8 +118,10 @@ def register_page(page_path):
     page.manager = Blueprint(page_name, module_name)
     sys.modules[module_name] = page
     spec.loader.exec_module(page)
-    page_name = getattr(page, 'page_name', page_name)
-    url_prefix = f'/api/{API_VERSION}/{page_name}' if "/sdk/" in path else f'/{API_VERSION}/{page_name}'
+    page_name = getattr(page, "page_name", page_name)
+    url_prefix = (
+        f"/api/{API_VERSION}" if "/sdk/" in path else f"/{API_VERSION}/{page_name}"
+    )

     app.register_blueprint(page.manager, url_prefix=url_prefix)
     return url_prefix
@ -91,31 +129,31 @@ def register_page(page_path):

 pages_dir = [
     Path(__file__).parent,
-    Path(__file__).parent.parent / 'api' / 'apps',
-    Path(__file__).parent.parent / 'api' / 'apps' / 'sdk',
+    Path(__file__).parent.parent / "api" / "apps",
+    Path(__file__).parent.parent / "api" / "apps" / "sdk",
 ]

 client_urls_prefix = [
-    register_page(path)
-    for dir in pages_dir
-    for path in search_pages_path(dir)
+    register_page(path) for dir in pages_dir for path in search_pages_path(dir)
 ]


 @login_manager.request_loader
 def load_user(web_request):
-    jwt = Serializer(secret_key=SECRET_KEY)
+    jwt = Serializer(secret_key=settings.SECRET_KEY)
     authorization = web_request.headers.get("Authorization")
     if authorization:
         try:
             access_token = str(jwt.loads(authorization))
-            user = UserService.query(access_token=access_token, status=StatusEnum.VALID.value)
+            user = UserService.query(
+                access_token=access_token, status=StatusEnum.VALID.value
+            )
             if user:
                 return user[0]
             else:
                 return None
         except Exception as e:
-            stat_logger.exception(e)
+            logging.warning(f"load_user got exception {e}")
             return None
     else:
         return None
@ -123,4 +161,4 @@ def load_user(web_request):

 @app.teardown_request
 def _db_close(exc):
     close_connection()
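`register_page` loads each `*_app.py` by file path with `spec_from_file_location` / `module_from_spec` / `exec_module`, rather than a normal import, so blueprints can live anywhere under `api/apps`. A self-contained sketch of that mechanism, using a throwaway module written to a temp directory (the file contents and names here are illustrative):

```python
import sys
import tempfile
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path

# Write a throwaway "page" module, then load it by path -- the same
# spec_from_file_location / exec_module sequence register_page uses.
with tempfile.TemporaryDirectory() as tmp:
    page_path = Path(tmp) / "demo_app.py"
    page_path.write_text("page_name = 'demo'\n\ndef ping():\n    return 'pong'\n")

    module_name = page_path.stem  # 'demo_app'
    spec = spec_from_file_location(module_name, page_path)
    page = module_from_spec(spec)
    sys.modules[module_name] = page  # register before exec, as the app does
    spec.loader.exec_module(page)

print(page.page_name, page.ping())  # → demo pong
```

Registering the module in `sys.modules` before `exec_module` mirrors the app code and lets the loaded module import itself (or be imported by name) during execution.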


@ -22,43 +22,37 @@ from api.db.services.llm_service import TenantLLMService
from flask_login import login_required, current_user from flask_login import login_required, current_user
from api.db import FileType, LLMType, ParserType, FileSource from api.db import FileType, LLMType, ParserType, FileSource
from api.db.db_models import APIToken, API4Conversation, Task, File from api.db.db_models import APIToken, Task, File
from api.db.services import duplicate_name from api.db.services import duplicate_name
 from api.db.services.api_service import APITokenService, API4ConversationService
-from api.db.services.dialog_service import DialogService, chat
+from api.db.services.dialog_service import DialogService, chat, keyword_extraction, label_question
 from api.db.services.document_service import DocumentService, doc_upload_and_parse
 from api.db.services.file2document_service import File2DocumentService
 from api.db.services.file_service import FileService
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.db.services.task_service import queue_tasks, TaskService
 from api.db.services.user_service import UserTenantService
-from api.settings import RetCode, retrievaler
+from api import settings
 from api.utils import get_uuid, current_timestamp, datetime_format
-from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request
-from itsdangerous import URLSafeTimedSerializer
+from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request, \
+    generate_confirmation_token
 from api.utils.file_utils import filename_type, thumbnail
-from rag.nlp import keyword_extraction
 from rag.utils.storage_factory import STORAGE_IMPL
-from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
+from api.db.services.canvas_service import UserCanvasService
 from agent.canvas import Canvas
 from functools import partial


-def generate_confirmation_token(tenent_id):
-    serializer = URLSafeTimedSerializer(tenent_id)
-    return "ragflow-" + serializer.dumps(get_uuid(), salt=tenent_id)[2:34]


-@manager.route('/new_token', methods=['POST'])
+@manager.route('/new_token', methods=['POST'])  # noqa: F821
 @login_required
 def new_token():
     req = request.json
     try:
         tenants = UserTenantService.query(user_id=current_user.id)
         if not tenants:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
         tenant_id = tenants[0].tenant_id
         obj = {"tenant_id": tenant_id, "token": generate_confirmation_token(tenant_id),
@@ -74,20 +68,20 @@ def new_token():
             obj["dialog_id"] = req["dialog_id"]
         if not APITokenService.save(**obj):
-            return get_data_error_result(retmsg="Fail to new a dialog!")
+            return get_data_error_result(message="Fail to new a dialog!")
         return get_json_result(data=obj)
     except Exception as e:
         return server_error_response(e)
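The diff deletes the module-local `generate_confirmation_token` helper (note its `tenent_id` typo) in favor of one imported from `api.utils.api_utils`. A stdlib stand-in for what the removed lines do — the real helper signs a fresh UUID with itsdangerous' `URLSafeTimedSerializer`, salted by the tenant id, not HMAC as here:

```python
import hashlib
import hmac
import uuid


def generate_confirmation_token(tenant_id: str) -> str:
    """Stdlib sketch of the relocated helper: derive an opaque,
    tenant-scoped API token from a fresh UUID."""
    payload = uuid.uuid4().hex
    sig = hmac.new(tenant_id.encode(), payload.encode(), hashlib.sha256).hexdigest()
    # Mirror the original's shape: "ragflow-" prefix plus a fixed 32-char slice.
    return "ragflow-" + sig[:32]


tok = generate_confirmation_token("tenant-42")
```

Because the payload is a random UUID, every call yields a distinct token even for the same tenant, which is what makes `/new_token` usable for key rotation.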
-@manager.route('/token_list', methods=['GET'])
+@manager.route('/token_list', methods=['GET'])  # noqa: F821
 @login_required
 def token_list():
     try:
         tenants = UserTenantService.query(user_id=current_user.id)
         if not tenants:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
         id = request.args["dialog_id"] if "dialog_id" in request.args else request.args["canvas_id"]
         objs = APITokenService.query(tenant_id=tenants[0].tenant_id, dialog_id=id)
@@ -96,7 +90,7 @@ def token_list():
         return server_error_response(e)
-@manager.route('/rm', methods=['POST'])
+@manager.route('/rm', methods=['POST'])  # noqa: F821
 @validate_request("tokens", "tenant_id")
 @login_required
 def rm():
@@ -110,13 +104,13 @@ def rm():
         return server_error_response(e)


-@manager.route('/stats', methods=['GET'])
+@manager.route('/stats', methods=['GET'])  # noqa: F821
 @login_required
 def stats():
     try:
         tenants = UserTenantService.query(user_id=current_user.id)
         if not tenants:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
         objs = API4ConversationService.stats(
             tenants[0].tenant_id,
             request.args.get(
@@ -141,14 +135,13 @@ def stats():
         return server_error_response(e)
-@manager.route('/new_conversation', methods=['GET'])
+@manager.route('/new_conversation', methods=['GET'])  # noqa: F821
 def set_conversation():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
-    req = request.json
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
         if objs[0].source == "agent":
             e, cvs = UserCanvasService.get_by_id(objs[0].dialog_id)
@@ -169,7 +162,7 @@ def set_conversation():
         else:
             e, dia = DialogService.get_by_id(objs[0].dialog_id)
             if not e:
-                return get_data_error_result(retmsg="Dialog not found")
+                return get_data_error_result(message="Dialog not found")
             conv = {
                 "id": get_uuid(),
                 "dialog_id": dia.id,
@@ -182,19 +175,20 @@ def set_conversation():
         return server_error_response(e)
-@manager.route('/completion', methods=['POST'])
+@manager.route('/completion', methods=['POST'])  # noqa: F821
 @validate_request("conversation_id", "messages")
 def completion():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
     e, conv = API4ConversationService.get_by_id(req["conversation_id"])
     if not e:
-        return get_data_error_result(retmsg="Conversation not found!")
-    if "quote" not in req: req["quote"] = False
+        return get_data_error_result(message="Conversation not found!")
+    if "quote" not in req:
+        req["quote"] = False

     msg = []
     for m in req["messages"]:
@@ -203,7 +197,8 @@ def completion():
         if m["role"] == "assistant" and not msg:
             continue
         msg.append(m)
-    if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
+    if not msg[-1].get("id"):
+        msg[-1]["id"] = get_uuid()
     message_id = msg[-1]["id"]

     def fillin_conv(ans):
@@ -263,19 +258,20 @@ def completion():
                     ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
                     fillin_conv(ans)
                     rename_field(ans)
-                    yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans},
+                    yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
                                                ensure_ascii=False) + "\n\n"

                 canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
+                canvas.history.append(("assistant", final_ans["content"]))
                 if final_ans.get("reference"):
                     canvas.reference.append(final_ans["reference"])
                 cvs.dsl = json.loads(str(canvas))
                 API4ConversationService.append_message(conv.id, conv.to_dict())
             except Exception as e:
-                yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
+                yield "data:" + json.dumps({"code": 500, "message": str(e),
                                             "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
                                            ensure_ascii=False) + "\n\n"
-            yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
+            yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"

         resp = Response(sse(), mimetype="text/event-stream")
         resp.headers.add_header("Cache-control", "no-cache")
@@ -295,12 +291,12 @@ def completion():
             API4ConversationService.append_message(conv.id, conv.to_dict())
             rename_field(result)
             return get_json_result(data=result)

-        #******************For dialog******************
+        # ******************For dialog******************
         conv.message.append(msg[-1])
         e, dia = DialogService.get_by_id(conv.dialog_id)
         if not e:
-            return get_data_error_result(retmsg="Dialog not found!")
+            return get_data_error_result(message="Dialog not found!")
         del req["conversation_id"]
         del req["messages"]
@@ -315,14 +311,14 @@ def completion():
                 for ans in chat(dia, msg, True, **req):
                     fillin_conv(ans)
                     rename_field(ans)
-                    yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans},
+                    yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
                                                ensure_ascii=False) + "\n\n"
                 API4ConversationService.append_message(conv.id, conv.to_dict())
             except Exception as e:
-                yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
+                yield "data:" + json.dumps({"code": 500, "message": str(e),
                                             "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
                                            ensure_ascii=False) + "\n\n"
-            yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
+            yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"

         if req.get("stream", True):
             resp = Response(stream(), mimetype="text/event-stream")
@@ -331,7 +327,7 @@ def completion():
             resp.headers.add_header("X-Accel-Buffering", "no")
             resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
             return resp

         answer = None
         for ans in chat(dia, msg, **req):
             answer = ans
@@ -345,25 +341,25 @@ def completion():
         return server_error_response(e)
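Each frame the generators above yield is a server-sent event: `data:` followed by a JSON envelope using the renamed `code`/`message`/`data` fields, terminated by a blank line, with a final `"data": True` frame marking the end of the stream. A minimal client-side parser sketch (the frame layout is taken from the yields above; nothing else is assumed):

```python
import json


def parse_sse(raw: str):
    """Split a text/event-stream body into its JSON payloads."""
    events = []
    for frame in raw.split("\n\n"):
        frame = frame.strip()
        if frame.startswith("data:"):
            events.append(json.loads(frame[len("data:"):]))
    return events


# Two frames as /completion would emit them: one answer chunk, one terminator.
raw = (
    "data:" + json.dumps({"code": 0, "message": "", "data": {"answer": "Hi", "reference": []}}) + "\n\n"
    "data:" + json.dumps({"code": 0, "message": "", "data": True}) + "\n\n"
)
events = parse_sse(raw)
```

A client knows the answer is complete when it sees the frame whose `data` field is the bare boolean `True` rather than an answer object.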
-@manager.route('/conversation/<conversation_id>', methods=['GET'])
+@manager.route('/conversation/<conversation_id>', methods=['GET'])  # noqa: F821
 # @login_required
 def get(conversation_id):
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
         e, conv = API4ConversationService.get_by_id(conversation_id)
         if not e:
-            return get_data_error_result(retmsg="Conversation not found!")
+            return get_data_error_result(message="Conversation not found!")
         conv = conv.to_dict()
         if token != APIToken.query(dialog_id=conv['dialog_id'])[0].token:
-            return get_json_result(data=False, retmsg='Token is not valid for this conversation_id!"',
-                                   retcode=RetCode.AUTHENTICATION_ERROR)
+            return get_json_result(data=False, message='Authentication error: API key is invalid for this conversation_id!"',
+                                   code=settings.RetCode.AUTHENTICATION_ERROR)

         for referenct_i in conv['reference']:
             if referenct_i is None or len(referenct_i) == 0:
                 continue
@@ -376,14 +372,14 @@ def get(conversation_id):
         return server_error_response(e)
-@manager.route('/document/upload', methods=['POST'])
+@manager.route('/document/upload', methods=['POST'])  # noqa: F821
 @validate_request("kb_name")
 def upload():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)

     kb_name = request.form.get("kb_name").strip()
     tenant_id = objs[0].tenant_id
@@ -392,19 +388,19 @@ def upload():
         e, kb = KnowledgebaseService.get_by_name(kb_name, tenant_id)
         if not e:
             return get_data_error_result(
-                retmsg="Can't find this knowledgebase!")
+                message="Can't find this knowledgebase!")
         kb_id = kb.id
     except Exception as e:
         return server_error_response(e)

     if 'file' not in request.files:
         return get_json_result(
-            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)

     file = request.files['file']
     if file.filename == '':
         return get_json_result(
-            data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)

     root_folder = FileService.get_root_folder(tenant_id)
     pf_id = root_folder["id"]
@@ -415,7 +411,7 @@ def upload():
     try:
         if DocumentService.get_doc_count(kb.tenant_id) >= int(os.environ.get('MAX_FILE_NUM_PER_USER', 8192)):
             return get_data_error_result(
-                retmsg="Exceed the maximum file number of a free user!")
+                message="Exceed the maximum file number of a free user!")

         filename = duplicate_name(
             DocumentService.query,
@@ -424,7 +420,7 @@ def upload():
         filetype = filename_type(filename)
         if not filetype:
             return get_data_error_result(
-                retmsg="This type of file has not been supported yet!")
+                message="This type of file has not been supported yet!")

         location = filename
         while STORAGE_IMPL.obj_exist(kb_id, location):
@@ -454,6 +450,8 @@ def upload():
             doc["parser_id"] = ParserType.AUDIO.value
         if re.search(r"\.(ppt|pptx|pages)$", filename):
             doc["parser_id"] = ParserType.PRESENTATION.value
+        if re.search(r"\.(eml)$", filename):
+            doc["parser_id"] = ParserType.EMAIL.value

         doc_result = DocumentService.insert(doc)
         FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
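The hunk above appends `.eml` detection to the chain of extension checks in `upload()`, each of which overrides `doc["parser_id"]` in turn, so the last matching rule wins. A sketch of that dispatch pattern — the rule table and the `"naive"` fallback here are illustrative, not the handler's actual defaults:

```python
import re

# Ordered rules mirroring the sequential `if` checks; later matches override earlier ones.
RULES = [
    (r"\.(ppt|pptx|pages)$", "presentation"),
    (r"\.(eml)$", "email"),
]


def pick_parser(filename: str, default: str = "naive") -> str:
    """Return the parser id for a filename, falling back to a default."""
    parser = default
    for pattern, parser_id in RULES:
        if re.search(pattern, filename):
            parser = parser_id
    return parser
```

Like the original, the patterns are anchored with `$` so only the true file extension matters, not an extension-like substring in the middle of the name.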
@@ -471,14 +469,14 @@ def upload():
             # if str(req["run"]) == TaskStatus.CANCEL.value:
             tenant_id = DocumentService.get_tenant_id(doc["id"])
             if not tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
+                return get_data_error_result(message="Tenant not found!")

             # e, doc = DocumentService.get_by_id(doc["id"])
             TaskService.filter_delete([Task.doc_id == doc["id"]])
             e, doc = DocumentService.get_by_id(doc["id"])
             doc = doc.to_dict()
             doc["tenant_id"] = tenant_id
-            bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
+            bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
             queue_tasks(doc, bucket, name)
         except Exception as e:
             return server_error_response(e)
@@ -486,37 +484,37 @@ def upload():
     return get_json_result(data=doc_result.to_json())
-@manager.route('/document/upload_and_parse', methods=['POST'])
+@manager.route('/document/upload_and_parse', methods=['POST'])  # noqa: F821
 @validate_request("conversation_id")
 def upload_parse():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)

     if 'file' not in request.files:
         return get_json_result(
-            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)

     file_objs = request.files.getlist('file')
     for file_obj in file_objs:
         if file_obj.filename == '':
             return get_json_result(
-                data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
+                data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)

     doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, objs[0].tenant_id)
     return get_json_result(data=doc_ids)
-@manager.route('/list_chunks', methods=['POST'])
+@manager.route('/list_chunks', methods=['POST'])  # noqa: F821
 # @login_required
 def list_chunks():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)

     req = request.json
@@ -530,15 +528,16 @@ def list_chunks():
             doc_id = req['doc_id']
         else:
             return get_json_result(
-                data=False, retmsg="Can't find doc_name or doc_id"
+                data=False, message="Can't find doc_name or doc_id"
             )
+        kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)

-        res = retrievaler.chunk_list(doc_id=doc_id, tenant_id=tenant_id)
+        res = settings.retrievaler.chunk_list(doc_id, tenant_id, kb_ids)
         res = [
             {
                 "content": res_item["content_with_weight"],
                 "doc_name": res_item["docnm_kwd"],
-                "img_id": res_item["img_id"]
+                "image_id": res_item["img_id"]
             } for res_item in res
         ]
@@ -548,14 +547,14 @@ def list_chunks():
     return get_json_result(data=res)
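As the hunk shows, `list_chunks` maps the engine's internal field names onto a public schema, and this change also renames the exposed key from `img_id` to `image_id`. The reshaping in isolation, with made-up sample data:

```python
# One raw chunk as the retriever returns it (sample values are invented).
raw_chunks = [
    {"content_with_weight": "Q: How do I reset?", "docnm_kwd": "faq.pdf", "img_id": "kb1-img001"},
]

# Re-key internal names to the public API schema; note img_id -> image_id.
public = [
    {
        "content": c["content_with_weight"],
        "doc_name": c["docnm_kwd"],
        "image_id": c["img_id"],
    }
    for c in raw_chunks
]
```

Keeping this projection explicit means internal index fields never leak into API responses, but it also means the rename is a breaking change for any client still reading `img_id`.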
-@manager.route('/list_kb_docs', methods=['POST'])
+@manager.route('/list_kb_docs', methods=['POST'])  # noqa: F821
 # @login_required
 def list_kb_docs():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)

     req = request.json
     tenant_id = objs[0].tenant_id
@@ -565,7 +564,7 @@ def list_kb_docs():
         e, kb = KnowledgebaseService.get_by_name(kb_name, tenant_id)
         if not e:
             return get_data_error_result(
-                retmsg="Can't find this knowledgebase!")
+                message="Can't find this knowledgebase!")
         kb_id = kb.id
     except Exception as e:
@@ -587,28 +586,29 @@ def list_kb_docs():
     except Exception as e:
         return server_error_response(e)
-@manager.route('/document/infos', methods=['POST'])
+@manager.route('/document/infos', methods=['POST'])  # noqa: F821
 @validate_request("doc_ids")
 def docinfos():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
     doc_ids = req["doc_ids"]
     docs = DocumentService.get_by_ids(doc_ids)
     return get_json_result(data=list(docs.dicts()))
-@manager.route('/document', methods=['DELETE'])
+@manager.route('/document', methods=['DELETE'])  # noqa: F821
 # @login_required
 def document_rm():
     token = request.headers.get('Authorization').split()[1]
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)

     tenant_id = objs[0].tenant_id
     req = request.json
@@ -620,7 +620,7 @@ def document_rm():
         if not doc_ids:
             return get_json_result(
-                data=False, retmsg="Can't find doc_names or doc_ids"
+                data=False, message="Can't find doc_names or doc_ids"
             )
     except Exception as e:
@@ -635,16 +635,16 @@ def document_rm():
         try:
             e, doc = DocumentService.get_by_id(doc_id)
             if not e:
-                return get_data_error_result(retmsg="Document not found!")
+                return get_data_error_result(message="Document not found!")
             tenant_id = DocumentService.get_tenant_id(doc_id)
             if not tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
+                return get_data_error_result(message="Tenant not found!")

-            b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
+            b, n = File2DocumentService.get_storage_address(doc_id=doc_id)

             if not DocumentService.remove_document(doc, tenant_id):
                 return get_data_error_result(
-                    retmsg="Database error (Document removal)!")
+                    message="Database error (Document removal)!")

             f2d = File2DocumentService.get_by_document_id(doc_id)
             FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
@@ -655,12 +655,12 @@ def document_rm():
             errors += str(e)

     if errors:
-        return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)
+        return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
     return get_json_result(data=True)
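Most one-line changes in this diff are the same mechanical rename: `retcode` becomes `code` and `retmsg` becomes `message` in every JSON envelope. A sketch of a compatibility shim a client might apply while both shapes are in the wild — the key names come from the diff, but the shim itself is hypothetical:

```python
def upgrade_envelope(old: dict) -> dict:
    """Translate a legacy retcode/retmsg envelope to the new code/message shape.
    Responses that already use the new keys pass through unchanged."""
    new = dict(old)
    if "retcode" in new:
        new["code"] = new.pop("retcode")
    if "retmsg" in new:
        new["message"] = new.pop("retmsg")
    return new


legacy = {"retcode": 109, "retmsg": "Token is not valid!", "data": False}
```

Because the shim copies before popping, it is safe to run over every response unconditionally rather than sniffing the server version first.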
-@manager.route('/completion_aibotk', methods=['POST'])
+@manager.route('/completion_aibotk', methods=['POST'])  # noqa: F821
 @validate_request("Authorization", "conversation_id", "word")
 def completion_faq():
     import base64
@@ -670,36 +670,101 @@ def completion_faq():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)

     e, conv = API4ConversationService.get_by_id(req["conversation_id"])
     if not e:
-        return get_data_error_result(retmsg="Conversation not found!")
-    if "quote" not in req: req["quote"] = True
+        return get_data_error_result(message="Conversation not found!")
+    if "quote" not in req:
+        req["quote"] = True

     msg = []
     msg.append({"role": "user", "content": req["word"]})
+    if not msg[-1].get("id"):
+        msg[-1]["id"] = get_uuid()
+    message_id = msg[-1]["id"]
+
+    def fillin_conv(ans):
+        nonlocal conv, message_id
+        if not conv.reference:
+            conv.reference.append(ans["reference"])
+        else:
+            conv.reference[-1] = ans["reference"]
+        conv.message[-1] = {"role": "assistant", "content": ans["answer"], "id": message_id}
+        ans["id"] = message_id

     try:
+        if conv.source == "agent":
+            conv.message.append(msg[-1])
+            e, cvs = UserCanvasService.get_by_id(conv.dialog_id)
+            if not e:
+                return server_error_response("canvas not found.")
+
+            if not isinstance(cvs.dsl, str):
+                cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
+
+            if not conv.reference:
+                conv.reference = []
+            conv.message.append({"role": "assistant", "content": "", "id": message_id})
+            conv.reference.append({"chunks": [], "doc_aggs": []})
+
+            final_ans = {"reference": [], "doc_aggs": []}
+            canvas = Canvas(cvs.dsl, objs[0].tenant_id)
+
+            canvas.messages.append(msg[-1])
+            canvas.add_user_input(msg[-1]["content"])
+            answer = canvas.run(stream=False)
+
+            assert answer is not None, "Nothing. Is it over?"
+
+            data_type_picture = {
+                "type": 3,
+                "url": "base64 content"
+            }
+            data = [
+                {
+                    "type": 1,
+                    "content": ""
+                }
+            ]
+            final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
+            canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
+            if final_ans.get("reference"):
+                canvas.reference.append(final_ans["reference"])
+            cvs.dsl = json.loads(str(canvas))
+
+            ans = {"answer": final_ans["content"], "reference": final_ans.get("reference", [])}
+            data[0]["content"] += re.sub(r'##\d\$\$', '', ans["answer"])
+            fillin_conv(ans)
+            API4ConversationService.append_message(conv.id, conv.to_dict())
+
+            chunk_idxs = [int(match[2]) for match in re.findall(r'##\d\$\$', ans["answer"])]
+            for chunk_idx in chunk_idxs[:1]:
+                if ans["reference"]["chunks"][chunk_idx]["img_id"]:
+                    try:
+                        bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
+                        response = STORAGE_IMPL.get(bkt, nm)
+                        data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
+                        data.append(data_type_picture)
+                        break
+                    except Exception as e:
+                        return server_error_response(e)
+
+            response = {"code": 200, "msg": "success", "data": data}
+            return response
+        # ******************For dialog******************
         conv.message.append(msg[-1])
         e, dia = DialogService.get_by_id(conv.dialog_id)
         if not e:
-            return get_data_error_result(retmsg="Dialog not found!")
+            return get_data_error_result(message="Dialog not found!")
         del req["conversation_id"]

         if not conv.reference:
             conv.reference = []
-        conv.message.append({"role": "assistant", "content": ""})
+        conv.message.append({"role": "assistant", "content": "", "id": message_id})
         conv.reference.append({"chunks": [], "doc_aggs": []})

-        def fillin_conv(ans):
-            nonlocal conv
-            if not conv.reference:
-                conv.reference.append(ans["reference"])
-            else:
-                conv.reference[-1] = ans["reference"]
-            conv.message[-1] = {"role": "assistant", "content": ans["answer"]}

         data_type_picture = {
             "type": 3,
             "url": "base64 content"
@@ -737,17 +802,17 @@ def completion_faq():
         return server_error_response(e)
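The new agent branch above resolves citations by scanning the answer for `##<digit>$$` markers: it strips them for display and uses the digit as an index into `reference["chunks"]` to locate an image. The two regex steps in isolation, on an invented answer string:

```python
import re

answer = "See the diagram ##0$$ and the table ##2$$."

# 1. Strip the markers so the user-facing text reads cleanly.
display_text = re.sub(r"##\d\$\$", "", answer)

# 2. Collect the referenced chunk indices; each match is the literal
#    string "##<digit>$$", so position [2] is the digit character.
chunk_idxs = [int(m[2]) for m in re.findall(r"##\d\$\$", answer)]
```

Note the pattern only allows a single digit, so this scheme addresses at most chunks 0 through 9; the handler above also only resolves the first index (`chunk_idxs[:1]`) into an image.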
@manager.route('/retrieval', methods=['POST']) @manager.route('/retrieval', methods=['POST']) # noqa: F821
@validate_request("kb_id", "question") @validate_request("kb_id", "question")
def retrieval(): def retrieval():
token = request.headers.get('Authorization').split()[1] token = request.headers.get('Authorization').split()[1]
objs = APIToken.query(token=token) objs = APIToken.query(token=token)
if not objs: if not objs:
return get_json_result( return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR) data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json req = request.json
kb_ids = req.get("kb_id",[]) kb_ids = req.get("kb_id", [])
doc_ids = req.get("doc_ids", []) doc_ids = req.get("doc_ids", [])
question = req.get("question") question = req.get("question")
page = int(req.get("page", 1)) page = int(req.get("page", 1))
@ -761,26 +826,27 @@ def retrieval():
         embd_nms = list(set([kb.embd_id for kb in kbs]))
         if len(embd_nms) != 1:
             return get_json_result(
-                data=False, retmsg='Knowledge bases use different embedding models or does not exist."', retcode=RetCode.AUTHENTICATION_ERROR)
+                data=False, message='Knowledge bases use different embedding models or does not exist."',
+                code=settings.RetCode.AUTHENTICATION_ERROR)
         embd_mdl = TenantLLMService.model_instance(
             kbs[0].tenant_id, LLMType.EMBEDDING.value, llm_name=kbs[0].embd_id)
         rerank_mdl = None
         if req.get("rerank_id"):
             rerank_mdl = TenantLLMService.model_instance(
                 kbs[0].tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
         if req.get("keyword", False):
             chat_mdl = TenantLLMService.model_instance(kbs[0].tenant_id, LLMType.CHAT)
             question += keyword_extraction(chat_mdl, question)
-        ranks = retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
-                                      similarity_threshold, vector_similarity_weight, top,
-                                      doc_ids, rerank_mdl=rerank_mdl)
+        ranks = settings.retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
+                                               similarity_threshold, vector_similarity_weight, top,
+                                               doc_ids, rerank_mdl=rerank_mdl,
+                                               rank_feature=label_question(question, kbs))
         for c in ranks["chunks"]:
-            if "vector" in c:
-                del c["vector"]
+            c.pop("vector", None)
         return get_json_result(data=ranks)
     except Exception as e:
         if str(e).find("not_found") > 0:
-            return get_json_result(data=False, retmsg=f'No chunk found! Check the chunk status please!',
-                                   retcode=RetCode.DATA_ERROR)
+            return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
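The diff swaps `del c["vector"]` for `c.pop("vector", None)`, which tolerates chunks that never carried a vector field. A minimal sketch of the difference (the sample chunks are hypothetical):

```python
# Strip the optional "vector" field from retrieval chunks.
# dict.pop with a default is a no-op for chunks without the key,
# whereas `del c["vector"]` would raise KeyError on them.
chunks = [
    {"content": "a", "vector": [0.1, 0.2]},
    {"content": "b"},  # no vector field
]
for c in chunks:
    c.pop("vector", None)
```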


@@ -14,7 +14,7 @@
 # limitations under the License.
 #
 import json
-from functools import partial
+import traceback
 from flask import request, Response
 from flask_login import login_required, current_user
 from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
@@ -23,15 +23,16 @@ from api.utils import get_uuid
 from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
 from agent.canvas import Canvas
 from peewee import MySQLDatabase, PostgresqlDatabase
+from api.db.db_models import APIToken
-@manager.route('/templates', methods=['GET'])
+@manager.route('/templates', methods=['GET'])  # noqa: F821
 @login_required
 def templates():
     return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.get_all()])
-@manager.route('/list', methods=['GET'])
+@manager.route('/list', methods=['GET'])  # noqa: F821
 @login_required
 def canvas_list():
     return get_json_result(data=sorted([c.to_dict() for c in \
@@ -39,53 +40,68 @@ def canvas_list():
     )
-@manager.route('/rm', methods=['POST'])
+@manager.route('/rm', methods=['POST'])  # noqa: F821
 @validate_request("canvas_ids")
 @login_required
 def rm():
     for i in request.json["canvas_ids"]:
         if not UserCanvasService.query(user_id=current_user.id,id=i):
             return get_json_result(
-                data=False, retmsg=f'Only owner of canvas authorized for this operation.',
-                retcode=RetCode.OPERATING_ERROR)
+                data=False, message='Only owner of canvas authorized for this operation.',
+                code=RetCode.OPERATING_ERROR)
         UserCanvasService.delete_by_id(i)
     return get_json_result(data=True)
-@manager.route('/set', methods=['POST'])
+@manager.route('/set', methods=['POST'])  # noqa: F821
 @validate_request("dsl", "title")
 @login_required
 def save():
     req = request.json
     req["user_id"] = current_user.id
-    if not isinstance(req["dsl"], str): req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
+    if not isinstance(req["dsl"], str):
+        req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
     req["dsl"] = json.loads(req["dsl"])
     if "id" not in req:
         if UserCanvasService.query(user_id=current_user.id, title=req["title"].strip()):
-            return server_error_response(ValueError("Duplicated title."))
+            return get_data_error_result(message=f"{req['title'].strip()} already exists.")
         req["id"] = get_uuid()
         if not UserCanvasService.save(**req):
-            return get_data_error_result(retmsg="Fail to save canvas.")
+            return get_data_error_result(message="Fail to save canvas.")
     else:
         if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
             return get_json_result(
-                data=False, retmsg=f'Only owner of canvas authorized for this operation.',
-                retcode=RetCode.OPERATING_ERROR)
+                data=False, message='Only owner of canvas authorized for this operation.',
+                code=RetCode.OPERATING_ERROR)
         UserCanvasService.update_by_id(req["id"], req)
     return get_json_result(data=req)
-@manager.route('/get/<canvas_id>', methods=['GET'])
+@manager.route('/get/<canvas_id>', methods=['GET'])  # noqa: F821
 @login_required
 def get(canvas_id):
     e, c = UserCanvasService.get_by_id(canvas_id)
     if not e:
-        return get_data_error_result(retmsg="canvas not found.")
+        return get_data_error_result(message="canvas not found.")
+    return get_json_result(data=c.to_dict())
+@manager.route('/getsse/<canvas_id>', methods=['GET'])  # type: ignore # noqa: F821
+def getsse(canvas_id):
+    token = request.headers.get('Authorization').split()
+    if len(token) != 2:
+        return get_data_error_result(message='Authorization is not valid!"')
+    token = token[1]
+    objs = APIToken.query(beta=token)
+    if not objs:
+        return get_data_error_result(message='Authentication error: API key is invalid!"')
+    e, c = UserCanvasService.get_by_id(canvas_id)
+    if not e:
+        return get_data_error_result(message="canvas not found.")
     return get_json_result(data=c.to_dict())
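The new `getsse` route authenticates with a beta API token taken from the `Authorization` header and rejects malformed headers before querying `APIToken`. A sketch of just the header parsing (the function name is illustrative, not from the codebase):

```python
def parse_bearer(header):
    # Expect exactly two whitespace-separated parts, e.g. "Bearer <token>";
    # return the token, or None when the header is missing or malformed.
    parts = (header or "").split()
    if len(parts) != 2:
        return None
    return parts[1]
```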
-@manager.route('/completion', methods=['POST'])
+@manager.route('/completion', methods=['POST'])  # noqa: F821
 @validate_request("id")
 @login_required
 def run():
@@ -93,11 +109,11 @@ def run():
     stream = req.get("stream", True)
     e, cvs = UserCanvasService.get_by_id(req["id"])
     if not e:
-        return get_data_error_result(retmsg="canvas not found.")
+        return get_data_error_result(message="canvas not found.")
     if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
         return get_json_result(
-            data=False, retmsg=f'Only owner of canvas authorized for this operation.',
-            retcode=RetCode.OPERATING_ERROR)
+            data=False, message='Only owner of canvas authorized for this operation.',
+            code=RetCode.OPERATING_ERROR)
     if not isinstance(cvs.dsl, str):
         cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
@@ -109,35 +125,43 @@ def run():
         if "message" in req:
             canvas.messages.append({"role": "user", "content": req["message"], "id": message_id})
             canvas.add_user_input(req["message"])
-        answer = canvas.run(stream=stream)
-        print(canvas)
     except Exception as e:
         return server_error_response(e)
-    assert answer is not None, "Nothing. Is it over?"
     if stream:
-        assert isinstance(answer, partial), "Nothing. Is it over?"
         def sse():
             nonlocal answer, cvs
             try:
-                for ans in answer():
+                for ans in canvas.run(stream=True):
+                    if ans.get("running_status"):
+                        yield "data:" + json.dumps({"code": 0, "message": "",
+                                                    "data": {"answer": ans["content"],
+                                                             "running_status": True}},
+                                                   ensure_ascii=False) + "\n\n"
+                        continue
                     for k in ans.keys():
                         final_ans[k] = ans[k]
                     ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
-                    yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
+                    yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
                 canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
+                canvas.history.append(("assistant", final_ans["content"]))
+                if not canvas.path[-1]:
+                    canvas.path.pop(-1)
                 if final_ans.get("reference"):
                     canvas.reference.append(final_ans["reference"])
                 cvs.dsl = json.loads(str(canvas))
                 UserCanvasService.update_by_id(req["id"], cvs.to_dict())
             except Exception as e:
-                yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
+                cvs.dsl = json.loads(str(canvas))
+                if not canvas.path[-1]:
+                    canvas.path.pop(-1)
+                UserCanvasService.update_by_id(req["id"], cvs.to_dict())
+                traceback.print_exc()
+                yield "data:" + json.dumps({"code": 500, "message": str(e),
                                             "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
                                            ensure_ascii=False) + "\n\n"
-            yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
+            yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
         resp = Response(sse(), mimetype="text/event-stream")
         resp.headers.add_header("Cache-control", "no-cache")
@@ -146,16 +170,19 @@ def run():
         resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
         return resp
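The streaming branch frames every JSON payload as a Server-Sent Event: a `data:` prefix, the serialized body, and a blank line. A minimal sketch of that framing, independent of the canvas logic (also note the diff's rename of the envelope keys from `retcode`/`retmsg` to `code`/`message`):

```python
import json

def sse_frame(payload):
    # One Server-Sent Event: "data:" prefix, JSON body, then a blank line
    # so the client knows the event is complete.
    return "data:" + json.dumps(payload, ensure_ascii=False) + "\n\n"

frame = sse_frame({"code": 0, "message": "", "data": {"answer": "hi"}})
```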
-    final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
-    canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
-    if final_ans.get("reference"):
-        canvas.reference.append(final_ans["reference"])
-    cvs.dsl = json.loads(str(canvas))
-    UserCanvasService.update_by_id(req["id"], cvs.to_dict())
-    return get_json_result(data={"answer": final_ans["content"], "reference": final_ans.get("reference", [])})
+    for answer in canvas.run(stream=False):
+        if answer.get("running_status"):
+            continue
+        final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
+        canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
+        if final_ans.get("reference"):
+            canvas.reference.append(final_ans["reference"])
+    cvs.dsl = json.loads(str(canvas))
+    UserCanvasService.update_by_id(req["id"], cvs.to_dict())
+    return get_json_result(data={"answer": final_ans["content"], "reference": final_ans.get("reference", [])})
-@manager.route('/reset', methods=['POST'])
+@manager.route('/reset', methods=['POST'])  # noqa: F821
 @validate_request("id")
 @login_required
 def reset():
@@ -163,11 +190,11 @@ def reset():
     try:
         e, user_canvas = UserCanvasService.get_by_id(req["id"])
         if not e:
-            return get_data_error_result(retmsg="canvas not found.")
+            return get_data_error_result(message="canvas not found.")
         if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
             return get_json_result(
-                data=False, retmsg=f'Only owner of canvas authorized for this operation.',
-                retcode=RetCode.OPERATING_ERROR)
+                data=False, message='Only owner of canvas authorized for this operation.',
+                code=RetCode.OPERATING_ERROR)
         canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
         canvas.reset()
@@ -178,7 +205,51 @@ def reset():
         return server_error_response(e)
-@manager.route('/test_db_connect', methods=['POST'])
+@manager.route('/input_elements', methods=['GET'])  # noqa: F821
+@login_required
+def input_elements():
+    cvs_id = request.args.get("id")
+    cpn_id = request.args.get("component_id")
+    try:
+        e, user_canvas = UserCanvasService.get_by_id(cvs_id)
+        if not e:
+            return get_data_error_result(message="canvas not found.")
+        if not UserCanvasService.query(user_id=current_user.id, id=cvs_id):
+            return get_json_result(
+                data=False, message='Only owner of canvas authorized for this operation.',
+                code=RetCode.OPERATING_ERROR)
+        canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
+        return get_json_result(data=canvas.get_component_input_elements(cpn_id))
+    except Exception as e:
+        return server_error_response(e)
+@manager.route('/debug', methods=['POST'])  # noqa: F821
+@validate_request("id", "component_id", "params")
+@login_required
+def debug():
+    req = request.json
+    for p in req["params"]:
+        assert p.get("key")
+    try:
+        e, user_canvas = UserCanvasService.get_by_id(req["id"])
+        if not e:
+            return get_data_error_result(message="canvas not found.")
+        if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
+            return get_json_result(
+                data=False, message='Only owner of canvas authorized for this operation.',
+                code=RetCode.OPERATING_ERROR)
+        canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
+        canvas.get_component(req["component_id"])["obj"]._param.debug_inputs = req["params"]
+        df = canvas.get_component(req["component_id"])["obj"].debug()
+        return get_json_result(data=df.to_dict(orient="records"))
+    except Exception as e:
+        return server_error_response(e)
+@manager.route('/test_db_connect', methods=['POST'])  # noqa: F821
 @validate_request("db_type", "database", "username", "host", "port", "password")
 @login_required
 def test_db_connect():
@@ -190,8 +261,26 @@ def test_db_connect():
         elif req["db_type"] == 'postgresql':
             db = PostgresqlDatabase(req["database"], user=req["username"], host=req["host"], port=req["port"],
                                     password=req["password"])
-        db.connect()
+        elif req["db_type"] == 'mssql':
+            import pyodbc
+            connection_string = (
+                f"DRIVER={{ODBC Driver 17 for SQL Server}};"
+                f"SERVER={req['host']},{req['port']};"
+                f"DATABASE={req['database']};"
+                f"UID={req['username']};"
+                f"PWD={req['password']};"
+            )
+            db = pyodbc.connect(connection_string)
+            cursor = db.cursor()
+            cursor.execute("SELECT 1")
+            cursor.close()
+        else:
+            return server_error_response("Unsupported database type.")
+        if req["db_type"] != 'mssql':
+            db.connect()
         db.close()
         return get_json_result(data="Database Connection Successful!")
     except Exception as e:
         return server_error_response(e)
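The new MSSQL branch assembles an ODBC connection string by hand from the request fields. A sketch of just the string assembly, runnable without pyodbc (host and credentials are made-up placeholders):

```python
# Build the ODBC connection string the way the mssql branch above does.
# The doubled braces in the f-string render literal { } around the driver name.
req = {"host": "db.example.com", "port": 1433, "database": "ragflow",
       "username": "sa", "password": "secret"}  # hypothetical values
connection_string = (
    f"DRIVER={{ODBC Driver 17 for SQL Server}};"
    f"SERVER={req['host']},{req['port']};"
    f"DATABASE={req['database']};"
    f"UID={req['username']};"
    f"PWD={req['password']};"
)
```

Probing with a trivial `SELECT 1` (as the handler does) is a common way to verify the connection is actually usable, not just established.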


@@ -15,29 +15,28 @@
 #
 import datetime
 import json
+import traceback
 from flask import request
 from flask_login import login_required, current_user
-from elasticsearch_dsl import Q
+from api.db.services.dialog_service import keyword_extraction, label_question
 from rag.app.qa import rmPrefix, beAdoc
-from rag.nlp import search, rag_tokenizer, keyword_extraction
-from rag.utils.es_conn import ELASTICSEARCH
+from rag.nlp import search, rag_tokenizer
+from rag.settings import PAGERANK_FLD
 from rag.utils import rmSpace
 from api.db import LLMType, ParserType
 from api.db.services.knowledgebase_service import KnowledgebaseService
-from api.db.services.llm_service import TenantLLMService
+from api.db.services.llm_service import LLMBundle
 from api.db.services.user_service import UserTenantService
 from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
 from api.db.services.document_service import DocumentService
-from api.settings import RetCode, retrievaler, kg_retrievaler
+from api import settings
 from api.utils.api_utils import get_json_result
-import hashlib
+import xxhash
 import re
-@manager.route('/list', methods=['POST'])
+@manager.route('/list', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_id")
 def list_chunk():
@@ -49,16 +48,17 @@ def list_chunk():
     try:
         tenant_id = DocumentService.get_tenant_id(req["doc_id"])
         if not tenant_id:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
         e, doc = DocumentService.get_by_id(doc_id)
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
+        kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
         query = {
             "doc_ids": [doc_id], "page": page, "size": size, "question": question, "sort": True
         }
         if "available_int" in req:
             query["available_int"] = int(req["available_int"])
-        sres = retrievaler.search(query, search.index_name(tenant_id), highlight=True)
+        sres = settings.retrievaler.search(query, search.index_name(tenant_id), kb_ids, highlight=True)
         res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
         for id in sres.ids:
             d = {
@@ -69,60 +69,54 @@ def list_chunk():
                 "doc_id": sres.field[id]["doc_id"],
                 "docnm_kwd": sres.field[id]["docnm_kwd"],
                 "important_kwd": sres.field[id].get("important_kwd", []),
-                "img_id": sres.field[id].get("img_id", ""),
-                "available_int": sres.field[id].get("available_int", 1),
-                "positions": sres.field[id].get("position_int", "").split("\t")
+                "question_kwd": sres.field[id].get("question_kwd", []),
+                "image_id": sres.field[id].get("img_id", ""),
+                "available_int": int(sres.field[id].get("available_int", 1)),
+                "positions": sres.field[id].get("position_int", []),
             }
-            if len(d["positions"]) % 5 == 0:
-                poss = []
-                for i in range(0, len(d["positions"]), 5):
-                    poss.append([float(d["positions"][i]), float(d["positions"][i + 1]), float(d["positions"][i + 2]),
-                                 float(d["positions"][i + 3]), float(d["positions"][i + 4])])
-                d["positions"] = poss
+            assert isinstance(d["positions"], list)
+            assert len(d["positions"]) == 0 or (isinstance(d["positions"][0], list) and len(d["positions"][0]) == 5)
             res["chunks"].append(d)
         return get_json_result(data=res)
     except Exception as e:
         if str(e).find("not_found") > 0:
-            return get_json_result(data=False, retmsg=f'No chunk found!',
-                                   retcode=RetCode.DATA_ERROR)
+            return get_json_result(data=False, message='No chunk found!',
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
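Before this change, positions came back as one flat tab-separated string and had to be regrouped into fives on every read; the store now returns the nested lists directly, so the handler only asserts the shape. A sketch of the removed regrouping, for reference (the five-value grouping follows the old code; what each value denotes is not stated in the diff):

```python
def group_positions(flat):
    # Old storage format: tab-separated floats, five values per position box.
    vals = [float(x) for x in flat.split("\t")]
    assert len(vals) % 5 == 0
    return [vals[i:i + 5] for i in range(0, len(vals), 5)]
```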
-@manager.route('/get', methods=['GET'])
+@manager.route('/get', methods=['GET'])  # noqa: F821
 @login_required
 def get():
     chunk_id = request.args["chunk_id"]
     try:
         tenants = UserTenantService.query(user_id=current_user.id)
         if not tenants:
-            return get_data_error_result(retmsg="Tenant not found!")
-        res = ELASTICSEARCH.get(
-            chunk_id, search.index_name(
-                tenants[0].tenant_id))
-        if not res.get("found"):
-            return server_error_response("Chunk not found")
-        id = res["_id"]
-        res = res["_source"]
-        res["chunk_id"] = id
+            return get_data_error_result(message="Tenant not found!")
+        tenant_id = tenants[0].tenant_id
+        kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
+        chunk = settings.docStoreConn.get(chunk_id, search.index_name(tenant_id), kb_ids)
+        if chunk is None:
+            return server_error_response(Exception("Chunk not found"))
         k = []
-        for n in res.keys():
+        for n in chunk.keys():
             if re.search(r"(_vec$|_sm_|_tks|_ltks)", n):
                 k.append(n)
         for n in k:
-            del res[n]
-        return get_json_result(data=res)
+            del chunk[n]
+        return get_json_result(data=chunk)
     except Exception as e:
         if str(e).find("NotFoundError") >= 0:
-            return get_json_result(data=False, retmsg=f'Chunk not found!',
-                                   retcode=RetCode.DATA_ERROR)
+            return get_json_result(data=False, message='Chunk not found!',
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
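The handler scrubs internal vector and token fields from the chunk before returning it, using a single regex over the key names. A sketch of that filtering with the same regex (the sample chunk fields are hypothetical):

```python
import re

# Drop internal vector/token fields (embeddings, tokenized text) before
# returning a chunk to the client; only user-facing fields survive.
chunk = {"content_with_weight": "text", "q_768_vec": [0.0, 0.1],
         "content_ltks": "text tokens", "important_tks": "t"}
internal = [k for k in chunk if re.search(r"(_vec$|_sm_|_tks|_ltks)", k)]
for k in internal:
    del chunk[k]
```

Collecting the keys first, then deleting, avoids mutating the dict while iterating over it.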
-@manager.route('/set', methods=['POST'])
+@manager.route('/set', methods=['POST'])  # noqa: F821
 @login_required
-@validate_request("doc_id", "chunk_id", "content_with_weight",
-                  "important_kwd")
+@validate_request("doc_id", "chunk_id", "content_with_weight")
 def set():
     req = request.json
     d = {
@@ -130,74 +124,80 @@ def set():
         "content_with_weight": req["content_with_weight"]}
     d["content_ltks"] = rag_tokenizer.tokenize(req["content_with_weight"])
     d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
-    d["important_kwd"] = req["important_kwd"]
-    d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_kwd"]))
+    if "important_kwd" in req:
+        d["important_kwd"] = req["important_kwd"]
+        d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_kwd"]))
+    if "question_kwd" in req:
+        d["question_kwd"] = req["question_kwd"]
+        d["question_tks"] = rag_tokenizer.tokenize("\n".join(req["question_kwd"]))
+    if "tag_kwd" in req:
+        d["tag_kwd"] = req["tag_kwd"]
+    if "tag_feas" in req:
+        d["tag_feas"] = req["tag_feas"]
     if "available_int" in req:
         d["available_int"] = req["available_int"]
     try:
         tenant_id = DocumentService.get_tenant_id(req["doc_id"])
         if not tenant_id:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
         embd_id = DocumentService.get_embd_id(req["doc_id"])
-        embd_mdl = TenantLLMService.model_instance(
-            tenant_id, LLMType.EMBEDDING.value, embd_id)
+        embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_id)
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
         if doc.parser_id == ParserType.QA:
             arr = [
                 t for t in re.split(
                     r"[\n\t]",
                     req["content_with_weight"]) if len(t) > 1]
-            if len(arr) != 2:
-                return get_data_error_result(
-                    retmsg="Q&A must be separated by TAB/ENTER key.")
-            q, a = rmPrefix(arr[0]), rmPrefix(arr[1])
-            d = beAdoc(d, arr[0], arr[1], not any(
+            q, a = rmPrefix(arr[0]), rmPrefix("\n".join(arr[1:]))
+            d = beAdoc(d, q, a, not any(
                 [rag_tokenizer.is_chinese(t) for t in q + a]))
-        v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
+        v, c = embd_mdl.encode([doc.name, req["content_with_weight"] if not d.get("question_kwd") else "\n".join(d["question_kwd"])])
         v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
         d["q_%d_vec" % len(v)] = v.tolist()
-        ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
+        settings.docStoreConn.update({"id": req["chunk_id"]}, d, search.index_name(tenant_id), doc.kb_id)
         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)
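The edited QA branch no longer rejects content with more than two segments: the first segment becomes the question and everything after it joins into the answer. A sketch of that split, without the `rmPrefix` cleanup (the function name is illustrative):

```python
import re

def split_qa(content):
    # Split on newlines/tabs, drop fragments of length <= 1,
    # then treat segment 0 as the question and the rest as the answer.
    arr = [t for t in re.split(r"[\n\t]", content) if len(t) > 1]
    return arr[0], "\n".join(arr[1:])
```

The old code returned an error unless exactly two segments were found; multi-line answers now survive editing intact.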
-@manager.route('/switch', methods=['POST'])
+@manager.route('/switch', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("chunk_ids", "available_int", "doc_id")
 def switch():
     req = request.json
     try:
-        tenant_id = DocumentService.get_tenant_id(req["doc_id"])
-        if not tenant_id:
-            return get_data_error_result(retmsg="Tenant not found!")
-        if not ELASTICSEARCH.upsert([{"id": i, "available_int": int(req["available_int"])} for i in req["chunk_ids"]],
-                                    search.index_name(tenant_id)):
-            return get_data_error_result(retmsg="Index updating failure")
+        e, doc = DocumentService.get_by_id(req["doc_id"])
+        if not e:
+            return get_data_error_result(message="Document not found!")
+        for cid in req["chunk_ids"]:
+            if not settings.docStoreConn.update({"id": cid},
+                                                {"available_int": int(req["available_int"])},
+                                                search.index_name(DocumentService.get_tenant_id(req["doc_id"])),
+                                                doc.kb_id):
+                return get_data_error_result(message="Index updating failure")
         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)
-@manager.route('/rm', methods=['POST'])
+@manager.route('/rm', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("chunk_ids", "doc_id")
 def rm():
     req = request.json
     try:
-        if not ELASTICSEARCH.deleteByQuery(
-                Q("ids", values=req["chunk_ids"]), search.index_name(current_user.id)):
-            return get_data_error_result(retmsg="Index updating failure")
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
+        if not settings.docStoreConn.delete({"id": req["chunk_ids"]}, search.index_name(current_user.id), doc.kb_id):
+            return get_data_error_result(message="Index updating failure")
         deleted_chunk_ids = req["chunk_ids"]
         chunk_number = len(deleted_chunk_ids)
         DocumentService.decrement_chunk_num(doc.id, doc.kb_id, 1, chunk_number, 0)
@@ -206,42 +206,48 @@ def rm():
         return server_error_response(e)
-@manager.route('/create', methods=['POST'])
+@manager.route('/create', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_id", "content_with_weight")
 def create():
     req = request.json
-    md5 = hashlib.md5()
-    md5.update((req["content_with_weight"] + req["doc_id"]).encode("utf-8"))
-    chunck_id = md5.hexdigest()
+    chunck_id = xxhash.xxh64((req["content_with_weight"] + req["doc_id"]).encode("utf-8")).hexdigest()
     d = {"id": chunck_id, "content_ltks": rag_tokenizer.tokenize(req["content_with_weight"]),
          "content_with_weight": req["content_with_weight"]}
     d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
     d["important_kwd"] = req.get("important_kwd", [])
     d["important_tks"] = rag_tokenizer.tokenize(" ".join(req.get("important_kwd", [])))
+    d["question_kwd"] = req.get("question_kwd", [])
+    d["question_tks"] = rag_tokenizer.tokenize("\n".join(req.get("question_kwd", [])))
     d["create_time"] = str(datetime.datetime.now()).replace("T", " ")[:19]
     d["create_timestamp_flt"] = datetime.datetime.now().timestamp()
     try:
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
         d["kb_id"] = [doc.kb_id]
         d["docnm_kwd"] = doc.name
+        d["title_tks"] = rag_tokenizer.tokenize(doc.name)
         d["doc_id"] = doc.id
         tenant_id = DocumentService.get_tenant_id(req["doc_id"])
         if not tenant_id:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
+        e, kb = KnowledgebaseService.get_by_id(doc.kb_id)
+        if not e:
+            return get_data_error_result(message="Knowledgebase not found!")
+        if kb.pagerank:
+            d[PAGERANK_FLD] = kb.pagerank
         embd_id = DocumentService.get_embd_id(req["doc_id"])
-        embd_mdl = TenantLLMService.model_instance(
-            tenant_id, LLMType.EMBEDDING.value, embd_id)
-        v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
+        embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING.value, embd_id)
+        v, c = embd_mdl.encode([doc.name, req["content_with_weight"] if not d["question_kwd"] else "\n".join(d["question_kwd"])])
         v = 0.1 * v[0] + 0.9 * v[1]
         d["q_%d_vec" % len(v)] = v.tolist()
-        ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
+        settings.docStoreConn.insert([d], search.index_name(tenant_id), doc.kb_id)
         DocumentService.increment_chunk_num(
             doc.id, doc.kb_id, c, 1, 0)
@@ -250,7 +256,7 @@ def create():
         return server_error_response(e)
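Both the removed `hashlib.md5` code and the new `xxhash.xxh64` call derive the chunk id deterministically from content plus document id, so re-creating the same chunk in the same document yields the same id. A stdlib sketch of that idea using md5 as in the old code (xxhash is a third-party package; the function name is illustrative):

```python
import hashlib

def chunk_id(content, doc_id):
    # Content-addressed id: the same (content, doc_id) pair
    # always maps to the same hex digest.
    return hashlib.md5((content + doc_id).encode("utf-8")).hexdigest()

a = chunk_id("hello", "doc1")
b = chunk_id("hello", "doc1")
c = chunk_id("hello", "doc2")
```

xxh64 produces a shorter digest and is faster than md5, which is a reasonable trade since the id is only an identifier, not a cryptographic hash.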
@manager.route('/retrieval_test', methods=['POST']) @manager.route('/retrieval_test', methods=['POST']) # noqa: F821
@login_required @login_required
@validate_request("kb_id", "question") @validate_request("kb_id", "question")
def retrieval_test(): def retrieval_test():
@ -258,74 +264,106 @@ def retrieval_test():
page = int(req.get("page", 1)) page = int(req.get("page", 1))
size = int(req.get("size", 30)) size = int(req.get("size", 30))
question = req["question"] question = req["question"]
kb_id = req["kb_id"] kb_ids = req["kb_id"]
if isinstance(kb_id, str): kb_id = [kb_id] if isinstance(kb_ids, str):
kb_ids = [kb_ids]
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.0))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
use_kg = req.get("use_kg", False)
top = int(req.get("top_k", 1024))
tenant_ids = []
try:
tenants = UserTenantService.query(user_id=current_user.id)
for kb_id in kb_ids:
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kb_id):
tenant_ids.append(tenant.tenant_id)
break
else:
return get_json_result(
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_data_error_result(message="Knowledgebase not found!")
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = LLMBundle(kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
if req.get("keyword", False):
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
labels = label_question(question, [kb])
ranks = settings.retrievaler.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"),
rank_feature=labels)
if use_kg:
ck = settings.kg_retrievaler.retrieval(question,
tenant_ids,
kb_ids,
embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)
for c in ranks["chunks"]:
c.pop("vector", None)
ranks["labels"] = labels
return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)
@manager.route('/knowledge_graph', methods=['GET'])  # noqa: F821
@login_required
def knowledge_graph():
doc_id = request.args["doc_id"]
tenant_id = DocumentService.get_tenant_id(doc_id)
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
req = {
"doc_ids": [doc_id],
"knowledge_graph_kwd": ["graph", "mind_map"]
}
sres = settings.retrievaler.search(req, search.index_name(tenant_id), kb_ids)
obj = {"graph": {}, "mind_map": {}}
for id in sres.ids[:2]:
ty = sres.field[id]["knowledge_graph_kwd"]
try:
content_json = json.loads(sres.field[id]["content_with_weight"])
except Exception:
continue
if ty == 'mind_map':
node_dict = {}
def repeat_deal(content_json, node_dict):
if 'id' in content_json:
if content_json['id'] in node_dict:
node_name = content_json['id']
content_json['id'] += f"({node_dict[content_json['id']]})"
node_dict[node_name] += 1
else:
node_dict[content_json['id']] = 1
if 'children' in content_json and content_json['children']:
for item in content_json['children']:
repeat_deal(item, node_dict)
repeat_deal(content_json, node_dict)
obj[ty] = content_json
return get_json_result(data=obj)
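The `repeat_deal` recursion above renames mind-map nodes whose `id` repeats, so the front-end tree never receives two nodes with the same key. A self-contained sketch of the same logic (illustrative function name, not from the patch):

```python
def dedup_node_ids(node, seen=None):
    """Rename duplicate mind-map node ids in place: 'a', 'a(1)', 'a(2)', ..."""
    seen = {} if seen is None else seen
    if "id" in node:
        if node["id"] in seen:
            name = node["id"]
            node["id"] += f"({seen[name]})"  # suffix with the occurrence count
            seen[name] += 1
        else:
            seen[node["id"]] = 1
    for child in node.get("children") or []:
        dedup_node_ids(child, seen)
    return node

tree = {"id": "root", "children": [{"id": "a"}, {"id": "a"}, {"id": "a"}]}
dedup_node_ids(tree)
# child ids are now "a", "a(1)", "a(2)"
```

Like the original, it counts occurrences under the original name, so the renamed copies themselves are not re-registered.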


@@ -17,35 +17,39 @@ import json
import re
import traceback
from copy import deepcopy
from api.db.db_models import APIToken
from api.db.services.conversation_service import ConversationService, structure_answer
from api.db.services.user_service import UserTenantService
from flask import request, Response
from flask_login import login_required, current_user
from api.db import LLMType
from api.db.services.dialog_service import DialogService, chat, ask, label_question
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle, TenantService
from api import settings
from api.utils.api_utils import get_json_result
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from graphrag.general.mind_map_extractor import MindMapExtractor
@manager.route('/set', methods=['POST'])  # noqa: F821
@login_required
def set_conversation():
req = request.json
conv_id = req.get("conversation_id")
is_new = req.get("is_new")
del req["is_new"]
if not is_new:
del req["conversation_id"]
try:
if not ConversationService.update_by_id(conv_id, req):
return get_data_error_result(message="Conversation not found!")
e, conv = ConversationService.get_by_id(conv_id)
if not e:
return get_data_error_result(
message="Fail to update a conversation!")
conv = conv.to_dict()
return get_json_result(data=conv)
except Exception as e:
@@ -54,46 +58,84 @@ def set_conversation():
try:
e, dia = DialogService.get_by_id(req["dialog_id"])
if not e:
return get_data_error_result(message="Dialog not found")
conv = {
"id": conv_id,
"dialog_id": req["dialog_id"],
"name": req.get("name", "New conversation"),
"message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}]
}
ConversationService.save(**conv)
return get_json_result(data=conv)
except Exception as e:
return server_error_response(e)
@manager.route('/get', methods=['GET'])  # noqa: F821
@login_required
def get():
conv_id = request.args["conversation_id"]
try:
e, conv = ConversationService.get_by_id(conv_id)
if not e:
return get_data_error_result(message="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
avatar = None
for tenant in tenants:
dialog = DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id)
if dialog and len(dialog) > 0:
avatar = dialog[0].icon
break
else:
return get_json_result(
data=False, message='Only owner of conversation authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
def get_value(d, k1, k2):
return d.get(k1, d.get(k2))
for ref in conv.reference:
if isinstance(ref, list):
continue
ref["chunks"] = [{
"id": get_value(ck, "chunk_id", "id"),
"content": get_value(ck, "content", "content_with_weight"),
"document_id": get_value(ck, "doc_id", "document_id"),
"document_name": get_value(ck, "docnm_kwd", "document_name"),
"dataset_id": get_value(ck, "kb_id", "dataset_id"),
"image_id": get_value(ck, "image_id", "img_id"),
"positions": get_value(ck, "positions", "position_int"),
} for ck in ref.get("chunks", [])]
conv = conv.to_dict()
conv["avatar"] = avatar
return get_json_result(data=conv)
except Exception as e:
return server_error_response(e)
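The `get_value` helper above normalizes stored references that may carry either the legacy field names (`chunk_id`, `content_with_weight`, `doc_id`, ...) or the newer ones (`id`, `content`, `document_id`, ...). A standalone sketch of that dual-key lookup, with hypothetical sample dicts:

```python
def get_value(d, k1, k2):
    # read a field under the primary key, falling back to the alternate spelling
    return d.get(k1, d.get(k2))

legacy = {"chunk_id": "c1", "content_with_weight": "hello"}
modern = {"id": "c1", "content": "hello"}

# both spellings normalize to the same values
assert get_value(legacy, "chunk_id", "id") == get_value(modern, "chunk_id", "id") == "c1"
assert get_value(legacy, "content", "content_with_weight") == "hello"
```

Because `dict.get` returns `None` rather than raising, a chunk missing both spellings simply yields `None` for that field instead of breaking the response.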
@manager.route('/getsse/<dialog_id>', methods=['GET']) # type: ignore # noqa: F821
def getsse(dialog_id):
token = request.headers.get('Authorization').split()
if len(token) != 2:
return get_data_error_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_data_error_result(message='Authentication error: API key is invalid!')
try:
e, conv = DialogService.get_by_id(dialog_id)
if not e:
return get_data_error_result(message="Dialog not found!")
conv = conv.to_dict()
conv["avatar"] = conv["icon"]
del conv["icon"]
return get_json_result(data=conv)
except Exception as e:
return server_error_response(e)
@manager.route('/rm', methods=['POST'])  # noqa: F821
@login_required
def rm():
conv_ids = request.json["conversation_ids"]
@@ -101,48 +143,46 @@ def rm():
for cid in conv_ids:
exist, conv = ConversationService.get_by_id(cid)
if not exist:
return get_data_error_result(message="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
break
else:
return get_json_result(
data=False, message='Only owner of conversation authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
ConversationService.delete_by_id(cid)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/list', methods=['GET'])  # noqa: F821
@login_required
def list_convsersation():
dialog_id = request.args["dialog_id"]
try:
if not DialogService.query(tenant_id=current_user.id, id=dialog_id):
return get_json_result(
data=False, message='Only owner of dialog authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
convs = ConversationService.query(
dialog_id=dialog_id,
order_by=ConversationService.model.create_time,
reverse=True)
convs = [d.to_dict() for d in convs]
return get_json_result(data=convs)
except Exception as e:
return server_error_response(e)
@manager.route('/completion', methods=['POST'])  # noqa: F821
@login_required
@validate_request("conversation_id", "messages")
def completion():
req = request.json
msg = []
for m in req["messages"]:
if m["role"] == "system":
@@ -154,41 +194,49 @@ def completion():
try:
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(message="Conversation not found!")
conv.message = deepcopy(req["messages"])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
return get_data_error_result(message="Dialog not found!")
del req["conversation_id"]
del req["messages"]
if not conv.reference:
conv.reference = []
else:
def get_value(d, k1, k2):
return d.get(k1, d.get(k2))
for ref in conv.reference:
if isinstance(ref, list):
continue
ref["chunks"] = [{
"id": get_value(ck, "chunk_id", "id"),
"content": get_value(ck, "content", "content_with_weight"),
"document_id": get_value(ck, "doc_id", "document_id"),
"document_name": get_value(ck, "docnm_kwd", "document_name"),
"dataset_id": get_value(ck, "kb_id", "dataset_id"),
"image_id": get_value(ck, "image_id", "img_id"),
"positions": get_value(ck, "positions", "position_int"),
} for ck in ref.get("chunks", [])]
if not conv.reference:
conv.reference = []
conv.reference.append({"chunks": [], "doc_aggs": []})
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, True, **req):
ans = structure_answer(conv, ans, message_id, conv.id)
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
traceback.print_exc()
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
@@ -201,8 +249,7 @@ def completion():
else:
answer = None
for ans in chat(dia, msg, **req):
answer = structure_answer(conv, ans, message_id, req["conversation_id"])
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_json_result(data=answer)
@@ -210,28 +257,29 @@ def completion():
return server_error_response(e)
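The streaming endpoints above all emit the same server-sent-events framing: a JSON payload prefixed with `data:` and terminated by a blank line, with `code`/`message`/`data` fields. A minimal sketch of that framing (hypothetical helper name):

```python
import json

def sse_event(code, message, data):
    # one server-sent event: "data:" prefix, JSON payload, blank-line terminator
    return "data:" + json.dumps(
        {"code": code, "message": message, "data": data},
        ensure_ascii=False) + "\n\n"

evt = sse_event(0, "", {"answer": "hi", "reference": []})
# evt == 'data:{"code": 0, "message": "", "data": {"answer": "hi", "reference": []}}\n\n'
```

The double newline is what lets `EventSource` clients split the stream into discrete events, and `ensure_ascii=False` keeps CJK answers readable on the wire.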
@manager.route('/tts', methods=['POST'])  # noqa: F821
@login_required
def tts():
req = request.json
text = req["text"]
tenants = TenantService.get_info_by(current_user.id)
if not tenants:
return get_data_error_result(message="Tenant not found!")
tts_id = tenants[0]["tts_id"]
if not tts_id:
return get_data_error_result(message="No default TTS model is set")
tts_mdl = LLMBundle(tenants[0]["tenant_id"], LLMType.TTS, tts_id)
def stream_audio():
try:
for txt in re.split(r"[,。/《》?;:!\n\r:;]+", text):
for chunk in tts_mdl.tts(txt):
yield chunk
except Exception as e:
yield ("data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e)}},
ensure_ascii=False)).encode('utf-8')
@@ -243,14 +291,14 @@ def tts():
return resp
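The change to `stream_audio()` above splits the input on CJK and ASCII punctuation so the TTS model synthesizes short phrases one at a time instead of the whole text at once. A standalone sketch of that splitting step (hypothetical helper name, same regex as the patch):

```python
import re

def split_for_tts(text):
    # break on CJK/ASCII sentence punctuation; drop empty fragments
    parts = re.split(r"[,。/《》?;:!\n\r:;]+", text)
    return [p for p in parts if p]

print(split_for_tts("你好,世界!Hello"))  # ['你好', '世界', 'Hello']
```

Streaming per-phrase keeps latency low: audio for the first phrase can be sent back while later phrases are still being synthesized.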
@manager.route('/delete_msg', methods=['POST'])  # noqa: F821
@login_required
@validate_request("conversation_id", "message_id")
def delete_msg():
req = request.json
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(message="Conversation not found!")
conv = conv.to_dict()
for i, msg in enumerate(conv["message"]):
@@ -266,14 +314,14 @@ def delete_msg():
return get_json_result(data=conv)
@manager.route('/thumbup', methods=['POST'])  # noqa: F821
@login_required
@validate_request("conversation_id", "message_id")
def thumbup():
req = request.json
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(message="Conversation not found!")
up_down = req.get("set")
feedback = req.get("feedback", "")
conv = conv.to_dict()
@@ -281,32 +329,35 @@ def thumbup():
if req["message_id"] == msg.get("id", "") and msg.get("role", "") == "assistant":
if up_down:
msg["thumbup"] = True
if "feedback" in msg:
del msg["feedback"]
else:
msg["thumbup"] = False
if feedback:
msg["feedback"] = feedback
break
ConversationService.update_by_id(conv["id"], conv)
return get_json_result(data=conv)
@manager.route('/ask', methods=['POST'])  # noqa: F821
@login_required
@validate_request("question", "kb_ids")
def ask_about():
req = request.json
uid = current_user.id
def stream():
nonlocal req, uid
try:
for ans in ask(req["question"], req["kb_ids"], uid):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
@@ -316,7 +367,7 @@ def ask_about():
return resp
@manager.route('/mindmap', methods=['POST'])  # noqa: F821
@login_required
@validate_request("question", "kb_ids")
def mindmap():
@@ -324,13 +375,15 @@ def mindmap():
kb_ids = req["kb_ids"]
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_data_error_result(message="Knowledgebase not found!")
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING, llm_name=kb.embd_id)
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT)
question = req["question"]
ranks = settings.retrievaler.retrieval(question, embd_mdl, kb.tenant_id, kb_ids, 1, 12,
0.3, 0.3, aggs=False,
rank_feature=label_question(question, [kb]))
mindmap = MindMapExtractor(chat_mdl)
mind_map = mindmap([c["content_with_weight"] for c in ranks["chunks"]]).output
if "error" in mind_map:
@@ -338,7 +391,7 @@ def mindmap():
return get_json_result(data=mind_map)
@manager.route('/related_questions', methods=['POST'])  # noqa: F821
@login_required
@validate_request("question")
def related_questions():


@@ -1,878 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pathlib
import re
import warnings
from functools import partial
from io import BytesIO
from elasticsearch_dsl import Q
from flask import request, send_file
from flask_login import login_required, current_user
from httpx import HTTPError
from api.contants import NAME_LENGTH_LIMIT
from api.db import FileType, ParserType, FileSource, TaskStatus
from api.db import StatusEnum
from api.db.db_models import File
from api.db.services import duplicate_name
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import construct_json_result, construct_error_response
from api.utils.api_utils import construct_result, validate_request
from api.utils.file_utils import filename_type, thumbnail
from rag.app import book, laws, manual, naive, one, paper, presentation, qa, resume, table, picture, audio, email
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.storage_factory import STORAGE_IMPL
MAXIMUM_OF_UPLOADING_FILES = 256
# ------------------------------ create a dataset ---------------------------------------
@manager.route("/", methods=["POST"])
@login_required # use login
@validate_request("name") # check name key
def create_dataset():
# Check if Authorization header is present
authorization_token = request.headers.get("Authorization")
if not authorization_token:
return construct_json_result(code=RetCode.AUTHENTICATION_ERROR, message="Authorization header is missing.")
# TODO: Login or API key
# objs = APIToken.query(token=authorization_token)
#
# # Authorization error
# if not objs:
# return construct_json_result(code=RetCode.AUTHENTICATION_ERROR, message="Token is invalid.")
#
# tenant_id = objs[0].tenant_id
tenant_id = current_user.id
request_body = request.json
# In case that there's no name
if "name" not in request_body:
return construct_json_result(code=RetCode.DATA_ERROR, message="Expected 'name' field in request body")
dataset_name = request_body["name"]
# empty dataset_name
if not dataset_name:
return construct_json_result(code=RetCode.DATA_ERROR, message="Empty dataset name")
# In case that there's space in the head or the tail
dataset_name = dataset_name.strip()
# In case that the length of the name exceeds the limit
dataset_name_length = len(dataset_name)
if dataset_name_length > NAME_LENGTH_LIMIT:
return construct_json_result(
code=RetCode.DATA_ERROR,
message=f"Dataset name: {dataset_name} with length {dataset_name_length} exceeds {NAME_LENGTH_LIMIT}!")
# In case that there are other fields in the data-binary
if len(request_body.keys()) > 1:
name_list = []
for key_name in request_body.keys():
if key_name != "name":
name_list.append(key_name)
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"fields: {name_list}, are not allowed in request body.")
# If there is a duplicate name, it will modify it to make it unique
request_body["name"] = duplicate_name(
KnowledgebaseService.query,
name=dataset_name,
tenant_id=tenant_id,
status=StatusEnum.VALID.value)
try:
request_body["id"] = get_uuid()
request_body["tenant_id"] = tenant_id
request_body["created_by"] = tenant_id
exist, t = TenantService.get_by_id(tenant_id)
if not exist:
return construct_result(code=RetCode.AUTHENTICATION_ERROR, message="Tenant not found.")
request_body["embd_id"] = t.embd_id
if not KnowledgebaseService.save(**request_body):
# failed to create new dataset
return construct_result()
return construct_json_result(code=RetCode.SUCCESS,
data={"dataset_name": request_body["name"], "dataset_id": request_body["id"]})
except Exception as e:
return construct_error_response(e)
# -----------------------------list datasets-------------------------------------------------------
@manager.route("/", methods=["GET"])
@login_required
def list_datasets():
offset = request.args.get("offset", 0)
count = request.args.get("count", -1)
orderby = request.args.get("orderby", "create_time")
desc = request.args.get("desc", True)
try:
tenants = TenantService.get_joined_tenants_by_user_id(current_user.id)
datasets = KnowledgebaseService.get_by_tenant_ids_by_offset(
[m["tenant_id"] for m in tenants], current_user.id, int(offset), int(count), orderby, desc)
return construct_json_result(data=datasets, code=RetCode.SUCCESS, message=f"List datasets successfully!")
except Exception as e:
return construct_error_response(e)
except HTTPError as http_err:
return construct_json_result(http_err)
# ---------------------------------delete a dataset ----------------------------
@manager.route("/<dataset_id>", methods=["DELETE"])
@login_required
def remove_dataset(dataset_id):
try:
datasets = KnowledgebaseService.query(created_by=current_user.id, id=dataset_id)
# according to the id, searching for the dataset
if not datasets:
return construct_json_result(message=f"The dataset cannot be found for your current account.",
code=RetCode.OPERATING_ERROR)
# Iterating the documents inside the dataset
for doc in DocumentService.query(kb_id=dataset_id):
if not DocumentService.remove_document(doc, datasets[0].tenant_id):
# the process of deleting failed
return construct_json_result(code=RetCode.DATA_ERROR,
message="There was an error during the document removal process. "
"Please check the status of the RAGFlow server and try the removal again.")
# delete the other files
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc.id)
# delete the dataset
if not KnowledgebaseService.delete_by_id(dataset_id):
return construct_json_result(code=RetCode.DATA_ERROR,
message="There was an error during the dataset removal process. "
"Please check the status of the RAGFlow server and try the removal again.")
# success
return construct_json_result(code=RetCode.SUCCESS, message=f"Remove dataset: {dataset_id} successfully")
except Exception as e:
return construct_error_response(e)
# ------------------------------ get details of a dataset ----------------------------------------
@manager.route("/<dataset_id>", methods=["GET"])
@login_required
def get_dataset(dataset_id):
try:
dataset = KnowledgebaseService.get_detail(dataset_id)
if not dataset:
return construct_json_result(code=RetCode.DATA_ERROR, message="Can't find this dataset!")
return construct_json_result(data=dataset, code=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# ------------------------------ update a dataset --------------------------------------------
@manager.route("/<dataset_id>", methods=["PUT"])
@login_required
def update_dataset(dataset_id):
req = request.json
try:
# the request cannot be empty
if not req:
return construct_json_result(code=RetCode.DATA_ERROR, message="Please input at least one parameter that "
"you want to update!")
# check whether the dataset can be found
if not KnowledgebaseService.query(created_by=current_user.id, id=dataset_id):
return construct_json_result(message="Only the owner of the knowledgebase is authorized for this operation!",
code=RetCode.OPERATING_ERROR)
exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
# check whether there is this dataset
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR, message="This dataset cannot be found!")
if "name" in req:
name = req["name"].strip()
# check whether there is duplicate name
if name.lower() != dataset.name.lower() \
and len(KnowledgebaseService.query(name=name, tenant_id=current_user.id,
status=StatusEnum.VALID.value)) > 1:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"The name: {name.lower()} is already used by other "
f"datasets. Please choose a different name.")
dataset_updating_data = {}
chunk_num = req.get("chunk_num")
# modify the value of 11 parameters
# 2 parameters: embedding id and chunk method
# only if chunk_num is 0, the user can update the embedding id
if req.get("embedding_model_id"):
if chunk_num == 0:
dataset_updating_data["embd_id"] = req["embedding_model_id"]
else:
return construct_json_result(code=RetCode.DATA_ERROR,
message="You have already parsed the document in this "
"dataset, so you cannot change the embedding "
"model.")
# only if chunk_num is 0, the user can update the chunk_method
if "chunk_method" in req:
type_value = req["chunk_method"]
if is_illegal_value_for_enum(type_value, ParserType):
return construct_json_result(message=f"Illegal value {type_value} for 'chunk_method' field.",
code=RetCode.DATA_ERROR)
if chunk_num != 0:
return construct_json_result(code=RetCode.DATA_ERROR, message="You have already parsed the document "
"in this dataset, so you cannot "
"change the chunk method.")
dataset_updating_data["parser_id"] = req["chunk_method"]
# convert the photo parameter to avatar
if req.get("photo"):
dataset_updating_data["avatar"] = req["photo"]
# layout_recognize
if "layout_recognize" in req:
if "parser_config" not in dataset_updating_data:
dataset_updating_data['parser_config'] = {}
dataset_updating_data['parser_config']['layout_recognize'] = req['layout_recognize']
# TODO: updating use_raptor needs to construct a class
# 6 parameters
for key in ["name", "language", "description", "permission", "id", "token_num"]:
if key in req:
dataset_updating_data[key] = req.get(key)
# update
if not KnowledgebaseService.update_by_id(dataset.id, dataset_updating_data):
return construct_json_result(code=RetCode.OPERATING_ERROR, message="Failed to update! "
"Please check the status of RAGFlow "
"server and try again!")
exist, dataset = KnowledgebaseService.get_by_id(dataset.id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR, message="Failed to get the dataset "
"using the dataset ID.")
return construct_json_result(data=dataset.to_json(), code=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# --------------------------------content management ----------------------------------------------
# ----------------------------upload files-----------------------------------------------------
@manager.route("/<dataset_id>/documents/", methods=["POST"])
@login_required
def upload_documents(dataset_id):
# no files
if not request.files:
return construct_json_result(
message="There is no file!", code=RetCode.ARGUMENT_ERROR)
# the number of uploading files exceeds the limit
file_objs = request.files.getlist("file")
num_file_objs = len(file_objs)
if num_file_objs > MAXIMUM_OF_UPLOADING_FILES:
return construct_json_result(code=RetCode.DATA_ERROR, message=f"You try to upload {num_file_objs} files, "
f"which exceeds the maximum number of uploading files: {MAXIMUM_OF_UPLOADING_FILES}")
# no dataset
exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(message="Can't find this dataset", code=RetCode.DATA_ERROR)
for file_obj in file_objs:
file_name = file_obj.filename
# no name
if not file_name:
return construct_json_result(
message="There is a file without name!", code=RetCode.ARGUMENT_ERROR)
# TODO: support remote files
if file_name.startswith(("http://", "https://")):
return construct_json_result(code=RetCode.ARGUMENT_ERROR, message="Remote files are not supported yet.")
# get the root_folder
root_folder = FileService.get_root_folder(current_user.id)
# get the id of the root_folder
parent_file_id = root_folder["id"] # document id
# this is for the new user, create '.knowledgebase' file
FileService.init_knowledgebase_docs(parent_file_id, current_user.id)
# go inside this folder, get the kb_root_folder
kb_root_folder = FileService.get_kb_folder(current_user.id)
# link the file management to the kb_folder
kb_folder = FileService.new_a_file_from_kb(dataset.tenant_id, dataset.name, kb_root_folder["id"])
# collect all the errors
err = []
MAX_FILE_NUM_PER_USER = int(os.environ.get("MAX_FILE_NUM_PER_USER", 0))
uploaded_docs_json = []
for file in file_objs:
try:
# TODO: get this value from the database as some tenants have this limit while others don't
if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(dataset.tenant_id) >= MAX_FILE_NUM_PER_USER:
return construct_json_result(code=RetCode.DATA_ERROR,
message="Exceed the maximum file number of a free user!")
# deal with the duplicate name
filename = duplicate_name(
DocumentService.query,
name=file.filename,
kb_id=dataset.id)
# deal with the unsupported type
filetype = filename_type(filename)
if filetype == FileType.OTHER.value:
return construct_json_result(code=RetCode.DATA_ERROR,
message="This type of file has not been supported yet!")
# upload to the minio
location = filename
while STORAGE_IMPL.obj_exist(dataset_id, location):
location += "_"
blob = file.read()
# the content is empty, raising a warning
if blob == b'':
warnings.warn(f"[WARNING]: The content of the file {filename} is empty.")
STORAGE_IMPL.put(dataset_id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": dataset.id,
"parser_id": dataset.parser_id,
"parser_config": dataset.parser_config,
"created_by": current_user.id,
"type": filetype,
"name": filename,
"location": location,
"size": len(blob),
"thumbnail": thumbnail(filename, blob)
}
if doc["type"] == FileType.VISUAL:
doc["parser_id"] = ParserType.PICTURE.value
if doc["type"] == FileType.AURAL:
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], dataset.tenant_id)
uploaded_docs_json.append(doc)
except Exception as e:
err.append(file.filename + ": " + str(e))
if err:
# return all the errors
return construct_json_result(message="\n".join(err), code=RetCode.SERVER_ERROR)
# success
return construct_json_result(data=uploaded_docs_json, code=RetCode.SUCCESS)
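The collision handling above (appending `_` to the storage key until it is free) can be exercised in isolation. A minimal sketch with an in-memory stand-in for `STORAGE_IMPL` (the `existing` set and `obj_exist`/`unique_location` names are illustrative, not part of this module):

```python
# In-memory stand-in for the object store consulted by upload_documents.
existing = {"report.pdf", "report.pdf_"}

def obj_exist(bucket, key):
    # Mimics STORAGE_IMPL.obj_exist(dataset_id, location).
    return key in existing

def unique_location(bucket, filename):
    # Same loop as in upload_documents: append "_" until the key is unused.
    location = filename
    while obj_exist(bucket, location):
        location += "_"
    return location

print(unique_location("kb1", "report.pdf"))  # report.pdf__
print(unique_location("kb1", "new.pdf"))     # new.pdf
```

Note this scheme only guarantees uniqueness within one bucket (here, one dataset), which matches how the endpoint scopes storage keys by `dataset_id`.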
# ----------------------------delete a file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["DELETE"])
@login_required
def delete_document(document_id, dataset_id): # string
# get the root folder
root_folder = FileService.get_root_folder(current_user.id)
# parent file's id
parent_file_id = root_folder["id"]
# consider the new user
FileService.init_knowledgebase_docs(parent_file_id, current_user.id)
# collect any errors that may occur
errors = ""
try:
# whether there is this document
exist, doc = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"Document {document_id} not found!", code=RetCode.DATA_ERROR)
# whether this doc is authorized by this tenant
tenant_id = DocumentService.get_tenant_id(document_id)
if not tenant_id:
return construct_json_result(
message=f"You cannot delete this document {document_id} due to the authorization"
f" reason!", code=RetCode.AUTHENTICATION_ERROR)
# get the doc's id and location
real_dataset_id, location = File2DocumentService.get_minio_address(doc_id=document_id)
if real_dataset_id != dataset_id:
return construct_json_result(message=f"The document {document_id} is not in the dataset: {dataset_id}, "
f"but in the dataset: {real_dataset_id}.", code=RetCode.ARGUMENT_ERROR)
# there is an issue when removing
if not DocumentService.remove_document(doc, tenant_id):
return construct_json_result(
message="There was an error during the document removal process. Please check the status of the "
"RAGFlow server and try the removal again.", code=RetCode.OPERATING_ERROR)
# fetch the File2Document record associated with the provided document ID.
file_to_doc = File2DocumentService.get_by_document_id(document_id)
# delete the associated File record.
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == file_to_doc[0].file_id])
# delete the File2Document record itself using the document ID. This removes the
# association between the document and the file after the File record has been deleted.
File2DocumentService.delete_by_document_id(document_id)
# delete it from minio
STORAGE_IMPL.rm(dataset_id, location)
except Exception as e:
errors += str(e)
if errors:
return construct_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
return construct_json_result(data=True, code=RetCode.SUCCESS)
# ----------------------------list files-----------------------------------------------------
@manager.route('/<dataset_id>/documents/', methods=['GET'])
@login_required
def list_documents(dataset_id):
if not dataset_id:
return construct_json_result(
data=False, message="Lack of 'dataset_id'", code=RetCode.ARGUMENT_ERROR)
# searching keywords
keywords = request.args.get("keywords", "")
offset = request.args.get("offset", 0)
count = request.args.get("count", -1)
order_by = request.args.get("order_by", "create_time")
descend = request.args.get("descend", True)
try:
docs, total = DocumentService.list_documents_in_dataset(dataset_id, int(offset), int(count), order_by,
descend, keywords)
return construct_json_result(data={"total": total, "docs": docs}, code=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# ----------------------------update: enable rename-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["PUT"])
@login_required
def update_document(dataset_id, document_id):
req = request.json
try:
legal_parameters = set()
legal_parameters.add("name")
legal_parameters.add("enable")
legal_parameters.add("template_type")
for key in req.keys():
if key not in legal_parameters:
return construct_json_result(code=RetCode.ARGUMENT_ERROR, message=f"{key} is an illegal parameter.")
# The request body cannot be empty
if not req:
return construct_json_result(
code=RetCode.DATA_ERROR,
message="Please input at least one parameter that you want to update!")
# Check whether there is this dataset
exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR, message=f"This dataset {dataset_id} cannot be found!")
# The document does not exist
exist, document = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document {document_id} cannot be found!",
code=RetCode.ARGUMENT_ERROR)
# Deal with the different keys
updating_data = {}
if "name" in req:
new_name = req["name"]
updating_data["name"] = new_name
# Check whether the new_name is suitable
# 1. no name value
if not new_name:
return construct_json_result(code=RetCode.DATA_ERROR, message="There is no new name.")
# 2. In case that there's space in the head or the tail
new_name = new_name.strip()
# 3. Check whether the new_name has the same extension of file as before
if pathlib.Path(new_name.lower()).suffix != pathlib.Path(
document.name.lower()).suffix:
return construct_json_result(
data=False,
message="The extension of file cannot be changed",
code=RetCode.ARGUMENT_ERROR)
# 4. Check whether the new name has already been occupied by other file
for d in DocumentService.query(name=new_name, kb_id=document.kb_id):
if d.name == new_name:
return construct_json_result(
message="Duplicated document name in the same dataset.",
code=RetCode.ARGUMENT_ERROR)
if "enable" in req:
enable_value = req["enable"]
if is_illegal_value_for_enum(enable_value, StatusEnum):
return construct_json_result(message=f"Illegal value {enable_value} for 'enable' field.",
code=RetCode.DATA_ERROR)
updating_data["status"] = enable_value
# TODO: Chunk-method - update parameters inside the json object parser_config
if "template_type" in req:
type_value = req["template_type"]
if is_illegal_value_for_enum(type_value, ParserType):
return construct_json_result(message=f"Illegal value {type_value} for 'template_type' field.",
code=RetCode.DATA_ERROR)
updating_data["parser_id"] = req["template_type"]
# The process of updating
if not DocumentService.update_by_id(document_id, updating_data):
return construct_json_result(
code=RetCode.OPERATING_ERROR,
message="Failed to update document in the database! "
"Please check the status of RAGFlow server and try again!")
# name part: file service
if "name" in req:
# Get file by document id
file_information = File2DocumentService.get_by_document_id(document_id)
if file_information:
exist, file = FileService.get_by_id(file_information[0].file_id)
FileService.update_by_id(file.id, {"name": req["name"]})
exist, document = DocumentService.get_by_id(document_id)
# Success
return construct_json_result(data=document.to_json(), message="Success", code=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# Helper method to judge whether it's an illegal value
def is_illegal_value_for_enum(value, enum_class):
return value not in enum_class.__members__.values()
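The helper hinges on a subtle point of Python's `enum` module: for a plain `Enum`, a raw string never equals an enum member, so membership must be tested against the members' `.value` attributes. A standalone sketch (the `Color` enum is hypothetical, standing in for `StatusEnum`/`ParserType`):

```python
from enum import Enum

# Hypothetical enum standing in for StatusEnum / ParserType.
class Color(Enum):
    RED = "1"
    BLUE = "2"

def is_illegal_value_for_enum(value, enum_class):
    # Compare against the raw .value of each member, not the members themselves:
    # for a plain Enum, "1" == Color.RED is False, but "1" == Color.RED.value is True.
    return value not in [member.value for member in enum_class]

print(is_illegal_value_for_enum("1", Color))  # False: "1" is Color.RED.value
print(is_illegal_value_for_enum("9", Color))  # True: no member has value "9"
```

Testing against `enum_class.__members__.values()` instead would compare the raw string to the members themselves and wrongly flag every value as illegal for a plain `Enum` (it only happens to work for `StrEnum` subclasses).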
# ----------------------------download a file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["GET"])
@login_required
def download_document(dataset_id, document_id):
try:
# Check whether there is this dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
# Check whether there is this document
exist, document = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document '{document_id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
# The process of downloading
doc_id, doc_location = File2DocumentService.get_minio_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
file = BytesIO(file_stream)
# Use send_file with a proper filename and MIME type
return send_file(
file,
as_attachment=True,
download_name=document.name,
mimetype='application/octet-stream' # Set a default MIME type
)
# Error
except Exception as e:
return construct_error_response(e)
# ----------------------------start parsing a document-----------------------------------------------------
# helper method for parsing
# callback method
def doc_parse_callback(doc_id, prog=None, msg=""):
cancel = DocumentService.do_cancel(doc_id)
if cancel:
raise Exception("The parsing process has been cancelled!")
"""
def doc_parse(binary, doc_name, parser_name, tenant_id, doc_id):
match parser_name:
case "book":
book.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "laws":
laws.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "manual":
manual.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "naive":
# It's the mode by default, which is general in the front-end
naive.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "one":
one.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "paper":
paper.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "picture":
picture.chunk(doc_name, binary=binary, tenant_id=tenant_id, lang="Chinese",
callback=partial(doc_parse_callback, doc_id))
case "presentation":
presentation.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "qa":
qa.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "resume":
resume.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "table":
table.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "audio":
audio.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "email":
email.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case _:
return False
return True
"""
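The commented-out dispatcher above maps a parser name to a chunker via a `match` statement (Python 3.10+). The same dispatch can be sketched as a dictionary lookup, which also works on older interpreters; the chunker callables below are stand-ins for the real `rag.app` modules (`book`, `naive`, ...), not the actual implementations:

```python
# Stand-in chunkers; in the real code these are rag.app modules (book, naive, ...).
def book_chunk(doc_name, binary=None, callback=None):
    return f"book:{doc_name}"

def naive_chunk(doc_name, binary=None, callback=None):
    # "naive" is the default mode, shown as "general" in the front-end.
    return f"naive:{doc_name}"

PARSER_DISPATCH = {
    "book": book_chunk,
    "naive": naive_chunk,
}

def doc_parse(binary, doc_name, parser_name):
    # Returns False for an unsupported parser id, True otherwise,
    # mirroring the contract of the commented-out doc_parse above.
    chunker = PARSER_DISPATCH.get(parser_name)
    if chunker is None:
        return False
    chunker(doc_name, binary=binary)
    return True

print(doc_parse(b"", "a.txt", "book"))   # True
print(doc_parse(b"", "a.txt", "video"))  # False
```

A dict keeps the name-to-chunker mapping data-driven, so adding a parser is one entry rather than another `case` arm.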
@manager.route("/<dataset_id>/documents/<document_id>/status", methods=["POST"])
@login_required
def parse_document(dataset_id, document_id):
try:
# valid dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
return parsing_document_internal(document_id)
except Exception as e:
return construct_error_response(e)
# ----------------------------start parsing documents-----------------------------------------------------
@manager.route("/<dataset_id>/documents/status", methods=["POST"])
@login_required
def parse_documents(dataset_id):
doc_ids = request.json["doc_ids"]
try:
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
# two conditions
if not doc_ids:
# documents inside the dataset
docs, total = DocumentService.list_documents_in_dataset(dataset_id, 0, -1, "create_time",
True, "")
doc_ids = [doc["id"] for doc in docs]
message = ""
# for loop
for id in doc_ids:
res = parsing_document_internal(id)
res_body = res.json
if res_body["code"] == RetCode.SUCCESS:
message += res_body["message"]
else:
return res
return construct_json_result(data=True, code=RetCode.SUCCESS, message=message)
except Exception as e:
return construct_error_response(e)
# helper method for parsing the document
def parsing_document_internal(id):
message = ""
try:
# Check whether there is this document
exist, document = DocumentService.get_by_id(id)
if not exist:
return construct_json_result(message=f"This document '{id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
tenant_id = DocumentService.get_tenant_id(id)
if not tenant_id:
return construct_json_result(message="Tenant not found!", code=RetCode.AUTHENTICATION_ERROR)
info = {"run": "1", "progress": 0}
info["progress_msg"] = ""
info["chunk_num"] = 0
info["token_num"] = 0
DocumentService.update_by_id(id, info)
ELASTICSEARCH.deleteByQuery(Q("match", doc_id=id), idxnm=search.index_name(tenant_id))
_, doc_attributes = DocumentService.get_by_id(id)
doc_attributes = doc_attributes.to_dict()
doc_id = doc_attributes["id"]
bucket, doc_name = File2DocumentService.get_minio_address(doc_id=doc_id)
binary = STORAGE_IMPL.get(bucket, doc_name)
parser_name = doc_attributes["parser_id"]
if binary:
res = doc_parse(binary, doc_name, parser_name, tenant_id, doc_id)
if res is False:
message += f"The parser id: {parser_name} of the document {doc_id} is not supported; "
else:
message += f"Empty data in the document: {doc_name}; "
# failed in parsing
if doc_attributes["status"] == TaskStatus.FAIL.value:
message += f"Failed in parsing the document: {doc_id}; "
return construct_json_result(code=RetCode.SUCCESS, message=message)
except Exception as e:
return construct_error_response(e)
# ----------------------------stop parsing a doc-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>/status", methods=["DELETE"])
@login_required
def stop_parsing_document(dataset_id, document_id):
try:
# valid dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
return stop_parsing_document_internal(document_id)
except Exception as e:
return construct_error_response(e)
# ----------------------------stop parsing docs-----------------------------------------------------
@manager.route("/<dataset_id>/documents/status", methods=["DELETE"])
@login_required
def stop_parsing_documents(dataset_id):
doc_ids = request.json["doc_ids"]
try:
# valid dataset?
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
if not doc_ids:
# documents inside the dataset
docs, total = DocumentService.list_documents_in_dataset(dataset_id, 0, -1, "create_time",
True, "")
doc_ids = [doc["id"] for doc in docs]
message = ""
# for loop
for id in doc_ids:
res = stop_parsing_document_internal(id)
res_body = res.json
if res_body["code"] == RetCode.SUCCESS:
message += res_body["message"]
else:
return res
return construct_json_result(data=True, code=RetCode.SUCCESS, message=message)
except Exception as e:
return construct_error_response(e)
# Helper method
def stop_parsing_document_internal(document_id):
try:
# valid doc?
exist, doc = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document '{document_id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
doc_attributes = doc.to_dict()
# only when the status is parsing, we need to stop it
if doc_attributes["status"] == TaskStatus.RUNNING.value:
tenant_id = DocumentService.get_tenant_id(document_id)
if not tenant_id:
return construct_json_result(message="Tenant not found!", code=RetCode.AUTHENTICATION_ERROR)
# update successfully?
if not DocumentService.update_by_id(document_id, {"status": "2"}): # cancel
return construct_json_result(
code=RetCode.OPERATING_ERROR,
message="There was an error during the stopping parsing the document process. "
"Please check the status of the RAGFlow server and try the update again."
)
_, doc_attributes = DocumentService.get_by_id(document_id)
doc_attributes = doc_attributes.to_dict()
# failed in stop parsing
if doc_attributes["status"] == TaskStatus.RUNNING.value:
return construct_json_result(message=f"Failed to stop parsing the document: {document_id}; ", code=RetCode.SUCCESS)
return construct_json_result(code=RetCode.SUCCESS, message="")
except Exception as e:
return construct_error_response(e)
# ----------------------------show the status of the file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>/status", methods=["GET"])
@login_required
def show_parsing_status(dataset_id, document_id):
try:
# valid dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset: '{dataset_id}' cannot be found!")
# valid document
exist, _ = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This document: '{document_id}' is not a valid document.")
_, doc = DocumentService.get_by_id(document_id) # get doc object
doc_attributes = doc.to_dict()
return construct_json_result(
data={"progress": doc_attributes["progress"], "status": TaskStatus(doc_attributes["status"]).name},
code=RetCode.SUCCESS
)
except Exception as e:
return construct_error_response(e)
# ----------------------------list the chunks of the file-----------------------------------------------------
# ----------------------------delete the chunk-----------------------------------------------------
# ----------------------------edit the status of the chunk-----------------------------------------------------
# ----------------------------insert a new chunk-----------------------------------------------------
# ----------------------------upload a file-----------------------------------------------------
# ----------------------------get a specific chunk-----------------------------------------------------
# ----------------------------retrieval test-----------------------------------------------------

View File

@@ -20,27 +20,27 @@ from api.db.services.dialog_service import DialogService
 from api.db import StatusEnum
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.db.services.user_service import TenantService, UserTenantService
-from api.settings import RetCode
+from api import settings
 from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
 from api.utils import get_uuid
 from api.utils.api_utils import get_json_result
-@manager.route('/set', methods=['POST'])
+@manager.route('/set', methods=['POST'])  # noqa: F821
 @login_required
 def set_dialog():
     req = request.json
     dialog_id = req.get("dialog_id")
     name = req.get("name", "New Dialog")
-    description = req.get("description", "A helpful Dialog")
+    description = req.get("description", "A helpful dialog")
     icon = req.get("icon", "")
     top_n = req.get("top_n", 6)
     top_k = req.get("top_k", 1024)
     rerank_id = req.get("rerank_id", "")
-    if not rerank_id: req["rerank_id"] = ""
+    if not rerank_id:
+        req["rerank_id"] = ""
     similarity_threshold = req.get("similarity_threshold", 0.1)
     vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
-    if vector_similarity_weight is None: vector_similarity_weight = 0.3
     llm_setting = req.get("llm_setting", {})
     default_prompt = {
         "system": """你是一个智能助手,请总结知识库的内容来回答问题,请列举知识库中的数据详细回答。当所有知识库内容都与问题无关时,你的回答必须包括“知识库中未找到您要的答案!”这句话。回答需要考虑聊天历史。
@@ -68,17 +68,23 @@ def set_dialog():
             continue
         if prompt_config["system"].find("{%s}" % p["key"]) < 0:
             return get_data_error_result(
-                retmsg="Parameter '{}' is not used".format(p["key"]))
+                message="Parameter '{}' is not used".format(p["key"]))
     try:
         e, tenant = TenantService.get_by_id(current_user.id)
         if not e:
-            return get_data_error_result(retmsg="Tenant not found!")
+            return get_data_error_result(message="Tenant not found!")
+        kbs = KnowledgebaseService.get_by_ids(req.get("kb_ids"))
+        embd_count = len(set([kb.embd_id for kb in kbs]))
+        if embd_count != 1:
+            return get_data_error_result(message=f'Datasets use different embedding models: {[kb.embd_id for kb in kbs]}"')
         llm_id = req.get("llm_id", tenant.llm_id)
         if not dialog_id:
             if not req.get("kb_ids"):
                 return get_data_error_result(
-                    retmsg="Fail! Please select knowledgebase!")
+                    message="Fail! Please select knowledgebase!")
             dia = {
                 "id": get_uuid(),
                 "tenant_id": current_user.id,
@@ -96,35 +102,33 @@ def set_dialog():
                 "icon": icon
             }
             if not DialogService.save(**dia):
-                return get_data_error_result(retmsg="Fail to new a dialog!")
-            e, dia = DialogService.get_by_id(dia["id"])
-            if not e:
-                return get_data_error_result(retmsg="Fail to new a dialog!")
-            return get_json_result(data=dia.to_json())
+                return get_data_error_result(message="Fail to new a dialog!")
+            return get_json_result(data=dia)
         else:
             del req["dialog_id"]
             if "kb_names" in req:
                 del req["kb_names"]
             if not DialogService.update_by_id(dialog_id, req):
-                return get_data_error_result(retmsg="Dialog not found!")
+                return get_data_error_result(message="Dialog not found!")
             e, dia = DialogService.get_by_id(dialog_id)
             if not e:
-                return get_data_error_result(retmsg="Fail to update a dialog!")
+                return get_data_error_result(message="Fail to update a dialog!")
             dia = dia.to_dict()
+            dia.update(req)
             dia["kb_ids"], dia["kb_names"] = get_kb_names(dia["kb_ids"])
             return get_json_result(data=dia)
     except Exception as e:
         return server_error_response(e)
-@manager.route('/get', methods=['GET'])
+@manager.route('/get', methods=['GET'])  # noqa: F821
 @login_required
 def get():
     dialog_id = request.args["dialog_id"]
     try:
         e, dia = DialogService.get_by_id(dialog_id)
         if not e:
-            return get_data_error_result(retmsg="Dialog not found!")
+            return get_data_error_result(message="Dialog not found!")
         dia = dia.to_dict()
         dia["kb_ids"], dia["kb_names"] = get_kb_names(dia["kb_ids"])
         return get_json_result(data=dia)
@@ -143,7 +147,7 @@ def get_kb_names(kb_ids):
     return ids, nms
-@manager.route('/list', methods=['GET'])
+@manager.route('/list', methods=['GET'])  # noqa: F821
 @login_required
 def list_dialogs():
     try:
@@ -160,7 +164,7 @@ def list_dialogs():
         return server_error_response(e)
-@manager.route('/rm', methods=['POST'])
+@manager.route('/rm', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("dialog_ids")
 def rm():
@@ -174,8 +178,8 @@ def rm():
             break
         else:
             return get_json_result(
-                data=False, retmsg=f'Only owner of dialog authorized for this operation.',
-                retcode=RetCode.OPERATING_ERROR)
+                data=False, message='Only owner of dialog authorized for this operation.',
+                code=settings.RetCode.OPERATING_ERROR)
         dialog_list.append({"id": id,"status":StatusEnum.INVALID.value})
     DialogService.update_many_by_id(dialog_list)
     return get_json_result(data=True)

View File

@@ -13,63 +13,59 @@
 # See the License for the specific language governing permissions and
 # limitations under the License
 #
-import datetime
-import hashlib
 import json
-import os
+import os.path
 import pathlib
 import re
-import traceback
-from concurrent.futures import ThreadPoolExecutor
-from copy import deepcopy
-from io import BytesIO

 import flask
-from elasticsearch_dsl import Q
 from flask import request
 from flask_login import login_required, current_user

-from api.db.db_models import Task, File
-from api.db.services.dialog_service import DialogService, ConversationService
-from api.db import FileType, TaskStatus, ParserType, FileSource
+from deepdoc.parser.html_parser import RAGFlowHtmlParser
+from rag.nlp import search
+from api.db import FileType, TaskStatus, ParserType, FileSource
+from api.db.db_models import File, Task
 from api.db.services.file2document_service import File2DocumentService
 from api.db.services.file_service import FileService
-from api.db.services.llm_service import LLMBundle
-from api.db.services.task_service import TaskService, queue_tasks
-from api.db.services.user_service import TenantService, UserTenantService
-from graphrag.mind_map_extractor import MindMapExtractor
-from rag.app import naive
-from rag.nlp import search
-from rag.utils.es_conn import ELASTICSEARCH
+from api.db.services.task_service import queue_tasks
+from api.db.services.user_service import UserTenantService
 from api.db.services import duplicate_name
 from api.db.services.knowledgebase_service import KnowledgebaseService
-from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
-from api.utils import get_uuid
-from api.db import FileType, TaskStatus, ParserType, FileSource, LLMType
+from api.db.services.task_service import TaskService
 from api.db.services.document_service import DocumentService, doc_upload_and_parse
-from api.settings import RetCode, stat_logger
+from api.utils.api_utils import (
+    server_error_response,
+    get_data_error_result,
+    validate_request,
+)
+from api.utils import get_uuid
+from api import settings
 from api.utils.api_utils import get_json_result
 from rag.utils.storage_factory import STORAGE_IMPL
 from api.utils.file_utils import filename_type, thumbnail, get_project_base_directory
 from api.utils.web_utils import html2pdf, is_valid_url
+from api.constants import IMG_BASE64_PREFIX
-@manager.route('/upload', methods=['POST'])
+@manager.route('/upload', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("kb_id")
 def upload():
     kb_id = request.form.get("kb_id")
     if not kb_id:
         return get_json_result(
-            data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     if 'file' not in request.files:
         return get_json_result(
-            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)

     file_objs = request.files.getlist('file')
     for file_obj in file_objs:
         if file_obj.filename == '':
             return get_json_result(
-                data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
+                data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)

     e, kb = KnowledgebaseService.get_by_id(kb_id)
     if not e:
@@ -78,29 +74,30 @@ def upload():
     err, _ = FileService.upload_document(kb, file_objs, current_user.id)
     if err:
         return get_json_result(
-            data=False, retmsg="\n".join(err), retcode=RetCode.SERVER_ERROR)
+            data=False, message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
     return get_json_result(data=True)
-@manager.route('/web_crawl', methods=['POST'])
+@manager.route('/web_crawl', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("kb_id", "name", "url")
 def web_crawl():
     kb_id = request.form.get("kb_id")
     if not kb_id:
         return get_json_result(
-            data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     name = request.form.get("name")
     url = request.form.get("url")
     if not is_valid_url(url):
         return get_json_result(
-            data=False, retmsg='The URL format is invalid', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='The URL format is invalid', code=settings.RetCode.ARGUMENT_ERROR)
     e, kb = KnowledgebaseService.get_by_id(kb_id)
     if not e:
         raise LookupError("Can't find this knowledgebase!")

     blob = html2pdf(url)
-    if not blob: return server_error_response(ValueError("Download failure."))
+    if not blob:
+        return server_error_response(ValueError("Download failure."))

     root_folder = FileService.get_root_folder(current_user.id)
     pf_id = root_folder["id"]
@@ -139,6 +136,8 @@ def web_crawl():
             doc["parser_id"] = ParserType.AUDIO.value
         if re.search(r"\.(ppt|pptx|pages)$", filename):
             doc["parser_id"] = ParserType.PRESENTATION.value
+        if re.search(r"\.(eml)$", filename):
+            doc["parser_id"] = ParserType.EMAIL.value
         DocumentService.insert(doc)
         FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
     except Exception as e:
@@ -146,7 +145,7 @@ def web_crawl():
     return get_json_result(data=True)
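The `web_crawl` hunk above extends the extension-based parser dispatch with `.eml` files. A minimal sketch of that dispatch logic, where the string parser ids are hypothetical stand-ins for the `ParserType` enum values used in the real code:

```python
import re

# Hypothetical pattern -> parser-id table standing in for the ParserType
# checks in web_crawl(); the real values come from api.db.ParserType.
EXT_PARSERS = [
    (r"\.(ppt|pptx|pages)$", "presentation"),
    (r"\.(eml)$", "email"),
]

def pick_parser(filename: str, default: str = "naive") -> str:
    """Return the parser id for the first matching extension pattern."""
    for pattern, parser_id in EXT_PARSERS:
        if re.search(pattern, filename):
            return parser_id
    return default

print(pick_parser("minutes.eml"))  # email
print(pick_parser("notes.txt"))    # naive
```

Each `if re.search(...)` in the route plays the role of one table row, so later rows win only when earlier patterns did not match.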
-@manager.route('/create', methods=['POST'])
+@manager.route('/create', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("name", "kb_id")
 def create():
@@ -154,17 +153,17 @@ def create():
     kb_id = req["kb_id"]
     if not kb_id:
         return get_json_result(
-            data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)

     try:
         e, kb = KnowledgebaseService.get_by_id(kb_id)
         if not e:
             return get_data_error_result(
-                retmsg="Can't find this knowledgebase!")
+                message="Can't find this knowledgebase!")

         if DocumentService.query(name=req["name"], kb_id=kb_id):
             return get_data_error_result(
-                retmsg="Duplicated document name in the same knowledgebase.")
+                message="Duplicated document name in the same knowledgebase.")

         doc = DocumentService.insert({
             "id": get_uuid(),
@@ -182,13 +181,13 @@ def create():
         return server_error_response(e)
-@manager.route('/list', methods=['GET'])
+@manager.route('/list', methods=['GET'])  # noqa: F821
 @login_required
 def list_docs():
     kb_id = request.args.get("kb_id")
     if not kb_id:
         return get_json_result(
-            data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     tenants = UserTenantService.query(user_id=current_user.id)
     for tenant in tenants:
         if KnowledgebaseService.query(
@@ -196,8 +195,8 @@ def list_docs():
             break
     else:
         return get_json_result(
-            data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
-            retcode=RetCode.OPERATING_ERROR)
+            data=False, message='Only owner of knowledgebase authorized for this operation.',
+            code=settings.RetCode.OPERATING_ERROR)
     keywords = request.args.get("keywords", "")

     page_number = int(request.args.get("page", 1))
@@ -207,83 +206,108 @@ def list_docs():
     try:
         docs, tol = DocumentService.get_by_kb_id(
             kb_id, page_number, items_per_page, orderby, desc, keywords)
+
+        for doc_item in docs:
+            if doc_item['thumbnail'] and not doc_item['thumbnail'].startswith(IMG_BASE64_PREFIX):
+                doc_item['thumbnail'] = f"/v1/document/image/{kb_id}-{doc_item['thumbnail']}"
+
         return get_json_result(data={"total": tol, "docs": docs})
     except Exception as e:
         return server_error_response(e)
-@manager.route('/infos', methods=['POST'])
+@manager.route('/infos', methods=['POST'])  # noqa: F821
+@login_required
 def docinfos():
     req = request.json
     doc_ids = req["doc_ids"]
+    for doc_id in doc_ids:
+        if not DocumentService.accessible(doc_id, current_user.id):
+            return get_json_result(
+                data=False,
+                message='No authorization.',
+                code=settings.RetCode.AUTHENTICATION_ERROR
+            )
     docs = DocumentService.get_by_ids(doc_ids)
     return get_json_result(data=list(docs.dicts()))
-@manager.route('/thumbnails', methods=['GET'])
+@manager.route('/thumbnails', methods=['GET'])  # noqa: F821
-#@login_required
+# @login_required
 def thumbnails():
     doc_ids = request.args.get("doc_ids").split(",")
     if not doc_ids:
         return get_json_result(
-            data=False, retmsg='Lack of "Document ID"', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='Lack of "Document ID"', code=settings.RetCode.ARGUMENT_ERROR)

     try:
         docs = DocumentService.get_thumbnails(doc_ids)
+
+        for doc_item in docs:
+            if doc_item['thumbnail'] and not doc_item['thumbnail'].startswith(IMG_BASE64_PREFIX):
+                doc_item['thumbnail'] = f"/v1/document/image/{doc_item['kb_id']}-{doc_item['thumbnail']}"
+
         return get_json_result(data={d["id"]: d["thumbnail"] for d in docs})
     except Exception as e:
         return server_error_response(e)
-@manager.route('/change_status', methods=['POST'])
+@manager.route('/change_status', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_id", "status")
 def change_status():
     req = request.json
     if str(req["status"]) not in ["0", "1"]:
-        get_json_result(
+        return get_json_result(
             data=False,
-            retmsg='"Status" must be either 0 or 1!',
-            retcode=RetCode.ARGUMENT_ERROR)
+            message='"Status" must be either 0 or 1!',
+            code=settings.RetCode.ARGUMENT_ERROR)
+
+    if not DocumentService.accessible(req["doc_id"], current_user.id):
+        return get_json_result(
+            data=False,
+            message='No authorization.',
+            code=settings.RetCode.AUTHENTICATION_ERROR)

     try:
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
         e, kb = KnowledgebaseService.get_by_id(doc.kb_id)
         if not e:
             return get_data_error_result(
-                retmsg="Can't find this knowledgebase!")
+                message="Can't find this knowledgebase!")

         if not DocumentService.update_by_id(
                 req["doc_id"], {"status": str(req["status"])}):
             return get_data_error_result(
-                retmsg="Database error (Document update)!")
+                message="Database error (Document update)!")

-        if str(req["status"]) == "0":
-            ELASTICSEARCH.updateScriptByQuery(Q("term", doc_id=req["doc_id"]),
-                                              scripts="ctx._source.available_int=0;",
-                                              idxnm=search.index_name(
-                                                  kb.tenant_id)
-                                              )
-        else:
-            ELASTICSEARCH.updateScriptByQuery(Q("term", doc_id=req["doc_id"]),
-                                              scripts="ctx._source.available_int=1;",
-                                              idxnm=search.index_name(
-                                                  kb.tenant_id)
-                                              )
+        status = int(req["status"])
+        settings.docStoreConn.update({"doc_id": req["doc_id"]}, {"available_int": status},
+                                     search.index_name(kb.tenant_id), doc.kb_id)
         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)
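The rewritten `change_status` handler collapses the two Elasticsearch script branches into one document-store update keyed on the integer status. A small sketch of the validation plus payload construction; the helper name is illustrative, and the `(condition, new_value)` pair mirrors the first two arguments passed to `docStoreConn.update` in the diff:

```python
def build_status_update(doc_id: str, raw_status):
    """Validate a 0/1 status flag and build the (condition, new_value)
    pair for the document-store update, as change_status() now does."""
    if str(raw_status) not in ("0", "1"):
        raise ValueError('"Status" must be either 0 or 1!')
    return {"doc_id": doc_id}, {"available_int": int(raw_status)}

cond, new_value = build_status_update("doc42", "1")
print(cond, new_value)  # {'doc_id': 'doc42'} {'available_int': 1}
```

Because `available_int` is written as a plain field value, the same call path serves both enable and disable, which is what lets the `if/else` pair disappear.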
-@manager.route('/rm', methods=['POST'])
+@manager.route('/rm', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_id")
 def rm():
     req = request.json
     doc_ids = req["doc_id"]
-    if isinstance(doc_ids, str): doc_ids = [doc_ids]
+    if isinstance(doc_ids, str):
+        doc_ids = [doc_ids]
+
+    for doc_id in doc_ids:
+        if not DocumentService.accessible4deletion(doc_id, current_user.id):
+            return get_json_result(
+                data=False,
+                message='No authorization.',
+                code=settings.RetCode.AUTHENTICATION_ERROR
+            )
+
     root_folder = FileService.get_root_folder(current_user.id)
     pf_id = root_folder["id"]
     FileService.init_knowledgebase_docs(pf_id, current_user.id)
@@ -292,16 +316,17 @@ def rm():
         try:
             e, doc = DocumentService.get_by_id(doc_id)
             if not e:
-                return get_data_error_result(retmsg="Document not found!")
+                return get_data_error_result(message="Document not found!")
             tenant_id = DocumentService.get_tenant_id(doc_id)
             if not tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
+                return get_data_error_result(message="Tenant not found!")

-            b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
+            b, n = File2DocumentService.get_storage_address(doc_id=doc_id)

+            TaskService.filter_delete([Task.doc_id == doc_id])
             if not DocumentService.remove_document(doc, tenant_id):
                 return get_data_error_result(
-                    retmsg="Database error (Document removal)!")
+                    message="Database error (Document removal)!")

             f2d = File2DocumentService.get_by_document_id(doc_id)
             FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
@@ -312,37 +337,47 @@ def rm():
             errors += str(e)

     if errors:
-        return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)
+        return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
     return get_json_result(data=True)
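The `/rm` handler accepts either a single document id or a list, normalizing with the `isinstance` check shown above before the per-id authorization loop. A sketch of that normalization (the helper name is illustrative):

```python
def as_id_list(doc_ids) -> list:
    """Mirror the isinstance check in rm(): wrap a bare string in a list
    so the rest of the handler can always iterate."""
    if isinstance(doc_ids, str):
        return [doc_ids]
    return list(doc_ids)

print(as_id_list("a1"))          # ['a1']
print(as_id_list(["a1", "b2"]))  # ['a1', 'b2']
```

Checking `str` specifically (rather than iterability) matters here, since a string is itself iterable and would otherwise be exploded into characters.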
-@manager.route('/run', methods=['POST'])
+@manager.route('/run', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_ids", "run")
 def run():
     req = request.json
+    for doc_id in req["doc_ids"]:
+        if not DocumentService.accessible(doc_id, current_user.id):
+            return get_json_result(
+                data=False,
+                message='No authorization.',
+                code=settings.RetCode.AUTHENTICATION_ERROR
+            )
     try:
         for id in req["doc_ids"]:
             info = {"run": str(req["run"]), "progress": 0}
-            if str(req["run"]) == TaskStatus.RUNNING.value:
+            if str(req["run"]) == TaskStatus.RUNNING.value and req.get("delete", False):
                 info["progress_msg"] = ""
                 info["chunk_num"] = 0
                 info["token_num"] = 0
             DocumentService.update_by_id(id, info)
-            # if str(req["run"]) == TaskStatus.CANCEL.value:
             tenant_id = DocumentService.get_tenant_id(id)
             if not tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
-            ELASTICSEARCH.deleteByQuery(
-                Q("match", doc_id=id), idxnm=search.index_name(tenant_id))
+                return get_data_error_result(message="Tenant not found!")
+            e, doc = DocumentService.get_by_id(id)
+            if not e:
+                return get_data_error_result(message="Document not found!")
+            if req.get("delete", False):
+                TaskService.filter_delete([Task.doc_id == id])
+                if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
+                    settings.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), doc.kb_id)

             if str(req["run"]) == TaskStatus.RUNNING.value:
+                TaskService.filter_delete([Task.doc_id == id])
                 e, doc = DocumentService.get_by_id(id)
                 doc = doc.to_dict()
                 doc["tenant_id"] = tenant_id
-                bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
+                bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
                 queue_tasks(doc, bucket, name)

         return get_json_result(data=True)
@@ -350,30 +385,36 @@ def run():
         return server_error_response(e)
-@manager.route('/rename', methods=['POST'])
+@manager.route('/rename', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_id", "name")
 def rename():
     req = request.json
+    if not DocumentService.accessible(req["doc_id"], current_user.id):
+        return get_json_result(
+            data=False,
+            message='No authorization.',
+            code=settings.RetCode.AUTHENTICATION_ERROR
+        )
     try:
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
         if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
                 doc.name.lower()).suffix:
             return get_json_result(
                 data=False,
-                retmsg="The extension of file can't be changed",
-                retcode=RetCode.ARGUMENT_ERROR)
+                message="The extension of file can't be changed",
+                code=settings.RetCode.ARGUMENT_ERROR)
         for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
             if d.name == req["name"]:
                 return get_data_error_result(
-                    retmsg="Duplicated document name in the same knowledgebase.")
+                    message="Duplicated document name in the same knowledgebase.")

         if not DocumentService.update_by_id(
                 req["doc_id"], {"name": req["name"]}):
             return get_data_error_result(
-                retmsg="Database error (Document rename)!")
+                message="Database error (Document rename)!")

         informs = File2DocumentService.get_by_document_id(req["doc_id"])
         if informs:
@@ -385,15 +426,15 @@ def rename():
         return server_error_response(e)
-@manager.route('/get/<doc_id>', methods=['GET'])
+@manager.route('/get/<doc_id>', methods=['GET'])  # noqa: F821
 # @login_required
 def get(doc_id):
     try:
         e, doc = DocumentService.get_by_id(doc_id)
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")

-        b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
+        b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
         response = flask.make_response(STORAGE_IMPL.get(b, n))

         ext = re.search(r"\.([^.]+)$", doc.name)
@@ -410,15 +451,22 @@ def get(doc_id):
         return server_error_response(e)
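The `get` route above derives the response's content type from the trailing extension of the stored document name via `re.search(r"\.([^.]+)$", doc.name)`. A sketch of that extraction (the helper name is illustrative):

```python
import re

def file_extension(name: str):
    """Extract the trailing extension the way get() does; None when
    the name contains no dot at all."""
    m = re.search(r"\.([^.]+)$", name)
    return m.group(1) if m else None

print(file_extension("report.v2.PDF"))  # PDF
print(file_extension("README"))         # None
```

Anchoring with `$` and excluding dots from the capture means only the final segment after the last dot is returned, so multi-dot names behave as expected.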
-@manager.route('/change_parser', methods=['POST'])
+@manager.route('/change_parser', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("doc_id", "parser_id")
 def change_parser():
     req = request.json
+    if not DocumentService.accessible(req["doc_id"], current_user.id):
+        return get_json_result(
+            data=False,
+            message='No authorization.',
+            code=settings.RetCode.AUTHENTICATION_ERROR
+        )
     try:
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
         if doc.parser_id.lower() == req["parser_id"].lower():
             if "parser_config" in req:
                 if req["parser_config"] == doc.parser_config:
@@ -426,37 +474,41 @@ def change_parser():
             else:
                 return get_json_result(data=True)

-        if doc.type == FileType.VISUAL or re.search(
-                r"\.(ppt|pptx|pages)$", doc.name):
-            return get_data_error_result(retmsg="Not supported yet!")
+        if ((doc.type == FileType.VISUAL and req["parser_id"] != "picture")
+                or (re.search(
+                    r"\.(ppt|pptx|pages)$", doc.name) and req["parser_id"] != "presentation")):
+            return get_data_error_result(message="Not supported yet!")

         e = DocumentService.update_by_id(doc.id,
                                          {"parser_id": req["parser_id"], "progress": 0, "progress_msg": "",
                                           "run": TaskStatus.UNSTART.value})
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
         if "parser_config" in req:
             DocumentService.update_parser_config(doc.id, req["parser_config"])
         if doc.token_num > 0:
             e = DocumentService.increment_chunk_num(doc.id, doc.kb_id, doc.token_num * -1, doc.chunk_num * -1,
                                                     doc.process_duation * -1)
             if not e:
-                return get_data_error_result(retmsg="Document not found!")
+                return get_data_error_result(message="Document not found!")
             tenant_id = DocumentService.get_tenant_id(req["doc_id"])
             if not tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
-            ELASTICSEARCH.deleteByQuery(
-                Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))
+                return get_data_error_result(message="Tenant not found!")
+            if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
+                settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)

         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)
-@manager.route('/image/<image_id>', methods=['GET'])
+@manager.route('/image/<image_id>', methods=['GET'])  # noqa: F821
 # @login_required
 def get_image(image_id):
     try:
+        arr = image_id.split("-")
+        if len(arr) != 2:
+            return get_data_error_result(message="Image not found.")
         bkt, nm = image_id.split("-")
         response = flask.make_response(STORAGE_IMPL.get(bkt, nm))
         response.headers.set('Content-Type', 'image/JPEG')
@@ -465,20 +517,115 @@ def get_image(image_id):
         return server_error_response(e)
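`get_image` now rejects ids that do not split into exactly a bucket and an object name, instead of letting the two-value unpack raise. A sketch of that check; note that the strict two-part rule also rejects object names that themselves contain a hyphen, which a `split("-", 1)` variant would accept:

```python
def split_image_id(image_id: str):
    """Return (bucket, name) for a well-formed "bucket-name" id,
    else None, mirroring the guard added to get_image()."""
    arr = image_id.split("-")
    if len(arr) != 2:
        return None
    return arr[0], arr[1]

print(split_image_id("kb1-thumb.png"))  # ('kb1', 'thumb.png')
print(split_image_id("malformed"))      # None
```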
-@manager.route('/upload_and_parse', methods=['POST'])
+@manager.route('/upload_and_parse', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("conversation_id")
 def upload_and_parse():
     if 'file' not in request.files:
         return get_json_result(
-            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)

     file_objs = request.files.getlist('file')
     for file_obj in file_objs:
         if file_obj.filename == '':
             return get_json_result(
-                data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
+                data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)

     doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, current_user.id)

     return get_json_result(data=doc_ids)
+@manager.route('/parse', methods=['POST'])  # noqa: F821
+@login_required
+def parse():
+    url = request.json.get("url") if request.json else ""
+    if url:
+        if not is_valid_url(url):
+            return get_json_result(
+                data=False, message='The URL format is invalid', code=settings.RetCode.ARGUMENT_ERROR)
+        download_path = os.path.join(get_project_base_directory(), "logs/downloads")
+        os.makedirs(download_path, exist_ok=True)
+        from seleniumwire.webdriver import Chrome, ChromeOptions
+        options = ChromeOptions()
+        options.add_argument('--headless')
+        options.add_argument('--disable-gpu')
+        options.add_argument('--no-sandbox')
+        options.add_argument('--disable-dev-shm-usage')
+        options.add_experimental_option('prefs', {
+            'download.default_directory': download_path,
+            'download.prompt_for_download': False,
+            'download.directory_upgrade': True,
+            'safebrowsing.enabled': True
+        })
+        driver = Chrome(options=options)
+        driver.get(url)
+        res_headers = [r.response.headers for r in driver.requests if r and r.response]
+        if len(res_headers) > 1:
+            sections = RAGFlowHtmlParser().parser_txt(driver.page_source)
+            driver.quit()
+            return get_json_result(data="\n".join(sections))
+
+        class File:
+            filename: str
+            filepath: str
+
+            def __init__(self, filename, filepath):
+                self.filename = filename
+                self.filepath = filepath
+
+            def read(self):
+                with open(self.filepath, "rb") as f:
+                    return f.read()
+
+        r = re.search(r"filename=\"([^\"]+)\"", str(res_headers))
+        if not r or not r.group(1):
+            return get_json_result(
+                data=False, message="Cannot identify the downloaded file", code=settings.RetCode.ARGUMENT_ERROR)
+        f = File(r.group(1), os.path.join(download_path, r.group(1)))
+        txt = FileService.parse_docs([f], current_user.id)
+        return get_json_result(data=txt)

+    if 'file' not in request.files:
+        return get_json_result(
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
+    file_objs = request.files.getlist('file')
+    txt = FileService.parse_docs(file_objs, current_user.id)
+    return get_json_result(data=txt)
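The new `/parse` route recovers the downloaded file's name from the captured response headers with a `filename="..."` regex over their stringified form. A sketch of that extraction (the helper name is illustrative):

```python
import re

def filename_from_headers(headers_text: str):
    """Pull the first quoted filename from a Content-Disposition-style
    string, as parse() does over the stringified response headers."""
    m = re.search(r"filename=\"([^\"]+)\"", headers_text)
    return m.group(1) if m else None

hdrs = 'Content-Disposition: attachment; filename="report.pdf"'
print(filename_from_headers(hdrs))                        # report.pdf
print(filename_from_headers("Content-Type: text/html"))   # None
```

Stringifying all headers and regexing is a pragmatic shortcut; it only works for the quoted-filename form of `Content-Disposition`, not the RFC 5987 `filename*=` encoding.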
+@manager.route('/set_meta', methods=['POST'])  # noqa: F821
+@login_required
+@validate_request("doc_id", "meta")
+def set_meta():
+    req = request.json
+    if not DocumentService.accessible(req["doc_id"], current_user.id):
+        return get_json_result(
+            data=False,
+            message='No authorization.',
+            code=settings.RetCode.AUTHENTICATION_ERROR
+        )
+    try:
+        meta = json.loads(req["meta"])
+    except Exception as e:
+        return get_json_result(
+            data=False, message=f'Json syntax error: {e}', code=settings.RetCode.ARGUMENT_ERROR)
+    if not isinstance(meta, dict):
+        return get_json_result(
+            data=False, message='Meta data should be in Json map format, like {"key": "value"}', code=settings.RetCode.ARGUMENT_ERROR)

+    try:
+        e, doc = DocumentService.get_by_id(req["doc_id"])
+        if not e:
+            return get_data_error_result(message="Document not found!")
+        if not DocumentService.update_by_id(
+                req["doc_id"], {"meta_fields": meta}):
+            return get_data_error_result(
+                message="Database error (meta updates)!")
+        return get_json_result(data=True)
+    except Exception as e:
+        return server_error_response(e)
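`set_meta` accepts the metadata as a JSON string and then insists on a JSON object at the top level before writing `meta_fields`. A sketch of that two-step validation (the helper name and return convention are illustrative):

```python
import json

def parse_meta(raw: str):
    """Return (meta, error): decode the JSON string, then require a
    top-level object, mirroring the checks in set_meta()."""
    try:
        meta = json.loads(raw)
    except Exception as e:
        return None, f"Json syntax error: {e}"
    if not isinstance(meta, dict):
        return None, 'Meta data should be in Json map format, like {"key": "value"}'
    return meta, None

print(parse_meta('{"author": "J. Doe"}'))  # ({'author': 'J. Doe'}, None)
print(parse_meta('[1, 2]')[1])             # rejects non-object JSON
```

The `isinstance` check matters because `json.loads` happily returns lists, numbers, and strings, none of which can be stored as a field map.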


@@ -13,9 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License
 #
-from elasticsearch_dsl import Q

-from api.db.db_models import File2Document
 from api.db.services.file2document_service import File2DocumentService
 from api.db.services.file_service import FileService
@@ -26,13 +24,11 @@ from api.utils.api_utils import server_error_response, get_data_error_result, va
 from api.utils import get_uuid
 from api.db import FileType
 from api.db.services.document_service import DocumentService
-from api.settings import RetCode
+from api import settings
 from api.utils.api_utils import get_json_result
-from rag.nlp import search
-from rag.utils.es_conn import ELASTICSEARCH
-@manager.route('/convert', methods=['POST'])
+@manager.route('/convert', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("file_ids", "kb_ids")
 def convert():
@@ -54,13 +50,13 @@ def convert():
                 doc_id = inform.document_id
                 e, doc = DocumentService.get_by_id(doc_id)
                 if not e:
-                    return get_data_error_result(retmsg="Document not found!")
+                    return get_data_error_result(message="Document not found!")
                 tenant_id = DocumentService.get_tenant_id(doc_id)
                 if not tenant_id:
-                    return get_data_error_result(retmsg="Tenant not found!")
+                    return get_data_error_result(message="Tenant not found!")
                 if not DocumentService.remove_document(doc, tenant_id):
                     return get_data_error_result(
-                        retmsg="Database error (Document removal)!")
+                        message="Database error (Document removal)!")
             File2DocumentService.delete_by_file_id(id)

             # insert
@@ -68,16 +64,16 @@ def convert():
             e, kb = KnowledgebaseService.get_by_id(kb_id)
             if not e:
                 return get_data_error_result(
-                    retmsg="Can't find this knowledgebase!")
+                    message="Can't find this knowledgebase!")
             e, file = FileService.get_by_id(id)
             if not e:
                 return get_data_error_result(
-                    retmsg="Can't find this file!")
+                    message="Can't find this file!")
             doc = DocumentService.insert({
                 "id": get_uuid(),
                 "kb_id": kb.id,
-                "parser_id": kb.parser_id,
+                "parser_id": FileService.get_parser(file.type, file.name, kb.parser_id),
                 "parser_config": kb.parser_config,
                 "created_by": current_user.id,
                 "type": file.type,
@@ -96,7 +92,7 @@ def convert():
         return server_error_response(e)
@manager.route('/rm', methods=['POST']) @manager.route('/rm', methods=['POST']) # noqa: F821
@login_required @login_required
@validate_request("file_ids") @validate_request("file_ids")
def rm(): def rm():
@ -104,26 +100,26 @@ def rm():
file_ids = req["file_ids"] file_ids = req["file_ids"]
if not file_ids: if not file_ids:
return get_json_result( return get_json_result(
data=False, retmsg='Lack of "Files ID"', retcode=RetCode.ARGUMENT_ERROR) data=False, message='Lack of "Files ID"', code=settings.RetCode.ARGUMENT_ERROR)
try: try:
for file_id in file_ids: for file_id in file_ids:
informs = File2DocumentService.get_by_file_id(file_id) informs = File2DocumentService.get_by_file_id(file_id)
if not informs: if not informs:
return get_data_error_result(retmsg="Inform not found!") return get_data_error_result(message="Inform not found!")
for inform in informs: for inform in informs:
if not inform: if not inform:
return get_data_error_result(retmsg="Inform not found!") return get_data_error_result(message="Inform not found!")
File2DocumentService.delete_by_file_id(file_id) File2DocumentService.delete_by_file_id(file_id)
doc_id = inform.document_id doc_id = inform.document_id
e, doc = DocumentService.get_by_id(doc_id) e, doc = DocumentService.get_by_id(doc_id)
if not e: if not e:
return get_data_error_result(retmsg="Document not found!") return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id) tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id: if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!") return get_data_error_result(message="Tenant not found!")
if not DocumentService.remove_document(doc, tenant_id): if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result( return get_data_error_result(
retmsg="Database error (Document removal)!") message="Database error (Document removal)!")
return get_json_result(data=True) return get_json_result(data=True)
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)
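The sweep above renames the response-helper keywords `retmsg`/`retcode` to `message`/`code` and moves `RetCode` under the `api.settings` module. A minimal sketch of the payload shape, using hypothetical stand-ins (the real helpers live in `api/utils/api_utils.py` and carry more fields; the enum values here are assumptions):

```python
from enum import IntEnum


class RetCode(IntEnum):
    # Assumed codes for illustration; the real enum lives in api.settings.
    SUCCESS = 0
    ARGUMENT_ERROR = 101
    DATA_ERROR = 102


def get_json_result(data=True, message="success", code=RetCode.SUCCESS):
    # New-style keywords: `message`/`code` (formerly `retmsg`/`retcode`).
    return {"code": int(code), "message": message, "data": data}


def get_data_error_result(message="Sorry! Data missing!"):
    return get_json_result(data=False, message=message, code=RetCode.DATA_ERROR)


payload = get_data_error_result(message="Document not found!")
# payload -> {"code": 102, "message": "Document not found!", "data": False}
```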

View File

@@ -18,7 +18,6 @@ import pathlib
 import re
 import flask
-from elasticsearch_dsl import Q
 from flask import request
 from flask_login import login_required, current_user
@@ -29,15 +28,13 @@ from api.utils import get_uuid
 from api.db import FileType, FileSource
 from api.db.services import duplicate_name
 from api.db.services.file_service import FileService
-from api.settings import RetCode
+from api import settings
 from api.utils.api_utils import get_json_result
 from api.utils.file_utils import filename_type
-from rag.nlp import search
-from rag.utils.es_conn import ELASTICSEARCH
 from rag.utils.storage_factory import STORAGE_IMPL

-@manager.route('/upload', methods=['POST'])
+@manager.route('/upload', methods=['POST'])  # noqa: F821
 @login_required
 # @validate_request("parent_id")
 def upload():
@@ -49,24 +46,24 @@ def upload():
     if 'file' not in request.files:
         return get_json_result(
-            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
     file_objs = request.files.getlist('file')
     for file_obj in file_objs:
         if file_obj.filename == '':
             return get_json_result(
-                data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
+                data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
     file_res = []
     try:
         for file_obj in file_objs:
             e, file = FileService.get_by_id(pf_id)
             if not e:
                 return get_data_error_result(
-                    retmsg="Can't find this folder!")
+                    message="Can't find this folder!")
             MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
             if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(current_user.id) >= MAX_FILE_NUM_PER_USER:
                 return get_data_error_result(
-                    retmsg="Exceed the maximum file number of a free user!")
+                    message="Exceed the maximum file number of a free user!")

             # split file name path
             if not file_obj.filename:
@@ -85,13 +82,13 @@ def upload():
             if file_len != len_id_list:
                 e, file = FileService.get_by_id(file_id_list[len_id_list - 1])
                 if not e:
-                    return get_data_error_result(retmsg="Folder not found!")
+                    return get_data_error_result(message="Folder not found!")
                 last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names,
                                                         len_id_list)
             else:
                 e, file = FileService.get_by_id(file_id_list[len_id_list - 2])
                 if not e:
-                    return get_data_error_result(retmsg="Folder not found!")
+                    return get_data_error_result(message="Folder not found!")
                 last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names,
                                                         len_id_list)
@@ -123,7 +120,7 @@ def upload():
         return server_error_response(e)

-@manager.route('/create', methods=['POST'])
+@manager.route('/create', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("name")
 def create():
@@ -137,10 +134,10 @@ def create():
     try:
         if not FileService.is_parent_folder_exist(pf_id):
             return get_json_result(
-                data=False, retmsg="Parent Folder Doesn't Exist!", retcode=RetCode.OPERATING_ERROR)
+                data=False, message="Parent Folder Doesn't Exist!", code=settings.RetCode.OPERATING_ERROR)
         if FileService.query(name=req["name"], parent_id=pf_id):
             return get_data_error_result(
-                retmsg="Duplicated folder name in the same folder.")
+                message="Duplicated folder name in the same folder.")

         if input_file_type == FileType.FOLDER.value:
             file_type = FileType.FOLDER.value
@@ -163,7 +160,7 @@ def create():
         return server_error_response(e)

-@manager.route('/list', methods=['GET'])
+@manager.route('/list', methods=['GET'])  # noqa: F821
 @login_required
 def list_files():
     pf_id = request.args.get("parent_id")
@@ -181,21 +178,21 @@ def list_files():
     try:
         e, file = FileService.get_by_id(pf_id)
         if not e:
-            return get_data_error_result(retmsg="Folder not found!")
+            return get_data_error_result(message="Folder not found!")
         files, total = FileService.get_by_pf_id(
             current_user.id, pf_id, page_number, items_per_page, orderby, desc, keywords)
         parent_folder = FileService.get_parent_folder(pf_id)
         if not FileService.get_parent_folder(pf_id):
-            return get_json_result(retmsg="File not found!")
+            return get_json_result(message="File not found!")
         return get_json_result(data={"total": total, "files": files, "parent_folder": parent_folder.to_json()})
     except Exception as e:
         return server_error_response(e)

-@manager.route('/root_folder', methods=['GET'])
+@manager.route('/root_folder', methods=['GET'])  # noqa: F821
 @login_required
 def get_root_folder():
     try:
@@ -205,14 +202,14 @@ def get_root_folder():
         return server_error_response(e)

-@manager.route('/parent_folder', methods=['GET'])
+@manager.route('/parent_folder', methods=['GET'])  # noqa: F821
 @login_required
 def get_parent_folder():
     file_id = request.args.get("file_id")
     try:
         e, file = FileService.get_by_id(file_id)
         if not e:
-            return get_data_error_result(retmsg="Folder not found!")
+            return get_data_error_result(message="Folder not found!")
         parent_folder = FileService.get_parent_folder(file_id)
         return get_json_result(data={"parent_folder": parent_folder.to_json()})
@@ -220,14 +217,14 @@ def get_parent_folder():
         return server_error_response(e)

-@manager.route('/all_parent_folder', methods=['GET'])
+@manager.route('/all_parent_folder', methods=['GET'])  # noqa: F821
 @login_required
 def get_all_parent_folders():
     file_id = request.args.get("file_id")
     try:
         e, file = FileService.get_by_id(file_id)
         if not e:
-            return get_data_error_result(retmsg="Folder not found!")
+            return get_data_error_result(message="Folder not found!")
         parent_folders = FileService.get_all_parent_folders(file_id)
         parent_folders_res = []
@@ -238,7 +235,7 @@ def get_all_parent_folders():
         return server_error_response(e)

-@manager.route('/rm', methods=['POST'])
+@manager.route('/rm', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("file_ids")
 def rm():
@@ -248,9 +245,9 @@ def rm():
         for file_id in file_ids:
             e, file = FileService.get_by_id(file_id)
             if not e:
-                return get_data_error_result(retmsg="File or Folder not found!")
+                return get_data_error_result(message="File or Folder not found!")
             if not file.tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
+                return get_data_error_result(message="Tenant not found!")
             if file.source_type == FileSource.KNOWLEDGEBASE:
                 continue
@@ -259,13 +256,13 @@ def rm():
                 for inner_file_id in file_id_list:
                     e, file = FileService.get_by_id(inner_file_id)
                     if not e:
-                        return get_data_error_result(retmsg="File not found!")
+                        return get_data_error_result(message="File not found!")
                     STORAGE_IMPL.rm(file.parent_id, file.location)
                 FileService.delete_folder_by_pf_id(current_user.id, file_id)
             else:
                 if not FileService.delete(file):
                     return get_data_error_result(
-                        retmsg="Database error (File removal)!")
+                        message="Database error (File removal)!")

             # delete file2document
             informs = File2DocumentService.get_by_file_id(file_id)
@@ -273,13 +270,13 @@ def rm():
                 doc_id = inform.document_id
                 e, doc = DocumentService.get_by_id(doc_id)
                 if not e:
-                    return get_data_error_result(retmsg="Document not found!")
+                    return get_data_error_result(message="Document not found!")
                 tenant_id = DocumentService.get_tenant_id(doc_id)
                 if not tenant_id:
-                    return get_data_error_result(retmsg="Tenant not found!")
+                    return get_data_error_result(message="Tenant not found!")
                 if not DocumentService.remove_document(doc, tenant_id):
                     return get_data_error_result(
-                        retmsg="Database error (Document removal)!")
+                        message="Database error (Document removal)!")
                 File2DocumentService.delete_by_file_id(file_id)
         return get_json_result(data=True)
@@ -287,7 +284,7 @@ def rm():
         return server_error_response(e)

-@manager.route('/rename', methods=['POST'])
+@manager.route('/rename', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("file_id", "name")
 def rename():
@@ -295,45 +292,50 @@ def rename():
     try:
         e, file = FileService.get_by_id(req["file_id"])
         if not e:
-            return get_data_error_result(retmsg="File not found!")
+            return get_data_error_result(message="File not found!")
         if file.type != FileType.FOLDER.value \
                 and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
                     file.name.lower()).suffix:
             return get_json_result(
                 data=False,
-                retmsg="The extension of file can't be changed",
-                retcode=RetCode.ARGUMENT_ERROR)
+                message="The extension of file can't be changed",
+                code=settings.RetCode.ARGUMENT_ERROR)
         for file in FileService.query(name=req["name"], pf_id=file.parent_id):
             if file.name == req["name"]:
                 return get_data_error_result(
-                    retmsg="Duplicated file name in the same folder.")
+                    message="Duplicated file name in the same folder.")

         if not FileService.update_by_id(
                 req["file_id"], {"name": req["name"]}):
             return get_data_error_result(
-                retmsg="Database error (File rename)!")
+                message="Database error (File rename)!")

         informs = File2DocumentService.get_by_file_id(req["file_id"])
         if informs:
             if not DocumentService.update_by_id(
                     informs[0].document_id, {"name": req["name"]}):
                 return get_data_error_result(
-                    retmsg="Database error (Document rename)!")
+                    message="Database error (Document rename)!")

         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)

-@manager.route('/get/<file_id>', methods=['GET'])
-# @login_required
+@manager.route('/get/<file_id>', methods=['GET'])  # noqa: F821
+@login_required
 def get(file_id):
     try:
         e, file = FileService.get_by_id(file_id)
         if not e:
-            return get_data_error_result(retmsg="Document not found!")
+            return get_data_error_result(message="Document not found!")
-        b, n = File2DocumentService.get_minio_address(file_id=file_id)
-        response = flask.make_response(STORAGE_IMPL.get(b, n))
+        blob = STORAGE_IMPL.get(file.parent_id, file.location)
+        if not blob:
+            b, n = File2DocumentService.get_storage_address(file_id=file_id)
+            blob = STORAGE_IMPL.get(b, n)
+        response = flask.make_response(blob)

         ext = re.search(r"\.([^.]+)$", file.name)
         if ext:
             if file.type == FileType.VISUAL.value:
@@ -348,7 +350,7 @@ def get(file_id):
         return server_error_response(e)

-@manager.route('/mv', methods=['POST'])
+@manager.route('/mv', methods=['POST'])  # noqa: F821
 @login_required
 @validate_request("src_file_ids", "dest_file_id")
 def move():
@@ -359,12 +361,12 @@ def move():
         for file_id in file_ids:
             e, file = FileService.get_by_id(file_id)
             if not e:
-                return get_data_error_result(retmsg="File or Folder not found!")
+                return get_data_error_result(message="File or Folder not found!")
             if not file.tenant_id:
-                return get_data_error_result(retmsg="Tenant not found!")
+                return get_data_error_result(message="Tenant not found!")
         fe, _ = FileService.get_by_id(parent_id)
         if not fe:
-            return get_data_error_result(retmsg="Parent Folder not found!")
+            return get_data_error_result(message="Parent Folder not found!")
         FileService.move_file(file_ids, parent_id)
         return get_json_result(data=True)
     except Exception as e:
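The reworked `/get/<file_id>` handler reads the blob from the file's own bucket/location first and falls back to the storage address recorded in the file-to-document mapping only on a miss. A sketch of that fallback, with a dict-backed store and mapping standing in for `STORAGE_IMPL` and `File2DocumentService.get_storage_address` (the ids and object names below are made up):

```python
# Dict-backed stand-in for object storage: (bucket, name) -> bytes.
store = {
    ("folder-1", "report.pdf"): b"",             # empty blob -> treated as a miss
    ("kb-bucket", "doc-42.pdf"): b"%PDF-1.7 ...",
}

# Stand-in for the file -> document storage-address mapping.
doc_address = {"file-9": ("kb-bucket", "doc-42.pdf")}


def fetch_blob(file_id, parent_id, location):
    # Primary read: the file's own (bucket, name).
    blob = store.get((parent_id, location), b"")
    if not blob:
        # Fallback: resolve the address through the document link.
        b, n = doc_address[file_id]
        blob = store.get((b, n), b"")
    return blob


blob = fetch_blob("file-9", "folder-1", "report.pdf")
```

The falsy check (`if not blob`) covers both a missing key and an empty object, which is why the primary read can safely default to `b""`.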

View File

@@ -13,7 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from elasticsearch_dsl import Q
+import json
+import logging
+import os

 from flask import request
 from flask_login import login_required, current_user
@@ -22,26 +25,36 @@ from api.db.services.document_service import DocumentService
 from api.db.services.file2document_service import File2DocumentService
 from api.db.services.file_service import FileService
 from api.db.services.user_service import TenantService, UserTenantService
-from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
+from api.utils.api_utils import server_error_response, get_data_error_result, validate_request, not_allowed_parameters
-from api.utils import get_uuid, get_format_time
+from api.utils import get_uuid
-from api.db import StatusEnum, UserTenantRole, FileSource
+from api.db import StatusEnum, FileSource
 from api.db.services.knowledgebase_service import KnowledgebaseService
-from api.db.db_models import Knowledgebase, File
+from api.db.db_models import File
-from api.settings import stat_logger, RetCode
 from api.utils.api_utils import get_json_result
+from api import settings
 from rag.nlp import search
-from rag.utils.es_conn import ELASTICSEARCH
+from api.constants import DATASET_NAME_LIMIT
+from rag.settings import PAGERANK_FLD

-@manager.route('/create', methods=['post'])
+@manager.route('/create', methods=['post'])  # noqa: F821
 @login_required
 @validate_request("name")
 def create():
     req = request.json
-    req["name"] = req["name"].strip()
-    req["name"] = duplicate_name(
+    dataset_name = req["name"]
+    if not isinstance(dataset_name, str):
+        return get_data_error_result(message="Dataset name must be string.")
+    if dataset_name == "":
+        return get_data_error_result(message="Dataset name can't be empty.")
+    if len(dataset_name) >= DATASET_NAME_LIMIT:
+        return get_data_error_result(
+            message=f"Dataset name length is {len(dataset_name)} which is large than {DATASET_NAME_LIMIT}")
+    dataset_name = dataset_name.strip()
+    dataset_name = duplicate_name(
         KnowledgebaseService.query,
-        name=req["name"],
+        name=dataset_name,
         tenant_id=current_user.id,
         status=StatusEnum.VALID.value)
     try:
@@ -50,7 +63,7 @@ def create():
         req["created_by"] = current_user.id
         e, t = TenantService.get_by_id(current_user.id)
         if not e:
-            return get_data_error_result(retmsg="Tenant not found.")
+            return get_data_error_result(message="Tenant not found.")
         req["embd_id"] = t.embd_id
         if not KnowledgebaseService.save(**req):
             return get_data_error_result()
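The `/create` dataset-name validation above can be restated as a standalone checker. The limit value here is an assumption; the real one comes from `api.constants.DATASET_NAME_LIMIT`:

```python
DATASET_NAME_LIMIT = 128  # assumed value for illustration


def validate_dataset_name(name):
    """Return an error string, or None when the name is acceptable."""
    if not isinstance(name, str):
        return "Dataset name must be string."
    if name == "":
        return "Dataset name can't be empty."
    if len(name) >= DATASET_NAME_LIMIT:
        # Message text mirrors the handler's wording.
        return f"Dataset name length is {len(name)} which is large than {DATASET_NAME_LIMIT}"
    return None
```

Note the ordering: the type check runs first, so `len()` and `.strip()` are never applied to a non-string payload.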
@@ -59,43 +72,70 @@ def create():
         return server_error_response(e)

-@manager.route('/update', methods=['post'])
+@manager.route('/update', methods=['post'])  # noqa: F821
 @login_required
 @validate_request("kb_id", "name", "description", "permission", "parser_id")
+@not_allowed_parameters("id", "tenant_id", "created_by", "create_time", "update_time", "create_date", "update_date", "created_by")
 def update():
     req = request.json
     req["name"] = req["name"].strip()
+    if not KnowledgebaseService.accessible4deletion(req["kb_id"], current_user.id):
+        return get_json_result(
+            data=False,
+            message='No authorization.',
+            code=settings.RetCode.AUTHENTICATION_ERROR
+        )
     try:
         if not KnowledgebaseService.query(
                 created_by=current_user.id, id=req["kb_id"]):
-            return get_json_result(
-                data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.', retcode=RetCode.OPERATING_ERROR)
+            return get_json_result(
+                data=False, message='Only owner of knowledgebase authorized for this operation.',
+                code=settings.RetCode.OPERATING_ERROR)

         e, kb = KnowledgebaseService.get_by_id(req["kb_id"])
         if not e:
             return get_data_error_result(
-                retmsg="Can't find this knowledgebase!")
+                message="Can't find this knowledgebase!")

+        if req.get("parser_id", "") == "tag" and os.environ.get('DOC_ENGINE', "elasticsearch") == "infinity":
+            return get_json_result(
+                data=False,
+                message='The chunk method Tag has not been supported by Infinity yet.',
+                code=settings.RetCode.OPERATING_ERROR
+            )

         if req["name"].lower() != kb.name.lower() \
-                and len(KnowledgebaseService.query(name=req["name"], tenant_id=current_user.id, status=StatusEnum.VALID.value)) > 1:
+                and len(
+            KnowledgebaseService.query(name=req["name"], tenant_id=current_user.id, status=StatusEnum.VALID.value)) > 1:
             return get_data_error_result(
-                retmsg="Duplicated knowledgebase name.")
+                message="Duplicated knowledgebase name.")

         del req["kb_id"]
         if not KnowledgebaseService.update_by_id(kb.id, req):
             return get_data_error_result()

+        if kb.pagerank != req.get("pagerank", 0):
+            if req.get("pagerank", 0) > 0:
+                settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]},
+                                             search.index_name(kb.tenant_id), kb.id)
+            else:
+                # Elasticsearch requires PAGERANK_FLD be non-zero!
+                settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD},
+                                             search.index_name(kb.tenant_id), kb.id)

         e, kb = KnowledgebaseService.get_by_id(kb.id)
         if not e:
             return get_data_error_result(
-                retmsg="Database error (Knowledgebase rename)!")
+                message="Database error (Knowledgebase rename)!")
+        kb = kb.to_dict()
+        kb.update(req)

-        return get_json_result(data=kb.to_json())
+        return get_json_result(data=kb)
     except Exception as e:
         return server_error_response(e)
-@manager.route('/detail', methods=['GET'])
+@manager.route('/detail', methods=['GET'])  # noqa: F821
 @login_required
 def detail():
     kb_id = request.args["kb_id"]
@@ -107,56 +147,177 @@ def detail():
             break
         else:
             return get_json_result(
-                data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
-                retcode=RetCode.OPERATING_ERROR)
+                data=False, message='Only owner of knowledgebase authorized for this operation.',
+                code=settings.RetCode.OPERATING_ERROR)
         kb = KnowledgebaseService.get_detail(kb_id)
         if not kb:
             return get_data_error_result(
-                retmsg="Can't find this knowledgebase!")
+                message="Can't find this knowledgebase!")
         return get_json_result(data=kb)
     except Exception as e:
         return server_error_response(e)
-@manager.route('/list', methods=['GET'])
+@manager.route('/list', methods=['GET'])  # noqa: F821
 @login_required
 def list_kbs():
-    page_number = request.args.get("page", 1)
-    items_per_page = request.args.get("page_size", 150)
+    keywords = request.args.get("keywords", "")
+    page_number = int(request.args.get("page", 1))
+    items_per_page = int(request.args.get("page_size", 150))
+    parser_id = request.args.get("parser_id")
     orderby = request.args.get("orderby", "create_time")
     desc = request.args.get("desc", True)
     try:
         tenants = TenantService.get_joined_tenants_by_user_id(current_user.id)
-        kbs = KnowledgebaseService.get_by_tenant_ids(
-            [m["tenant_id"] for m in tenants], current_user.id, page_number, items_per_page, orderby, desc)
-        return get_json_result(data=kbs)
+        kbs, total = KnowledgebaseService.get_by_tenant_ids(
+            [m["tenant_id"] for m in tenants], current_user.id, page_number,
+            items_per_page, orderby, desc, keywords, parser_id)
+        return get_json_result(data={"kbs": kbs, "total": total})
     except Exception as e:
         return server_error_response(e)
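The `/list` change wraps `page` and `page_size` in `int()`: query-string values arrive as text, so pagination arithmetic needs real integers. A minimal sketch, with a plain dict standing in for Flask's `request.args`:

```python
# Query-string parameters are strings until coerced.
args = {"page": "2", "page_size": "30"}

page_number = int(args.get("page", 1))       # "2" -> 2
items_per_page = int(args.get("page_size", 150))  # "30" -> 30

# Typical use downstream: computing a result-set offset.
offset = (page_number - 1) * items_per_page
```

Without the coercion, `"2" - 1` would raise a `TypeError` and string comparisons against numeric limits would misbehave.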
-@manager.route('/rm', methods=['post'])
+@manager.route('/rm', methods=['post'])  # noqa: F821
 @login_required
 @validate_request("kb_id")
 def rm():
     req = request.json
+    if not KnowledgebaseService.accessible4deletion(req["kb_id"], current_user.id):
+        return get_json_result(
+            data=False,
+            message='No authorization.',
+            code=settings.RetCode.AUTHENTICATION_ERROR
+        )
     try:
         kbs = KnowledgebaseService.query(
             created_by=current_user.id, id=req["kb_id"])
         if not kbs:
-            return get_json_result(
-                data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.', retcode=RetCode.OPERATING_ERROR)
+            return get_json_result(
+                data=False, message='Only owner of knowledgebase authorized for this operation.',
+                code=settings.RetCode.OPERATING_ERROR)

         for doc in DocumentService.query(kb_id=req["kb_id"]):
             if not DocumentService.remove_document(doc, kbs[0].tenant_id):
                 return get_data_error_result(
-                    retmsg="Database error (Document removal)!")
+                    message="Database error (Document removal)!")
             f2d = File2DocumentService.get_by_document_id(doc.id)
-            FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
+            if f2d:
+                FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
             File2DocumentService.delete_by_document_id(doc.id)
+        FileService.filter_delete(
+            [File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kbs[0].name])
         if not KnowledgebaseService.delete_by_id(req["kb_id"]):
             return get_data_error_result(
-                retmsg="Database error (Knowledgebase removal)!")
+                message="Database error (Knowledgebase removal)!")
+        for kb in kbs:
+            settings.docStoreConn.delete({"kb_id": kb.id}, search.index_name(kb.tenant_id), kb.id)
+            settings.docStoreConn.deleteIdx(search.index_name(kb.tenant_id), kb.id)
         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)
@manager.route('/<kb_id>/tags', methods=['GET']) # noqa: F821
@login_required
def list_tags(kb_id):
if not KnowledgebaseService.accessible(kb_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
tags = settings.retrievaler.all_tags(current_user.id, [kb_id])
return get_json_result(data=tags)
@manager.route('/tags', methods=['GET']) # noqa: F821
@login_required
def list_tags_from_kbs():
kb_ids = request.args.get("kb_ids", "").split(",")
for kb_id in kb_ids:
if not KnowledgebaseService.accessible(kb_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
tags = settings.retrievaler.all_tags(current_user.id, kb_ids)
return get_json_result(data=tags)
@manager.route('/<kb_id>/rm_tags', methods=['POST']) # noqa: F821
@login_required
def rm_tags(kb_id):
req = request.json
if not KnowledgebaseService.accessible(kb_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
e, kb = KnowledgebaseService.get_by_id(kb_id)
for t in req["tags"]:
settings.docStoreConn.update({"tag_kwd": t, "kb_id": [kb_id]},
{"remove": {"tag_kwd": t}},
search.index_name(kb.tenant_id),
kb_id)
return get_json_result(data=True)
@manager.route('/<kb_id>/rename_tag', methods=['POST']) # noqa: F821
@login_required
def rename_tags(kb_id):
req = request.json
if not KnowledgebaseService.accessible(kb_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
e, kb = KnowledgebaseService.get_by_id(kb_id)
settings.docStoreConn.update({"tag_kwd": req["from_tag"], "kb_id": [kb_id]},
{"remove": {"tag_kwd": req["from_tag"].strip()}, "add": {"tag_kwd": req["to_tag"]}},
search.index_name(kb.tenant_id),
kb_id)
return get_json_result(data=True)
@manager.route('/<kb_id>/knowledge_graph', methods=['GET']) # noqa: F821
@login_required
def knowledge_graph(kb_id):
if not KnowledgebaseService.accessible(kb_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
_, kb = KnowledgebaseService.get_by_id(kb_id)
req = {
"kb_id": [kb_id],
"knowledge_graph_kwd": ["graph"]
}
obj = {"graph": {}, "mind_map": {}}
try:
sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [kb_id])
except Exception as e:
logging.exception(e)
return get_json_result(data=obj)
for id in sres.ids[:1]:
ty = sres.field[id]["knowledge_graph_kwd"]
try:
content_json = json.loads(sres.field[id]["content_with_weight"])
except Exception:
continue
obj[ty] = content_json
if "nodes" in obj["graph"]:
obj["graph"]["nodes"] = sorted(obj["graph"]["nodes"], key=lambda x: x.get("pagerank", 0), reverse=True)[:256]
if "edges" in obj["graph"]:
obj["graph"]["edges"] = sorted(obj["graph"]["edges"], key=lambda x: x.get("weight", 0), reverse=True)[:128]
return get_json_result(data=obj)
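The endpoint above caps the payload by ranking graph nodes by `pagerank` and edges by `weight` before truncating. A minimal standalone sketch of that truncation step (the helper name `truncate_graph` and the sample data are illustrative, not part of the codebase):

```python
# Keep only the top-N nodes by "pagerank" and top-M edges by "weight",
# mirroring the 256-node / 128-edge caps in the knowledge_graph endpoint.
def truncate_graph(graph, max_nodes=256, max_edges=128):
    if "nodes" in graph:
        graph["nodes"] = sorted(graph["nodes"], key=lambda x: x.get("pagerank", 0), reverse=True)[:max_nodes]
    if "edges" in graph:
        graph["edges"] = sorted(graph["edges"], key=lambda x: x.get("weight", 0), reverse=True)[:max_edges]
    return graph

g = {"nodes": [{"id": "a", "pagerank": 0.1}, {"id": "b", "pagerank": 0.9}],
     "edges": [{"src": "a", "dst": "b", "weight": 2}]}
print(truncate_graph(g, max_nodes=1))  # only node "b" survives
```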


@ -13,18 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
import os
from flask import request
from flask_login import login_required, current_user
from api.db.services.llm_service import LLMFactoriesService, TenantLLMService, LLMService
from api import settings
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM
from api.utils.api_utils import get_json_result
from api.utils.file_utils import get_project_base_directory
from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel
@manager.route('/factories', methods=['GET'])  # noqa: F821
@login_required
def factories():
try:
@ -46,7 +50,7 @@ def factories():
return server_error_response(e)
@manager.route('/set_api_key', methods=['POST'])  # noqa: F821
@login_required
@validate_request("llm_factory", "api_key")
def set_api_key():
@ -55,7 +59,7 @@ def set_api_key():
chat_passed, embd_passed, rerank_passed = False, False, False
factory = req["llm_factory"]
msg = ""
for llm in LLMService.query(fid=factory):
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
mdl = EmbeddingModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
@ -70,14 +74,14 @@ def set_api_key():
mdl = ChatModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}],
{"temperature": 0.9, 'max_tokens': 50})
if m.find("**ERROR**") >= 0:
raise Exception(m)
chat_passed = True
except Exception as e:
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(e)
elif not rerank_passed and llm.model_type == LLMType.RERANK:
mdl = RerankModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
@ -85,94 +89,114 @@ def set_api_key():
arr, tc = mdl.similarity("What's the weather?", ["Is it sunny today?"])
if len(arr) == 0 or tc == 0:
raise Exception("Fail")
rerank_passed = True
logging.debug(f'passed model rerank {llm.llm_name}')
except Exception as e:
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(e)
if any([embd_passed, chat_passed, rerank_passed]):
msg = ''
break
if msg:
return get_data_error_result(message=msg)
llm_config = {
"api_key": req["api_key"],
"api_base": req.get("base_url", "")
}
for n in ["model_type", "llm_name"]:
if n in req:
llm_config[n] = req[n]
for llm in LLMService.query(fid=factory):
llm_config["max_tokens"] = llm.max_tokens
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id,
TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm.llm_name],
llm_config):
TenantLLMService.save(
tenant_id=current_user.id,
llm_factory=factory,
llm_name=llm.llm_name,
model_type=llm.model_type,
api_key=llm_config["api_key"],
api_base=llm_config["api_base"],
max_tokens=llm_config["max_tokens"]
)
return get_json_result(data=True)
@manager.route('/add_llm', methods=['POST'])  # noqa: F821
@login_required
@validate_request("llm_factory")
def add_llm():
req = request.json
factory = req["llm_factory"]
def apikey_json(keys):
nonlocal req
return json.dumps({k: req.get(k, "") for k in keys})
if factory == "VolcEngine":
# For VolcEngine, due to its special authentication method
# Assemble ark_api_key, endpoint_id into api_key
llm_name = req["llm_name"]
api_key = apikey_json(["ark_api_key", "endpoint_id"])
elif factory == "Tencent Hunyuan":
req["api_key"] = apikey_json(["hunyuan_sid", "hunyuan_sk"])
return set_api_key()
elif factory == "Tencent Cloud":
req["api_key"] = apikey_json(["tencent_cloud_sid", "tencent_cloud_sk"])
elif factory == "Bedrock":
# For Bedrock, due to its special authentication method
# Assemble bedrock_ak, bedrock_sk, bedrock_region into api_key
llm_name = req["llm_name"]
api_key = apikey_json(["bedrock_ak", "bedrock_sk", "bedrock_region"])
elif factory == "LocalAI":
llm_name = req["llm_name"] + "___LocalAI"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "HuggingFace":
llm_name = req["llm_name"] + "___HuggingFace"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "OpenAI-API-Compatible":
llm_name = req["llm_name"] + "___OpenAI-API"
api_key = req.get("api_key", "xxxxxxxxxxxxxxx")
elif factory == "XunFei Spark":
llm_name = req["llm_name"]
if req["model_type"] == "chat":
api_key = req.get("spark_api_password", "xxxxxxxxxxxxxxx")
elif req["model_type"] == "tts":
api_key = apikey_json(["spark_app_id", "spark_api_secret", "spark_api_key"])
elif factory == "BaiduYiyan":
llm_name = req["llm_name"]
api_key = apikey_json(["yiyan_ak", "yiyan_sk"])
elif factory == "Fish Audio":
llm_name = req["llm_name"]
api_key = apikey_json(["fish_audio_ak", "fish_audio_refid"])
elif factory == "Google Cloud":
llm_name = req["llm_name"]
api_key = apikey_json(["google_project_id", "google_region", "google_service_account_key"])
elif factory == "Azure-OpenAI":
llm_name = req["llm_name"]
api_key = apikey_json(["api_key", "api_version"])
else:
llm_name = req["llm_name"]
api_key = req.get("api_key", "xxxxxxxxxxxxxxx")
llm = {
"tenant_id": current_user.id,
@ -180,18 +204,19 @@ def add_llm():
"model_type": req["model_type"],
"llm_name": llm_name,
"api_base": req.get("api_base", ""),
"api_key": api_key,
"max_tokens": req.get("max_tokens")
}
msg = ""
if llm["model_type"] == LLMType.EMBEDDING.value:
mdl = EmbeddingModel[factory](
key=llm['api_key'],
model_name=llm["llm_name"],
base_url=llm["api_base"])
try:
arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0:
raise Exception("Fail")
except Exception as e:
msg += f"\nFail to access embedding model({llm['llm_name']})." + str(e)
@ -203,7 +228,7 @@ def add_llm():
)
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {
"temperature": 0.9})
if not tc:
raise Exception(m)
except Exception as e:
@ -211,36 +236,28 @@ def add_llm():
e)
elif llm["model_type"] == LLMType.RERANK:
mdl = RerankModel[factory](
key=llm["api_key"],
model_name=llm["llm_name"],
base_url=llm["api_base"]
)
try:
arr, tc = mdl.similarity("Hello~ Ragflower!", ["Hi, there!", "Ohh, my friend!"])
if len(arr) == 0:
raise Exception("Not known.")
except Exception as e:
msg += f"\nFail to access model({llm['llm_name']})." + str(e)
elif llm["model_type"] == LLMType.IMAGE2TEXT.value:
mdl = CvModel[factory](
key=llm["api_key"],
model_name=llm["llm_name"],
base_url=llm["api_base"]
)
try:
with open(os.path.join(get_project_base_directory(), "web/src/assets/yay.jpg"), "rb") as f:
m, tc = mdl.describe(f.read())
if not tc:
raise Exception(m)
except Exception as e:
msg += f"\nFail to access model({llm['llm_name']})." + str(e)
elif llm["model_type"] == LLMType.TTS:
@ -257,26 +274,38 @@ def add_llm():
pass
if msg:
return get_data_error_result(message=msg)
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm["llm_name"]], llm):
TenantLLMService.save(**llm)
return get_json_result(data=True)
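Several factory branches above bundle multiple credential fields into one JSON string stored as the `api_key`. A standalone sketch of that pattern (the free function below takes `req` explicitly, unlike the nested `apikey_json` closure in the diff):

```python
import json

# Bundle several request fields into a single JSON api_key string;
# fields absent from the request default to "".
def apikey_json(req, keys):
    return json.dumps({k: req.get(k, "") for k in keys})

req = {"bedrock_ak": "AK", "bedrock_sk": "SK"}
key = apikey_json(req, ["bedrock_ak", "bedrock_sk", "bedrock_region"])
print(key)
```

Storing the bundle as one string keeps the `TenantLLM.api_key` column schema unchanged while supporting providers that need multi-part credentials.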
@manager.route('/delete_llm', methods=['POST'])  # noqa: F821
@login_required
@validate_request("llm_factory", "llm_name")
def delete_llm():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"],
TenantLLM.llm_name == req["llm_name"]])
return get_json_result(data=True)
@manager.route('/delete_factory', methods=['POST'])  # noqa: F821
@login_required
@validate_request("llm_factory")
def delete_factory():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
return get_json_result(data=True)
@manager.route('/my_llms', methods=['GET'])  # noqa: F821
@login_required
def my_llms():
try:
@ -297,28 +326,32 @@ def my_llms():
return server_error_response(e)
@manager.route('/list', methods=['GET'])  # noqa: F821
@login_required
def list_app():
self_deployed = ["Youdao", "FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
weighted = ["Youdao", "FastEmbed", "BAAI"] if settings.LIGHTEN != 0 else []
model_type = request.args.get("model_type")
try:
objs = TenantLLMService.query(tenant_id=current_user.id)
facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key])
llms = LLMService.get_all()
llms = [m.to_dict()
for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted]
for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deployed
llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms])
for o in objs:
if not o.api_key:
continue
if o.llm_name + "@" + o.llm_factory in llm_set:
continue
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True})
res = {}
for m in llms:
if model_type and m["model_type"].find(model_type) < 0:
continue
if m["fid"] not in res:
res[m["fid"]] = []
@ -326,4 +359,4 @@ def list_app():
return get_json_result(data=res)
except Exception as e:
return server_error_response(e)
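The tail of `list_app` groups the assembled model dicts by factory id. A minimal sketch of that grouping and the substring-based `model_type` filter (the helper name and sample data are illustrative):

```python
# Bucket model dicts by their factory id ("fid"), optionally keeping
# only entries whose model_type contains the requested substring.
def group_by_factory(llms, model_type=None):
    res = {}
    for m in llms:
        if model_type and m["model_type"].find(model_type) < 0:
            continue
        res.setdefault(m["fid"], []).append(m)
    return res

llms = [{"fid": "OpenAI", "model_type": "chat", "llm_name": "x"},
        {"fid": "BAAI", "model_type": "embedding", "llm_name": "y"}]
print(group_by_factory(llms, "chat"))
```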

api/apps/sdk/agent.py Normal file

@ -0,0 +1,39 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from api.db.services.canvas_service import UserCanvasService
from api.utils.api_utils import get_error_data_result, token_required
from api.utils.api_utils import get_result
from flask import request
@manager.route('/agents', methods=['GET']) # noqa: F821
@token_required
def list_agents(tenant_id):
id = request.args.get("id")
title = request.args.get("title")
if id or title:
canvas = UserCanvasService.query(id=id, title=title, user_id=tenant_id)
if not canvas:
return get_error_data_result("The agent doesn't exist.")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "update_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
canvas = UserCanvasService.get_list(tenant_id, page_number, items_per_page, orderby, desc, id, title)
return get_result(data=canvas)
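The query-parameter handling in `list_agents` above (page/page_size defaults and the string-valued `desc` flag) can be sketched as a standalone helper; `args` here stands in for Flask's `request.args`, and the function name is illustrative:

```python
# Parse listing parameters with the same defaults as list_agents:
# page=1, page_size=30, orderby="update_time", desc=True unless the
# client sends the literal string "False"/"false".
def parse_listing_args(args):
    page_number = int(args.get("page", 1))
    items_per_page = int(args.get("page_size", 30))
    orderby = args.get("orderby", "update_time")
    desc = args.get("desc") not in ("False", "false")
    return page_number, items_per_page, orderby, desc

print(parse_listing_args({"desc": "false", "page": "2"}))  # (2, 30, 'update_time', False)
```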


@ -1,304 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum
from api.db.db_models import TenantLLM
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMService, TenantLLMService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result, token_required
from api.utils.api_utils import get_json_result
@manager.route('/save', methods=['POST'])
@token_required
def save(tenant_id):
req = request.json
# dataset
if req.get("knowledgebases") == []:
return get_data_error_result(retmsg="knowledgebases can not be empty list")
kb_list = []
if req.get("knowledgebases"):
for kb in req.get("knowledgebases"):
if not kb["id"]:
return get_data_error_result(retmsg="knowledgebase needs id")
if not KnowledgebaseService.query(id=kb["id"], tenant_id=tenant_id):
return get_data_error_result(retmsg="you do not own the knowledgebase")
# if not DocumentService.query(kb_id=kb["id"]):
# return get_data_error_result(retmsg="There is a invalid knowledgebase")
kb_list.append(kb["id"])
req["kb_ids"] = kb_list
# llm
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_data_error_result(retmsg="Tenant not found!")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
# create
if "id" not in req:
# dataset
if not kb_list:
return get_data_error_result(retmsg="knowledgebases are required!")
# init
req["id"] = get_uuid()
req["description"] = req.get("description", "A helpful Assistant")
req["icon"] = req.get("avatar", "")
req["top_n"] = req.get("top_n", 6)
req["top_k"] = req.get("top_k", 1024)
req["rerank_id"] = req.get("rerank_id", "")
if req.get("llm_id"):
if not TenantLLMService.query(llm_name=req["llm_id"]):
return get_data_error_result(retmsg="the model_name does not exist.")
else:
req["llm_id"] = tenant.llm_id
if not req.get("name"):
return get_data_error_result(retmsg="name is required.")
if DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="Duplicated assistant name in creating dataset.")
# tenant_id
if req.get("tenant_id"):
return get_data_error_result(retmsg="tenant_id must not be provided.")
req["tenant_id"] = tenant_id
# prompt more parameter
default_prompt = {
"system": """你是一个智能助手,请总结知识库的内容来回答问题,请列举知识库中的数据详细回答。当所有知识库内容都与问题无关时,你的回答必须包括“知识库中未找到您要的答案!”这句话。回答需要考虑聊天历史。
以下是知识库:
{knowledge}
以上是知识库。""",
"prologue": "您好我是您的助手小樱长得可爱又善良can I help you?",
"parameters": [
{"key": "knowledge", "optional": False}
],
"empty_response": "Sorry! 知识库中未找到相关内容!"
}
key_list_2 = ["system", "prologue", "parameters", "empty_response"]
if "prompt_config" not in req:
req['prompt_config'] = {}
for key in key_list_2:
temp = req['prompt_config'].get(key)
if not temp:
req['prompt_config'][key] = default_prompt[key]
for p in req['prompt_config']["parameters"]:
if p["optional"]:
continue
if req['prompt_config']["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(
retmsg="Parameter '{}' is not used".format(p["key"]))
# save
if not DialogService.save(**req):
return get_data_error_result(retmsg="Fail to new an assistant!")
# response
e, res = DialogService.get_by_id(req["id"])
if not e:
return get_data_error_result(retmsg="Fail to new an assistant!")
res = res.to_json()
renamed_dict = {}
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
del res["kb_ids"]
res["knowledgebases"] = req["knowledgebases"]
res["avatar"] = res.pop("icon")
return get_json_result(data=res)
else:
# authorization
if not DialogService.query(tenant_id=tenant_id, id=req["id"], status=StatusEnum.VALID.value):
return get_json_result(data=False, retmsg='You do not own the assistant', retcode=RetCode.OPERATING_ERROR)
# prompt
if not req["id"]:
return get_data_error_result(retmsg="id can not be empty")
e, res = DialogService.get_by_id(req["id"])
res = res.to_json()
if "llm_id" in req:
if not TenantLLMService.query(llm_name=req["llm_id"]):
return get_data_error_result(retmsg="the model_name does not exist.")
if "name" in req:
if not req.get("name"):
return get_data_error_result(retmsg="name is not empty.")
if req["name"].lower() != res["name"].lower() \
and len(
DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)) > 0:
return get_data_error_result(retmsg="Duplicated assistant name in updating dataset.")
if "prompt_config" in req:
res["prompt_config"].update(req["prompt_config"])
for p in res["prompt_config"]["parameters"]:
if p["optional"]:
continue
if res["prompt_config"]["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(retmsg="Parameter '{}' is not used".format(p["key"]))
if "llm_setting" in req:
res["llm_setting"].update(req["llm_setting"])
req["prompt_config"] = res["prompt_config"]
req["llm_setting"] = res["llm_setting"]
# avatar
if "avatar" in req:
req["icon"] = req.pop("avatar")
assistant_id = req.pop("id")
if "knowledgebases" in req:
req.pop("knowledgebases")
if not DialogService.update_by_id(assistant_id, req):
return get_data_error_result(retmsg="Assistant not found!")
return get_json_result(data=True)
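Both the create and update paths of `save` validate that every non-optional prompt parameter is actually referenced as a `{placeholder}` in the system prompt. That check in isolation (the helper name is illustrative):

```python
# Return the keys of non-optional parameters that do not appear as
# "{key}" placeholders in the system prompt, mirroring the validation
# in save() above.
def unused_required_params(prompt_config):
    missing = []
    for p in prompt_config["parameters"]:
        if p.get("optional"):
            continue
        if prompt_config["system"].find("{%s}" % p["key"]) < 0:
            missing.append(p["key"])
    return missing

cfg = {"system": "Answer using {knowledge}.",
       "parameters": [{"key": "knowledge", "optional": False},
                      {"key": "history", "optional": False}]}
print(unused_required_params(cfg))  # ['history']
```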
@manager.route('/delete', methods=['DELETE'])
@token_required
def delete(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(retmsg="id is required")
id = req['id']
if not DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value):
return get_json_result(data=False, retmsg='you do not own the assistant.', retcode=RetCode.OPERATING_ERROR)
temp_dict = {"status": StatusEnum.INVALID.value}
DialogService.update_by_id(req["id"], temp_dict)
return get_json_result(data=True)
@manager.route('/get', methods=['GET'])
@token_required
def get(tenant_id):
req = request.args
if "id" in req:
id = req["id"]
ass = DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value)
if not ass:
return get_json_result(data=False, retmsg='You do not own the assistant.', retcode=RetCode.OPERATING_ERROR)
if "name" in req:
name = req["name"]
if ass[0].name != name:
return get_json_result(data=False, retmsg='name does not match id.', retcode=RetCode.OPERATING_ERROR)
res = ass[0].to_json()
else:
if "name" in req:
name = req["name"]
ass = DialogService.query(name=name, tenant_id=tenant_id, status=StatusEnum.VALID.value)
if not ass:
return get_json_result(data=False, retmsg='You do not own the assistant.',
retcode=RetCode.OPERATING_ERROR)
res = ass[0].to_json()
else:
return get_data_error_result(retmsg="At least one of `id` or `name` must be provided.")
renamed_dict = {}
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["knowledgebases"] = kb_list
res["avatar"] = res.pop("icon")
return get_json_result(data=res)
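The response shaping in `get` (and in `save` and `list_assistants`) renames internal `prompt_config` keys to SDK-facing names via `key_mapping`. The rename step in isolation, using a subset of the mapping from the code above:

```python
# Rename dict keys through a mapping, leaving unmapped keys untouched,
# as done when exposing prompt_config fields under SDK-facing names.
key_mapping = {"parameters": "variables", "prologue": "opener",
               "quote": "show_quote", "system": "prompt"}

def rename_keys(d, mapping):
    return {mapping.get(k, k): v for k, v in d.items()}

print(rename_keys({"system": "You are helpful.", "quote": True, "extra": 1}, key_mapping))
```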
@manager.route('/list', methods=['GET'])
@token_required
def list_assistants(tenant_id):
assts = DialogService.query(
tenant_id=tenant_id,
status=StatusEnum.VALID.value,
reverse=True,
order_by=DialogService.model.create_time)
assts = [d.to_dict() for d in assts]
list_assts = []
renamed_dict = {}
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for res in assts:
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["knowledgebases"] = kb_list
res["avatar"] = res.pop("icon")
list_assts.append(res)
return get_json_result(data=list_assts)

api/apps/sdk/chat.py Normal file

@ -0,0 +1,325 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api import settings
from api.db import StatusEnum
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService
from api.db.services.user_service import TenantService
from api.utils import get_uuid
from api.utils.api_utils import get_error_data_result, token_required
from api.utils.api_utils import get_result
@manager.route('/chats', methods=['POST']) # noqa: F821
@token_required
def create(tenant_id):
req = request.json
ids = req.get("dataset_ids")
if not ids:
return get_error_data_result(message="`dataset_ids` is required")
for kb_id in ids:
kbs = KnowledgebaseService.accessible(kb_id=kb_id, user_id=tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {kb_id}")
kbs = KnowledgebaseService.query(id=kb_id)
kb = kbs[0]
if kb.chunk_num == 0:
return get_error_data_result(f"The dataset {kb_id} has no parsed documents")
kbs = KnowledgebaseService.get_by_ids(ids)
embd_count = list(set([kb.embd_id for kb in kbs]))
if len(embd_count) != 1:
return get_result(message='Datasets use different embedding models.',
code=settings.RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
# llm
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
if not TenantLLMService.query(tenant_id=tenant_id, llm_name=req["llm_id"], model_type="chat"):
return get_error_data_result(f"`model_name` {req.get('llm_id')} doesn't exist")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_error_data_result(message="Tenant not found!")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id","top_k"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
# init
req["id"] = get_uuid()
req["description"] = req.get("description", "A helpful Assistant")
req["icon"] = req.get("avatar", "")
req["top_n"] = req.get("top_n", 6)
req["top_k"] = req.get("top_k", 1024)
req["rerank_id"] = req.get("rerank_id", "")
if req.get("rerank_id"):
value_rerank_model = ["BAAI/bge-reranker-v2-m3", "maidalun1020/bce-reranker-base_v1"]
if req["rerank_id"] not in value_rerank_model and not TenantLLMService.query(tenant_id=tenant_id,
llm_name=req.get("rerank_id"),
model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_id')} doesn't exist")
if not req.get("llm_id"):
req["llm_id"] = tenant.llm_id
if not req.get("name"):
return get_error_data_result(message="`name` is required.")
if DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="Duplicated chat name in creating chat.")
# tenant_id
if req.get("tenant_id"):
return get_error_data_result(message="`tenant_id` must not be provided.")
req["tenant_id"] = tenant_id
# prompt more parameter
default_prompt = {
"system": """You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
Here is the knowledge base:
{knowledge}
The above is the knowledge base.""",
"prologue": "Hi! I'm your assistant, what can I do for you?",
"parameters": [
{"key": "knowledge", "optional": False}
],
"empty_response": "Sorry! No relevant content was found in the knowledge base!",
"quote": True,
"tts": False,
"refine_multiturn": True
}
key_list_2 = ["system", "prologue", "parameters", "empty_response", "quote", "tts", "refine_multiturn"]
if "prompt_config" not in req:
req['prompt_config'] = {}
for key in key_list_2:
temp = req['prompt_config'].get(key)
if (not temp and key == 'system') or (key not in req["prompt_config"]):
req['prompt_config'][key] = default_prompt[key]
for p in req['prompt_config']["parameters"]:
if p["optional"]:
continue
if req['prompt_config']["system"].find("{%s}" % p["key"]) < 0:
return get_error_data_result(
message="Parameter '{}' is not used".format(p["key"]))
# save
if not DialogService.save(**req):
return get_error_data_result(message="Failed to create chat!")
# response
e, res = DialogService.get_by_id(req["id"])
if not e:
return get_error_data_result(message="Failed to create chat!")
res = res.to_json()
renamed_dict = {}
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": 1-res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
del res["kb_ids"]
res["dataset_ids"] = req["dataset_ids"]
res["avatar"] = res.pop("icon")
return get_result(data=res)
@manager.route('/chats/<chat_id>', methods=['PUT']) # noqa: F821
@token_required
def update(tenant_id, chat_id):
if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
return get_error_data_result(message='You do not own the chat')
req = request.json
ids = req.get("dataset_ids")
if "show_quotation" in req:
req["do_refer"] = req.pop("show_quotation")
if "dataset_ids" in req:
if not ids:
return get_error_data_result("`dataset_ids` can't be empty")
if ids:
for kb_id in ids:
kbs = KnowledgebaseService.accessible(kb_id=kb_id, user_id=tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {kb_id}")
kbs = KnowledgebaseService.query(id=kb_id)
kb = kbs[0]
if kb.chunk_num == 0:
return get_error_data_result(f"The dataset {kb_id} has no parsed documents")
kbs = KnowledgebaseService.get_by_ids(ids)
embd_count = list(set([kb.embd_id for kb in kbs]))
if len(embd_count) != 1:
return get_result(
message='Datasets use different embedding models.',
code=settings.RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
if not TenantLLMService.query(tenant_id=tenant_id, llm_name=req["llm_id"], model_type="chat"):
return get_error_data_result(f"`model_name` {req.get('llm_id')} doesn't exist")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_error_data_result(message="Tenant not found!")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id","top_k"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
e, res = DialogService.get_by_id(chat_id)
res = res.to_json()
if req.get("rerank_id"):
value_rerank_model = ["BAAI/bge-reranker-v2-m3", "maidalun1020/bce-reranker-base_v1"]
if req["rerank_id"] not in value_rerank_model and not TenantLLMService.query(tenant_id=tenant_id,
llm_name=req.get("rerank_id"),
model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_id')} doesn't exist")
if "name" in req:
if not req.get("name"):
return get_error_data_result(message="`name` cannot be empty.")
if req["name"].lower() != res["name"].lower() \
and len(
DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)) > 0:
return get_error_data_result(message="Duplicated chat name in updating chat.")
if "prompt_config" in req:
res["prompt_config"].update(req["prompt_config"])
for p in res["prompt_config"]["parameters"]:
if p["optional"]:
continue
if res["prompt_config"]["system"].find("{%s}" % p["key"]) < 0:
return get_error_data_result(message="Parameter '{}' is not used".format(p["key"]))
if "llm_setting" in req:
res["llm_setting"].update(req["llm_setting"])
req["prompt_config"] = res["prompt_config"]
req["llm_setting"] = res["llm_setting"]
# avatar
if "avatar" in req:
req["icon"] = req.pop("avatar")
if "dataset_ids" in req:
req.pop("dataset_ids")
if not DialogService.update_by_id(chat_id, req):
return get_error_data_result(message="Chat not found!")
return get_result()
@manager.route('/chats', methods=['DELETE']) # noqa: F821
@token_required
def delete(tenant_id):
req = request.json
if not req:
ids = None
else:
ids = req.get("ids")
if not ids:
id_list = []
dias = DialogService.query(tenant_id=tenant_id, status=StatusEnum.VALID.value)
for dia in dias:
id_list.append(dia.id)
else:
id_list = ids
for id in id_list:
if not DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value):
return get_error_data_result(message=f"You don't own the chat {id}")
temp_dict = {"status": StatusEnum.INVALID.value}
DialogService.update_by_id(id, temp_dict)
return get_result()
@manager.route('/chats', methods=['GET']) # noqa: F821
@token_required
def list_chat(tenant_id):
id = request.args.get("id")
name = request.args.get("name")
if id or name:
chat = DialogService.query(id=id, name=name, status=StatusEnum.VALID.value, tenant_id=tenant_id)
if not chat:
return get_error_data_result(message="The chat doesn't exist")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
chats = DialogService.get_list(tenant_id, page_number, items_per_page, orderby, desc, id, name)
if not chats:
return get_result(data=[])
list_assts = []
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight",
"do_refer": "show_quotation"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for res in chats:
renamed_dict = {}
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": 1-res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
if not kb:
return get_error_data_result(message=f"The dataset {kb_id} doesn't exist")
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["datasets"] = kb_list
res["avatar"] = res.pop("icon")
list_assts.append(res)
return get_result(data=list_assts)
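The chat handlers above repeatedly translate internal storage keys (`system`, `prologue`, `quote`, `rerank_id`, …) into the public API vocabulary through a `key_mapping` dict. A minimal standalone sketch of that renaming step — the `rename_keys` helper is hypothetical, introduced here only for illustration:

```python
# Hypothetical helper mirroring the renaming loop in create/update/list_chat
# above: keys found in the mapping are renamed, unknown keys pass through.
def rename_keys(record: dict, key_mapping: dict) -> dict:
    return {key_mapping.get(key, key): value for key, value in record.items()}

# Same mapping the chat endpoints use for prompt_config fields.
key_mapping = {
    "parameters": "variables",
    "prologue": "opener",
    "quote": "show_quote",
    "system": "prompt",
    "rerank_id": "rerank_model",
    "vector_similarity_weight": "keywords_similarity_weight",
}

internal = {"system": "You are helpful.", "quote": True, "tts": False}
public = rename_keys(internal, key_mapping)
# public == {"prompt": "You are helpful.", "show_quote": True, "tts": False}
```

Because the mapping is applied per record, a field absent from `key_mapping` (such as `tts` here) survives untouched, which is why the endpoints only list the keys that actually differ between storage and API.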

@@ -1,224 +1,535 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum, FileSource
from api.db.db_models import File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService, LLMService
from api.db.services.user_service import TenantService
from api import settings
from api.utils import get_uuid
from api.utils.api_utils import (
get_result,
token_required,
get_error_data_result,
valid,
get_parser_config,
)
@manager.route("/datasets", methods=["POST"])  # noqa: F821
@token_required
def create(tenant_id):
"""
Create a new dataset.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
- in: body
name: body
description: Dataset creation parameters.
required: true
schema:
type: object
required:
- name
properties:
name:
type: string
description: Name of the dataset.
permission:
type: string
enum: ['me', 'team']
description: Dataset permission.
language:
type: string
enum: ['Chinese', 'English']
description: Language of the dataset.
chunk_method:
type: string
enum: ["naive", "manual", "qa", "table", "paper", "book", "laws",
"presentation", "picture", "one", "knowledge_graph", "email", "tag"
]
description: Chunking method.
parser_config:
type: object
description: Parser configuration.
responses:
200:
description: Successful operation.
schema:
type: object
properties:
data:
type: object
"""
req = request.json
e, t = TenantService.get_by_id(tenant_id)
permission = req.get("permission")
language = req.get("language")
chunk_method = req.get("chunk_method")
parser_config = req.get("parser_config")
valid_permission = ["me", "team"]
valid_language = ["Chinese", "English"]
valid_chunk_method = [
"naive",
"manual",
"qa",
"table",
"paper",
"book",
"laws",
"presentation",
"picture",
"one",
"knowledge_graph",
"email",
"tag"
]
check_validation = valid(
permission,
valid_permission,
language,
valid_language,
chunk_method,
valid_chunk_method,
)
if check_validation:
return check_validation
req["parser_config"] = get_parser_config(chunk_method, parser_config)
if "tenant_id" in req:
return get_error_data_result(message="`tenant_id` must not be provided")
if "chunk_count" in req or "document_count" in req:
return get_error_data_result(
message="`chunk_count` or `document_count` must not be provided"
)
if "name" not in req:
return get_error_data_result(message="`name` is required.")
req["id"] = get_uuid()
req["name"] = req["name"].strip()
if req["name"] == "":
return get_error_data_result(message="`name` cannot be an empty string.")
if KnowledgebaseService.query(
name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value
):
return get_error_data_result(
message="Duplicated dataset name in creating dataset."
)
req["tenant_id"] = req["created_by"] = tenant_id
if not req.get("embedding_model"):
req["embedding_model"] = t.embd_id
else:
valid_embedding_models = [
"BAAI/bge-large-zh-v1.5",
"BAAI/bge-base-en-v1.5",
"BAAI/bge-large-en-v1.5",
"BAAI/bge-small-en-v1.5",
"BAAI/bge-small-zh-v1.5",
"jinaai/jina-embeddings-v2-base-en",
"jinaai/jina-embeddings-v2-small-en",
"nomic-ai/nomic-embed-text-v1.5",
"sentence-transformers/all-MiniLM-L6-v2",
"text-embedding-v2",
"text-embedding-v3",
"maidalun1020/bce-embedding-base_v1",
]
embd_model = LLMService.query(
llm_name=req["embedding_model"], model_type="embedding"
)
if embd_model:
if req["embedding_model"] not in valid_embedding_models and not TenantLLMService.query(tenant_id=tenant_id, model_type="embedding", llm_name=req.get("embedding_model")):
return get_error_data_result(f"`embedding_model` {req.get('embedding_model')} doesn't exist")
if not embd_model:
embd_model = TenantLLMService.query(tenant_id=tenant_id, model_type="embedding", llm_name=req.get("embedding_model"))
if not embd_model:
return get_error_data_result(
f"`embedding_model` {req.get('embedding_model')} doesn't exist"
)
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "chunk_method",
"embd_id": "embedding_model",
}
mapped_keys = {
new_key: req[old_key]
for new_key, old_key in key_mapping.items()
if old_key in req
}
req.update(mapped_keys)
if not KnowledgebaseService.save(**req):
return get_error_data_result(message="Create dataset error.(Database error)")
renamed_data = {}
e, k = KnowledgebaseService.get_by_id(req["id"])
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_result(data=renamed_data)
@manager.route("/datasets", methods=["DELETE"])  # noqa: F821
@token_required
def delete(tenant_id):
"""
Delete datasets.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
- in: body
name: body
description: Dataset deletion parameters.
required: true
schema:
type: object
properties:
ids:
type: array
items:
type: string
description: List of dataset IDs to delete.
responses:
200:
description: Successful operation.
schema:
type: object
"""
req = request.json
if not req:
ids = None
else:
ids = req.get("ids")
if not ids:
id_list = []
kbs = KnowledgebaseService.query(tenant_id=tenant_id)
for kb in kbs:
id_list.append(kb.id)
else:
id_list = ids
for id in id_list:
kbs = KnowledgebaseService.query(id=id, tenant_id=tenant_id)
if not kbs:
return get_error_data_result(message=f"You don't own the dataset {id}")
for doc in DocumentService.query(kb_id=id):
if not DocumentService.remove_document(doc, tenant_id):
return get_error_data_result(
message="Remove document error.(Database error)"
)
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete(
[
File.source_type == FileSource.KNOWLEDGEBASE,
File.id == f2d[0].file_id,
]
)
File2DocumentService.delete_by_document_id(doc.id)
FileService.filter_delete(
[File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kbs[0].name])
if not KnowledgebaseService.delete_by_id(id):
return get_error_data_result(message="Delete dataset error.(Database error)")
return get_result(code=settings.RetCode.SUCCESS)
@manager.route("/datasets/<dataset_id>", methods=["PUT"]) # noqa: F821
@token_required
def update(tenant_id, dataset_id):
"""
Update a dataset.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: path
name: dataset_id
type: string
required: true
description: ID of the dataset to update.
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
- in: body
name: body
description: Dataset update parameters.
required: true
schema:
type: object
properties:
name:
type: string
description: New name of the dataset.
permission:
type: string
enum: ['me', 'team']
description: Updated permission.
language:
type: string
enum: ['Chinese', 'English']
description: Updated language.
chunk_method:
type: string
enum: ["naive", "manual", "qa", "table", "paper", "book", "laws",
"presentation", "picture", "one", "knowledge_graph", "email", "tag"
]
description: Updated chunking method.
parser_config:
type: object
description: Updated parser configuration.
responses:
200:
description: Successful operation.
schema:
type: object
"""
if not KnowledgebaseService.query(id=dataset_id, tenant_id=tenant_id):
return get_error_data_result(message="You don't own the dataset")
req = request.json
e, t = TenantService.get_by_id(tenant_id)
invalid_keys = {"id", "embd_id", "chunk_num", "doc_num", "parser_id"}
if any(key in req for key in invalid_keys):
return get_error_data_result(message="The input parameters are invalid.")
permission = req.get("permission")
language = req.get("language")
chunk_method = req.get("chunk_method")
parser_config = req.get("parser_config")
valid_permission = ["me", "team"]
valid_language = ["Chinese", "English"]
valid_chunk_method = [
"naive",
"manual",
"qa",
"table",
"paper",
"book",
"laws",
"presentation",
"picture",
"one",
"knowledge_graph",
"email",
"tag"
]
check_validation = valid(
permission,
valid_permission,
language,
valid_language,
chunk_method,
valid_chunk_method,
)
if check_validation:
return check_validation
if "tenant_id" in req:
if req["tenant_id"] != tenant_id:
return get_error_data_result(message="Can't change `tenant_id`.")
e, kb = KnowledgebaseService.get_by_id(dataset_id)
if "parser_config" in req:
temp_dict = kb.parser_config
temp_dict.update(req["parser_config"])
req["parser_config"] = temp_dict
if "chunk_count" in req:
if req["chunk_count"] != kb.chunk_num:
return get_error_data_result(message="Can't change `chunk_count`.")
req.pop("chunk_count")
if "document_count" in req:
if req["document_count"] != kb.doc_num:
return get_error_data_result(message="Can't change `document_count`.")
req.pop("document_count")
if "chunk_method" in req:
if kb.chunk_num != 0 and req["chunk_method"] != kb.parser_id:
return get_error_data_result(
message="If `chunk_count` is not 0, `chunk_method` is not changeable."
)
req["parser_id"] = req.pop("chunk_method")
if req["parser_id"] != kb.parser_id:
if not req.get("parser_config"):
req["parser_config"] = get_parser_config(chunk_method, parser_config)
if "embedding_model" in req:
if kb.chunk_num != 0 and req["embedding_model"] != kb.embd_id:
return get_error_data_result(
message="If `chunk_count` is not 0, `embedding_model` is not changeable."
)
if not req.get("embedding_model"):
return get_error_data_result("`embedding_model` can't be empty")
valid_embedding_models = [
"BAAI/bge-large-zh-v1.5",
"BAAI/bge-base-en-v1.5",
"BAAI/bge-large-en-v1.5",
"BAAI/bge-small-en-v1.5",
"BAAI/bge-small-zh-v1.5",
"jinaai/jina-embeddings-v2-base-en",
"jinaai/jina-embeddings-v2-small-en",
"nomic-ai/nomic-embed-text-v1.5",
"sentence-transformers/all-MiniLM-L6-v2",
"text-embedding-v2",
"text-embedding-v3",
"maidalun1020/bce-embedding-base_v1",
]
embd_model = LLMService.query(
llm_name=req["embedding_model"], model_type="embedding"
)
if embd_model:
if req["embedding_model"] not in valid_embedding_models and not TenantLLMService.query(tenant_id=tenant_id,model_type="embedding",llm_name=req.get("embedding_model"),):
return get_error_data_result(f"`embedding_model` {req.get('embedding_model')} doesn't exist")
if not embd_model:
embd_model=TenantLLMService.query(tenant_id=tenant_id,model_type="embedding", llm_name=req.get("embedding_model"))
if not embd_model:
return get_error_data_result(
f"`embedding_model` {req.get('embedding_model')} doesn't exist"
)
req["embd_id"] = req.pop("embedding_model")
if "name" in req:
req["name"] = req["name"].strip()
if (
req["name"].lower() != kb.name.lower()
and len(
KnowledgebaseService.query(
name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value
)
)
> 0
):
return get_error_data_result(
message="Duplicated dataset name in updating dataset."
)
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_error_data_result(message="Update dataset error.(Database error)")
return get_result(code=settings.RetCode.SUCCESS)


@manager.route("/datasets", methods=["GET"])  # noqa: F821
@token_required
def list(tenant_id):
    """
    List datasets.
    ---
    tags:
      - Datasets
    security:
      - ApiKeyAuth: []
    parameters:
      - in: query
        name: id
        type: string
        required: false
        description: Dataset ID to filter by.
      - in: query
        name: name
        type: string
        required: false
        description: Dataset name to filter by.
      - in: query
        name: page
        type: integer
        required: false
        default: 1
        description: Page number.
      - in: query
        name: page_size
        type: integer
        required: false
        default: 30
        description: Number of items per page.
      - in: query
        name: orderby
        type: string
        required: false
        default: "create_time"
        description: Field to order by.
      - in: query
        name: desc
        type: boolean
        required: false
        default: true
        description: Whether to sort in descending order.
      - in: header
        name: Authorization
        type: string
        required: true
        description: Bearer token for authentication.
    responses:
      200:
        description: Successful operation.
        schema:
          type: array
          items:
            type: object
    """
    id = request.args.get("id")
    name = request.args.get("name")
    if id:
        kbs = KnowledgebaseService.get_kb_by_id(id, tenant_id)
        if not kbs:
            return get_error_data_result(f"You don't own the dataset {id}")
    if name:
        kbs = KnowledgebaseService.get_kb_by_name(name, tenant_id)
        if not kbs:
            return get_error_data_result(f"You don't own the dataset {name}")
    page_number = int(request.args.get("page", 1))
    items_per_page = int(request.args.get("page_size", 30))
    orderby = request.args.get("orderby", "create_time")
    if request.args.get("desc") == "False" or request.args.get("desc") == "false":
        desc = False
    else:
        desc = True
    tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
    kbs = KnowledgebaseService.get_list(
        [m["tenant_id"] for m in tenants],
        tenant_id,
        page_number,
        items_per_page,
        orderby,
        desc,
        id,
        name,
    )
    renamed_list = []
    for kb in kbs:
        # Map internal field names onto the public API's field names.
        key_mapping = {
            "chunk_num": "chunk_count",
            "doc_num": "document_count",
            "parser_id": "chunk_method",
            "embd_id": "embedding_model",
        }
        renamed_data = {}
        for key, value in kb.items():
            new_key = key_mapping.get(key, key)
            renamed_data[new_key] = value
        renamed_list.append(renamed_data)
    return get_result(data=renamed_list)
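The renaming pass at the end of the handler reduces to a small pure function. A minimal sketch, assuming illustrative names (`KEY_MAPPING`, `rename_keys` are not identifiers from the codebase):

```python
# Sketch of the field-renaming pass in the dataset list endpoint:
# internal storage keys are mapped to their public API names, and any
# key without a mapping passes through unchanged.
KEY_MAPPING = {
    "chunk_num": "chunk_count",
    "doc_num": "document_count",
    "parser_id": "chunk_method",
    "embd_id": "embedding_model",
}


def rename_keys(record: dict) -> dict:
    """Return a copy of `record` with internal keys renamed for the API."""
    return {KEY_MAPPING.get(key, key): value for key, value in record.items()}


print(rename_keys({"chunk_num": 3, "name": "demo"}))
# → {'chunk_count': 3, 'name': 'demo'}
```

Keys absent from the mapping (like `name` above) are emitted unchanged, so the transform is safe to apply to the full record dict.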


@@ -0,0 +1,88 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request, jsonify
from api.db import LLMType
from api.db.services.dialog_service import label_question
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
from api.utils.api_utils import validate_request, build_error_result, apikey_required


@manager.route('/dify/retrieval', methods=['POST'])  # noqa: F821
@apikey_required
@validate_request("knowledge_id", "query")
def retrieval(tenant_id):
    req = request.json
    question = req["query"]
    kb_id = req["knowledge_id"]
    use_kg = req.get("use_kg", False)
    retrieval_setting = req.get("retrieval_setting", {})
    similarity_threshold = float(retrieval_setting.get("score_threshold", 0.0))
    top = int(retrieval_setting.get("top_k", 1024))

    try:
        e, kb = KnowledgebaseService.get_by_id(kb_id)
        if not e:
            return build_error_result(message="Knowledgebase not found!", code=settings.RetCode.NOT_FOUND)

        if kb.tenant_id != tenant_id:
            return build_error_result(message="Knowledgebase not found!", code=settings.RetCode.NOT_FOUND)

        embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)

        ranks = settings.retrievaler.retrieval(
            question,
            embd_mdl,
            kb.tenant_id,
            [kb_id],
            page=1,
            page_size=top,
            similarity_threshold=similarity_threshold,
            vector_similarity_weight=0.3,
            top=top,
            rank_feature=label_question(question, [kb])
        )

        if use_kg:
            ck = settings.kg_retrievaler.retrieval(question,
                                                   [tenant_id],
                                                   [kb_id],
                                                   embd_mdl,
                                                   LLMBundle(kb.tenant_id, LLMType.CHAT))
            if ck["content_with_weight"]:
                ranks["chunks"].insert(0, ck)

        records = []
        for c in ranks["chunks"]:
            c.pop("vector", None)
            records.append({
                "content": c["content_with_weight"],
                "score": c["similarity"],
                "title": c["docnm_kwd"],
                "metadata": {}
            })

        return jsonify({"records": records})
    except Exception as e:
        # Substring containment check; `.find(...) > 0` would miss a match at index 0.
        if "not_found" in str(e):
            return build_error_result(
                message='No chunk found! Please check the chunk status.',
                code=settings.RetCode.NOT_FOUND
            )
        return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)
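The per-chunk response shaping above is a pure dict transform: drop the embedding vector, then rename the internal fields into the record format the Dify external-knowledge API expects. A minimal sketch, with `to_dify_record` as an illustrative helper name (not a function from the codebase):

```python
# Sketch of the response mapping in the /dify/retrieval handler: each
# retrieved chunk is stripped of its embedding vector (internal detail,
# never returned) and reshaped into a Dify-style record.
def to_dify_record(chunk: dict) -> dict:
    chunk.pop("vector", None)  # embeddings stay server-side
    return {
        "content": chunk["content_with_weight"],
        "score": chunk["similarity"],
        "title": chunk["docnm_kwd"],
        "metadata": {},
    }


record = to_dify_record({
    "content_with_weight": "hello world",
    "similarity": 0.92,
    "docnm_kwd": "guide.pdf",
    "vector": [0.1, 0.2],
})
print(record)
# → {'content': 'hello world', 'score': 0.92, 'title': 'guide.pdf', 'metadata': {}}
```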


@@ -1,263 +1,454 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
import json

from flask import request, Response

from api.db import LLMType
from api.db.services.conversation_service import ConversationService, iframe_completion
from api.db.services.conversation_service import completion as rag_completion
from api.db.services.canvas_service import completion as agent_completion
from api.db.services.dialog_service import ask
from agent.canvas import Canvas
from api.db import StatusEnum
from api.db.db_models import APIToken
from api.db.services.api_service import API4ConversationService
from api.db.services.canvas_service import UserCanvasService
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils import get_uuid
from api.utils.api_utils import get_error_data_result
from api.utils.api_utils import get_result, token_required
from api.db.services.llm_service import LLMBundle


@manager.route('/chats/<chat_id>/sessions', methods=['POST'])  # noqa: F821
@token_required
def create(tenant_id, chat_id):
    req = request.json
    req["dialog_id"] = chat_id
    dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
    if not dia:
        return get_error_data_result(message="You do not own the assistant.")
    conv = {
        "id": get_uuid(),
        "dialog_id": req["dialog_id"],
        "name": req.get("name", "New session"),
        "message": [{"role": "assistant", "content": dia[0].prompt_config.get("prologue")}],
        "user_id": req.get("user_id", "")
    }
    if not conv.get("name"):
        return get_error_data_result(message="`name` cannot be empty.")
    ConversationService.save(**conv)
    e, conv = ConversationService.get_by_id(conv["id"])
    if not e:
        return get_error_data_result(message="Failed to create a session!")
    conv = conv.to_dict()
    conv['messages'] = conv.pop("message")
    conv["chat_id"] = conv.pop("dialog_id")
    del conv["reference"]
    return get_result(data=conv)


@manager.route('/agents/<agent_id>/sessions', methods=['POST'])  # noqa: F821
@token_required
def create_agent_session(tenant_id, agent_id):
    req = request.json
    e, cvs = UserCanvasService.get_by_id(agent_id)
    if not e:
        return get_error_data_result("Agent not found.")

    if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
        return get_error_data_result("You cannot access the agent.")

    if not isinstance(cvs.dsl, str):
        cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)

    canvas = Canvas(cvs.dsl, tenant_id)
    canvas.reset()
    query = canvas.get_preset_param()
    if query:
        for ele in query:
            if not ele["optional"]:
                if not req.get(ele["key"]):
                    return get_error_data_result(f"`{ele['key']}` is required")
                ele["value"] = req[ele["key"]]
            if ele["optional"]:
                if req.get(ele["key"]):
                    ele["value"] = req[ele['key']]
                else:
                    if "value" in ele:
                        ele.pop("value")
    else:
        for ans in canvas.run(stream=False):
            pass
    cvs.dsl = json.loads(str(canvas))
    conv = {
        "id": get_uuid(),
        "dialog_id": cvs.id,
        "user_id": req.get("user_id", "") if isinstance(req, dict) else "",
        "message": [{"role": "assistant", "content": canvas.get_prologue()}],
        "source": "agent",
        "dsl": cvs.dsl
    }
    API4ConversationService.save(**conv)
    conv["agent_id"] = conv.pop("dialog_id")
    return get_result(data=conv)


@manager.route('/chats/<chat_id>/sessions/<session_id>', methods=['PUT'])  # noqa: F821
@token_required
def update(tenant_id, chat_id, session_id):
    req = request.json
    req["dialog_id"] = chat_id
    conv_id = session_id
    conv = ConversationService.query(id=conv_id, dialog_id=chat_id)
    if not conv:
        return get_error_data_result(message="Session does not exist")
    if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
        return get_error_data_result(message="You do not own the session")
    if "message" in req or "messages" in req:
        return get_error_data_result(message="`message` cannot be changed")
    if "reference" in req:
        return get_error_data_result(message="`reference` cannot be changed")
    if "name" in req and not req.get("name"):
        return get_error_data_result(message="`name` cannot be empty.")
    if not ConversationService.update_by_id(conv_id, req):
        return get_error_data_result(message="Session update error")
    return get_result()


@manager.route('/chats/<chat_id>/completions', methods=['POST'])  # noqa: F821
@token_required
def chat_completion(tenant_id, chat_id):
    req = request.json
    if not req or not req.get("session_id"):
        req = {"question": ""}
    if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
        return get_error_data_result(f"You don't own the chat {chat_id}")
    if req.get("session_id"):
        if not ConversationService.query(id=req["session_id"], dialog_id=chat_id):
            return get_error_data_result(f"You don't own the session {req['session_id']}")
    if req.get("stream", True):
        resp = Response(rag_completion(tenant_id, chat_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp
    else:
        answer = None
        for ans in rag_completion(tenant_id, chat_id, **req):
            answer = ans
            break
        return get_result(data=answer)


@manager.route('/agents/<agent_id>/completions', methods=['POST'])  # noqa: F821
@token_required
def agent_completions(tenant_id, agent_id):
    req = request.json
    cvs = UserCanvasService.query(user_id=tenant_id, id=agent_id)
    if not cvs:
        return get_error_data_result(f"You don't own the agent {agent_id}")
    if req.get("session_id"):
        dsl = cvs[0].dsl
        if not isinstance(dsl, str):
            dsl = json.dumps(dsl)
        # canvas = Canvas(dsl, tenant_id)
        # if canvas.get_preset_param():
        #     req["question"] = ""
        conv = API4ConversationService.query(id=req["session_id"], dialog_id=agent_id)
        if not conv:
            return get_error_data_result(f"You don't own the session {req['session_id']}")
    else:
        req["question"] = ""
    if req.get("stream", True):
        resp = Response(agent_completion(tenant_id, agent_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp
    try:
        for answer in agent_completion(tenant_id, agent_id, **req):
            return get_result(data=answer)
    except Exception as e:
        return get_error_data_result(str(e))


@manager.route('/chats/<chat_id>/sessions', methods=['GET'])  # noqa: F821
@token_required
def list_session(tenant_id, chat_id):
    if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
        return get_error_data_result(message=f"You don't own the assistant {chat_id}.")
    id = request.args.get("id")
    name = request.args.get("name")
    page_number = int(request.args.get("page", 1))
    items_per_page = int(request.args.get("page_size", 30))
    orderby = request.args.get("orderby", "create_time")
    user_id = request.args.get("user_id")
    if request.args.get("desc") == "False" or request.args.get("desc") == "false":
        desc = False
    else:
        desc = True
    convs = ConversationService.get_list(chat_id, page_number, items_per_page, orderby, desc, id, name, user_id)
    if not convs:
        return get_result(data=[])
    for conv in convs:
        conv['messages'] = conv.pop("message")
        infos = conv["messages"]
        for info in infos:
            if "prompt" in info:
                info.pop("prompt")
        conv["chat_id"] = conv.pop("dialog_id")
        if conv["reference"]:
            messages = conv["messages"]
            message_num = 0
            chunk_num = 0
            while message_num < len(messages):
                if message_num != 0 and messages[message_num]["role"] != "user":
                    chunk_list = []
                    if "chunks" in conv["reference"][chunk_num]:
                        chunks = conv["reference"][chunk_num]["chunks"]
                        for chunk in chunks:
                            new_chunk = {
                                "id": chunk.get("chunk_id", chunk.get("id")),
                                "content": chunk.get("content_with_weight", chunk.get("content")),
                                "document_id": chunk.get("doc_id", chunk.get("document_id")),
                                "document_name": chunk.get("docnm_kwd", chunk.get("document_name")),
                                "dataset_id": chunk.get("kb_id", chunk.get("dataset_id")),
                                "image_id": chunk.get("image_id", chunk.get("img_id")),
                                "positions": chunk.get("positions", chunk.get("position_int")),
                            }
                            chunk_list.append(new_chunk)
                        chunk_num += 1
                    messages[message_num]["reference"] = chunk_list
                message_num += 1
        del conv["reference"]
    return get_result(data=convs)


@manager.route('/agents/<agent_id>/sessions', methods=['GET'])  # noqa: F821
@token_required
def list_agent_session(tenant_id, agent_id):
    if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
        return get_error_data_result(message=f"You don't own the agent {agent_id}.")
    id = request.args.get("id")
    user_id = request.args.get("user_id")
    page_number = int(request.args.get("page", 1))
    items_per_page = int(request.args.get("page_size", 30))
    orderby = request.args.get("orderby", "update_time")
    if request.args.get("desc") == "False" or request.args.get("desc") == "false":
        desc = False
    else:
        desc = True
    convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id, user_id)
    if not convs:
        return get_result(data=[])
    for conv in convs:
        conv['messages'] = conv.pop("message")
        infos = conv["messages"]
        for info in infos:
            if "prompt" in info:
                info.pop("prompt")
        conv["agent_id"] = conv.pop("dialog_id")
        if conv["reference"]:
            messages = conv["messages"]
            message_num = 0
            chunk_num = 0
            while message_num < len(messages):
                if message_num != 0 and messages[message_num]["role"] != "user":
                    chunk_list = []
                    if "chunks" in conv["reference"][chunk_num]:
                        chunks = conv["reference"][chunk_num]["chunks"]
                        for chunk in chunks:
                            new_chunk = {
                                "id": chunk.get("chunk_id", chunk.get("id")),
                                "content": chunk.get("content_with_weight", chunk.get("content")),
                                "document_id": chunk.get("doc_id", chunk.get("document_id")),
                                "document_name": chunk.get("docnm_kwd", chunk.get("document_name")),
                                "dataset_id": chunk.get("kb_id", chunk.get("dataset_id")),
                                "image_id": chunk.get("image_id", chunk.get("img_id")),
                                "positions": chunk.get("positions", chunk.get("position_int")),
                            }
                            chunk_list.append(new_chunk)
                        chunk_num += 1
                    messages[message_num]["reference"] = chunk_list
                message_num += 1
        del conv["reference"]
    return get_result(data=convs)
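The chunk-normalization loop in the two session-listing handlers relies on paired `dict.get` fallbacks: read the internal key first, then the already-public key, so chunks survive a second normalization pass unchanged. A minimal sketch, with `FIELD_FALLBACKS` and `normalize_chunk` as illustrative names not present in the codebase:

```python
# Sketch of the fallback lookups used when reshaping reference chunks:
# each public field is taken from the internal key if present, otherwise
# from the public key, making the transform idempotent.
FIELD_FALLBACKS = {
    "id": ("chunk_id", "id"),
    "content": ("content_with_weight", "content"),
    "document_id": ("doc_id", "document_id"),
    "document_name": ("docnm_kwd", "document_name"),
    "dataset_id": ("kb_id", "dataset_id"),
    "image_id": ("image_id", "img_id"),
    "positions": ("positions", "position_int"),
}


def normalize_chunk(chunk: dict) -> dict:
    return {out: chunk.get(first, chunk.get(second))
            for out, (first, second) in FIELD_FALLBACKS.items()}
```

Applying `normalize_chunk` to its own output is a no-op, which is exactly why stored conversations normalized by an older build can be listed again without corrupting the fields.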


@manager.route('/chats/<chat_id>/sessions', methods=["DELETE"])  # noqa: F821
@token_required
def delete(tenant_id, chat_id):
    if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
        return get_error_data_result(message="You don't own the chat")
    req = request.json
    convs = ConversationService.query(dialog_id=chat_id)
    if not req:
        ids = None
    else:
        ids = req.get("ids")
    if not ids:
        # No explicit ids: delete every session of this chat.
        conv_list = [conv.id for conv in convs]
    else:
        conv_list = ids
    for id in conv_list:
        conv = ConversationService.query(id=id, dialog_id=chat_id)
        if not conv:
            return get_error_data_result(message="The chat doesn't own the session")
        ConversationService.delete_by_id(id)
    return get_result()


@manager.route('/sessions/ask', methods=['POST'])  # noqa: F821
@token_required
def ask_about(tenant_id):
    req = request.json
    if not req.get("question"):
        return get_error_data_result("`question` is required.")
    if not req.get("dataset_ids"):
        return get_error_data_result("`dataset_ids` is required.")
    if not isinstance(req.get("dataset_ids"), list):
        return get_error_data_result("`dataset_ids` should be a list.")
    req["kb_ids"] = req.pop("dataset_ids")
    for kb_id in req["kb_ids"]:
        if not KnowledgebaseService.accessible(kb_id, tenant_id):
            return get_error_data_result(f"You don't own the dataset {kb_id}.")
        kbs = KnowledgebaseService.query(id=kb_id)
        kb = kbs[0]
        if kb.chunk_num == 0:
            return get_error_data_result(f"The dataset {kb_id} has no parsed documents.")
    uid = tenant_id

    def stream():
        nonlocal req, uid
        try:
            for ans in ask(req["question"], req["kb_ids"], uid):
                yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
        except Exception as e:
            yield "data:" + json.dumps({"code": 500, "message": str(e),
                                        "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
                                       ensure_ascii=False) + "\n\n"
        yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"

    resp = Response(stream(), mimetype="text/event-stream")
    resp.headers.add_header("Cache-control", "no-cache")
    resp.headers.add_header("Connection", "keep-alive")
    resp.headers.add_header("X-Accel-Buffering", "no")
    resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
    return resp
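Clients of this endpoint receive server-sent events framed as `data:<json>` followed by a blank line, with a final frame carrying `"data": true` as an end marker. A minimal consumer-side sketch (`parse_sse` is an illustrative helper, not part of any SDK):

```python
import json


# Sketch of a client-side parser for the stream emitted by the handler
# above: split on blank lines, keep frames that start with "data:", and
# decode each payload as JSON. The last frame's payload is the literal
# `true` end marker.
def parse_sse(raw: str) -> list:
    events = []
    for frame in raw.split("\n\n"):
        frame = frame.strip()
        if frame.startswith("data:"):
            events.append(json.loads(frame[len("data:"):]))
    return events
```

A consumer would typically stop reading once it sees a frame whose `data` field is `True` rather than a dict.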


@manager.route('/sessions/related_questions', methods=['POST'])  # noqa: F821
@token_required
def related_questions(tenant_id):
    req = request.json
    if not req.get("question"):
        return get_error_data_result("`question` is required.")
    question = req["question"]
    chat_mdl = LLMBundle(tenant_id, LLMType.CHAT)
    prompt = """
Objective: To generate search terms related to the user's search keywords, helping users find more valuable information.
Instructions:
 - Based on the keywords provided by the user, generate 5-10 related search terms.
 - Each search term should be directly or indirectly related to the keyword, guiding the user to find more valuable information.
 - Use common, general terms as much as possible, avoiding obscure words or technical jargon.
 - Keep the term length between 2-4 words, concise and clear.
 - DO NOT translate, use the language of the original keywords.

### Example:
Keywords: Chinese football
Related search terms:
1. Current status of Chinese football
2. Reform of Chinese football
3. Youth training of Chinese football
4. Chinese football in the Asian Cup
5. Chinese football in the World Cup

Reason:
 - When searching, users often only use one or two keywords, making it difficult to fully express their information needs.
 - Generating related search terms can help users dig deeper into relevant information and improve search efficiency.
 - At the same time, related terms can also help search engines better understand user needs and return more accurate search results.
"""
    ans = chat_mdl.chat(prompt, [{"role": "user", "content": f"""
Keywords: {question}
Related search terms:
"""}], {"temperature": 0.9})
    return get_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])


@manager.route('/chatbots/<dialog_id>/completions', methods=['POST'])  # noqa: F821
def chatbot_completions(dialog_id):
    req = request.json

    token = request.headers.get('Authorization').split()
    if len(token) != 2:
        return get_error_data_result(message='Authorization is not valid!')
    token = token[1]
    objs = APIToken.query(beta=token)
    if not objs:
        return get_error_data_result(message='Authentication error: API key is invalid!')

    if "quote" not in req:
        req["quote"] = False

    if req.get("stream", True):
        resp = Response(iframe_completion(dialog_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp

    for answer in iframe_completion(dialog_id, **req):
        return get_result(data=answer)


@manager.route('/agentbots/<agent_id>/completions', methods=['POST'])  # noqa: F821
def agent_bot_completions(agent_id):
    req = request.json

    token = request.headers.get('Authorization').split()
    if len(token) != 2:
        return get_error_data_result(message='Authorization is not valid!')
    token = token[1]
    objs = APIToken.query(beta=token)
    if not objs:
        return get_error_data_result(message='Authentication error: API key is invalid!')

    if "quote" not in req:
        req["quote"] = False

    if req.get("stream", True):
        resp = Response(agent_completion(objs[0].tenant_id, agent_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp

    for answer in agent_completion(objs[0].tenant_id, agent_id, **req):
        return get_result(data=answer)


@ -13,77 +13,288 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License # limitations under the License
# #
import logging
from datetime import datetime
import json import json
from flask_login import login_required from flask_login import login_required, current_user
from api.db.db_models import APIToken
from api.db.services.api_service import APITokenService
from api.db.services.knowledgebase_service import KnowledgebaseService from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import get_json_result from api.db.services.user_service import UserTenantService
from api.versions import get_rag_version from api import settings
from rag.settings import SVR_QUEUE_NAME from api.utils import current_timestamp, datetime_format
from rag.utils.es_conn import ELASTICSEARCH from api.utils.api_utils import (
from rag.utils.storage_factory import STORAGE_IMPL get_json_result,
get_data_error_result,
server_error_response,
generate_confirmation_token,
)
from api.versions import get_ragflow_version
from rag.utils.storage_factory import STORAGE_IMPL, STORAGE_IMPL_TYPE
from timeit import default_timer as timer from timeit import default_timer as timer
from rag.utils.redis_conn import REDIS_CONN from rag.utils.redis_conn import REDIS_CONN
@manager.route('/version', methods=['GET']) @manager.route("/version", methods=["GET"]) # noqa: F821
@login_required @login_required
def version(): def version():
return get_json_result(data=get_rag_version()) """
Get the current version of the application.
---
tags:
- System
security:
- ApiKeyAuth: []
responses:
200:
description: Version retrieved successfully.
schema:
type: object
properties:
version:
type: string
description: Version number.
"""
return get_json_result(data=get_ragflow_version())
@manager.route('/status', methods=['GET']) @manager.route("/status", methods=["GET"]) # noqa: F821
@login_required @login_required
def status(): def status():
"""
Get the system status.
---
tags:
- System
security:
- ApiKeyAuth: []
responses:
200:
description: System is operational.
schema:
type: object
properties:
es:
type: object
description: Elasticsearch status.
storage:
type: object
description: Storage status.
database:
type: object
description: Database status.
503:
description: Service unavailable.
schema:
type: object
properties:
error:
type: string
description: Error message.
"""
res = {}
st = timer()
try:
res["doc_engine"] = settings.docStoreConn.health()
res["doc_engine"]["elapsed"] = "{:.1f}".format((timer() - st) * 1000.0)
except Exception as e:
res["doc_engine"] = {
"type": "unknown",
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
st = timer()
try:
STORAGE_IMPL.health()
res["storage"] = {
"storage": STORAGE_IMPL_TYPE.lower(),
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["storage"] = {
"storage": STORAGE_IMPL_TYPE.lower(),
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
st = timer()
try:
KnowledgebaseService.get_by_id("x")
res["database"] = {
"database": settings.DATABASE_TYPE.lower(),
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["database"] = {
"database": settings.DATABASE_TYPE.lower(),
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
st = timer()
try:
if not REDIS_CONN.health():
raise Exception("Lost connection!")
res["redis"] = {
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["redis"] = {
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
task_executor_heartbeats = {}
try:
task_executors = REDIS_CONN.smembers("TASKEXE")
now = datetime.now().timestamp()
for task_executor_id in task_executors:
heartbeats = REDIS_CONN.zrangebyscore(task_executor_id, now - 60 * 30, now)
heartbeats = [json.loads(heartbeat) for heartbeat in heartbeats]
task_executor_heartbeats[task_executor_id] = heartbeats
except Exception:
logging.exception("get task executor heartbeats failed!")
res["task_executor_heartbeats"] = task_executor_heartbeats
return get_json_result(data=res)
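The `/status` handler above repeats one pattern per dependency: start a timer, run the component's health probe, and record either a green entry with the elapsed milliseconds or a red entry carrying the error string. A minimal standalone sketch of that pattern, assuming a hypothetical `checks` dict of probe callables in place of the real service clients:

```python
import json
from timeit import default_timer as timer


def collect_status(checks):
    """Run each named health probe; record status, elapsed ms, and any error."""
    res = {}
    for name, probe in checks.items():
        st = timer()
        try:
            probe()  # a probe signals failure by raising
            res[name] = {"status": "green",
                         "elapsed": "{:.1f}".format((timer() - st) * 1000.0)}
        except Exception as e:
            res[name] = {"status": "red",
                         "elapsed": "{:.1f}".format((timer() - st) * 1000.0),
                         "error": str(e)}
    return res


def broken():
    raise Exception("Lost connection!")


status = collect_status({"redis": lambda: None, "storage": broken})
print(json.dumps(status, indent=2))
```

One failing probe only marks its own entry red, so the endpoint can still return a 200 with per-component detail, as the handler above does.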
@manager.route("/new_token", methods=["POST"]) # noqa: F821
@login_required
def new_token():
"""
Generate a new API token.
---
tags:
- API Tokens
security:
- ApiKeyAuth: []
parameters:
- in: query
name: name
type: string
required: false
description: Name of the token.
responses:
200:
description: Token generated successfully.
schema:
type: object
properties:
token:
type: string
description: The generated API token.
"""
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
obj = {
"tenant_id": tenant_id,
"token": generate_confirmation_token(tenant_id),
"beta": generate_confirmation_token(generate_confirmation_token(tenant_id)).replace("ragflow-", "")[:32],
"create_time": current_timestamp(),
"create_date": datetime_format(datetime.now()),
"update_time": None,
"update_date": None,
}
if not APITokenService.save(**obj):
return get_data_error_result(message="Fail to new a dialog!")
return get_json_result(data=obj)
except Exception as e:
return server_error_response(e)
@manager.route("/token_list", methods=["GET"]) # noqa: F821
@login_required
def token_list():
"""
List all API tokens for the current user.
---
tags:
- API Tokens
security:
- ApiKeyAuth: []
responses:
200:
description: List of API tokens.
schema:
type: object
properties:
tokens:
type: array
items:
type: object
properties:
token:
type: string
description: The API token.
name:
type: string
description: Name of the token.
create_time:
type: string
description: Token creation time.
"""
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
objs = APITokenService.query(tenant_id=tenant_id)
objs = [o.to_dict() for o in objs]
for o in objs:
if not o["beta"]:
o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace("ragflow-", "")[:32]
APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o)
return get_json_result(data=objs)
except Exception as e:
return server_error_response(e)
@manager.route("/token/<token>", methods=["DELETE"]) # noqa: F821
@login_required
def rm(token):
"""
Remove an API token.
---
tags:
- API Tokens
security:
- ApiKeyAuth: []
parameters:
- in: path
name: token
type: string
required: true
description: The API token to remove.
responses:
200:
description: Token removed successfully.
schema:
type: object
properties:
success:
type: boolean
description: Deletion status.
"""
APITokenService.filter_delete(
[APIToken.tenant_id == current_user.id, APIToken.token == token]
)
return get_json_result(data=True)
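`new_token` above derives the `beta` key by running the confirmation-token generator on its own output, stripping the `ragflow-` prefix, and truncating to 32 characters. A sketch of just that derivation, with a hypothetical stand-in for `generate_confirmation_token` (the real one is a serializer over the tenant id; only the prefix-and-slice behavior is mirrored here):

```python
import hashlib


def generate_confirmation_token(seed: str) -> str:
    # Hypothetical stand-in: deterministic, and prefixed the same way as the
    # real token so the replace/slice below behaves identically.
    return "ragflow-" + hashlib.sha256(seed.encode()).hexdigest()


tenant_id = "tenant-42"
token = generate_confirmation_token(tenant_id)
# Beta key: tokenize the token itself, drop the prefix, keep 32 chars.
beta = generate_confirmation_token(
    generate_confirmation_token(tenant_id)
).replace("ragflow-", "")[:32]
```

The double application means the beta key cannot be reversed back to the primary token even if the generator is deterministic.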


@@ -15,71 +15,108 @@
#
from flask import request
from flask_login import login_required, current_user
from api import settings
from api.db import UserTenantRole, StatusEnum
from api.db.db_models import UserTenant
from api.db.services.user_service import UserTenantService, UserService
from api.utils import get_uuid, delta_seconds
from api.utils.api_utils import get_json_result, validate_request, server_error_response, get_data_error_result
@manager.route("/<tenant_id>/user/list", methods=["GET"])  # noqa: F821
@login_required
def user_list(tenant_id):
if current_user.id != tenant_id:
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
try:
users = UserTenantService.get_by_tenant_id(tenant_id)
for u in users:
u["delta_seconds"] = delta_seconds(str(u["update_date"]))
return get_json_result(data=users)
except Exception as e:
return server_error_response(e)
@manager.route('/<tenant_id>/user', methods=['POST'])  # noqa: F821
@login_required
@validate_request("email")
def create(tenant_id):
if current_user.id != tenant_id:
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
invite_user_email = req["email"]
invite_users = UserService.query(email=invite_user_email)
if not invite_users:
return get_data_error_result(message="User not found.")
user_id_to_invite = invite_users[0].id
user_tenants = UserTenantService.query(user_id=user_id_to_invite, tenant_id=tenant_id)
if user_tenants:
user_tenant_role = user_tenants[0].role
if user_tenant_role == UserTenantRole.NORMAL:
return get_data_error_result(message=f"{invite_user_email} is already in the team.")
if user_tenant_role == UserTenantRole.OWNER:
return get_data_error_result(message=f"{invite_user_email} is the owner of the team.")
return get_data_error_result(message=f"{invite_user_email} is in the team, but the role: {user_tenant_role} is invalid.")
UserTenantService.save(
id=get_uuid(),
user_id=user_id_to_invite,
tenant_id=tenant_id,
invited_by=current_user.id,
role=UserTenantRole.INVITE,
status=StatusEnum.VALID.value)
usr = invite_users[0].to_dict()
usr = {k: v for k, v in usr.items() if k in ["id", "avatar", "email", "nickname"]}
return get_json_result(data=usr)
@manager.route('/<tenant_id>/user/<user_id>', methods=['DELETE'])  # noqa: F821
@login_required
def rm(tenant_id, user_id):
if current_user.id != tenant_id and current_user.id != user_id:
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
try:
UserTenantService.filter_delete([UserTenant.tenant_id == tenant_id, UserTenant.user_id == user_id])
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route("/list", methods=["GET"]) # noqa: F821
@login_required
def tenant_list():
try:
users = UserTenantService.get_tenants_by_user_id(current_user.id)
for u in users:
u["delta_seconds"] = delta_seconds(str(u["update_date"]))
return get_json_result(data=users)
except Exception as e:
return server_error_response(e)
@manager.route("/agree/<tenant_id>", methods=["PUT"]) # noqa: F821
@login_required
def agree(tenant_id):
try:
UserTenantService.filter_update([UserTenant.tenant_id == tenant_id, UserTenant.user_id == current_user.id], {"role": UserTenantRole.NORMAL})
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
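The invite endpoints above form a small state machine: `create` stores the membership with role `INVITE`, and `PUT /agree/<tenant_id>` promotes it to `NORMAL`; `OWNER` rows are only created at registration. A toy model of those transitions (enum members follow the code above; the in-memory `memberships` store is a hypothetical stand-in for `UserTenantService`):

```python
from enum import Enum


class UserTenantRole(str, Enum):
    OWNER = "owner"
    NORMAL = "normal"
    INVITE = "invite"


memberships = {}  # (tenant_id, user_id) -> role


def invite(tenant_id, user_id):
    # Mirrors POST /<tenant_id>/user: refuse if already a member, else store INVITE.
    if (tenant_id, user_id) in memberships:
        raise ValueError("already in the team")
    memberships[(tenant_id, user_id)] = UserTenantRole.INVITE


def agree(tenant_id, user_id):
    # Mirrors PUT /agree/<tenant_id>: flip the pending INVITE to NORMAL.
    memberships[(tenant_id, user_id)] = UserTenantRole.NORMAL


invite("t1", "u1")
agree("t1", "u1")
```

Keeping the pending state as a role value (rather than a separate table) is what lets `user_list` show invitees and members from the same query.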


@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
import re
from datetime import datetime
@@ -23,65 +24,127 @@ from flask_login import login_required, current_user, login_user, logout_user
from api.db.db_models import TenantLLM
from api.db.services.llm_service import TenantLLMService, LLMService
from api.utils.api_utils import (
server_error_response,
validate_request,
get_data_error_result,
)
from api.utils import (
get_uuid,
get_format_time,
decrypt,
download_img,
current_timestamp,
datetime_format,
)
from api.db import UserTenantRole, FileType
from api import settings
from api.db.services.user_service import UserService, TenantService, UserTenantService from api.db.services.user_service import UserService, TenantService, UserTenantService
from api.db.services.file_service import FileService from api.db.services.file_service import FileService
from api.settings import stat_logger
from api.utils.api_utils import get_json_result, construct_response from api.utils.api_utils import get_json_result, construct_response
@manager.route("/login", methods=["POST", "GET"])  # noqa: F821
def login():
"""
User login endpoint.
---
tags:
- User
parameters:
- in: body
name: body
description: Login credentials.
required: true
schema:
type: object
properties:
email:
type: string
description: User email.
password:
type: string
description: User password.
responses:
200:
description: Login successful.
schema:
type: object
401:
description: Authentication failed.
schema:
type: object
"""
if not request.json:
return get_json_result(
data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="Unauthorized!"
)
email = request.json.get("email", "")
users = UserService.query(email=email)
if not users:
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
message=f"Email: {email} is not registered!",
)
password = request.json.get("password")
try:
password = decrypt(password)
except BaseException:
return get_json_result(
data=False, code=settings.RetCode.SERVER_ERROR, message="Fail to crypt password"
)
user = UserService.query_user(email, password)
if user:
response_data = user.to_json()
user.access_token = get_uuid()
login_user(user)
user.update_time = (current_timestamp(),)
user.update_date = (datetime_format(datetime.now()),)
user.save()
msg = "Welcome back!"
return construct_response(data=response_data, auth=user.get_id(), message=msg)
else:
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
message="Email and password do not match!",
)
@manager.route("/github_callback", methods=["GET"])  # noqa: F821
def github_callback():
"""
GitHub OAuth callback endpoint.
---
tags:
- OAuth
parameters:
- in: query
name: code
type: string
required: true
description: Authorization code from GitHub.
responses:
200:
description: Authentication successful.
schema:
type: object
"""
import requests

res = requests.post(
settings.GITHUB_OAUTH.get("url"),
data={
"client_id": settings.GITHUB_OAUTH.get("client_id"),
"client_secret": settings.GITHUB_OAUTH.get("secret_key"),
"code": request.args.get("code"),
},
headers={"Accept": "application/json"},
)
res = res.json()
if "error" in res:
return redirect("/?error=%s" % res["error_description"])
@@ -101,21 +164,24 @@ def github_callback():
try:
avatar = download_img(user_info["avatar_url"])
except Exception as e:
logging.exception(e)
avatar = ""
users = user_register(
user_id,
{
"access_token": session["access_token"],
"email": email_address,
"avatar": avatar,
"nickname": user_info["login"],
"login_channel": "github",
"last_login_time": get_format_time(),
"is_superuser": False,
},
)
if not users:
raise Exception(f"Fail to register {email_address}.")
if len(users) > 1:
raise Exception(f"Same email: {email_address} exists!")
# Try to log in
user = users[0]
@@ -123,7 +189,7 @@ def github_callback():
return redirect("/?auth=%s" % user.get_id())
except Exception as e:
rollback_user_registration(user_id)
logging.exception(e)
return redirect("/?error=%s" % str(e))
# User has already registered, try to log in
@@ -134,33 +200,59 @@ def github_callback():
return redirect("/?auth=%s" % user.get_id())
@manager.route("/feishu_callback", methods=["GET"])  # noqa: F821
def feishu_callback():
"""
Feishu OAuth callback endpoint.
---
tags:
- OAuth
parameters:
- in: query
name: code
type: string
required: true
description: Authorization code from Feishu.
responses:
200:
description: Authentication successful.
schema:
type: object
"""
import requests

app_access_token_res = requests.post(
settings.FEISHU_OAUTH.get("app_access_token_url"),
data=json.dumps(
{
"app_id": settings.FEISHU_OAUTH.get("app_id"),
"app_secret": settings.FEISHU_OAUTH.get("app_secret"),
}
),
headers={"Content-Type": "application/json; charset=utf-8"},
)
app_access_token_res = app_access_token_res.json()
if app_access_token_res["code"] != 0:
return redirect("/?error=%s" % app_access_token_res)
res = requests.post(
settings.FEISHU_OAUTH.get("user_access_token_url"),
data=json.dumps(
{
"grant_type": settings.FEISHU_OAUTH.get("grant_type"),
"code": request.args.get("code"),
}
),
headers={
"Content-Type": "application/json; charset=utf-8",
"Authorization": f"Bearer {app_access_token_res['app_access_token']}",
},
)
res = res.json()
if res["code"] != 0:
return redirect("/?error=%s" % res["message"])
if "contact:user.email:readonly" not in res["data"]["scope"].split():
return redirect("/?error=contact:user.email:readonly not in scope")
session["access_token"] = res["data"]["access_token"]
session["access_token_from"] = "feishu"
@@ -174,21 +266,24 @@ def feishu_callback():
try:
avatar = download_img(user_info["avatar_url"])
except Exception as e:
logging.exception(e)
avatar = ""
users = user_register(
user_id,
{
"access_token": session["access_token"],
"email": email_address,
"avatar": avatar,
"nickname": user_info["en_name"],
"login_channel": "feishu",
"last_login_time": get_format_time(),
"is_superuser": False,
},
)
if not users:
raise Exception(f"Fail to register {email_address}.")
if len(users) > 1:
raise Exception(f"Same email: {email_address} exists!")
# Try to log in
user = users[0]
@@ -196,7 +291,7 @@ def feishu_callback():
return redirect("/?auth=%s" % user.get_id())
except Exception as e:
rollback_user_registration(user_id)
logging.exception(e)
return redirect("/?error=%s" % str(e))
# User has already registered, try to log in
@@ -209,11 +304,14 @@ def feishu_callback():
def user_info_from_feishu(access_token):
import requests

headers = {
"Content-Type": "application/json; charset=utf-8",
"Authorization": f"Bearer {access_token}",
}
res = requests.get(
"https://open.feishu.cn/open-apis/authen/v1/user_info", headers=headers
)
user_info = res.json()["data"]
user_info["email"] = None if user_info.get("email") == "" else user_info["email"]
return user_info
@@ -221,46 +319,103 @@ def user_info_from_feishu(access_token):
def user_info_from_github(access_token):
import requests

headers = {"Accept": "application/json", "Authorization": f"token {access_token}"}
res = requests.get(
f"https://api.github.com/user?access_token={access_token}", headers=headers
)
user_info = res.json()
email_info = requests.get(
f"https://api.github.com/user/emails?access_token={access_token}",
headers=headers,
).json()
user_info["email"] = next(
(email for email in email_info if email["primary"]), None
)["email"]
return user_info
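`user_info_from_github` picks the primary address with `next((... if email["primary"]), None)["email"]`; note that the `None` default still raises a `TypeError` if GitHub returns no primary entry, since `None["email"]` is then evaluated. A defensive variant of the same selection (the sample payload is illustrative, shaped like the GitHub user-emails response):

```python
email_info = [
    {"email": "dev@example.com", "primary": False, "verified": True},
    {"email": "me@example.com", "primary": True, "verified": True},
]

# Select the primary entry, or fall back to None without subscripting it.
primary = next((e for e in email_info if e["primary"]), None)
email = primary["email"] if primary else None
```

Splitting the lookup from the subscript keeps the no-primary case a plain `None` instead of an exception.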
@manager.route("/logout", methods=['GET']) @manager.route("/logout", methods=["GET"]) # noqa: F821
@login_required @login_required
def log_out(): def log_out():
"""
User logout endpoint.
---
tags:
- User
security:
- ApiKeyAuth: []
responses:
200:
description: Logout successful.
schema:
type: object
"""
current_user.access_token = "" current_user.access_token = ""
current_user.save() current_user.save()
logout_user() logout_user()
return get_json_result(data=True) return get_json_result(data=True)
@manager.route("/setting", methods=["POST"]) @manager.route("/setting", methods=["POST"]) # noqa: F821
@login_required @login_required
def setting_user(): def setting_user():
"""
Update user settings.
---
tags:
- User
security:
- ApiKeyAuth: []
parameters:
- in: body
name: body
description: User settings to update.
required: true
schema:
type: object
properties:
nickname:
type: string
description: New nickname.
email:
type: string
description: New email.
responses:
200:
description: Settings updated successfully.
schema:
type: object
"""
update_dict = {}
request_data = request.json
if request_data.get("password"):
new_password = request_data.get("new_password")
if not check_password_hash(
current_user.password, decrypt(request_data["password"])
):
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
message="Password error!",
)
if new_password:
update_dict["password"] = generate_password_hash(decrypt(new_password))
for k in request_data.keys():
if k in [
"password",
"new_password",
"email",
"status",
"is_superuser",
"login_channel",
"is_anonymous",
"is_active",
"is_authenticated",
"last_login_time",
]:
continue
update_dict[k] = request_data[k]
@@ -268,34 +423,59 @@ def setting_user():
UserService.update_by_id(current_user.id, update_dict)
return get_json_result(data=True)
except Exception as e:
logging.exception(e)
return get_json_result(
data=False, message="Update failure!", code=settings.RetCode.EXCEPTION_ERROR
)
@manager.route("/info", methods=["GET"]) @manager.route("/info", methods=["GET"]) # noqa: F821
@login_required @login_required
def user_profile(): def user_profile():
"""
Get user profile information.
---
tags:
- User
security:
- ApiKeyAuth: []
responses:
200:
description: User profile retrieved successfully.
schema:
type: object
properties:
id:
type: string
description: User ID.
nickname:
type: string
description: User nickname.
email:
type: string
description: User email.
"""
return get_json_result(data=current_user.to_dict())
def rollback_user_registration(user_id):
try:
UserService.delete_by_id(user_id)
except Exception:
pass
try:
TenantService.delete_by_id(user_id)
except Exception:
pass
try:
u = UserTenantService.query(tenant_id=user_id)
if u:
UserTenantService.delete_by_id(u[0].id)
except Exception:
pass
try:
TenantLLM.delete().where(TenantLLM.tenant_id == user_id).execute()
except Exception:
pass
@@ -304,18 +484,18 @@ def user_register(user_id, user):
tenant = {
"id": user_id,
"name": user["nickname"] + "s Kingdom",
"llm_id": settings.CHAT_MDL,
"embd_id": settings.EMBEDDING_MDL,
"asr_id": settings.ASR_MDL,
"parser_ids": settings.PARSERS,
"img2txt_id": settings.IMAGE2TEXT_MDL,
"rerank_id": settings.RERANK_MDL,
}
usr_tenant = {
"tenant_id": user_id,
"user_id": user_id,
"invited_by": user_id,
"role": UserTenantRole.OWNER,
}
file_id = get_uuid()
file = {
@@ -329,14 +509,18 @@ def user_register(user_id, user):
"location": "",
}
tenant_llm = []
for llm in LLMService.query(fid=settings.LLM_FACTORY):
tenant_llm.append(
{
"tenant_id": user_id,
"llm_factory": settings.LLM_FACTORY,
"llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": settings.API_KEY,
"api_base": settings.LLM_BASE_URL,
"max_tokens": llm.max_tokens if llm.max_tokens else 8192
}
)
if not UserService.save(**user):
return
@@ -347,24 +531,55 @@ def user_register(user_id, user):
return UserService.query(email=user["email"])
@manager.route("/register", methods=["POST"])  # noqa: F821
@validate_request("nickname", "email", "password")
def user_add():
"""
Register a new user.
---
tags:
- User
parameters:
- in: body
name: body
description: Registration details.
required: true
schema:
type: object
properties:
nickname:
type: string
description: User nickname.
email:
type: string
description: User email.
password:
type: string
description: User password.
responses:
200:
description: Registration successful.
schema:
type: object
"""
req = request.json
email_address = req["email"]
# Validate the email address
if not re.match(r"^[\w\._-]+@([\w_-]+\.)+[\w-]{2,5}$", email_address):
return get_json_result(
data=False,
message=f"Invalid email address: {email_address}!",
code=settings.RetCode.OPERATING_ERROR,
)
# Check if the email address is already used
if UserService.query(email=email_address):
return get_json_result(
data=False,
message=f"Email: {email_address} has already registered!",
code=settings.RetCode.OPERATING_ERROR,
)
# Construct user info data
nickname = req["nickname"]
@@ -382,40 +597,107 @@ def user_add():
try:
users = user_register(user_id, user_dict)
if not users:
raise Exception(f"Fail to register {email_address}.")
if len(users) > 1:
raise Exception(f"Same email: {email_address} exists!")
user = users[0]
login_user(user)
return construct_response(
data=user.to_json(),
auth=user.get_id(),
message=f"{nickname}, welcome aboard!",
)
except Exception as e:
rollback_user_registration(user_id)
logging.exception(e)
return get_json_result(
data=False,
message=f"User registration failure, error: {str(e)}",
code=settings.RetCode.EXCEPTION_ERROR,
)
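The handler gates registration on a bare regex before touching the database. A standalone sketch of that check (pattern copied from the hunk above; the helper name is ours, not the repo's):

```python
import re

# Pattern from the handler above; the TLD quantifier was widened from
# {2,4} to {2,5}, so five-letter TLDs such as .world now pass.
EMAIL_RE = re.compile(r"^[\w\._-]+@([\w_-]+\.)+[\w-]{2,5}$")

def is_valid_email(addr: str) -> bool:
    """Hypothetical helper mirroring the inline re.match call."""
    return EMAIL_RE.match(addr) is not None

print(is_valid_email("user@example.world"))   # True: 5-char TLD accepted
print(is_valid_email("user@example.museum"))  # False: 6-char TLD still rejected
print(is_valid_email("no-at-sign"))           # False
```

Note the pattern still rejects longer TLDs and internationalized addresses; it is a pragmatic filter, not a full RFC 5322 validator.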
@manager.route("/tenant_info", methods=["GET"])  # noqa: F821
@login_required
def tenant_info():
    """
    Get tenant information.
    ---
    tags:
      - Tenant
    security:
      - ApiKeyAuth: []
    responses:
      200:
        description: Tenant information retrieved successfully.
        schema:
          type: object
          properties:
            tenant_id:
              type: string
              description: Tenant ID.
            name:
              type: string
              description: Tenant name.
            llm_id:
              type: string
              description: LLM ID.
            embd_id:
              type: string
              description: Embedding model ID.
    """
    try:
        tenants = TenantService.get_info_by(current_user.id)
        if not tenants:
            return get_data_error_result(message="Tenant not found!")
        return get_json_result(data=tenants[0])
    except Exception as e:
        return server_error_response(e)


@manager.route("/set_tenant_info", methods=["POST"])  # noqa: F821
@login_required
@validate_request("tenant_id", "asr_id", "embd_id", "img2txt_id", "llm_id")
def set_tenant_info():
    """
    Update tenant information.
    ---
    tags:
      - Tenant
    security:
      - ApiKeyAuth: []
    parameters:
      - in: body
        name: body
        description: Tenant information to update.
        required: true
        schema:
          type: object
          properties:
            tenant_id:
              type: string
              description: Tenant ID.
            llm_id:
              type: string
              description: LLM ID.
            embd_id:
              type: string
              description: Embedding model ID.
            asr_id:
              type: string
              description: ASR model ID.
            img2txt_id:
              type: string
              description: Image to Text model ID.
    responses:
      200:
        description: Tenant information updated successfully.
        schema:
          type: object
    """
    req = request.json
    try:
        tid = req.pop("tenant_id")
        TenantService.update_by_id(tid, req)
        return get_json_result(data=True)
    except Exception as e:

View File

@ -13,9 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.

NAME_LENGTH_LIMIT = 2 ** 10

IMG_BASE64_PREFIX = 'data:image/png;base64,'

SERVICE_CONF = "service_conf.yaml"

API_VERSION = "v1"
RAG_FLOW_SERVICE_NAME = "ragflow"
REQUEST_WAIT_SEC = 2
REQUEST_MAX_WAIT_SEC = 300
DATASET_NAME_LIMIT = 128

View File
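This file replaces the old `DataSet` holder class with module-level constants. `REQUEST_WAIT_SEC` and `REQUEST_MAX_WAIT_SEC` read like retry/polling bounds; a hedged sketch of how such a pair is typically consumed (the backoff loop is our illustration, not code from the repo):

```python
# Constants copied from the file above.
REQUEST_WAIT_SEC = 2
REQUEST_MAX_WAIT_SEC = 300

def backoff_schedule(attempts: int) -> list[int]:
    """Illustrative only: exponential backoff capped at the max wait."""
    return [min(REQUEST_WAIT_SEC * 2 ** i, REQUEST_MAX_WAIT_SEC)
            for i in range(attempts)]

print(backoff_schedule(10))
# [2, 4, 8, 16, 32, 64, 128, 256, 300, 300]
```

Capping the wait keeps a stuck request from sleeping unboundedly while still easing load on the server as retries accumulate.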

@ -27,6 +27,7 @@ class UserTenantRole(StrEnum):
    OWNER = 'owner'
    ADMIN = 'admin'
    NORMAL = 'normal'
    INVITE = 'invite'


class TenantPermission(StrEnum):
@ -88,6 +89,7 @@ class ParserType(StrEnum):
    AUDIO = "audio"
    EMAIL = "email"
    KG = "knowledge_graph"
    TAG = "tag"


class FileSource(StrEnum):

View File
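The hunks above add `INVITE` to `UserTenantRole` and `TAG` to `ParserType`. A minimal sketch of the string-enum pattern — here built from `str` plus `Enum`, since stdlib `StrEnum` only arrived in Python 3.11 and the repo imports its own `StrEnum` base:

```python
from enum import Enum

class UserTenantRole(str, Enum):  # stand-in for the repo's StrEnum base
    OWNER = 'owner'
    ADMIN = 'admin'
    NORMAL = 'normal'
    INVITE = 'invite'   # new member added by this diff

# String-valued enum members compare equal to plain strings and
# round-trip through their values, which is why they fit
# CharField-backed columns and JSON payloads cleanly.
print(UserTenantRole.INVITE == 'invite')   # True
print(UserTenantRole('invite').name)       # INVITE
```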

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import inspect
import os
import sys
@ -29,12 +30,10 @@ from peewee import (
    Field, Model, Metadata
)
from playhouse.pool import PooledMySQLDatabase, PooledPostgresqlDatabase

from api.db import SerializedType, ParserType
from api import settings
from api import utils


def singleton(cls, *args, **kw):
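The `singleton(cls, *args, **kw)` decorator's body is elided by this hunk. A common implementation with that signature — an assumption on our part, not the repo's exact code — caches one instance per decorated class:

```python
def singleton(cls, *args, **kw):
    """Sketch of a per-class instance cache; the real body is elided above."""
    instances = {}

    def _wrapper():
        # Construct the class lazily on first call, then reuse it.
        if cls not in instances:
            instances[cls] = cls(*args, **kw)
        return instances[cls]
    return _wrapper

@singleton
class Config:
    def __init__(self):
        self.loaded = True

print(Config() is Config())  # True: the same instance both times
```

This is why `BaseDataBase` below can be instantiated freely: every call yields the one pooled connection object.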
@ -65,7 +64,7 @@ class TextFieldType(Enum):
class LongTextField(TextField):
    field_type = TextFieldType[settings.DATABASE_TYPE.upper()].value


class JSONField(LongTextField):
@ -130,7 +129,7 @@ def is_continuous_field(cls: typing.Type) -> bool:
    for p in cls.__bases__:
        if p in CONTINUOUS_FIELD_TYPE:
            return True
        elif p is not Field and p is not object:
            if is_continuous_field(p):
                return True
    else:
@ -272,6 +271,7 @@ class JsonSerializedField(SerializedField):
        super(JsonSerializedField, self).__init__(serialized_type=SerializedType.JSON, object_hook=object_hook,
                                                  object_pairs_hook=object_pairs_hook, **kwargs)


class PooledDatabase(Enum):
    MYSQL = PooledMySQLDatabase
    POSTGRES = PooledPostgresqlDatabase
@ -285,10 +285,11 @@ class DatabaseMigrator(Enum):
@singleton
class BaseDataBase:
    def __init__(self):
        database_config = settings.DATABASE.copy()
        db_name = database_config.pop("name")
        self.database_connection = PooledDatabase[settings.DATABASE_TYPE.upper()].value(db_name, **database_config)
        logging.info('init database on cluster mode successfully')


class PostgresDatabaseLock:
    def __init__(self, lock_name, timeout=10, db=None):
@ -334,6 +335,7 @@ class PostgresDatabaseLock:
        return magic


class MysqlDatabaseLock:
    def __init__(self, lock_name, timeout=10, db=None):
        self.lock_name = lock_name
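`MysqlDatabaseLock` wraps MySQL's advisory locks so concurrent workers serialize on a named lock. A sketch of the idea with the database call injected, so it runs anywhere (the SQL strings follow MySQL's `GET_LOCK`/`RELEASE_LOCK` functions; the class shape is our simplification of the one above):

```python
class SimpleDatabaseLock:
    """Simplified advisory lock: acquire on enter, release on exit."""

    def __init__(self, lock_name, execute_sql, timeout=10):
        self.lock_name = lock_name
        self.timeout = timeout
        self.execute_sql = execute_sql  # injected, e.g. DB.execute_sql

    def __enter__(self):
        # Blocks up to `timeout` seconds waiting for the named lock.
        self.execute_sql("SELECT GET_LOCK(%s, %s)", (self.lock_name, self.timeout))
        return self

    def __exit__(self, exc_type, exc, tb):
        self.execute_sql("SELECT RELEASE_LOCK(%s)", (self.lock_name,))

# Demonstration with a fake executor that records the SQL issued.
issued = []
with SimpleDatabaseLock("init", lambda sql, params: issued.append(sql)):
    pass
print(issued)  # ['SELECT GET_LOCK(%s, %s)', 'SELECT RELEASE_LOCK(%s)']
```

A production version should also check `GET_LOCK`'s return value (1 acquired, 0 timed out) and raise on failure, as the real classes do.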
@ -388,7 +390,7 @@ class DatabaseLock(Enum):
DB = BaseDataBase().database_connection
DB.lock = DatabaseLock[settings.DATABASE_TYPE.upper()].value


def close_connection():
@ -396,7 +398,7 @@ def close_connection():
        if DB:
            DB.close_stale(age=30)
    except Exception as e:
        logging.exception(e)


class DataBaseModel(BaseModel):
@ -412,15 +414,15 @@ def init_database_tables(alter_fields=[]):
    for name, obj in members:
        if obj != DataBaseModel and issubclass(obj, DataBaseModel):
            table_objs.append(obj)
            logging.debug(f"start create table {obj.__name__}")
            try:
                obj.create_table()
                logging.debug(f"create table success: {obj.__name__}")
            except Exception as e:
                logging.exception(e)
                create_failed_list.append(obj.__name__)
    if create_failed_list:
        logging.error(f"create tables failed: {create_failed_list}")
        raise Exception(f"create tables failed: {create_failed_list}")
    migrate_db()
@ -470,7 +472,7 @@ class User(DataBaseModel, UserMixin):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
    is_superuser = BooleanField(null=True, help_text="is root", default=False, index=True)
@ -479,7 +481,7 @@ class User(DataBaseModel, UserMixin):
        return self.email

    def get_id(self):
        jwt = Serializer(secret_key=settings.SECRET_KEY)
        return jwt.dumps(str(self.access_token))

    class Meta:
@ -525,7 +527,7 @@ class Tenant(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -542,7 +544,7 @@ class UserTenant(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -559,7 +561,7 @@ class InvitationCode(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -582,7 +584,7 @@ class LLMFactories(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -616,7 +618,7 @@ class LLM(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -648,7 +650,7 @@ class TenantLLM(DataBaseModel):
        index=True)
    api_key = CharField(max_length=1024, null=True, help_text="API KEY", index=True)
    api_base = CharField(max_length=255, null=True, help_text="API Base")
    max_tokens = IntegerField(default=8192, index=True)
    used_tokens = IntegerField(default=0, index=True)

    def __str__(self):
@ -700,10 +702,11 @@ class Knowledgebase(DataBaseModel):
        default=ParserType.NAIVE.value,
        index=True)
    parser_config = JSONField(null=False, default={"pages": [[1, 1000000]]})
    pagerank = IntegerField(default=0, index=False)
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -757,6 +760,7 @@ class Document(DataBaseModel):
        default="")
    process_begin_at = DateTimeField(null=True, index=True)
    process_duation = FloatField(default=0)
    meta_fields = JSONField(null=True, default={})

    run = CharField(
        max_length=1,
@ -767,7 +771,7 @@ class Document(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -840,7 +844,7 @@ class Task(DataBaseModel):
    doc_id = CharField(max_length=32, null=False, index=True)
    from_page = IntegerField(default=0)
    to_page = IntegerField(default=100000000)
    begin_at = DateTimeField(null=True, index=True)
    process_duation = FloatField(default=0)
@ -851,6 +855,8 @@ class Task(DataBaseModel):
        help_text="process message",
        default="")
    retry_count = IntegerField(default=0)
    digest = TextField(null=True, help_text="task digest", default="")
    chunk_ids = LongTextField(null=True, help_text="chunk ids", default="")


class Dialog(DataBaseModel):
@ -879,8 +885,10 @@ class Dialog(DataBaseModel):
        default="simple",
        help_text="simple|advanced",
        index=True)
    prompt_config = JSONField(null=False,
                              default={"system": "", "prologue": "Hi! I'm your assistant, what can I do for you?",
                                       "parameters": [],
                                       "empty_response": "Sorry! No relevant content was found in the knowledge base!"})
    similarity_threshold = FloatField(default=0.2)
    vector_similarity_weight = FloatField(default=0.3)
@ -894,7 +902,7 @@ class Dialog(DataBaseModel):
        null=False,
        default="1",
        help_text="it needs to insert reference index into answer or not")
    rerank_id = CharField(
        max_length=128,
        null=False,
@ -904,7 +912,7 @@ class Dialog(DataBaseModel):
    status = CharField(
        max_length=1,
        null=True,
        help_text="is it validate(0: wasted, 1: validate)",
        default="1",
        index=True)
@ -918,6 +926,7 @@ class Conversation(DataBaseModel):
    name = CharField(max_length=255, null=True, help_text="converastion name", index=True)
    message = JSONField(null=True)
    reference = JSONField(null=True, default=[])
    user_id = CharField(max_length=255, null=True, help_text="user_id", index=True)

    class Meta:
        db_table = "conversation"
@ -928,6 +937,7 @@ class APIToken(DataBaseModel):
    token = CharField(max_length=255, null=False, index=True)
    dialog_id = CharField(max_length=32, null=False, index=True)
    source = CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True)
    beta = CharField(max_length=255, null=True, index=True)

    class Meta:
        db_table = "api_token"
@ -942,7 +952,7 @@ class API4Conversation(DataBaseModel):
    reference = JSONField(null=True, default=[])
    tokens = IntegerField(default=0)
    source = CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True)
    dsl = JSONField(null=True, default={})
    duration = FloatField(default=0, index=True)
    round = IntegerField(default=0, index=True)
    thumb_up = IntegerField(default=0, index=True)
@ -980,14 +990,14 @@ class CanvasTemplate(DataBaseModel):
def migrate_db():
    with DB.transaction():
        migrator = DatabaseMigrator[settings.DATABASE_TYPE.upper()].value(DB)
        try:
            migrate(
                migrator.add_column('file', 'source_type', CharField(max_length=128, null=False, default="",
                                    help_text="where dose this document come from",
                                    index=True))
            )
        except Exception:
            pass
        try:
            migrate(
@ -996,7 +1006,7 @@ def migrate_db():
                                    help_text="default rerank model ID"))
            )
        except Exception:
            pass
        try:
            migrate(
@ -1004,52 +1014,104 @@ def migrate_db():
                                    help_text="default rerank model ID"))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column('dialog', 'top_k', IntegerField(default=1024))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.alter_column_type('tenant_llm', 'api_key',
                                           CharField(max_length=1024, null=True, help_text="API KEY", index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column('api_token', 'source',
                                    CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("tenant", "tts_id",
                                    CharField(max_length=256, null=True, help_text="default tts model ID", index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column('api_4_conversation', 'source',
                                    CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True))
            )
        except Exception:
            pass
        try:
            DB.execute_sql('ALTER TABLE llm DROP PRIMARY KEY;')
            DB.execute_sql('ALTER TABLE llm ADD PRIMARY KEY (llm_name,fid);')
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column('task', 'retry_count', IntegerField(default=0))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.alter_column_type('api_token', 'dialog_id',
                                           CharField(max_length=32, null=True, index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("tenant_llm", "max_tokens", IntegerField(default=8192, index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("api_4_conversation", "dsl", JSONField(null=True, default={}))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("knowledgebase", "pagerank", IntegerField(default=0, index=False))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("api_token", "beta", CharField(max_length=255, null=True, index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("task", "digest", TextField(null=True, help_text="task digest", default=""))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("task", "chunk_ids", LongTextField(null=True, help_text="chunk ids", default=""))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("conversation", "user_id",
                                    CharField(max_length=255, null=True, help_text="user_id", index=True))
            )
        except Exception:
            pass
        try:
            migrate(
                migrator.add_column("document", "meta_fields",
                                    JSONField(null=True, default={}))
            )
        except Exception:
            pass
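Each migration in `migrate_db` is wrapped in `try/except: pass`, so re-running the function against an already-migrated schema is harmless: a duplicate-column error just means that step already ran. The same idempotent-by-swallowing pattern, demonstrated with stdlib `sqlite3` instead of peewee's migrator:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task (id INTEGER)")

def add_column_if_missing(ddl: str) -> None:
    # Mirrors the try/migrate/except-pass blocks above: a duplicate-column
    # error is treated as "migration already applied".
    try:
        conn.execute(ddl)
    except sqlite3.OperationalError:
        pass

add_column_if_missing("ALTER TABLE task ADD COLUMN retry_count INTEGER DEFAULT 0")
add_column_if_missing("ALTER TABLE task ADD COLUMN retry_count INTEGER DEFAULT 0")  # no-op

cols = [row[1] for row in conn.execute("PRAGMA table_info(task)")]
print(cols)  # ['id', 'retry_count']
```

The trade-off is that genuine migration failures are also silenced; logging the swallowed exception would make debugging broken schemas easier.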

View File

@ -15,19 +15,12 @@
#
import operator
from functools import reduce

from playhouse.pool import PooledMySQLDatabase

from api.utils import current_timestamp, timestamp_to_date
from api.db.db_models import DB, DataBaseModel


@DB.connection_context()
@ -93,7 +86,7 @@ supported_operators = {
def query_dict2expression(
        model: type[DataBaseModel], query: dict[str, bool | int | str | list | tuple]):
    expression = []
    for field, value in query.items():
@ -111,8 +104,8 @@ def query_dict2expression(
    return reduce(operator.iand, expression)


def query_db(model: type[DataBaseModel], limit: int = 0, offset: int = 0,
             query: dict = None, order_by: str | list | tuple | None = None):
    data = model.select()
    if query:
        data = data.where(query_dict2expression(model, query))
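`query_dict2expression` folds one peewee expression per field into a single conjunction with `reduce(operator.iand, ...)`, which falls back to `__and__` when no in-place variant exists. The same fold, shown without peewee by overloading `&` on a tiny predicate class (our stand-in, not the repo's types):

```python
import operator
from functools import reduce

class Pred:
    """Minimal stand-in for a peewee expression: supports p & q."""
    def __init__(self, fn):
        self.fn = fn
    def __and__(self, other):
        return Pred(lambda row: self.fn(row) and other.fn(row))

def dict2pred(query: dict) -> Pred:
    # One equality predicate per field, folded with &, mirroring
    # query_dict2expression's reduce(operator.iand, expression).
    preds = [Pred(lambda row, f=f, v=v: row.get(f) == v) for f, v in query.items()]
    return reduce(operator.iand, preds)

p = dict2pred({"status": "1", "email": "a@b.co"})
print(p.fn({"status": "1", "email": "a@b.co"}))  # True
print(p.fn({"status": "0", "email": "a@b.co"}))  # False
```

In the real function the elements are peewee field comparisons, so the fold yields a single SQL `WHERE ... AND ...` clause rather than a Python callable.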

Some files were not shown because too many files have changed in this diff.