Compare commits

...

230 Commits

Author SHA1 Message Date
1160b58b6e Update pyproject.toml to 0.15.1 (#4222)
### What problem does this PR solve?

Missing update information

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-25 14:06:00 +08:00
61790ebe15 Fix: Rename chat name, missing field 'avatar' #4125 (#4221)
### What problem does this PR solve?

Fix: Rename chat name, missing field 'avatar' #4125

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-25 11:25:50 +08:00
bc3288d390 Fix misspelling. (#4219)
### What problem does this PR solve?
#4216

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-25 10:48:59 +08:00
4e5f92f01b Fix interface as input variable for component. (#4212)
### What problem does this PR solve?

#4108

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-24 17:58:11 +08:00
7d8e0602aa Remove session owner check. (#4211)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-24 17:40:31 +08:00
03cbbf7784 Add user_id for third-party system to record sessions. (#4206)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-24 15:59:11 +08:00
b7a7413419 Bump infinity to 0.5.2 (#4207)
### What problem does this PR solve?

Bump infinity to 0.5.2

### Type of change

- [x] Refactoring
2024-12-24 15:17:37 +08:00
321e9f3719 fix: stop rerank by model when search result is empty (#4203)
### What problem does this PR solve?


Stop reranking by model when the search result is empty; otherwise the
rerank may raise an error (Qwen).

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 刘博 <liubo@ynby.cn>
2024-12-24 14:33:46 +08:00
76cd23eecf Catch the exception while parsing pptx. (#4202)
### What problem does this PR solve?
#4189

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-24 10:49:28 +08:00
d030b4a680 Update progress time info (#4193)
### What problem does this PR solve?

Ignore the millisecond and microsecond value.
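
A minimal sketch of the idea (the helper name is hypothetical, not taken from this PR):

```
from datetime import datetime

def progress_timestamp() -> str:
    # Drop millisecond/microsecond precision before formatting.
    return datetime.now().replace(microsecond=0).isoformat(sep=" ")

print(progress_timestamp())  # e.g. "2024-12-23 21:04:44"
```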

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 21:04:44 +08:00
a9fd6066d2 Fix score() issue (#4194)
### What problem does this PR solve?

as title

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 21:01:20 +08:00
c373dba0bc Fix raptor bug. (#4192)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 18:59:48 +08:00
cf62230548 Fix: Fixed the issue that the page crashed when the node ID was the same as the combo ID #4180 (#4191)
### What problem does this PR solve?

Fix: Fixed the issue that the page crashed when the node ID was the same
as the combo ID #4180

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 18:13:56 +08:00
8d73cf6f02 Added time to progress message (#4185)
### What problem does this PR solve?

Added time to progress message

### Type of change

- [x] Refactoring
2024-12-23 17:25:55 +08:00
b635002666 Fix duplicated community (#4187)
### What problem does this PR solve?

#4180

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 17:07:12 +08:00
4abc144d3d Fix error of changing embedding model (#4184)
### What problem does this PR solve?

1. Changing the embedding model of a knowledge base won't change the
default embedding model.
2. Retrieval test bug

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 16:23:54 +08:00
a4bccc1ae7 Feat: If there is no result in the recall test, an empty data image will be displayed. #4182 (#4183)
### What problem does this PR solve?

Feat: If there is no result in the recall test, an empty data image will
be displayed. #4182
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-23 15:17:56 +08:00
8f070c3d56 Fix 'SCORE' not found bug (#4178)
### What problem does this PR solve?

As title

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-23 14:50:12 +08:00
31d67c850e Fetch chunk by batches. (#4177)
### What problem does this PR solve?

#4173
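
The PR body only references #4173; as a hedged illustration of the batch-fetching pattern the title describes (`fetch_page` is a hypothetical callable, not RAGFlow's actual API):

```
def fetch_all_chunks(fetch_page, batch_size=1000):
    # Pull chunks page by page instead of one request per chunk
    # (or one giant request for everything).
    offset = 0
    while True:
        batch = fetch_page(offset=offset, limit=batch_size)  # hypothetical
        if not batch:
            break
        yield from batch
        offset += len(batch)
```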

### Type of change

- [x] Performance Improvement
2024-12-23 12:12:15 +08:00
2cbe064080 Add Llama3.3 (#4174)
### What problem does this PR solve?

#4168

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-23 11:18:01 +08:00
cac7851fc5 Update jin10.svg (#4159)
Allows customizing color and size instead of using base64.
2024-12-23 10:34:13 +08:00
96da618b6a Fix bug in chunk classification by document in the prompt (#4156)
The refactoring in 0.15 contains a small bug that eliminates the
classification.

### What problem does this PR solve?

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 10:22:57 +08:00
f13f503952 Use s3 configuration from settings module (#4167)
### What problem does this PR solve?

Fix the issue by retrieving AWS credentials from the S3 configuration
in the settings module instead of from environment variables.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-23 10:22:45 +08:00
cb45431412 Fix Voyage re-rank model. Limit file name length. (#4171)
### What problem does this PR solve?

#4152 
#4154

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-23 10:03:50 +08:00
85083ad400 Validate returned chunk at list_chunks and add_chunk (#4153)
### What problem does this PR solve?

Validate returned chunk at list_chunks and add_chunk

### Type of change

- [x] Refactoring
2024-12-20 22:55:45 +08:00
35580af875 Update the component of the agent API with parameters. (#4131)
### What problem does this PR solve?

Update the component of the agent API with parameters.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-20 17:34:16 +08:00
a0dc9e1bdf Fix position_int on infinity (#4144)
### What problem does this PR solve?

Fix position_int on infinity

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-20 11:30:33 +08:00
6379a934ff Fix redis get error. (#4140)
### What problem does this PR solve?

#4126
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-20 10:39:50 +08:00
10a62115c7 Fix example in doc (#4133)
### What problem does this PR solve?

Fix example in doc

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-19 20:08:22 +08:00
e38e3bcc3b Mask password in log (#4129)
### What problem does this PR solve?

Mask password in log
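
A minimal sketch of the masking idea, assuming patterns like `password=...` and `user:pass@host` URLs (the regexes are illustrative, not the PR's actual code):

```
import re

def mask_password(text: str) -> str:
    # Replace the password portion of "password=secret" style pairs
    # and "user:pass@host" URLs before the text reaches the log.
    text = re.sub(r"(password\s*[=:]\s*)\S+", r"\1******", text,
                  flags=re.IGNORECASE)
    text = re.sub(r"(://[^:/@\s]+:)[^@\s]+@", r"\1******@", text)
    return text

print(mask_password("mysql://root:secret@db:3306"))  # mysql://root:******@db:3306
```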

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-19 18:37:01 +08:00
8dcf99611b Corrections. (#4127)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-19 18:19:56 +08:00
213218a094 Refactor ask decorator (#4116)
### What problem does this PR solve?

Refactor ask decorator

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-19 18:13:33 +08:00
478da3118c Add Gemini 2.0 (#4115)
Add Gemini 2.0
2024-12-19 17:30:45 +08:00
101b8ff813 Fix chunk method "Table" losing content when the Excel file has multiple sheets (#4123)

### What problem does this PR solve?
Discussed in https://github.com/infiniflow/ragflow/pull/4102
- In excel_parser.py, `total` means the total number of rows in the Excel
file, but it was returned in the first iteration, which led to a wrong
`to_page`.
- In table.py, when an Excel file has multiple sheets it is divided into
multiple parts of size 3000 each; `data` may be empty because it was
already recorded in the previous iteration.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-19 17:30:26 +08:00
d8fca43017 Make fast embed and default embed mutually exclusive. (#4121)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-12-19 17:27:09 +08:00
b35e811fe7 Add parameters for ask_chat and fix bugs in list_sessions (#4119)
### What problem does this PR solve?

Add parameters for ask_chat and fix bugs in list_sessions
#4105
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-19 17:24:26 +08:00
7474348394 Fix fastembed reloading issue. (#4117)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-19 16:18:18 +08:00
8939206531 Separated list_agents() from session management (#4111)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-19 14:36:51 +08:00
57c99dd811 Fixed infinity exception SCORE() / SCORE_FACTORS() requires Fusion or MATCH TEXT or MATCH TENSOR (#4110)
### What problem does this PR solve?

Fixed infinity exception SCORE() / SCORE_FACTORS() requires Fusion or
MATCH TEXT or MATCH TENSOR. Close #4109

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-19 13:49:36 +08:00
561eeabfa4 add typo locale (#4099)
Add typo vi
2024-12-19 13:34:11 +08:00
5fb9136251 Miscellaneous updates to session APIs (#4097)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-18 19:01:05 +08:00
044bb0334b Fix release.yml (#4100)
### What problem does this PR solve?

Fix release.yml

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 17:23:29 +08:00
a5cf6fc546 Feat: Translate the previous run into parsing #4094 (#4095)
### What problem does this PR solve?

Feat: Translate the previous run into parsing #4094

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-18 16:16:57 +08:00
57fe5d0864 Add latest updates. (#4093)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-18 16:00:24 +08:00
bfdc4944a3 Added release notes for v0.15.0 (#4056)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-12-18 15:46:31 +08:00
a45ba3a91e Prepare docs for v0.15.0 release (#4077)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-18 15:32:15 +08:00
e513ad2f16 Feat: Add MultiSelect #3221 (#4090)
### What problem does this PR solve?

Feat: Add MultiSelect #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-18 14:51:24 +08:00
1fdad50dac Fix raptor (#4089)
### What problem does this PR solve?

Fix raptor

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 14:42:33 +08:00
4764ca5ef7 Bump infinity to 0.5.0 (#4088)
### What problem does this PR solve?

Bump infinity to 0.5.0

### Type of change

- [x] Refactoring
2024-12-18 14:39:09 +08:00
85f3d92816 Update team invite message (#4085)
### What problem does this PR solve?

Refactor inviting team member message.

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-18 14:20:09 +08:00
742eef028f Add huqie trie to docker image. (#4084)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-12-18 14:19:43 +08:00
dfbdeaddaf Fix: Fixed the issue where the required information in the input box was incorrect when inviting users #2834 (#4086)
### What problem does this PR solve?

Fix: Fixed the issue where the required information in the input box was
incorrect when inviting users #2834

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 14:07:56 +08:00
50c2b9d562 Refactor trie load and construct (#4083)
### What problem does this PR solve?

1. Fix initial build and load trie
2. Update comment

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-18 12:52:56 +08:00
f8cef73244 Fix abnormal user invitation message. (#4081)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 12:45:24 +08:00
f8c9ec4d56 Fix arm doc (#4080)
### What problem does this PR solve?

Fix arm doc

### Type of change

- [x] Documentation Update
2024-12-18 11:51:03 +08:00
db74a3ef34 Fix conversation bug in agent session (#4078)
### What problem does this PR solve?

Fix conversation bug in agent session

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-18 11:49:25 +08:00
00f99ecbd5 Fix: Fixed the issue with external chat box reporting errors #3909 (#4079)
### What problem does this PR solve?

Fix:  Fixed the issue with external chat box reporting errors #3909

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-18 11:16:52 +08:00
0a3c6fff7c Update Chinese model warning message (#4075)
### What problem does this PR solve?

Fix the Chinese warning message prompting users to update the model

### Type of change

- [x] Other (please describe): see the message

---------

Signed-off-by: Hui Peng <benquike@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-18 11:16:31 +08:00
79e435fc2e Fix: The cursor is lost after entering a character in the operator form #4072 (#4073)
### What problem does this PR solve?

Fix: The cursor is lost after entering a character in the operator form
#4072

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 18:07:22 +08:00
163c2a70fc Feat: Add AdvancedSettingForm #3221 (#4071)
### What problem does this PR solve?

Feat: Add AdvancedSettingForm #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 17:58:03 +08:00
bedc09f69c Add Architecture-Specific Logic for msodbcsql in Dockerfile #4036 (#4049)
### What problem does this PR solve?

For the new "Add mssql support" feature in the Dockerfile, I suggest
including msodbcsql18 support for ARM64.
Based on current testing results (in a macOS ARM64 environment),
msodbcsql18 needs to be installed.
I hope future Dockerfiles can incorporate a conditional check for this.
Specifically:

When $ARCH=arm64 (macOS ARM64 environment), install msodbcsql18.
In other cases (general x86_64 environments), install msodbcsql17.
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Mage Lu <magelu@MagedeMac-mini.local>
2024-12-17 17:44:51 +08:00
251592eeeb show avatar dialog instead of default (#4033)
show avatar dialog instead of default

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-17 17:29:35 +08:00
09436f6c60 Elasticsearch disk-based shard allocator: use absolute byte values instead of ratios (#4069)
### What problem does this PR solve?

Make the Elasticsearch disk-based shard allocator use absolute byte
values instead of ratios. Close #4018

### Type of change

- [ ] Documentation Update
- [x] Refactoring
2024-12-17 16:48:40 +08:00
e8b4e8b3d7 Feat: Bind event to the theme Switch #3221 (#4067)
### What problem does this PR solve?

Feat: Bind event to  the theme Switch #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 16:32:17 +08:00
000cd6d615 Fix position lost issue. (#4068)
### What problem does this PR solve?

#4040

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 16:31:58 +08:00
1d65299791 Fix rerank_model bug in chat and markdown bug (#4061)
### What problem does this PR solve?

Fix rerank_model bug in chat and markdown bug
#4000
#3992
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-17 16:03:37 +08:00
bcccaccc2b Added pagerank support to infinity (#4059)
### What problem does this PR solve?

Added pagerank support to infinity

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 15:45:01 +08:00
fddac1345d Fix raptor reusable issue. (#4063)
### What problem does this PR solve?

#4045

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 15:28:35 +08:00
4a95349492 Feat: Modify the link address of the agent id #3909 (#4062)
### What problem does this PR solve?

Feat: Modify the link address of the agent id #3909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 15:13:22 +08:00
0fcb564261 Feat: Modify the text of the embedded website button #3909 (#4057)
### What problem does this PR solve?

Feat: Modify the text of the embedded website button #3909

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 14:46:54 +08:00
96667696d2 Compatible with former API keys. (#4055)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 13:58:26 +08:00
ce1e855328 Upgrades Document Layout Analysis model. (#4054)
### What problem does this PR solve?

#4052

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 11:27:19 +08:00
b5e4a5563c Feat: Set the color of the canvas's control button #3851 (#4053)
### What problem does this PR solve?

Feat: Set the color of the canvas's control button #3851

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 11:23:00 +08:00
1053ef5551 Fix: Hide the upload button in the external chat box #2242 (#4048)
### What problem does this PR solve?

Fix: Hide the upload button in the external chat box  #2242

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-17 10:32:52 +08:00
cb6e9ce164 Cache the result from llm for graphrag and raptor (#4051)
### What problem does this PR solve?

#4045

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-17 09:48:03 +08:00
8ea631a2a0 Fix: Every time you switch the page number of a chunk, the PDF document will be reloaded. #4046 (#4047)
### What problem does this PR solve?

Fix: Every time you switch the page number of a chunk, the PDF document
will be reloaded. #4046

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-16 18:51:45 +08:00
7fb67c4f67 Fix chunk number error after re-parsing. (#4043)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-16 15:23:49 +08:00
44ac87aef4 Remove Redundant None Check for vector_similarity_weight (#4037)
### What problem does this PR solve?
The removed if statement is unnecessary and adds cognitive load for
readers.
The original code:
```
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
if vector_similarity_weight is None:
    vector_similarity_weight = 0.3
```
has been simplified to:
```
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
```

### Type of change
- [x] Refactoring
2024-12-16 14:35:21 +08:00
7ddccbb952 Extract SQL query from LLM output (#4027)
Clone of https://github.com/infiniflow/ragflow/pull/4023.
Improve the information extraction: most LLMs return results in markdown
format, wrapping the query in ```sql ... ``` fences.
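
A hedged sketch of that extraction, assuming the query arrives wrapped in a markdown fence (not the PR's actual code):

```
import re

def extract_sql(llm_output: str) -> str:
    # Prefer a fenced ```sql ... ``` block; fall back to the raw text.
    match = re.search(r"```sql\s*(.*?)```", llm_output,
                      re.DOTALL | re.IGNORECASE)
    return (match.group(1) if match else llm_output).strip().rstrip(";")

print(extract_sql("```sql\nSELECT 1;\n```"))  # SELECT 1
```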
2024-12-16 09:46:59 +08:00
4a7bc4df92 Updated configurations (#4032)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-12-13 19:45:54 +08:00
3b7d182720 Fixed a display issue (#4030)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-12-13 19:11:52 +08:00
78527acd88 Update launch_ragflow_from_source.md (#4005)
Update: install the full poetry package set

- [x] Documentation Update
2024-12-13 18:58:01 +08:00
e5c3083826 Updated RAGFlow Agent UI (#4029)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-13 18:57:22 +08:00
9b9039de92 Fix connection error for adding visual llm. (#4028)
### What problem does this PR solve?

#3897

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-13 18:54:51 +08:00
9b2ef62aee Fix xinfo_groups returns unexpected result (#4026)
### What problem does this PR solve?

Fix xinfo_groups returns unexpected result. Close #3545 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-13 17:31:15 +08:00
86507af770 Set task progress on exception (#4025)
### What problem does this PR solve?

Set task progress on exception

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-13 17:15:08 +08:00
93635674c3 Feat: Reparse a file shall reuse existing chunks if possible #3793 (#4021)
### What problem does this PR solve?

Feat: Reparse a file shall reuse existing chunks if possible #3793

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 16:55:13 +08:00
1defe0b19b Feat: Supports to debug single component in Agent. #3993 (#4007)
### What problem does this PR solve?

Feat: Supports to debug single component in Agent. #3993
Fix: The github button on the login page is displayed incorrectly  #4002

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 14:43:24 +08:00
0bca46ac3a Migrate infinity at startup (#3858)
### What problem does this PR solve?

Migrate infinity at startup

#3809
https://github.com/infiniflow/infinity/issues/2321

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 13:43:56 +08:00
1ecb687c51 Fix bugs in agent api and update api document (#3996)
### What problem does this PR solve?

Fix bugs in agent api and update api document

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-13 10:25:52 +08:00
68d46b2a1e Fix bug in hierarchical_merge function (#4006)
### What problem does this PR solve?

Fix the hierarchical_merge function: compare actual value vs. actual
value instead of index vs. actual value.
Related issue #4003 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: luopan <luopan@example.com>
2024-12-13 08:50:58 +08:00
7559bbd46d Component debugging functionality. (#4012)
### What problem does this PR solve?

#3993
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-13 08:50:32 +08:00
275b5d14f2 Fix json file parse (#4004)
### What problem does this PR solve?

Fix json file parsing

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-12 20:34:46 +08:00
9ae81b42a3 Updated UI (#4011)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-12-12 19:46:53 +08:00
d6c74ff131 Add mssql support (#3985)
Several changes:
- ExeSQL: add MSSQL connection
- Fix duckduckgo-search rate limit bug
- Update Vietnamese locale resources

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-12 19:26:44 +08:00
e8d74108a5 Fix: Completion AttributeError: 'list' object has no attribute 'get' (#3999)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: lizheng@ssc-hn.com <lizheng@ssc-hn.com>
2024-12-12 19:00:34 +08:00
c8b1a564aa Replaced md5 with xxhash64 for chunk id (#4009)
### What problem does this PR solve?

Replaced md5 with xxhash64 for chunk id
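
A minimal sketch of the idea, assuming the `xxhash` package; the key composition is hypothetical:

```
import xxhash

def chunk_id(doc_id: str, content: str) -> str:
    # xxhash64 is much faster than md5 and still yields a stable,
    # well-distributed hex id for a chunk.
    return xxhash.xxh64((doc_id + content).encode("utf-8")).hexdigest()
```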

### Type of change

- [x] Refactoring
2024-12-12 17:47:39 +08:00
301f95837c Try to reuse existing chunks (#3983)
### What problem does this PR solve?

Try to reuse existing chunks. Close #3793
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-12 16:38:03 +08:00
835fd7abcd Updated RAGFlow edition descriptions (#4001)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-12-12 11:45:59 +08:00
bb8f97c9cd UI updates + RAGFlow image description (#3995)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-12-12 09:57:52 +08:00
6d19294ddc Support debug components. (#3994)
### What problem does this PR solve?

#3993

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 19:23:59 +08:00
f61c276f74 Update comment (#3981)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-11 18:39:09 +08:00
409acf0d9f Fix: Fixed the issue where two consecutive indexes were displayed incorrectly #3839 (#3988)
### What problem does this PR solve?

Fix: Fixed the issue where two consecutive indexes were displayed
incorrectly #3839

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 16:29:17 +08:00
74c6b21f3b Update api documents (#3979)
### What problem does this PR solve?

Update api documents

### Type of change

- [x] Documentation Update

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-11 12:38:57 +08:00
beeacd3e3f Fix exec sql exception issue. (#3982)
### What problem does this PR solve?
#3978

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 11:44:59 +08:00
95259af68f update typo vietnamese (#3973)
update typo vietnamese

### Type of change
- [x] Refactoring

---------

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: bill <yibie_jingnian@163.com>
2024-12-11 11:12:57 +08:00
855455006b Disable SQL DB binlog in Helm chart (#3976)
### What problem does this PR solve?

The initial Helm chart implementation added in #3815 suffers from an
issue where the 5GB data volume for the SQL DB is filled up with
[binlog](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) files
after just a few days. Since the app uses a non-replicated SQL DB config
I think it makes sense to disable the binlog in the SQL DB container.
This is achieved by simply adding the required argument to the container
startup command.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-12-11 11:10:33 +08:00
b844ad6e06 Added release notes (#3969)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-12-10 19:38:27 +08:00
e0533f19e9 Fix: Answers with links to information not parsing #3839 (#3968)
### What problem does this PR solve?

Fix: Answers with links to information not parsing #3839

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 18:58:06 +08:00
9a6d976252 Add back beartype (#3967)
### What problem does this PR solve?

Add back beartype
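
For reference, a small hedged example of what beartype enforces (not code from this repo):

```
from beartype import beartype

@beartype
def top_k(scores: list[float], k: int) -> list[float]:
    # Argument and return types are validated at call time.
    return sorted(scores, reverse=True)[:k]

top_k([0.2, 0.9, 0.5], 2)      # fine
# top_k([0.2, 0.9, 0.5], "2")  # would raise a beartype type-check violation
```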

### Type of change

- [x] Refactoring
2024-12-10 18:43:43 +08:00
3d76f10a91 Fixed retrieval TypeError: unhashable type: 'list' (#3966)
### What problem does this PR solve?

Fixed retrieval TypeError: unhashable type: 'list'

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 18:28:56 +08:00
e9b8c30a38 Support iframe chatbot. (#3961)
### What problem does this PR solve?

#3909

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 17:03:24 +08:00
601d74160b Feat: Exclude reference from the data returned by the conversation/get interface #3909 (#3962)
### What problem does this PR solve?

Feat: Exclude reference from the data returned by the conversation/get
interface #3909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 16:46:47 +08:00
fc4e644e5f Feat: Modify the data structure of the chunk in the conversation #3909 (#3955)
### What problem does this PR solve?

Feat: Modify the data structure of the chunk in the conversation #3909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 16:36:16 +08:00
03f00c9e6f Rename page_num_list, top_list, position_list (#3940)
### What problem does this PR solve?

Rename page_num_list, top_list, position_list to page_num_int, top_int,
position_int

### Type of change

- [x] Refactoring
2024-12-10 16:32:58 +08:00
87e46b4425 Fixed README (#3956)
### What problem does this PR solve?

Fixed README

### Type of change

- [x] Documentation Update
2024-12-10 12:11:39 +08:00
d5a322a352 Theme switch support (#3568)
### What problem does this PR solve?
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-12-10 11:42:04 +08:00
7d4f1c0645 Case insensitive when set doc engine (#3954)
### What problem does this PR solve?

DOC_ENGINE="INFINITY" or "Infinity" or "Elasticsearch" also works

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-10 11:26:10 +08:00
927873bfa6 Fix syn error. (#3953)
### What problem does this PR solve?

Close #3696
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 10:54:54 +08:00
5fe0791684 Allows quick entry of entities separated by commas (#3914)
Allows quick entry of entities separated by commas

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-10 10:29:27 +08:00
3e134ac0ad Miscellaneous updates (#3942)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-12-10 10:19:50 +08:00
7a6bf4326e Fixed log not displaying (#3946)
### What problem does this PR solve?

Fixed log not displaying

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-10 09:36:59 +08:00
41a0601735 organize chunks by document in the prompt (#3925)
### What problem does this PR solve?

This PR organizes chunks in the prompt by document and indicates the
name of each document, in this way:

```
Document: {doc_name} \nContains the following relevant fragments:
chunk1
chunk2
chunk3

Document: {doc_name} \nContains the following relevant fragments:
chunk4
chunk5
```

Maybe this can be a baseline for adding metadata to the documents.

In my case this allows improving the LLM's context about the origin of
the information.


### Type of change

- [X] New Feature (non-breaking change which adds functionality)

Co-authored-by: Miguel <your-noreply-github-email>
2024-12-10 09:06:52 +08:00
60486ecde5 api http return error (#3941)
The API now returns an HTTP error when the content is not found


- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-10 09:04:24 +08:00
255f4ccffc Fix session API issues. (#3939)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 17:37:36 +08:00
afe82feb57 Fix error message for image access. (#3936)
### What problem does this PR solve?

#3883

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 15:24:58 +08:00
044afa83d1 Fix transformers dependencies for slim. (#3934)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 14:21:37 +08:00
4b00be4173 Fixed tmp in Dockerfile (#3933)
### What problem does this PR solve?

Fixed tmp in Dockerfile

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 14:20:18 +08:00
215e9361ea Fix field missing issue. (#3931)
### What problem does this PR solve?

#3905
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-09 13:20:58 +08:00
aaec630759 Obsoleted dev and dev-slim (#3930)
### What problem does this PR solve?

Obsoleted dev and dev-slim
### Type of change

- [x] Documentation Update
2024-12-09 12:44:57 +08:00
3d735dca87 Add support to iframe chatbot (#3929)
### What problem does this PR solve?

#3909

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-09 12:38:04 +08:00
dcedfc5ec8 Add Japanese support (#3906)
### What problem does this PR solve?

Add native Japanese 🇯🇵 translations to the locales to support a new
local language



### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Hiroshi Kameya <kameya_h@sunflare.co.jp>
2024-12-09 11:38:06 +08:00
1254ecf445 Added static check at PR CI (#3921)
### What problem does this PR solve?

Added static check at PR CI

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2024-12-08 21:23:51 +08:00
0d68a6cd1b Fix errors detected by Ruff (#3918)
### What problem does this PR solve?

Fix errors detected by Ruff

### Type of change

- [x] Refactoring
2024-12-08 14:21:12 +08:00
e267a026f3 Fix VERSION 2024-12-07 16:56:34 +08:00
44d4686b20 Fix release.yml 2024-12-07 15:21:24 +08:00
95614175e6 Update oc9 docker compose file (#3913)
### What problem does this PR solve?

Update OC9 docker compose files

issue: #3901

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-07 11:04:45 +08:00
c817ff184b Refactor UI text (#3911)
### What problem does this PR solve?

Refactor UI text

### Type of change

- [x] Documentation Update
- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-07 11:04:36 +08:00
f284578cea Fix release.yml 2024-12-06 23:00:29 +08:00
e69e6b2274 Fix VERSION 2024-12-06 22:51:31 +08:00
8cdb805c0b Fix release.py 2024-12-06 22:31:27 +08:00
885418f3b0 Fix release.yml 2024-12-06 21:10:06 +08:00
b44321f9c3 Introduced NEED_MIRROR (#3907)
### What problem does this PR solve?

Introduced NEED_MIRROR

### Type of change

- [x] Refactoring
2024-12-06 20:47:22 +08:00
f54a8d7748 Remove vector stored in component output. (#3908)
### What problem does this PR solve?

Close #3805

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 18:32:57 +08:00
311a475b6f Fix: Fixed the issue that the agent list page failed to load #3827 (#3902)
### What problem does this PR solve?

Fix: Fixed the issue that the agent list page failed to load #3827

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 17:05:40 +08:00
655b01a0a4 Remove token check while adding model. (#3903)
### What problem does this PR solve?

#3820

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 17:01:19 +08:00
d4ee082735 fix 2024-12-06 15:50:58 +08:00
1f5a7c4b12 Fix release.yml 2024-12-06 15:24:53 +08:00
dab58b9311 Fix release.yml 2024-12-06 14:41:43 +08:00
e56a60b316 Fixed release.yml 2024-12-06 14:37:41 +08:00
f189452446 Fix: An error occurred when deleting a new conversation #3898 (#3900)
### What problem does this PR solve?

Fix: An error occurred when deleting a new conversation #3898

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-06 14:24:41 +08:00
f576c555e4 Fixed release.yml 2024-12-06 14:18:42 +08:00
d8eea624e2 release with CI (#3891)
### What problem does this PR solve?

Refactor Dockerfile files.
Release with CI.

### Type of change

- [x] Refactoring
2024-12-06 14:05:30 +08:00
e64c7dfdf6 Feat: Import & export the agents. #3851 (#3894)
### What problem does this PR solve?

Feat: Import & export the agents. #3851

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-06 13:43:17 +08:00
c76e7b1e28 Fix typos (#3887)
### What problem does this PR solve?

Lots of typos; these also need to be merged into DEMO

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-06 09:47:56 +08:00
0d5486aa57 Add a dependency: blinker. (#3882)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 18:14:39 +08:00
3a0e9f9263 Feat: Add tooltip to question item of ChunkCreatingModal #3873 (#3880)
### What problem does this PR solve?

Feat: Add tooltip to question item of ChunkCreatingModal  #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 17:03:20 +08:00
1f0a153d0e Revert "Feat: Add tooltip to question item of ChunkCreatingModal #3873" (#3879)
Reverts infiniflow/ragflow#3878
2024-12-05 16:51:33 +08:00
8bdf1d98a3 Feat: Add tooltip to question item of ChunkCreatingModal #3873 (#3878)
### What problem does this PR solve?

Feat: Add tooltip to question item of ChunkCreatingModal  #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 16:49:55 +08:00
8037dc7b76 Fix chunk available issue (#3877)
### What problem does this PR solve?

Close #3873

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 16:49:43 +08:00
56f473b680 Feat: Add question parameter to edit chunk modal (#3875)
### What problem does this PR solve?

Close #3873

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 14:51:19 +08:00
b502dc7399 Feat: Adjust the input box width of EditTag to 100% #3873 (#3876)
### What problem does this PR solve?

Feat: Adjust the input box width of EditTag to 100% #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 14:29:49 +08:00
cfe23badb0 Feat: Add question parameter to edit chunk modal #3873 (#3874)
### What problem does this PR solve?

Feat: Add question parameter to edit chunk modal #3873

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-05 13:58:33 +08:00
593ffc4067 Fix HuggingFace model error. (#3870)
### What problem does this PR solve?

#3865

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 13:28:42 +08:00
a88a1848ff Fix: The right coordinates of Categorize and Switch operators are misplaced #3868 (#3869)
### What problem does this PR solve?

Fix: The right coordinates of Categorize and Switch operators are
misplaced #3868

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 10:53:44 +08:00
5ae33184d5 Fix chunk position issue. (#3867)
### What problem does this PR solve?

Close #3864

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 10:53:26 +08:00
78601ee1bd Fix OpenAI-compatible rerank issue. (#3866)
### What problem does this PR solve?
#3700
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-05 10:26:21 +08:00
84afb4259c Feat: Bind the route to the navigation bar in the head #3221 (#3863)
### What problem does this PR solve?
Feat: Bind the route to the navigation bar in the head #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-04 19:10:08 +08:00
1b817a5b4c Refine synonym query. (#3855)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2024-12-04 17:20:12 +08:00
1b589609a4 Add sdk for list agents and sessions (#3848)
### What problem does this PR solve?

Add sdk for list agents and sessions

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-04 16:23:22 +08:00
289f4f1916 Fix: Hide the download button if it is an empty file #3762 (#3853)
### What problem does this PR solve?

Fix: Hide the download button if it is an empty file #3762

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 15:09:41 +08:00
cf37e2ef1a Fix: Delete unused code #3651 (#3852)
### What problem does this PR solve?

Fix: Delete unused code #3651

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 14:15:17 +08:00
41e2dadea7 Feat: Fixed the problem of not finding EmailForm #3837 (#3847)
### What problem does this PR solve?

Feat: Fixed the problem of not finding EmailForm #3837

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-04 14:05:07 +08:00
f3318b2e49 Update docs. (#3849)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-04 12:23:31 +08:00
3f3469130b Fix preview issue in file manager. (#3846)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 11:53:23 +08:00
fc38afcec4 Web: Fixed the download and preview file not authorized. (#3652)
https://github.com/infiniflow/ragflow/issues/3651

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-04 11:48:06 +08:00
efae7afd62 Email sending tool (#3837)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._
Added the function of sending emails through SMTP.
Instructions for use:
- The corresponding parameters need to be configured.
- The upstream component needs to output in a fixed format.

![image](https://github.com/user-attachments/assets/93bc1af7-6d4f-4406-bd1d-bc042535dd82)


### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-12-04 11:21:17 +08:00
285bc58364 Update QRcode (#3844)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-04 10:11:20 +08:00
6657ca7cde Change default error message to English (#3838)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-12-04 09:34:49 +08:00
87455d79e4 Add API for listing agents and agent sessions (#3835)
### What problem does this PR solve?

Add API for listing agents and agent sessions

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-03 19:03:16 +08:00
821fdf02b4 Fix parsing JSON file error (#3829)
### What problem does this PR solve?

Close issue: #3828

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-03 19:02:03 +08:00
54980337e4 Feat: Modify the style of the home page in bright mode #3221 (#3832)
### What problem does this PR solve?

Feat: Modify the style of the home page in bright mode #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 18:59:11 +08:00
92ab7ef659 Refactor embedding batch_size (#3825)
### What problem does this PR solve?

Refactor embedding batch_size. Close #3657
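
A hedged sketch of the batching pattern (`encode` and the batch size are hypothetical stand-ins, not this PR's code):

```
def embed_in_batches(encode, texts, batch_size=16):
    # Encode fixed-size slices so a large document doesn't exceed
    # the provider's per-request limit.
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(encode(texts[i:i + batch_size]))  # hypothetical
    return vectors
```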

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2024-12-03 16:22:39 +08:00
934dbc2e2b Add more mistral models. (#3826)
### What problem does this PR solve?

#3647

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 15:18:38 +08:00
95da6de9e1 Fix the agent reference bug and the session prologue (#3823)
### What problem does this PR solve?

Fix the agent reference bug and the session prologue
#3285 #3819
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-03 14:49:26 +08:00
ccdeeda9cc Add Helm chart deployment method (#3815)
### What problem does this PR solve?

Adds a Helm chart for deploying RAGFlow on Kubernetes.

Closes #864. 

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-12-03 14:48:36 +08:00
74b28ef1b0 Add pagerank to KB. (#3809)
### What problem does this PR solve?

#3794

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 14:30:35 +08:00
7543047de3 Fix @ in model name issue. (#3821)
### What problem does this PR solve?

#3814

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-03 12:41:39 +08:00
e66addc82d Feat: Store the pagerank field at the outermost #3794 (#3822)
### What problem does this PR solve?

Feat: Store the pagerank field at the outermost #3794

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-03 11:55:21 +08:00
7b6a5ffaff Fix: page_chars attribute does not exist in some formats of PDF (#3796)
### What problem does this PR solve?

In #3335 someone suggested upgrading to pdfplumber==0.11.1, but that
didn't solve it.
It's actually the special formatting in some of the PDFs that triggers
the problem.
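
A minimal sketch of the guard this implies; only the attribute name comes from the PR title, the helper itself is hypothetical:

```
def page_chars_of(page):
    # Some PDFs don't expose the attribute at all, so guard the
    # access instead of assuming it exists.
    return getattr(page, "page_chars", None) or []
```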

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-03 11:08:06 +08:00
19545282aa Web API test cases (#3812)
### What problem does this PR solve?

1. Failed update dataset case
2. Upload and parse text file

### Type of change

- [x] Other (please describe): test cases

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-03 10:33:35 +08:00
6a0583f5ad Fix voyage embedding. (#3818)
### What problem does this PR solve?

#3816 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-03 09:33:54 +08:00
ed7e46b6ca Feat: Add TestingForm #3221 (#3810)
### What problem does this PR solve?

Feat: Add TestingForm #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-02 19:33:20 +08:00
9654e64a0a Fix the bug that the agent could not find the context (#3795)
### What problem does this PR solve?

Fix the bug that the agent could not find the context
#3682
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-02 19:05:18 +08:00
8b650fc9ef Feat: Supports page rank score for different knowledge bases. #3794 (#3800)
### What problem does this PR solve?

Feat: Supports page rank score for different knowledge bases. #3794

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-02 19:00:11 +08:00
69fb323581 [DOC] We have no plan to maintain Docker images for ARM. Please build your Docker image (#3808)

### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-02 18:56:55 +08:00
9d093547e8 Improved ollama doc (#3787)
### What problem does this PR solve?

Improved ollama doc. Close #3723

### Type of change

- [x] Documentation Update
2024-12-02 17:28:30 +08:00
c5f13629af Set Log level by env (#3798)
### What problem does this PR solve?

Set Log level by env
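
A minimal sketch, assuming an environment variable named `LOG_LEVEL` (the variable name is an assumption):

```
import logging
import os

level_name = os.getenv("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.INFO))
logging.getLogger(__name__).info("log level set from LOG_LEVEL")
```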

### Type of change

- [x] Refactoring
2024-12-02 17:24:39 +08:00
c4b6df350a More dataset test cases (#3802)
### What problem does this PR solve?

1. Test not allowed fields of dataset

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Other (please describe): Test cases

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-02 17:15:19 +08:00
976d112280 Feat: Quit from jointed team #3759 (#3797)
### What problem does this PR solve?

Feat: Quit from jointed team #3759

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-12-02 15:31:41 +08:00
8fba5c4179 Update document: how to upload a file larger than 128MB (#3791)
### What problem does this PR solve?

As title.

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-12-02 14:37:37 +08:00
d19f059f34 Detect invalid response from api.siliconflow.cn (#3792)
### What problem does this PR solve?

Detect invalid response from api.siliconflow.cn. Close #2643

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-02 12:55:05 +08:00
deca6c1b72 Fix service_conf for oc9 docker compose file. (#3790)
### What problem does this PR solve?

#3678

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-02 12:00:58 +08:00
3ee9ca749d Fix the bug that prevented modifying dataset_ids (#3784)
### What problem does this PR solve?

Fix the bug that prevented modifying `dataset_ids`.
 #3772

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-12-02 11:40:20 +08:00
7058ac0041 Fix out of boundary. (#3786)
### What problem does this PR solve?

#3769
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-12-02 11:38:53 +08:00
a7efd3cac5 Removed obsolete instructions (#3785)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-12-02 10:51:48 +08:00
59a5813f1b Add new Jina models in the Jina connector (#3770)
### What problem does this PR solve?

Add new models in the Jina connector, to allow using models that support
multiple languages

### Type of change

- [X] Other (please describe): new connectors no breaking change
2024-12-02 10:06:39 +08:00
08c1a5e1e8 Refactor parse progress (#3781)
### What problem does this PR solve?

Refactor parse file progress

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 22:28:00 +08:00
ea84cc2e33 Update file parsing progress info (#3780)
### What problem does this PR solve?

Refine the file parsing progress info

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 17:03:00 +08:00
b5f643681f Update README (#3777)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 14:53:51 +08:00
5497ea34b9 Update QR code of wechat (#3776)
### What problem does this PR solve?

Update wechat group QR code.

### Type of change

- [x] Documentation Update

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-12-01 12:47:53 +08:00
e079656473 Update progress info and start welcome info (#3768)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-30 18:48:06 +08:00
d00297a763 Fix chunk creation using Infinity (#3763)
### What problem does this PR solve?

1. Store error type in Infinity.
2. The position list value read from Infinity isn't correct.

Fix issue: #3729

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-30 00:10:14 +08:00
a19210daf1 Feat: Add tooltip to delimiter filed #1909 (#3758)
### What problem does this PR solve?

Feat: Add tooltip to delimiter filed #1909

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 18:13:59 +08:00
b2abc36baa Optimize the knowledge base card UI (#3756)
### What problem does this PR solve?

Limit the title to 2 lines and the description to 3 lines, showing an
ellipsis for the excess.

![image](https://github.com/user-attachments/assets/4b01cac5-b7a3-40cc-8965-42aeba9ea233)

![image](https://github.com/user-attachments/assets/aa4230eb-100f-4905-9cf0-75ce9813a332)


### Type of change

- [x] Other (please describe):
2024-11-29 18:02:17 +08:00
fadbe23bfe Feat: Translate comments of file-util.ts to English #3749 (#3757)
### What problem does this PR solve?

Feat: Translate comments of file-util.ts to English #3749

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 18:01:14 +08:00
ea8a59d0b0 Image compression (#3749)
### What problem does this PR solve?

The uploaded avatar has been compressed to preserve transparency while
meeting the length requirements for the 'text' type. The current
compressed size is 100x100 pixels.
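
A hedged Pillow sketch of that approach; the function name and the PNG/base64 output are assumptions, only the 100x100 target and the transparency requirement come from the PR:

```
import base64
import io

from PIL import Image

def compress_avatar(raw: bytes) -> str:
    img = Image.open(io.BytesIO(raw)).convert("RGBA")  # keep transparency
    img.thumbnail((100, 100))                          # at most 100x100
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")
```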

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 17:15:46 +08:00
381219aa41 Fixed increase_usage for builtin models (#3748)
### What problem does this PR solve?

Fixed increase_usage for builtin models. Close #1803

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 17:02:49 +08:00
0f08b0f053 Weight up title and keywords for chunks in terms of retrieval (#3750)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-29 16:39:55 +08:00
0dafce31c4 Feat: Support for formulas #1954 (#3747)
### What problem does this PR solve?

Feat: Support for formulas #1954

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 16:34:30 +08:00
c93e0355c3 Feat: Add DatasetTable #3221 (#3743)
### What problem does this PR solve?

Feat: Add DatasetTable #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-29 16:05:46 +08:00
1e0fc76efa Added release notes v0.11.0 (#3745)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-29 16:00:42 +08:00
d94386e00a Pass top_p to ollama (#3744)
### What problem does this PR solve?

Pass top_p to ollama. Close #1769

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 14:52:27 +08:00
0a62dd7a7e Update document (#3746)
### What problem does this PR solve?

Fix description on local LLM deployment case

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-11-29 14:50:45 +08:00
06a21d2031 Change Traditional Chinese to Simplified Chinese (#3742)
### What problem does this PR solve?

Change Traditional Chinese to Simplified Chinese

### Type of change

- [x] Other (please describe):
2024-11-29 13:45:31 +08:00
9a3febb7c5 Refactor dockerfile (#3741)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-29 13:37:50 +08:00
27cd765d6f Fix raptor issue (#3737)
### What problem does this PR solve?

#3732

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 11:55:41 +08:00
a0c0a957b4 Fix GPU docker compose file (#3736)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-11-29 10:49:15 +08:00
b89f7c69ad Fix image_id absence issue (#3735)
### What problem does this PR solve?

#3731

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-29 10:37:09 +08:00
fcdc6ad085 Fix the issue where the agent interface cannot call the context (#3725)
### What problem does this PR solve?

Fix the context passed by the agent interface call so it matches the
context used during web testing, changing it to the user chat's context
record.
Remove duplicate messages before adding new ones; they can be viewed in
the messages section of the api_4_conversation database table.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-11-29 10:36:48 +08:00
439 changed files with 17910 additions and 8583 deletions

124
.github/workflows/release.yml vendored Normal file
View File

@ -0,0 +1,124 @@
name: release
on:
  schedule:
    - cron: '0 13 * * *' # This schedule runs every day at 13:00:00Z (21:00:00+08:00)
  # The "create tags" trigger is specifically focused on the creation of new tags, while the "push tags" trigger is activated when tags are pushed, including both new tag creations and updates to existing tags.
  create:
    tags:
      - "v*.*.*" # normal release
      - "nightly" # the only one mutable tag

# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  release:
    runs-on: [ "self-hosted", "overseas" ]
    steps:
      - name: Ensure workspace ownership
        run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE

      # https://github.com/actions/checkout/blob/v3/README.md
      - name: Check out code
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
          fetch-depth: 0
          fetch-tags: true

      - name: Prepare release body
        run: |
          if [[ $GITHUB_EVENT_NAME == 'create' ]]; then
            RELEASE_TAG=${GITHUB_REF#refs/tags/}
            if [[ $RELEASE_TAG == 'nightly' ]]; then
              PRERELEASE=true
            else
              PRERELEASE=false
            fi
            echo "Workflow triggered by create tag: $RELEASE_TAG"
          else
            RELEASE_TAG=nightly
            PRERELEASE=true
            echo "Workflow triggered by schedule"
          fi
          echo "RELEASE_TAG=$RELEASE_TAG" >> $GITHUB_ENV
          echo "PRERELEASE=$PRERELEASE" >> $GITHUB_ENV
          RELEASE_DATETIME=$(date --rfc-3339=seconds)
          echo Release $RELEASE_TAG created from $GITHUB_SHA at $RELEASE_DATETIME > release_body.md

      - name: Move the existing mutable tag
        # https://github.com/softprops/action-gh-release/issues/171
        run: |
          git fetch --tags
          if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
            # Determine if a given tag exists and matches a specific Git commit.
            # actions/checkout@v4 fetch-tags doesn't work when triggered by schedule
            if [ "$(git rev-parse -q --verify "refs/tags/$RELEASE_TAG")" = "$GITHUB_SHA" ]; then
              echo "mutable tag $RELEASE_TAG exists and matches $GITHUB_SHA"
            else
              git tag -f $RELEASE_TAG $GITHUB_SHA
              git push -f origin $RELEASE_TAG:refs/tags/$RELEASE_TAG
              echo "created/moved mutable tag $RELEASE_TAG to $GITHUB_SHA"
            fi
          fi

      - name: Create or overwrite a release
        # https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release
        uses: softprops/action-gh-release@v2
        with:
          token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
          prerelease: ${{ env.PRERELEASE }}
          tag_name: ${{ env.RELEASE_TAG }}
          # The body field does not support environment variable substitution directly.
          body_path: release_body.md

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      # https://github.com/marketplace/actions/docker-login
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: infiniflow
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # https://github.com/marketplace/actions/build-and-push-docker-images
      - name: Build and push full image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: infiniflow/ragflow:${{ env.RELEASE_TAG }}
          file: Dockerfile
          platforms: linux/amd64

      # https://github.com/marketplace/actions/build-and-push-docker-images
      - name: Build and push slim image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: infiniflow/ragflow:${{ env.RELEASE_TAG }}-slim
          file: Dockerfile
          build-args: LIGHTEN=1
          platforms: linux/amd64

      - name: Build ragflow-sdk
        if: startsWith(github.ref, 'refs/tags/v')
        run: |
          cd sdk/python && \
          poetry build

      - name: Publish package distributions to PyPI
        if: startsWith(github.ref, 'refs/tags/v')
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          packages-dir: sdk/python/dist/
          password: ${{ secrets.PYPI_API_TOKEN }}
          verbose: true
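
The tag logic above can be sanity-checked outside CI. A minimal bash sketch of the mutable-tag check (assuming a clone with the `nightly` tag fetched; this is not part of the PR):

```bash
# Sketch only: mirrors the workflow's "Move the existing mutable tag" step locally.
git fetch --tags
if [ "$(git rev-parse -q --verify refs/tags/nightly)" = "$(git rev-parse HEAD)" ]; then
  echo "mutable tag nightly already points at HEAD"
else
  git tag -f nightly                              # move the tag locally
  git push -f origin nightly:refs/tags/nightly    # force-move it on the remote
fi
```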

View File

@ -49,29 +49,36 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Build ragflow:dev-slim
# https://github.com/astral-sh/ruff-action
- name: Static check with Ruff
uses: astral-sh/ruff-action@v2
with:
version: ">=0.8.2"
args: "check --ignore E402"
- name: Build ragflow:nightly-slim
run: |
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
cp -r ${RUNNER_WORKSPACE_PREFIX}/huggingface.co ${RUNNER_WORKSPACE_PREFIX}/nltk_data ${RUNNER_WORKSPACE_PREFIX}/libssl*.deb ${RUNNER_WORKSPACE_PREFIX}/tika-server*.jar* ${RUNNER_WORKSPACE_PREFIX}/chrome* ${RUNNER_WORKSPACE_PREFIX}/cl100k_base.tiktoken .
sudo docker pull ubuntu:22.04
sudo docker build --progress=plain -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
sudo docker build --progress=plain --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
- name: Build ragflow:dev
- name: Build ragflow:nightly
run: |
sudo docker build --progress=plain -f Dockerfile -t infiniflow/ragflow:dev .
sudo docker build --progress=plain --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
- name: Start ragflow:dev-slim
- name: Start ragflow:nightly-slim
run: |
echo "RAGFLOW_IMAGE=infiniflow/ragflow:nightly-slim" >> docker/.env
sudo docker compose -f docker/docker-compose.yml up -d
- name: Stop ragflow:dev-slim
- name: Stop ragflow:nightly-slim
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml down -v
- name: Start ragflow:dev
- name: Start ragflow:nightly
run: |
echo "RAGFLOW_IMAGE=infiniflow/ragflow:dev" >> docker/.env
echo "RAGFLOW_IMAGE=infiniflow/ragflow:nightly" >> docker/.env
sudo docker compose -f docker/docker-compose.yml up -d
- name: Run sdk tests against Elasticsearch
@ -95,12 +102,12 @@ jobs:
cd sdk/python && poetry install && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
- name: Stop ragflow:dev
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml down -v
- name: Start ragflow:dev
- name: Start ragflow:nightly
run: |
sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml up -d
@ -124,7 +131,7 @@ jobs:
done
cd sdk/python && poetry install && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
- name: Stop ragflow:dev
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml down -v
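
The Ruff step added above can be reproduced locally. A minimal sketch, assuming Ruff >= 0.8.2 is installed via pipx (the version pin mirrors the workflow):

```bash
# Run the same lint the CI enforces; E402 (module-level import not at top of file) is ignored.
pipx install "ruff>=0.8.2"
ruff check --ignore E402 .
```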

4
.gitignore vendored
View File

@ -35,4 +35,6 @@ rag/res/deepdoc
sdk/python/ragflow.egg-info/
sdk/python/build/
sdk/python/dist/
sdk/python/ragflow_sdk.egg-info/
sdk/python/ragflow_sdk.egg-info/
huggingface.co/
nltk_data/

View File

@ -3,36 +3,71 @@ FROM ubuntu:22.04 AS base
USER root
SHELL ["/bin/bash", "-c"]
ENV LIGHTEN=0
ARG NEED_MIRROR=0
ARG LIGHTEN=0
ENV LIGHTEN=${LIGHTEN}
WORKDIR /ragflow
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
cp /huggingface.co/InfiniFlow/huqie/huqie.txt.trie /ragflow/rag/res/ && \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
if [ "$LIGHTEN" != "1" ]; then \
(tar -cf - \
/huggingface.co/BAAI/bge-large-zh-v1.5 \
/huggingface.co/BAAI/bge-reranker-v2-m3 \
/huggingface.co/maidalun1020/bce-embedding-base_v1 \
/huggingface.co/maidalun1020/bce-reranker-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow) \
fi
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# https://github.com/chrismattmann/tika-python
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
cp -r /deps/nltk_data /root/ && \
cp /deps/tika-server-standard-3.0.0.jar /deps/tika-server-standard-3.0.0.jar.md5 /ragflow/ && \
cp /deps/cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
# Setup apt mirror site
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.0.0.jar"
ENV DEBIAN_FRONTEND=noninteractive
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && DEBIAN_FRONTEND=noninteractive apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus default-jdk python3-pip pipx \
libatk-bridge2.0-0 libgtk-4-1 libnss3 xdg-utils unzip libgbm-dev wget git \
&& rm -rf /var/lib/apt/lists/*
# Setup apt
# Python package and implicit dependencies:
# opencv-python: libglib2.0-0 libglx-mesa0 libgl1
# aspose-slides: pkg-config libicu-dev libgdiplus libssl1.1_1.1.1f-1ubuntu2_amd64.deb
# python-pptx: default-jdk tika-server-standard-3.0.0.jar
# selenium: libatk-bridge2.0-0 chrome-linux64-121-0-6167-85
# Building C extensions: libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
if [ "$NEED_MIRROR" == "1" ]; then \
sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list; \
fi; \
rm -f /etc/apt/apt.conf.d/docker-clean && \
echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache && \
chmod 1777 /tmp && \
apt update && \
apt --no-install-recommends install -y ca-certificates && \
apt update && \
apt install -y libglib2.0-0 libglx-mesa0 libgl1 && \
apt install -y pkg-config libicu-dev libgdiplus && \
apt install -y default-jdk && \
apt install -y libatk-bridge2.0-0 && \
apt install -y libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev && \
apt install -y python3-pip pipx nginx unzip curl wget git vim less
RUN pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && pip3 config set global.trusted-host "pypi.tuna.tsinghua.edu.cn mirrors.pku.edu.cn" && pip3 config set global.extra-index-url "https://mirrors.pku.edu.cn/pypi/web/simple" \
&& pipx install poetry \
&& /root/.local/bin/poetry self add poetry-plugin-pypi-mirror
# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
# aspose-slides on linux/arm64 is unavailable
RUN --mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_amd64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb \
--mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_arm64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb \
if [ "$(uname -m)" = "x86_64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
elif [ "$(uname -m)" = "aarch64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb; \
RUN if [ "$NEED_MIRROR" == "1" ]; then \
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && \
pip3 config set global.trusted-host pypi.tuna.tsinghua.edu.cn; \
fi; \
pipx install poetry; \
if [ "$NEED_MIRROR" == "1" ]; then \
pipx inject poetry poetry-plugin-pypi-mirror; \
fi
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
@ -42,16 +77,53 @@ ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
ENV POETRY_PYPI_MIRROR_URL=https://pypi.tuna.tsinghua.edu.cn/simple/
# nodejs 12.22 on Ubuntu 22.04 is too old
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt purge -y nodejs npm && \
apt autoremove && \
apt update && \
apt install -y nodejs cargo && \
rm -rf /var/lib/apt/lists/*
apt install -y nodejs cargo
# Add Microsoft SQL Server (mssql) ODBC driver
# macOS ARM64 environment: install msodbcsql18.
# General x86_64 environment: install msodbcsql17.
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/ubuntu/22.04/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt update && \
if [ -n "$ARCH" ] && [ "$ARCH" = "arm64" ]; then \
# MacOS ARM64
ACCEPT_EULA=Y apt install -y unixodbc-dev msodbcsql18; \
else \
# (x86_64)
ACCEPT_EULA=Y apt install -y unixodbc-dev msodbcsql17; \
fi || \
{ echo "Failed to install ODBC driver"; exit 1; }
# Add dependencies of selenium
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/chrome-linux64-121-0-6167-85,target=/chrome-linux64.zip \
unzip /chrome-linux64.zip && \
mv chrome-linux64 /opt/chrome && \
ln -s /opt/chrome/chrome /usr/local/bin/
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/chromedriver-linux64-121-0-6167-85,target=/chromedriver-linux64.zip \
unzip -j /chromedriver-linux64.zip chromedriver-linux64/chromedriver && \
mv chromedriver /usr/local/bin/ && \
rm -f /usr/bin/google-chrome
# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
# aspose-slides on linux/arm64 is unavailable
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
if [ "$(uname -m)" = "x86_64" ]; then \
dpkg -i /deps/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
elif [ "$(uname -m)" = "aarch64" ]; then \
dpkg -i /deps/libssl1.1_1.1.1f-1ubuntu2_arm64.deb; \
fi
# builder stage
FROM base AS builder
@ -59,17 +131,27 @@ USER root
WORKDIR /ragflow
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,id=ragflow_poetry,target=/root/.cache/pypoetry,sharing=locked \
if [ "$NEED_MIRROR" == "1" ]; then \
export POETRY_PYPI_MIRROR_URL=https://pypi.tuna.tsinghua.edu.cn/simple/; \
fi; \
if [ "$LIGHTEN" == "1" ]; then \
poetry install --no-root; \
else \
poetry install --no-root --with=full; \
fi
COPY web web
COPY docs docs
RUN --mount=type=cache,id=ragflow_npm,target=/root/.npm,sharing=locked \
cd web && npm install --force && npm run build
COPY .git /ragflow/.git
RUN current_commit=$(git rev-parse --short HEAD); \
last_tag=$(git describe --tags --abbrev=0); \
commit_count=$(git rev-list --count "$last_tag..HEAD"); \
version_info=""; \
if [ "$commit_count" -eq 0 ]; then \
version_info=$last_tag; \
else \
version_info="$current_commit($last_tag~$commit_count)"; \
fi; \
RUN version_info=$(git describe --tags --match=v* --first-parent --always); \
if [ "$LIGHTEN" == "1" ]; then \
version_info="$version_info slim"; \
else \
@ -78,34 +160,18 @@ RUN current_commit=$(git rev-parse --short HEAD); \
echo "RAGFlow version: $version_info"; \
echo $version_info > /ragflow/VERSION
COPY web web
COPY docs docs
RUN --mount=type=cache,id=ragflow_builder_npm,target=/root/.npm,sharing=locked \
cd web && npm install --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,id=ragflow_builder_poetry,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" == "1" ]; then \
poetry install --no-root; \
else \
poetry install --no-root --with=full; \
fi
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
COPY --from=builder /ragflow/VERSION /ragflow/VERSION
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,id=ragflow_production_apt,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
ENV PYTHONPATH=/ragflow/
COPY web web
COPY api api
@ -116,55 +182,12 @@ COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml poetry.toml poetry.lock ./
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar -cf - \
/huggingface.co/BAAI/bge-large-zh-v1.5 \
/huggingface.co/BAAI/bge-reranker-v2-m3 \
/huggingface.co/maidalun1020/bce-embedding-base_v1 \
/huggingface.co/maidalun1020/bce-reranker-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow
# Copy nltk data downloaded via download_deps.py
COPY nltk_data /root/nltk_data
# https://github.com/chrismattmann/tika-python
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
COPY tika-server-standard-3.0.0.jar /ragflow/tika-server-standard.jar
COPY tika-server-standard-3.0.0.jar.md5 /ragflow/tika-server-standard.jar.md5
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard.jar"
# Copy cl100k_base
COPY cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
# Add dependencies of selenium
RUN --mount=type=bind,source=chrome-linux64-121-0-6167-85,target=/chrome-linux64.zip \
unzip /chrome-linux64.zip && \
mv chrome-linux64 /opt/chrome && \
ln -s /opt/chrome/chrome /usr/local/bin/
RUN --mount=type=bind,source=chromedriver-linux64-121-0-6167-85,target=/chromedriver-linux64.zip \
unzip -j /chromedriver-linux64.zip chromedriver-linux64/chromedriver && \
mv chromedriver /usr/local/bin/ && \
rm -f /usr/bin/google-chrome
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
ENV PYTHONPATH=/ragflow/
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
COPY --from=builder /ragflow/VERSION /ragflow/VERSION
ENTRYPOINT ["./entrypoint.sh"]
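
With Dockerfile.slim gone (deleted below), this single Dockerfile now builds both editions. The commands below are a sketch based on the README changes in this PR, and assume the resource image `infiniflow/ragflow_deps:latest` (see Dockerfile.deps next) is available:

```bash
# Full image (embedding models included):
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
# Slim image (no embedding models); add NEED_MIRROR=1 to switch apt/pip to the Tsinghua mirrors:
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```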

10
Dockerfile.deps Normal file
View File

@ -0,0 +1,10 @@
# This builds an image that contains the resources needed by Dockerfile
#
FROM scratch
# Copy resources downloaded via download_deps.py
COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.0.0.jar tika-server-standard-3.0.0.jar.md5 libssl*.deb /
COPY nltk_data /nltk_data
COPY huggingface.co /huggingface.co
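
How the resource image itself gets built is not shown in this diff; a plausible sketch, assuming `download_deps.py` fetches all of the listed artifacts into the working tree first:

```bash
# Assumption: download_deps.py places huggingface.co/, nltk_data/, the tika jar,
# the chrome/chromedriver archives, cl100k_base.tiktoken and the libssl .deb files here.
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.deps -t infiniflow/ragflow_deps:latest .
```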

View File

@ -1,163 +0,0 @@
# base stage
FROM ubuntu:22.04 AS base
USER root
SHELL ["/bin/bash", "-c"]
ENV LIGHTEN=1
WORKDIR /ragflow
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# Setup apt mirror site
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && DEBIAN_FRONTEND=noninteractive apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus default-jdk python3-pip pipx \
libatk-bridge2.0-0 libgtk-4-1 libnss3 xdg-utils unzip libgbm-dev wget git \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && pip3 config set global.trusted-host "pypi.tuna.tsinghua.edu.cn mirrors.pku.edu.cn" && pip3 config set global.extra-index-url "https://mirrors.pku.edu.cn/pypi/web/simple" \
&& pipx install poetry \
&& /root/.local/bin/poetry self add poetry-plugin-pypi-mirror
# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
# aspose-slides on linux/arm64 is unavailable
RUN --mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_amd64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb \
--mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_arm64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb \
if [ "$(uname -m)" = "x86_64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
elif [ "$(uname -m)" = "aarch64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb; \
fi
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
ENV PATH=/root/.local/bin:$PATH
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
ENV POETRY_PYPI_MIRROR_URL=https://pypi.tuna.tsinghua.edu.cn/simple/
# nodejs 12.22 on Ubuntu 22.04 is too old
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt purge -y nodejs npm && \
apt autoremove && \
apt update && \
apt install -y nodejs cargo && \
rm -rf /var/lib/apt/lists/*
# builder stage
FROM base AS builder
USER root
WORKDIR /ragflow
COPY .git /ragflow/.git
RUN current_commit=$(git rev-parse --short HEAD); \
last_tag=$(git describe --tags --abbrev=0); \
commit_count=$(git rev-list --count "$last_tag..HEAD"); \
version_info=""; \
if [ "$commit_count" -eq 0 ]; then \
version_info=$last_tag; \
else \
version_info="$current_commit($last_tag~$commit_count)"; \
fi; \
if [ "$LIGHTEN" == "1" ]; then \
version_info="$version_info slim"; \
else \
version_info="$version_info full"; \
fi; \
echo "RAGFlow version: $version_info"; \
echo $version_info > /ragflow/VERSION
COPY web web
COPY docs docs
RUN --mount=type=cache,id=ragflow_builder_npm,target=/root/.npm,sharing=locked \
cd web && npm install --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,id=ragflow_builder_poetry,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" == "1" ]; then \
poetry install --no-root; \
else \
poetry install --no-root --with=full; \
fi
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
COPY --from=builder /ragflow/VERSION /ragflow/VERSION
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,id=ragflow_production_apt,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
COPY web web
COPY api api
COPY conf conf
COPY deepdoc deepdoc
COPY rag rag
COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml poetry.toml poetry.lock ./
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
# Copy nltk data downloaded via download_deps.py
COPY nltk_data /root/nltk_data
# https://github.com/chrismattmann/tika-python
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
COPY tika-server-standard-3.0.0.jar /ragflow/tika-server-standard.jar
COPY tika-server-standard-3.0.0.jar.md5 /ragflow/tika-server-standard.jar.md5
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard.jar"
# Copy cl100k_base
COPY cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
# Add dependencies of selenium
RUN --mount=type=bind,source=chrome-linux64-121-0-6167-85,target=/chrome-linux64.zip \
unzip /chrome-linux64.zip && \
mv chrome-linux64 /opt/chrome && \
ln -s /opt/chrome/chrome /usr/local/bin/
RUN --mount=type=bind,source=chromedriver-linux64-121-0-6167-85,target=/chromedriver-linux64.zip \
unzip -j /chromedriver-linux64.zip chromedriver-linux64/chromedriver && \
mv chromedriver /usr/local/bin/ && \
rm -f /usr/bin/google-chrome
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
ENV PYTHONPATH=/ragflow/
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
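
Note that the multi-command version-string logic in this deleted file is collapsed in the new Dockerfile into a single `git describe` call, whose output looks roughly like this (the tags shown are illustrative):

```bash
git describe --tags --match=v* --first-parent --always
# exactly on a tag:           v0.15.0
# three commits past a tag:   v0.15.0-3-g1a2b3c4   (<tag>-<count>-g<short sha>)
# no v* tag reachable:        1a2b3c4              (falls back to the short commit hash)
```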

View File

@ -20,7 +20,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.1-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.1">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.15.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.15.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -75,9 +75,10 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 2024-12-18 Upgrades Document Layout Analysis model in Deepdoc.
- 2024-12-04 Adds support for pagerank score in knowledge base.
- 2024-11-22 Adds more variables to Agent.
- 2024-11-01 Adds keyword extraction and related question generation to the parsed chunks to improve the accuracy of retrieval.
- 2024-09-13 Adds search mode for knowledge base Q&A.
- 2024-08-22 Supports text-to-SQL conversion through RAG.
- 2024-08-02 Supports GraphRAG inspired by [graphrag](https://github.com/microsoft/graphrag) and mind map.
@ -165,29 +166,21 @@ releases! 🌟
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Build the pre-built Docker images and start up the server:
3. Start up the server using the pre-built Docker images:
> The command below downloads the dev version Docker image for RAGFlow slim (`dev-slim`). Note that RAGFlow slim
Docker images do not include embedding models or Python libraries and hence are approximately 1GB in size.
> The command below downloads the `v0.15.0-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.15.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.15.0` for the full edition `v0.15.0`.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
> - To download a RAGFlow slim Docker image of a specific version, update the `RAGFLOW_IMAGE` variable in
**docker/.env** to your desired version. For example, `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1-slim`. After
making this change, rerun the command above to initiate the download.
> - To download the dev version of RAGFlow Docker image *including* embedding models and Python libraries, update the
`RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:dev`. After making this change,
rerun the command above to initiate the download.
> - To download a specific version of RAGFlow Docker image *including* embedding models and Python libraries, update
the `RAGFLOW_IMAGE` variable in **docker/.env** to your desired version. For example,
`RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1`. After making this change, rerun the command above to initiate the
download.
> **NOTE:** A RAGFlow Docker image that includes embedding models and Python libraries is approximately 9GB in size
and may take significantly longer to load.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.15.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.15.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| nightly-slim | &approx;2 | ❌ | *Unstable* nightly build |
4. Check the server status once the server is up and running:
@ -267,14 +260,12 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
## 🔧 Build a Docker image without embedding models
This image is approximately 1 GB in size and relies on external LLM and embedding services.
This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker image including embedding models
@ -284,23 +275,21 @@ This image is approximately 9 GB in size. As it includes embedding models, it re
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Launch service from source for development
1. Install Poetry, or skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
pipx install poetry
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
```
2. Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root --with=full # install RAGFlow dependent python modules
```
@ -313,7 +302,6 @@ docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
127.0.0.1 es01 infinity mysql minio redis
```
In **docker/service_conf.yaml.template**, update mysql port to `5455` and es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
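
The concrete command is truncated in this diff view; a commonly used setting (an assumption here, not shown in the hunk) would be:

```bash
export HF_ENDPOINT=https://hf-mirror.com  # community mirror of huggingface.co
```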

View File

@ -20,7 +20,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.1-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.1">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.15.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.15.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -72,11 +72,12 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Pembaruan Terbaru
- 22-11-2024 Peningkatan definisi dan penggunaan variabel di Agen.
- 2024-11-01: Penambahan ekstraksi kata kunci dan pembuatan pertanyaan terkait untuk meningkatkan akurasi pengambilan.
- 2024-09-13: Penambahan mode pencarian untuk Q&A basis pengetahuan.
- 2024-08-22: Dukungan untuk teks ke pernyataan SQL melalui RAG.
- 2024-08-02: Dukungan GraphRAG yang terinspirasi oleh [graphrag](https://github.com/microsoft/graphrag) dan mind map.
- 2024-12-18 Meningkatkan model Analisis Tata Letak Dokumen di Deepdoc.
- 2024-12-04 Mendukung skor pagerank ke basis pengetahuan.
- 2024-11-22 Peningkatan definisi dan penggunaan variabel di Agen.
- 2024-11-01 Penambahan ekstraksi kata kunci dan pembuatan pertanyaan terkait untuk meningkatkan akurasi pengambilan.
- 2024-08-22 Dukungan untuk teks ke pernyataan SQL melalui RAG.
- 2024-08-02 Dukungan GraphRAG yang terinspirasi oleh [graphrag](https://github.com/microsoft/graphrag) dan mind map.
## 🎉 Tetap Terkini
@ -160,26 +161,19 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
3. Bangun image Docker pre-built dan jalankan server:
> Perintah di bawah ini akan mengunduh versi dev dari Docker image RAGFlow slim (`dev-slim`). Image RAGFlow slim
tidak termasuk model embedding atau library Python dan berukuran sekitar 1GB.
> Perintah di bawah ini mengunduh edisi v0.15.0-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.15.0-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.15.0 untuk edisi lengkap v0.15.0.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
> - Untuk mengunduh versi tertentu dari image Docker RAGFlow slim, perbarui variabel `RAGFlow_IMAGE` di
**docker/.env** sesuai dengan versi yang diinginkan. Misalnya, `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1-slim`.
Setelah mengubah ini, jalankan ulang perintah di atas untuk memulai unduhan.
> - Untuk mengunduh versi dev dari image Docker RAGFlow *termasuk* model embedding dan library Python, perbarui
variabel `RAGFlow_IMAGE` di **docker/.env** menjadi `RAGFLOW_IMAGE=infiniflow/ragflow:dev`. Setelah mengubah ini,
jalankan ulang perintah di atas untuk memulai unduhan.
> - Untuk mengunduh versi tertentu dari image Docker RAGFlow *termasuk* model embedding dan library Python, perbarui
variabel `RAGFlow_IMAGE` di **docker/.env** sesuai dengan versi yang diinginkan. Misalnya,
`RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1`. Setelah mengubah ini, jalankan ulang perintah di atas untuk memulai unduhan.
> **CATATAN:** Image Docker RAGFlow yang mencakup model embedding dan library Python berukuran sekitar 9GB
dan mungkin memerlukan waktu lebih lama untuk dimuat.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.15.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.15.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| nightly-slim | &approx;2 | ❌ | *Unstable* nightly build |
4. Periksa status server setelah server aktif dan berjalan:
@ -208,7 +202,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
5. Buka browser web Anda, masukkan alamat IP server Anda, dan login ke RAGFlow.
> Dengan pengaturan default, Anda hanya perlu memasukkan `http://IP_DEVICE_ANDA` (**tanpa** nomor port) karena
port HTTP default `80` bisa dihilangkan saat menggunakan konfigurasi default.
6. Dalam [service_conf.yaml](./docker/service_conf.yaml), pilih LLM factory yang diinginkan di `user_default_llm` dan perbarui
6. Dalam [service_conf.yaml.template](./docker/service_conf.yaml.template), pilih LLM factory yang diinginkan di `user_default_llm` dan perbarui
bidang `API_KEY` dengan kunci API yang sesuai.
> Lihat [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) untuk informasi lebih lanjut.
@ -221,16 +215,9 @@ Untuk konfigurasi sistem, Anda perlu mengelola file-file berikut:
- [.env](./docker/.env): Menyimpan pengaturan dasar sistem, seperti `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, dan
`MINIO_PASSWORD`.
- [service_conf.yaml](./docker/service_conf.yaml): Mengonfigurasi aplikasi backend.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): Mengonfigurasi aplikasi backend.
- [docker-compose.yml](./docker/docker-compose.yml): Sistem ini bergantung pada [docker-compose.yml](./docker/docker-compose.yml) untuk memulai.
Anda harus memastikan bahwa perubahan pada file [.env](./docker/.env) sesuai dengan yang ada di file [service_conf.yaml](./docker/service_conf.yaml).
> File [./docker/README](./docker/README.md) menyediakan penjelasan detail tentang pengaturan lingkungan dan konfigurasi aplikasi,
> dan Anda DIWAJIBKAN memastikan bahwa semua pengaturan lingkungan yang tercantum di
> [./docker/README](./docker/README.md) selaras dengan konfigurasi yang sesuai di
> [service_conf.yaml](./docker/service_conf.yaml).
Untuk memperbarui port HTTP default (80), buka [docker-compose.yml](./docker/docker-compose.yml) dan ubah `80:80`
menjadi `<YOUR_SERVING_PORT>:80`.
@ -242,14 +229,12 @@ Pembaruan konfigurasi ini memerlukan reboot semua kontainer agar efektif:
## 🔧 Membangun Docker Image tanpa Model Embedding
Image ini berukuran sekitar 1 GB dan bergantung pada aplikasi LLM eksternal dan embedding.
Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Membangun Docker Image Termasuk Model Embedding
@ -259,23 +244,21 @@ Image ini berukuran sekitar 9 GB. Karena sudah termasuk model embedding, ia hany
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Menjalankan Aplikasi dari Sumber untuk Pengembangan
1. Instal Poetry, atau lewati langkah ini jika sudah terinstal:
```bash
curl -sSL https://install.python-poetry.org | python3 -
pipx install poetry
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
```
2. Clone kode sumber dan instal dependensi Python:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install modul python RAGFlow
```
@ -284,11 +267,10 @@ docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker compose -f docker/docker-compose-base.yml up -d
```
Tambahkan baris berikut ke `/etc/hosts` untuk memetakan semua host yang ditentukan di **docker/service_conf.yaml** ke `127.0.0.1`:
Tambahkan baris berikut ke `/etc/hosts` untuk memetakan semua host yang ditentukan di **conf/service_conf.yaml** ke `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
Di **docker/service_conf.yaml**, perbarui port mysql ke `5455` dan es ke `1200`, sesuai dengan yang ditentukan di **docker/.env**.
4. Jika Anda tidak dapat mengakses HuggingFace, atur variabel lingkungan `HF_ENDPOINT` untuk menggunakan situs mirror:

View File

@ -20,7 +20,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.1-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.1">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.15.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.15.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -54,9 +54,10 @@
## 🔥 最新情報
- 2024-12-18 Deepdoc のドキュメント レイアウト分析モデルをアップグレードします。
- 2024-12-04 ナレッジ ベースへのページランク スコアをサポートしました。
- 2024-11-22 エージェントでの変数の定義と使用法を改善しました。
- 2024-11-01 再現の精度を向上させるために、解析されたチャンクにキーワード抽出と関連質問の生成を追加しました。
- 2024-09-13 ナレッジベース Q&A の検索モードを追加しました。
- 2024-08-22 RAG を介して SQL ステートメントへのテキストをサポートします。
- 2024-08-02 [graphrag](https://github.com/microsoft/graphrag) からインスピレーションを得た GraphRAG とマインド マップをサポートします。
@ -141,18 +142,19 @@
3. ビルド済みの Docker イメージをビルドし、サーバーを起動する:
> 以下のコマンドは、RAGFlow slim`dev-slim`)の開発版Dockerイメージをダウンロードします。RAGFlow slimのDockerイメージには、埋め込みモデルやPythonライブラリが含まれていないため、サイズは約1GBです。
> 以下のコマンドは、RAGFlow Dockerイメージの v0.15.0-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.15.0-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.15.0 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.15.0 と設定します。
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
> - 特定のバージョンのRAGFlow slim Dockerイメージをダウンロードするには、**docker/.env**内の`RAGFlow_IMAGE`変数を希望のバージョンに更新します。例えば、`RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1`とします。この変更を行った後、上記のコマンドを再実行してダウンロードを開始してください。
> - RAGFlowの埋め込みモデルとPythonライブラリを含む開発版Dockerイメージをダウンロードするには、**docker/.env**内の`RAGFlow_IMAGE`変数を`RAGFLOW_IMAGE=infiniflow/ragflow:dev`に更新します。この変更を行った後、上記のコマンドを再実行してダウンロードを開始してください。
> - 特定のバージョンのRAGFlow Dockerイメージ埋め込みモデルとPythonライブラリを含むをダウンロードするには、**docker/.env**内の`RAGFlow_IMAGE`変数を希望のバージョンに更新します。例えば、`RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1`とします。この変更を行った後、上記のコマンドを再実行してダウンロードを開始してください。
> **NOTE:** 埋め込みモデルとPythonライブラリを含むRAGFlow Dockerイメージのサイズは約9GBであり、読み込みにかなりの時間がかかる場合があります。
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.15.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.15.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| nightly-slim | &approx;2 | ❌ | *Unstable* nightly build |
4. サーバーを立ち上げた後、サーバーの状態を確認する:
@ -178,7 +180,7 @@
5. ウェブブラウザで、プロンプトに従ってサーバーの IP アドレスを入力し、RAGFlow にログインします。
> デフォルトの設定を使用する場合、デフォルトの HTTP サービングポート `80` は省略できるので、与えられたシナリオでは、`http://IP_OF_YOUR_MACHINE`(ポート番号は省略)だけを入力すればよい。
6. [service_conf.yaml](./docker/service_conf.yaml) で、`user_default_llm` で希望の LLM ファクトリを選択し、`API_KEY` フィールドを対応する API キーで更新する。
6. [service_conf.yaml.template](./docker/service_conf.yaml.template) で、`user_default_llm` で希望の LLM ファクトリを選択し、`API_KEY` フィールドを対応する API キーで更新する。
> 詳しくは [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) を参照してください。
@ -189,12 +191,12 @@
システムコンフィグに関しては、以下のファイルを管理する必要がある:
- [.env](./docker/.env): `SVR_HTTP_PORT`、`MYSQL_PASSWORD`、`MINIO_PASSWORD` などのシステムの基本設定を保持する。
- [service_conf.yaml](./docker/service_conf.yaml): バックエンドのサービスを設定します。
- [service_conf.yaml.template](./docker/service_conf.yaml.template): バックエンドのサービスを設定します。
- [docker-compose.yml](./docker/docker-compose.yml): システムの起動は [docker-compose.yml](./docker/docker-compose.yml) に依存している。
[.env](./docker/.env) ファイルの変更が [service_conf.yaml](./docker/service_conf.yaml) ファイルの内容と一致していることを確認する必要があります。
[.env](./docker/.env) ファイルの変更が [service_conf.yaml.template](./docker/service_conf.yaml.template) ファイルの内容と一致していることを確認する必要があります。
> [./docker/README](./docker/README.md) ファイルは環境設定とサービスコンフィグの詳細な説明を提供し、[./docker/README](./docker/README.md) ファイルに記載されている全ての環境設定が [service_conf.yaml](./docker/service_conf.yaml) ファイルの対応するコンフィグと一致していることを確認することが義務付けられています。
> [./docker/README](./docker/README.md) ファイル ./docker/README には、service_conf.yaml.template ファイルで ${ENV_VARS} として使用できる環境設定とサービス構成の詳細な説明が含まれています。
デフォルトの HTTP サービングポート(80)を更新するには、[docker-compose.yml](./docker/docker-compose.yml) にアクセスして、`80:80` を `<YOUR_SERVING_PORT>:80` に変更します。
@ -228,9 +230,7 @@ RAGFlow はデフォルトで Elasticsearch を使用して全文とベクトル
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 ソースコードをコンパイルしたDockerイメージ埋め込みモデルを含む
@ -240,23 +240,21 @@ docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 ソースコードからサービスを起動する方法
1. Poetry をインストールする。すでにインストールされている場合は、このステップをスキップしてください:
```bash
curl -sSL https://install.python-poetry.org | python3 -
pipx install poetry
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
```
2. ソースコードをクローンし、Python の依存関係をインストールする:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
@ -265,11 +263,10 @@ docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker compose -f docker/docker-compose-base.yml up -d
```
`/etc/hosts` に以下の行を追加して、**docker/service_conf.yaml** に指定されたすべてのホストを `127.0.0.1` に解決します:
`/etc/hosts` に以下の行を追加して、**conf/service_conf.yaml** に指定されたすべてのホストを `127.0.0.1` に解決します:
```
127.0.0.1 es01 infinity mysql minio redis
```
**docker/service_conf.yaml** で mysql のポートを `5455` に、es のポートを `1200` に更新します(**docker/.env** に指定された通り).
4. HuggingFace にアクセスできない場合は、`HF_ENDPOINT` 環境変数を設定してミラーサイトを使用してください:

View File

@ -20,7 +20,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.1-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.1">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.15.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.15.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -56,12 +56,14 @@
## 🔥 업데이트
- 2024-12-18 Deepdoc의 문서 레이아웃 분석 모델 업그레이드.
- 2024-12-04 지식베이스에 대한 페이지랭크 점수를 지원합니다.
- 2024-11-22 에이전트의 변수 정의 및 사용을 개선했습니다.
- 2024-11-01 파싱된 청크에 키워드 추출 및 관련 질문 생성을 추가하여 재현율을 향상시킵니다.
- 2024-09-13 지식베이스 Q&A 검색 모드를 추가합니다.
- 2024-08-22 RAG를 통해 SQL 문에 텍스트를 지원합니다.
- 2024-08-02: [graphrag](https://github.com/microsoft/graphrag)와 마인드맵에서 영감을 받은 GraphRAG를 지원합니다.
@ -145,19 +147,19 @@
3. 미리 빌드된 Docker 이미지를 생성하고 서버를 시작하세요:
> 아래 명령 RAGFlow slim(dev-slim)의 개발 버전 Docker 이미지를 다운로드합니다. RAGFlow slim Docker 이미지에는 임베딩 모델이나 Python 라이브러리가 포함되어 있지 않으므로 크기는 약 1GB입니다.
> 아래 명령어는 RAGFlow Docker 이미지의 v0.15.0-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.15.0-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.15.0을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.15.0로 설정합니다.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
> - 특정 버전의 RAGFlow slim Docker 이미지를 다운로드하려면, **docker/.env**에서 `RAGFlow_IMAGE` 변수를 원하는 버전으로 업데이트하세요. 예를 들어, `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1-slim`으로 설정합니다. 이 변경을 완료한 후, 위의 명령을 다시 실행하여 다운로드를 시작하세요.
> - RAGFlow의 임베딩 모델과 Python 라이브러리를 포함한 개발 버전 Docker 이미지를 다운로드하려면, **docker/.env**에서 `RAGFlow_IMAGE` 변수를 `RAGFLOW_IMAGE=infiniflow/ragflow:dev`로 업데이트하세요. 이 변경을 완료한 후, 위의 명령을 다시 실행하여 다운로드를 시작하세요.
> - 특정 버전의 RAGFlow Docker 이미지를 임베딩 모델과 Python 라이브러리를 포함하여 다운로드하려면, **docker/.env**에서 `RAGFlow_IMAGE` 변수를 원하는 버전으로 업데이트하세요. 예를 들어, `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1` 로 설정합니다. 이 변경을 완료한 후, 위의 명령을 다시 실행하여 다운로드를 시작하세요.
> **NOTE:** 임베딩 모델과 Python 라이브러리를 포함한 RAGFlow Docker 이미지의 크기는 약 9GB이며, 로드하는 데 상당히 오랜 시간이 걸릴 수 있습니다.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.15.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.15.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| nightly-slim | &approx;2 | ❌ | *Unstable* nightly build |
4. 서버가 시작된 후 서버 상태를 확인하세요:
@ -183,7 +185,7 @@
5. 웹 브라우저에 서버의 IP 주소를 입력하고 RAGFlow에 로그인하세요.
> 기본 설정을 사용할 경우, `http://IP_OF_YOUR_MACHINE`만 입력하면 됩니다 (포트 번호는 제외). 기본 HTTP 서비스 포트 `80`은 기본 구성으로 사용할 때 생략할 수 있습니다.
6. [service_conf.yaml](./docker/service_conf.yaml) 파일에서 원하는 LLM 팩토리를 `user_default_llm`에 선택하고, `API_KEY` 필드를 해당 API 키로 업데이트하세요.
6. [service_conf.yaml.template](./docker/service_conf.yaml.template) 파일에서 원하는 LLM 팩토리를 `user_default_llm`에 선택하고, `API_KEY` 필드를 해당 API 키로 업데이트하세요.
> 자세한 내용은 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)를 참조하세요.
_이제 쇼가 시작됩니다!_
@ -193,12 +195,12 @@
시스템 설정과 관련하여 다음 파일들을 관리해야 합니다:
- [.env](./docker/.env): `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, `MINIO_PASSWORD`와 같은 시스템의 기본 설정을 포함합니다.
- [service_conf.yaml](./docker/service_conf.yaml): 백엔드 서비스를 구성합니다.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): 백엔드 서비스를 구성합니다.
- [docker-compose.yml](./docker/docker-compose.yml): 시스템은 [docker-compose.yml](./docker/docker-compose.yml)을 사용하여 시작됩니다.
[.env](./docker/.env) 파일의 변경 사항이 [service_conf.yaml](./docker/service_conf.yaml) 파일의 내용과 일치하도록 해야 합니다.
[.env](./docker/.env) 파일의 변경 사항이 [service_conf.yaml.template](./docker/service_conf.yaml.template) 파일의 내용과 일치하도록 해야 합니다.
> [./docker/README](./docker/README.md) 파일에는 환경 설정과 서비스 구성에 대한 자세한 설명이 있으며, [./docker/README](./docker/README.md) 파일에 나열된 모든 환경 설정이 [service_conf.yaml](./docker/service_conf.yaml) 파일의 해당 구성과 일치하도록 해야 합니다.
> [./docker/README](./docker/README.md) 파일 ./docker/README은 service_conf.yaml.template 파일에서 ${ENV_VARS}로 사용할 수 있는 환경 설정과 서비스 구성에 대한 자세한 설명을 제공합니다.
기본 HTTP 서비스 포트(80)를 업데이트하려면 [docker-compose.yml](./docker/docker-compose.yml) 파일에서 `80:80`을 `<YOUR_SERVING_PORT>:80`으로 변경하세요.
@ -213,12 +215,12 @@
RAGFlow 는 기본적으로 Elasticsearch 를 사용하여 전체 텍스트 및 벡터를 저장합니다. [Infinity]로 전환(https://github.com/infiniflow/infinity/), 다음 절차를 따르십시오.
1. 실행 중인 모든 컨테이너를 중지합니다.
```bash
$docker compose-f docker/docker-compose.yml down-v
$docker compose-f docker/docker-compose.yml down -v
```
2. **docker/.env**의 "DOC_ENGINE" 을 "infinity" 로 설정합니다.
3. 컨테이너 부팅:
```bash
$docker compose-f docker/docker-compose.yml up-d
$docker compose-f docker/docker-compose.yml up -d
```
> [!WARNING]
> Linux/arm64 시스템에서 Infinity로 전환하는 것은 공식적으로 지원되지 않습니다.
@ -230,9 +232,7 @@ RAGFlow 는 기본적으로 Elasticsearch 를 사용하여 전체 텍스트 및
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 소스 코드로 Docker 이미지를 컴파일합니다(임베딩 모델 포함)
@ -242,23 +242,21 @@ docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 소스 코드로 서비스를 시작합니다.
1. Poetry를 설치하거나 이미 설치된 경우 이 단계를 건너뜁니다:
```bash
curl -sSL https://install.python-poetry.org | python3 -
pipx install poetry
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
```
2. 소스 코드를 클론하고 Python 의존성을 설치합니다:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
@ -267,11 +265,10 @@ docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker compose -f docker/docker-compose-base.yml up -d
```
`/etc/hosts` 에 다음 줄을 추가하여 **docker/service_conf.yaml** 에 지정된 모든 호스트를 `127.0.0.1` 로 해결합니다:
`/etc/hosts` 에 다음 줄을 추가하여 **conf/service_conf.yaml** 에 지정된 모든 호스트를 `127.0.0.1` 로 해결합니다:
```
127.0.0.1 es01 infinity mysql minio redis
```
**docker/service_conf.yaml** 에서 mysql 포트를 `5455` 로, es 포트를 `1200` 으로 업데이트합니다( **docker/.env** 에 지정된 대로).
4. HuggingFace에 접근할 수 없는 경우, `HF_ENDPOINT` 환경 변수를 설정하여 미러 사이트를 사용하세요:

View File

@ -20,7 +20,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.1-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.1">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.15.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.15.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -54,9 +54,10 @@
## 🔥 近期更新
- 2024-12-18 升级了 Deepdoc 的文档布局分析模型。
- 2024-12-04 支持知识库的 Pagerank 分数。
- 2024-11-22 完善了 Agent 中的变量定义和使用。
- 2024-11-01 对解析后的 chunk 加入关键词抽取和相关问题生成以提高召回的准确度。
- 2024-09-13 增加知识库问答搜索模式。
- 2024-08-22 支持用 RAG 技术实现从自然语言到 SQL 语句的转换。
- 2024-08-02 支持 GraphRAG 启发于 [graphrag](https://github.com/microsoft/graphrag) 和思维导图。
@ -142,18 +143,25 @@
3. 进入 **docker** 文件夹,利用提前编译好的 Docker 镜像启动服务器:
> 运行以下命令会自动下载 dev 版的 RAGFlow slim Docker 镜像`dev-slim`),该镜像并不包含 embedding 模型以及一些 Python 库,因此镜像大小约 1GB
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.15.0-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.15.0-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.15.0` 来下载 RAGFlow 镜像的 `v0.15.0` 完整发行版
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d
```
> - 如果你想下载并运行特定版本的 RAGFlow slim Docker 镜像,请在 **docker/.env** 文件中找到 `RAGFLOW_IMAGE` 变量,将其改为对应版本。例如 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1-slim`,然后再运行上述命令。
> - 如果您想安装内置 embedding 模型和 Python 库的 dev 版本的 Docker 镜像,需要将 **docker/.env** 文件中的 `RAGFLOW_IMAGE` 变量修改为: `RAGFLOW_IMAGE=infiniflow/ragflow:dev`。
> - 如果您想安装内置 embedding 模型和 Python 库的指定版本的 RAGFlow Docker 镜像,需要将 **docker/.env** 文件中的 `RAGFLOW_IMAGE` 变量修改为: `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.1`。修改后,再运行上面的命令。
> **注意:** 安装内置 embedding 模型和 Python 库的指定版本的 RAGFlow Docker 镜像大小约 9 GB可能需要更长时间下载请耐心等待。
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.15.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.15.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| nightly-slim | &approx;2 | ❌ | *Unstable* nightly build |
> [!TIP]
> 如果你遇到 Docker 镜像拉不下来的问题,可以在 **docker/.env** 文件内根据变量 `RAGFLOW_IMAGE` 的注释提示选择华为云或者阿里云的相应镜像。
> - 华为云镜像名:`swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
> - 阿里云镜像名:`registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
4. 服务器启动成功后再次确认服务器状态:
```bash
@ -178,7 +186,7 @@
5. 在你的浏览器中输入你的服务器对应的 IP 地址并登录 RAGFlow。
> 上面这个例子中,您只需输入 http://IP_OF_YOUR_MACHINE 即可:未改动过配置则无需输入端口(默认的 HTTP 服务端口 80
6. 在 [service_conf.yaml](./docker/service_conf.yaml) 文件的 `user_default_llm` 栏配置 LLM factory并在 `API_KEY` 栏填写和你选择的大模型相对应的 API key。
6. 在 [service_conf.yaml.template](./docker/service_conf.yaml.template) 文件的 `user_default_llm` 栏配置 LLM factory并在 `API_KEY` 栏填写和你选择的大模型相对应的 API key。
> 详见 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)。
@ -189,21 +197,21 @@
The system configuration involves the following three files:
- [.env](./docker/.env): stores basic system environment variables such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- [service_conf.yaml](./docker/service_conf.yaml): configures the back-end services.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): configures the back-end services.
- [docker-compose.yml](./docker/docker-compose.yml): the system relies on this file to start up.
Make sure that the variable settings in the [.env](./docker/.env) file stay consistent with the configuration in the [service_conf.yaml](./docker/service_conf.yaml) file!
Make sure that the variable settings in the [.env](./docker/.env) file stay consistent with the configuration in the [service_conf.yaml.template](./docker/service_conf.yaml.template) file!
If you cannot access the image site hub.docker.com or the model site huggingface.co, update `RAGFLOW_IMAGE` and `HF_ENDPOINT` as instructed by the comments in [.env](./docker/.env).
> The [./docker/README](./docker/README.md) file provides detailed information on environment settings and service configuration. Be **sure** to keep the environment variable values listed in [./docker/README](./docker/README.md) consistent with the system configuration in the [service_conf.yaml](./docker/service_conf.yaml) file.
> [./docker/README](./docker/README.md) explains the environment settings and service configuration used by [service_conf.yaml.template](./docker/service_conf.yaml.template).
To update the default HTTP service port (80), change `80:80` in [docker-compose.yml](./docker/docker-compose.yml) to `<YOUR_SERVING_PORT>:80`.
> All system configuration updates take effect only after a system restart:
>
> ```bash
> $ docker compose -f docker-compose.yml up -d
> $ docker compose -f docker/docker-compose.yml up -d
> ```
### Switch the document engine from Elasticsearch to Infinity
@ -230,14 +238,12 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch
## 🔧 Build a Docker image from source (without embedding models)
This Docker image is about 1 GB in size and relies on external LLM and embedding services.
This Docker image is about 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
docker build --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker image from source (with embedding models included)
@ -247,23 +253,23 @@ docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Launch a service from source
1. Install Poetry. Skip this step if Poetry is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
pipx install poetry
pipx inject poetry poetry-plugin-pypi-mirror
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
export POETRY_PYPI_MIRROR_URL=https://pypi.tuna.tsinghua.edu.cn/simple/
```
2. Clone the source code and install the Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
@ -272,11 +278,10 @@ docker build -f Dockerfile -t infiniflow/ragflow:dev .
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
Add the following line to `/etc/hosts` to resolve all hosts specified in **conf/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
In the **docker/service_conf.yaml** file, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:

View File

@ -10,7 +10,7 @@ It is used to compose a complex workflow or agent.
And this graph goes beyond a DAG: we can use cycles to describe our agent or workflow.
Under this folder, we provide a test tool, ./test/client.py, which can run DSLs such as the JSON files in the folder ./test/dsl_examples.
Please run this client from the same folder in which you start RAGFlow. If RAGFlow runs in Docker, enter the container before running the client.
Otherwise, a correct configuration in conf/service_conf.yaml is essential.
Otherwise, a correct configuration in service_conf.yaml is essential.
```bash
PYTHONPATH=path/to/ragflow python graph/test/client.py -h

View File

@ -11,7 +11,7 @@
Under this folder, we provide a test tool, ./test/client.py,
which can run DSL files like those in the ./test/dsl_examples folder.
Please run this client from the same folder in which you start RAGFlow. If RAGFlow runs in Docker, enter the container before running the client.
Otherwise, a correct configuration of the conf/service_conf.yaml file is essential.
Otherwise, a correct configuration of the service_conf.yaml file is essential.
```bash
PYTHONPATH=path/to/ragflow python graph/test/client.py -h

View File

@ -0,0 +1,2 @@
from beartype.claw import beartype_this_package
beartype_this_package()
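(For context: `beartype_this_package()` registers an import hook so that every submodule of this package gets runtime type-checking of its annotations. A minimal sketch of the effect, with a hypothetical submodule, is below.)
```python
# Hypothetical submodule inside the hooked package: its annotations are
# now enforced at call time instead of being documentation only.
def add(x: int, y: int) -> int:
    return x + y

add(1, 2)    # fine
add(1, "2")  # raises a beartype type-violation exception at the call site
```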

View File

@ -21,6 +21,7 @@ from functools import partial
from agent.component import component_class
from agent.component.base import ComponentBase
class Canvas(ABC):
"""
dsl = {
@ -133,7 +134,8 @@ class Canvas(ABC):
"components": {}
}
for k in self.dsl.keys():
if k in ["components"]:continue
if k in ["components"]:
continue
dsl[k] = deepcopy(self.dsl[k])
for k, cpn in self.components.items():
@ -158,7 +160,8 @@ class Canvas(ABC):
def get_compnent_name(self, cid):
for n in self.dsl["graph"]["nodes"]:
if cid == n["id"]: return n["data"]["name"]
if cid == n["id"]:
return n["data"]["name"]
return ""
def run(self, **kwargs):
@ -173,7 +176,8 @@ class Canvas(ABC):
if kwargs.get("stream"):
for an in ans():
yield an
else: yield ans
else:
yield ans
return
if not self.path:
@ -181,6 +185,7 @@ class Canvas(ABC):
self.path.append(["begin"])
self.path.append([])
ran = -1
waiting = []
without_dependent_checking = []
@ -188,7 +193,8 @@ class Canvas(ABC):
def prepare2run(cpns):
nonlocal ran, ans
for c in cpns:
if self.path[-1] and c == self.path[-1][-1]: continue
if self.path[-1] and c == self.path[-1][-1]:
continue
cpn = self.components[c]["obj"]
if cpn.component_name == "Answer":
self.answer.append(c)
@ -197,10 +203,17 @@ class Canvas(ABC):
if c not in without_dependent_checking:
cpids = cpn.get_dependent_components()
if any([cc not in self.path[-1] for cc in cpids]):
if c not in waiting: waiting.append(c)
if c not in waiting:
waiting.append(c)
continue
yield "*'{}'* is running...🕞".format(self.get_compnent_name(c))
ans = cpn.run(self.history, **kwargs)
try:
ans = cpn.run(self.history, **kwargs)
except Exception as e:
logging.exception(f"Canvas.run got exception: {e}")
self.path[-1].append(c)
ran += 1
raise e
self.path[-1].append(c)
ran += 1
@ -211,29 +224,23 @@ class Canvas(ABC):
logging.debug(f"Canvas.run: {ran} {self.path}")
cpn_id = self.path[-1][ran]
cpn = self.get_component(cpn_id)
if not cpn["downstream"]: break
if not cpn["downstream"]:
break
loop = self._find_loop()
if loop: raise OverflowError(f"Too much loops: {loop}")
if loop:
raise OverflowError(f"Too much loops: {loop}")
if cpn["obj"].component_name.lower() in ["switch", "categorize", "relevant"]:
switch_out = cpn["obj"].output()[1].iloc[0, 0]
assert switch_out in self.components, \
"{}'s output: {} not valid.".format(cpn_id, switch_out)
try:
for m in prepare2run([switch_out]):
yield {"content": m, "running_status": True}
except Exception as e:
yield {"content": "*Exception*: {}".format(e), "running_status": True}
logging.exception("Canvas.run got exception")
for m in prepare2run([switch_out]):
yield {"content": m, "running_status": True}
continue
try:
for m in prepare2run(cpn["downstream"]):
yield {"content": m, "running_status": True}
except Exception as e:
yield {"content": "*Exception*: {}".format(e), "running_status": True}
logging.exception("Canvas.run got exception")
for m in prepare2run(cpn["downstream"]):
yield {"content": m, "running_status": True}
if ran >= len(self.path[-1]) and waiting:
without_dependent_checking = waiting
@ -283,19 +290,22 @@ class Canvas(ABC):
def _find_loop(self, max_loops=6):
path = self.path[-1][::-1]
if len(path) < 2: return False
if len(path) < 2:
return False
for i in range(len(path)):
if path[i].lower().find("answer") >= 0:
path = path[:i]
break
if len(path) < 2: return False
if len(path) < 2:
return False
for l in range(2, len(path) // 2):
pat = ",".join(path[0:l])
for loc in range(2, len(path) // 2):
pat = ",".join(path[0:loc])
path_str = ",".join(path)
if len(pat) >= len(path_str): return False
if len(pat) >= len(path_str):
return False
loop = max_loops
while path_str.find(pat) == 0 and loop >= 0:
loop -= 1
@ -303,10 +313,23 @@ class Canvas(ABC):
return False
path_str = path_str[len(pat)+1:]
if loop < 0:
pat = " => ".join([p.split(":")[0] for p in path[0:l]])
pat = " => ".join([p.split(":")[0] for p in path[0:loc]])
return pat + " => " + pat
return False
def get_prologue(self):
return self.components["begin"]["obj"]._param.prologue
def set_global_param(self, **kwargs):
for k, v in kwargs.items():
for q in self.components["begin"]["obj"]._param.query:
if k != q["key"]:
continue
q["value"] = v
def get_preset_param(self):
return self.components["begin"]["obj"]._param.query
def get_component_input_elements(self, cpnnm):
return self.components[cpnnm]["obj"].get_input_elements()
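A standalone sketch (toy data, not the RAGFlow API) of the repetition check `_find_loop` performs above: the reversed execution path is joined into a string, a candidate repeating unit is peeled off the front repeatedly, and exceeding `max_loops` repetitions signals a loop.
```python
path = ["retrieval:1", "generate:1"] * 8  # hypothetical reversed path
pat = ",".join(path[0:2])                 # candidate repeating unit
path_str = ",".join(path)
loops = 0
while path_str.startswith(pat):
    loops += 1
    path_str = path_str[len(pat) + 1:]    # drop the unit plus its comma
print(loops)  # 8 repetitions -> more than max_loops=6, so a loop is reported
```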

View File

@ -31,9 +31,81 @@ from .akshare import AkShare, AkShareParam
from .crawler import Crawler, CrawlerParam
from .invoke import Invoke, InvokeParam
from .template import Template, TemplateParam
from .email import Email, EmailParam
def component_class(class_name):
m = importlib.import_module("agent.component")
c = getattr(m, class_name)
return c
__all__ = [
"Begin",
"BeginParam",
"Generate",
"GenerateParam",
"Retrieval",
"RetrievalParam",
"Answer",
"AnswerParam",
"Categorize",
"CategorizeParam",
"Switch",
"SwitchParam",
"Relevant",
"RelevantParam",
"Message",
"MessageParam",
"RewriteQuestion",
"RewriteQuestionParam",
"KeywordExtract",
"KeywordExtractParam",
"Concentrator",
"ConcentratorParam",
"Baidu",
"BaiduParam",
"DuckDuckGo",
"DuckDuckGoParam",
"Wikipedia",
"WikipediaParam",
"PubMed",
"PubMedParam",
"ArXiv",
"ArXivParam",
"Google",
"GoogleParam",
"Bing",
"BingParam",
"GoogleScholar",
"GoogleScholarParam",
"DeepL",
"DeepLParam",
"GitHub",
"GitHubParam",
"BaiduFanyi",
"BaiduFanyiParam",
"QWeather",
"QWeatherParam",
"ExeSQL",
"ExeSQLParam",
"YahooFinance",
"YahooFinanceParam",
"WenCai",
"WenCaiParam",
"Jin10",
"Jin10Param",
"TuShare",
"TuShareParam",
"AkShare",
"AkShareParam",
"Crawler",
"CrawlerParam",
"Invoke",
"InvokeParam",
"Template",
"TemplateParam",
"Email",
"EmailParam",
"component_class"
]
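A short sketch of how the dynamic lookup above is meant to be used (the component name must match a class exported by this module):
```python
from agent.component import component_class

cls = component_class("Email")            # resolves agent.component.Email
param_cls = component_class("EmailParam") # parameter classes resolve the same way
print(cls.component_name)                 # -> "Email"
```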

View File

@ -37,6 +37,7 @@ class ComponentParamBase(ABC):
self.message_history_window_size = 22
self.query = []
self.inputs = []
self.debug_inputs = []
def set_name(self, name: str):
self._name = name
@ -410,6 +411,7 @@ class ComponentBase(ABC):
def run(self, history, **kwargs):
logging.debug("{}, history: {}, kwargs: {}".format(self, json.dumps(history, ensure_ascii=False),
json.dumps(kwargs, ensure_ascii=False)))
self._param.debug_inputs = []
try:
res = self._run(history, **kwargs)
self.set_output(res)
@ -425,7 +427,8 @@ class ComponentBase(ABC):
def output(self, allow_partial=True) -> Tuple[str, Union[pd.DataFrame, partial]]:
o = getattr(self._param, self._param.output_var_name)
if not isinstance(o, partial) and not isinstance(o, pd.DataFrame):
if not isinstance(o, list): o = [o]
if not isinstance(o, list):
o = [o]
o = pd.DataFrame(o)
if allow_partial or not isinstance(o, partial):
@ -437,17 +440,21 @@ class ComponentBase(ABC):
for oo in o():
if not isinstance(oo, pd.DataFrame):
outs = pd.DataFrame(oo if isinstance(oo, list) else [oo])
else: outs = oo
else:
outs = oo
return self._param.output_var_name, outs
def reset(self):
setattr(self._param, self._param.output_var_name, None)
self._param.inputs = []
def set_output(self, v: partial | pd.DataFrame):
def set_output(self, v):
setattr(self._param, self._param.output_var_name, v)
def get_input(self):
if self._param.debug_inputs:
return pd.DataFrame([{"content": v["value"]} for v in self._param.debug_inputs])
reversed_cpnts = []
if len(self._canvas.path) > 1:
reversed_cpnts.extend(self._canvas.path[-2])
@ -457,7 +464,7 @@ class ComponentBase(ABC):
self._param.inputs = []
outs = []
for q in self._param.query:
if q["component_id"]:
if q.get("component_id"):
if q["component_id"].split("@")[0].lower().find("begin") >= 0:
cpn_id, key = q["component_id"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
@ -470,22 +477,31 @@ class ComponentBase(ABC):
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
if q["component_id"].lower().find("answer") == 0:
for r, c in self._canvas.history[::-1]:
if r == "user":
self._param.inputs.append(pd.DataFrame([{"content": c, "component_id": q["component_id"]}]))
break
continue
outs.append(self._canvas.get_component(q["component_id"])["obj"].output(allow_partial=False)[1])
self._param.inputs.append({"component_id": q["component_id"],
"content": "\n".join(
[str(d["content"]) for d in outs[-1].to_dict('records')])})
elif q["value"]:
elif q.get("value"):
self._param.inputs.append({"component_id": None, "content": q["value"]})
outs.append(pd.DataFrame([{"content": q["value"]}]))
if outs:
df = pd.concat(outs, ignore_index=True)
if "content" in df: df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
if "content" in df:
df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
return df
upstream_outs = []
for u in reversed_cpnts[::-1]:
if self.get_component_name(u) in ["switch", "concentrator"]: continue
if self.get_component_name(u) in ["switch", "concentrator"]:
continue
if self.component_name.lower() == "generate" and self.get_component_name(u) == "retrieval":
o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
if o is not None:
@ -522,6 +538,22 @@ class ComponentBase(ABC):
return df
def get_input_elements(self):
assert self._param.query, "Please identify input parameters firstly."
eles = []
for q in self._param.query:
if q.get("component_id"):
cpn_id = q["component_id"]
if cpn_id.split("@")[0].lower().find("begin") >= 0:
cpn_id, key = cpn_id.split("@")
eles.extend(self._canvas.get_component(cpn_id)["obj"]._param.query)
continue
eles.append({"name": self._canvas.get_compnent_name(cpn_id), "key": cpn_id})
else:
eles.append({"key": q["value"], "name": q["value"], "value": q["value"]})
return eles
def get_stream_input(self):
reversed_cpnts = []
if len(self._canvas.path) > 1:
@ -529,7 +561,8 @@ class ComponentBase(ABC):
reversed_cpnts.extend(self._canvas.path[-1])
for u in reversed_cpnts[::-1]:
if self.get_component_name(u) in ["switch", "answer"]: continue
if self.get_component_name(u) in ["switch", "answer"]:
continue
return self._canvas.get_component(u)["obj"].output()[1]
@staticmethod
@ -538,3 +571,6 @@ class ComponentBase(ABC):
def get_component_name(self, cpn_id):
return self._canvas.get_component(cpn_id)["obj"].component_name.lower()
def debug(self, **kwargs):
return self._run([], **kwargs)
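A toy illustration (pandas only, values hypothetical) of the `debug_inputs` short-circuit added to `get_input()` above: injected debug values bypass the normal upstream-component lookup entirely.
```python
import pandas as pd

debug_inputs = [{"key": "user", "value": "hello"}]  # injected via the /debug API
df = pd.DataFrame([{"content": v["value"]} for v in debug_inputs])
print(df)  # a single "content" column is fed to the component under test
```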

View File

@ -26,6 +26,7 @@ class BeginParam(ComponentParamBase):
def __init__(self):
super().__init__()
self.prologue = "Hi! I'm your smart assistant. What can I do for you?"
self.query = []
def check(self):
return True
@ -42,7 +43,7 @@ class Begin(ComponentBase):
def stream_output(self):
res = {"content": self._param.prologue}
yield res
self.set_output(res)
self.set_output(self.be_output(res))

View File

@ -34,15 +34,18 @@ class CategorizeParam(GenerateParam):
super().check()
self.check_empty(self.category_description, "[Categorize] Category examples")
for k, v in self.category_description.items():
if not k: raise ValueError("[Categorize] Category name can not be empty!")
if not v.get("to"): raise ValueError(f"[Categorize] 'To' of category {k} can not be empty!")
if not k:
raise ValueError("[Categorize] Category name can not be empty!")
if not v.get("to"):
raise ValueError(f"[Categorize] 'To' of category {k} can not be empty!")
def get_prompt(self):
cate_lines = []
for c, desc in self.category_description.items():
for l in desc.get("examples", "").split("\n"):
if not l: continue
cate_lines.append("Question: {}\tCategory: {}".format(l, c))
for line in desc.get("examples", "").split("\n"):
if not line:
continue
cate_lines.append("Question: {}\tCategory: {}".format(line, c))
descriptions = []
for c, desc in self.category_description.items():
if desc.get("description"):
@ -84,4 +87,8 @@ class Categorize(Generate, ABC):
return Categorize.be_output(list(self._param.category_description.items())[-1][1]["to"])
def debug(self, **kwargs):
df = self._run([], **kwargs)
cpn_id = df.iloc[0, 0]
return Categorize.be_output(self._canvas.get_compnent_name(cpn_id))
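A standalone sketch (toy category data) of how `get_prompt` above turns per-category examples into few-shot lines:
```python
category_description = {  # hypothetical categories
    "2. finance": {"examples": "Stocks have MACD buy signals\n", "to": "Concentrator:X"},
}
cate_lines = []
for c, desc in category_description.items():
    for line in desc.get("examples", "").split("\n"):
        if not line:
            continue
        cate_lines.append("Question: {}\tCategory: {}".format(line, c))
print(cate_lines)  # ['Question: Stocks have MACD buy signals\tCategory: 2. finance']
```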

View File

@ -14,7 +14,6 @@
# limitations under the License.
#
from abc import ABC
import re
from agent.component.base import ComponentBase, ComponentParamBase
import deepl

138
agent/component/email.py Normal file
View File

@ -0,0 +1,138 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import json
import smtplib
import logging
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
from email.utils import formataddr
from agent.component.base import ComponentBase, ComponentParamBase
class EmailParam(ComponentParamBase):
"""
Define the Email component parameters.
"""
def __init__(self):
super().__init__()
# Fixed configuration parameters
self.smtp_server = "" # SMTP server address
self.smtp_port = 465 # SMTP port
self.email = "" # Sender email
self.password = "" # Email authorization code
self.sender_name = "" # Sender name
def check(self):
# Check required parameters
self.check_empty(self.smtp_server, "SMTP Server")
self.check_empty(self.email, "Email")
self.check_empty(self.password, "Password")
self.check_empty(self.sender_name, "Sender Name")
class Email(ComponentBase, ABC):
component_name = "Email"
def _run(self, history, **kwargs):
# Get upstream component output and parse JSON
ans = self.get_input()
content = "".join(ans["content"]) if "content" in ans else ""
if not content:
return Email.be_output("No content to send")
success = False
try:
# Parse JSON string passed from upstream
email_data = json.loads(content)
# Validate required fields
if "to_email" not in email_data:
return Email.be_output("Missing required field: to_email")
# Create email object
msg = MIMEMultipart('alternative')
# Properly handle sender name encoding
msg['From'] = formataddr((str(Header(self._param.sender_name,'utf-8')), self._param.email))
msg['To'] = email_data["to_email"]
if "cc_email" in email_data and email_data["cc_email"]:
msg['Cc'] = email_data["cc_email"]
msg['Subject'] = Header(email_data.get("subject", "No Subject"), 'utf-8').encode()
# Use content from email_data or default content
email_content = email_data.get("content", "No content provided")
# msg.attach(MIMEText(email_content, 'plain', 'utf-8'))
msg.attach(MIMEText(email_content, 'html', 'utf-8'))
# Connect to SMTP server and send
logging.info(f"Connecting to SMTP server {self._param.smtp_server}:{self._param.smtp_port}")
context = smtplib.ssl.create_default_context()
with smtplib.SMTP_SSL(self._param.smtp_server, self._param.smtp_port, context=context) as server:
# Login
logging.info(f"Attempting to login with email: {self._param.email}")
server.login(self._param.email, self._param.password)
# Get all recipient list
recipients = [email_data["to_email"]]
if "cc_email" in email_data and email_data["cc_email"]:
recipients.extend(email_data["cc_email"].split(','))
# Send email
logging.info(f"Sending email to recipients: {recipients}")
try:
server.send_message(msg, self._param.email, recipients)
success = True
except Exception as e:
logging.error(f"Error during send_message: {str(e)}")
# Try alternative method
server.sendmail(self._param.email, recipients, msg.as_string())
success = True
try:
server.quit()
except Exception as e:
# Ignore errors when closing connection
logging.warning(f"Non-fatal error during connection close: {str(e)}")
if success:
return Email.be_output("Email sent successfully")
except json.JSONDecodeError:
error_msg = "Invalid JSON format in input"
logging.error(error_msg)
return Email.be_output(error_msg)
except smtplib.SMTPAuthenticationError:
error_msg = "SMTP Authentication failed. Please check your email and authorization code."
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except smtplib.SMTPConnectError:
error_msg = f"Failed to connect to SMTP server {self._param.smtp_server}:{self._param.smtp_port}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except smtplib.SMTPException as e:
error_msg = f"SMTP error occurred: {str(e)}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except Exception as e:
error_msg = f"Unexpected error: {str(e)}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")

View File

@ -19,7 +19,8 @@ import pandas as pd
import pymysql
import psycopg2
from agent.component.base import ComponentBase, ComponentParamBase
import pyodbc
import logging
class ExeSQLParam(ComponentParamBase):
"""
@ -38,7 +39,7 @@ class ExeSQLParam(ComponentParamBase):
self.top_n = 30
def check(self):
self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgresql', 'mariadb'])
self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgresql', 'mariadb', 'mssql'])
self.check_empty(self.database, "Database name")
self.check_empty(self.username, "database username")
self.check_empty(self.host, "IP Address")
@ -46,8 +47,10 @@ class ExeSQLParam(ComponentParamBase):
self.check_empty(self.password, "Database password")
self.check_positive_integer(self.top_n, "Number of records")
if self.database == "rag_flow":
if self.host == "ragflow-mysql": raise ValueError("The host is not accessible.")
if self.password == "infini_rag_flow": raise ValueError("The host is not accessible.")
if self.host == "ragflow-mysql":
raise ValueError("The host is not accessible.")
if self.password == "infini_rag_flow":
raise ValueError("The host is not accessible.")
class ExeSQL(ComponentBase, ABC):
@ -62,20 +65,41 @@ class ExeSQL(ComponentBase, ABC):
self._loop += 1
ans = self.get_input()
ans = "".join(ans["content"]) if "content" in ans else ""
ans = re.sub(r'^.*?SELECT ', 'SELECT ', repr(ans), flags=re.IGNORECASE)
ans = "".join([str(a) for a in ans["content"]]) if "content" in ans else ""
if self._param.db_type == 'mssql':
# Improve the information extraction: most LLMs return results fenced in markdown, i.e. ```sql query ```.
match = re.search(r"```sql\s*(.*?)\s*```", ans, re.DOTALL)
if match:
ans = match.group(1)  # query content
logging.debug(ans)
else:
logging.debug("no markdown")
ans = re.sub(r'^.*?SELECT ', 'SELECT ', ans, flags=re.IGNORECASE)
else:
ans = re.sub(r'^.*?SELECT ', 'SELECT ', repr(ans), flags=re.IGNORECASE)
ans = re.sub(r';.*?SELECT ', '; SELECT ', ans, flags=re.IGNORECASE)
ans = re.sub(r';[^;]*$', r';', ans)
if not ans:
raise Exception("SQL statement not found!")
logging.info("db_type: ",self._param.db_type)
if self._param.db_type in ["mysql", "mariadb"]:
db = pymysql.connect(db=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'postgresql':
db = psycopg2.connect(dbname=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'mssql':
conn_str = (
r'DRIVER={ODBC Driver 17 for SQL Server};'
r'SERVER=' + self._param.host + ',' + str(self._param.port) + ';'
r'DATABASE=' + self._param.database + ';'
r'UID=' + self._param.username + ';'
r'PWD=' + self._param.password
)
db = pyodbc.connect(conn_str)
try:
cursor = db.cursor()
except Exception as e:
@ -85,11 +109,12 @@ class ExeSQL(ComponentBase, ABC):
if not single_sql:
continue
try:
logging.info("single_sql: ",single_sql)
cursor.execute(single_sql)
if cursor.rowcount == 0:
sql_res.append({"content": "\nTotal: 0\n No record in the database!"})
continue
single_res = pd.DataFrame([i for i in cursor.fetchmany(size=self._param.top_n)])
single_res = pd.DataFrame([i for i in cursor.fetchmany(self._param.top_n)])
single_res.columns = [i[0] for i in cursor.description]
sql_res.append({"content": "\nTotal: " + str(cursor.rowcount) + "\n" + single_res.to_markdown()})
except Exception as e:
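A standalone sketch (hypothetical LLM output) of the markdown-fenced SQL extraction added above for the mssql branch; the regex is written with `{3}` quantifiers, equivalent to three literal backticks, to avoid nesting code fences in this snippet:
```python
import re

fence = "`" * 3  # three backticks, spelled out to keep this snippet well-fenced
ans = f"Here is the query:\n{fence}sql\nSELECT TOP 30 * FROM orders;\n{fence}"
match = re.search(r"`{3}sql\s*(.*?)\s*`{3}", ans, re.DOTALL)
sql = match.group(1) if match else ans
print(sql)  # SELECT TOP 30 * FROM orders;
```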

View File

@ -17,6 +17,7 @@ import re
from functools import partial
import pandas as pd
from api.db import LLMType
from api.db.services.conversation_service import structure_answer
from api.db.services.dialog_service import message_fit_in
from api.db.services.llm_service import LLMBundle
from api import settings
@ -51,11 +52,16 @@ class GenerateParam(ComponentParamBase):
def gen_conf(self):
conf = {}
if self.max_tokens > 0: conf["max_tokens"] = self.max_tokens
if self.temperature > 0: conf["temperature"] = self.temperature
if self.top_p > 0: conf["top_p"] = self.top_p
if self.presence_penalty > 0: conf["presence_penalty"] = self.presence_penalty
if self.frequency_penalty > 0: conf["frequency_penalty"] = self.frequency_penalty
if self.max_tokens > 0:
conf["max_tokens"] = self.max_tokens
if self.temperature > 0:
conf["temperature"] = self.temperature
if self.top_p > 0:
conf["top_p"] = self.top_p
if self.presence_penalty > 0:
conf["presence_penalty"] = self.presence_penalty
if self.frequency_penalty > 0:
conf["frequency_penalty"] = self.frequency_penalty
return conf
@ -83,7 +89,8 @@ class Generate(ComponentBase):
recall_docs = []
for i in idx:
did = retrieval_res.loc[int(i), "doc_id"]
if did in doc_ids: continue
if did in doc_ids:
continue
doc_ids.add(did)
recall_docs.append({"doc_id": did, "doc_name": retrieval_res.loc[int(i), "docnm_kwd"]})
@ -96,11 +103,18 @@ class Generate(ComponentBase):
}
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
res = {"content": answer, "reference": reference}
res = structure_answer(None, res, "", "")
return res
def get_input_elements(self):
if self._param.parameters:
return [{"key": "user", "name": "User"}, *self._param.parameters]
return [{"key": "user", "name": "User"}]
def _run(self, history, **kwargs):
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
prompt = self._param.prompt
@ -108,7 +122,8 @@ class Generate(ComponentBase):
retrieval_res = []
self._param.inputs = []
for para in self._param.parameters:
if not para.get("component_id"): continue
if not para.get("component_id"):
continue
component_id = para["component_id"].split("@")[0]
if para["component_id"].lower().find("@") >= 0:
cpn_id, key = para["component_id"].split("@")
@ -142,7 +157,8 @@ class Generate(ComponentBase):
if retrieval_res:
retrieval_res = pd.concat(retrieval_res, ignore_index=True)
else: retrieval_res = pd.DataFrame([])
else:
retrieval_res = pd.DataFrame([])
for n, v in kwargs.items():
prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
@ -164,9 +180,11 @@ class Generate(ComponentBase):
return pd.DataFrame([res])
msg = self._canvas.get_history(self._param.message_history_window_size)
if len(msg) < 1: msg.append({"role": "user", "content": ""})
if len(msg) < 1:
msg.append({"role": "user", "content": ""})
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
if len(msg) < 2: msg.append({"role": "user", "content": ""})
if len(msg) < 2:
msg.append({"role": "user", "content": ""})
ans = chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf())
if self._param.cite and "content_ltks" in retrieval_res.columns and "vector" in retrieval_res.columns:
@ -185,9 +203,11 @@ class Generate(ComponentBase):
return
msg = self._canvas.get_history(self._param.message_history_window_size)
if len(msg) < 1: msg.append({"role": "user", "content": ""})
if len(msg) < 1:
msg.append({"role": "user", "content": ""})
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
if len(msg) < 2: msg.append({"role": "user", "content": ""})
if len(msg) < 2:
msg.append({"role": "user", "content": ""})
answer = ""
for ans in chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf()):
res = {"content": ans, "reference": []}
@ -198,4 +218,17 @@ class Generate(ComponentBase):
res = self.set_cite(retrieval_res, answer)
yield res
self.set_output(res)
self.set_output(Generate.be_output(res))
def debug(self, **kwargs):
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
prompt = self._param.prompt
for para in self._param.debug_inputs:
kwargs[para["key"]] = para.get("value", "")
for n, v in kwargs.items():
prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
ans = chat_mdl.chat(prompt, [{"role": "user", "content": kwargs.get("user", "")}], self._param.gen_conf())
return pd.DataFrame([ans])
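A standalone sketch (toy prompt and kwargs) of the `{placeholder}` substitution that both `_run` and the new `debug` apply above:
```python
import re

prompt = "Translate {text} into {lang}."  # hypothetical template
kwargs = {"text": "bonjour", "lang": "English"}
for n, v in kwargs.items():
    # backslashes are stripped first so the replacement string stays literal
    prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
print(prompt)  # Translate bonjour into English.
```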

View File

@ -60,3 +60,6 @@ class KeywordExtract(Generate, ABC):
ans = re.sub(r".*keyword:", "", ans).strip()
logging.debug(f"ans: {ans}")
return KeywordExtract.be_output(ans)
def debug(self, **kwargs):
return self._run([], **kwargs)

View File

@ -78,4 +78,6 @@ class Relevant(Generate, ABC):
return Relevant.be_output(self._param.no)
assert False, f"Relevant component got: {ans}"
def debug(self, **kwargs):
return self._run([], **kwargs)

View File

@ -95,7 +95,8 @@ class RewriteQuestion(Generate, ABC):
hist = self._canvas.get_history(4)
conv = []
for m in hist:
if m["role"] not in ["user", "assistant"]: continue
if m["role"] not in ["user", "assistant"]:
continue
conv.append("{}: {}".format(m["role"].upper(), m["content"]))
conv = "\n".join(conv)
@ -109,3 +110,4 @@ class RewriteQuestion(Generate, ABC):
return RewriteQuestion.be_output(ans)

View File

@ -41,7 +41,8 @@ class SwitchParam(ComponentParamBase):
def check(self):
self.check_empty(self.conditions, "[Switch] conditions")
for cond in self.conditions:
if not cond["to"]: raise ValueError(f"[Switch] 'To' can not be empty!")
if not cond["to"]:
raise ValueError("[Switch] 'To' can not be empty!")
class Switch(ComponentBase, ABC):
@ -51,7 +52,8 @@ class Switch(ComponentBase, ABC):
res = []
for cond in self._param.conditions:
for item in cond["items"]:
if not item["cpn_id"]: continue
if not item["cpn_id"]:
continue
if item["cpn_id"].find("begin") >= 0:
continue
cid = item["cpn_id"].split("@")[0]
@ -63,7 +65,8 @@ class Switch(ComponentBase, ABC):
for cond in self._param.conditions:
res = []
for item in cond["items"]:
if not item["cpn_id"]:continue
if not item["cpn_id"]:
continue
cid = item["cpn_id"].split("@")[0]
if item["cpn_id"].find("@") > 0:
cpn_id, key = item["cpn_id"].split("@")
@ -107,22 +110,22 @@ class Switch(ComponentBase, ABC):
elif operator == ">":
try:
return True if float(input) > float(value) else False
except Exception as e:
except Exception:
return True if input > value else False
elif operator == "<":
try:
return True if float(input) < float(value) else False
except Exception as e:
except Exception:
return True if input < value else False
elif operator == "≥":
try:
return True if float(input) >= float(value) else False
except Exception as e:
except Exception:
return True if input >= value else False
elif operator == "≤":
try:
return True if float(input) <= float(value) else False
except Exception as e:
except Exception:
return True if input <= value else False
raise ValueError('Not supported operator' + operator)
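A minimal sketch of the numeric-first comparison pattern the operators above share: parse both sides as floats when possible, otherwise fall back to string comparison.
```python
def greater_than(input, value):
    try:
        return float(input) > float(value)  # numeric comparison when both parse
    except Exception:
        return input > value                # lexicographic fallback

print(greater_than("10", "9"))     # True  -- numeric comparison wins
print(greater_than("abc", "abd"))  # False -- string fallback
```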

View File

@ -47,7 +47,8 @@ class Template(ComponentBase):
self._param.inputs = []
for para in self._param.parameters:
if not para.get("component_id"): continue
if not para.get("component_id"):
continue
component_id = para["component_id"].split("@")[0]
if para["component_id"].lower().find("@") >= 0:
cpn_id, key = para["component_id"].split("@")

View File

@ -620,7 +620,7 @@
"text": "Searches for description about meanings of tables and fields."
},
"label": "Note",
"name": "N:DB Desctription"
"name": "N:DB Description"
},
"dragging": false,
"height": 128,
@ -679,7 +679,7 @@
{
"data": {
"form": {
"text": "DDL(Data Definition Language).\n\nSearches for relevent database creation statements.\n\nIt should bind with a KB to which DDL is dumped in.\nYou could use 'General' as parsing method and ';' as delimiter."
"text": "DDL(Data Definition Language).\n\nSearches for relevant database creation statements.\n\nIt should bind with a KB to which DDL is dumped in.\nYou could use 'General' as parsing method and ';' as delimiter."
},
"label": "Note",
"name": "N: DDL"

View File

@ -90,7 +90,7 @@
"message_history_window_size": 12,
"parameters": [],
"presence_penalty": 0.4,
"prompt": "Role: You are a customer support. \n\nTask: Please answer the question based on content of knowledge base. \n\nReuirements & restrictions:\n - DO NOT make things up when all knowledge base content is irrelevant to the question. \n - Answers need to consider chat history.\n - Request about customer's contact information like, Wechat number, LINE number, twitter, discord, etc,. , when knowlegebase content can't answer his question. So, product expert could contact him soon to solve his problem.\n\n Knowledge base content is as following:\n {input}\n The above is the content of knowledge base.",
"prompt": "Role: You are a customer support. \n\nTask: Please answer the question based on content of knowledge base. \n\nRequirements & restrictions:\n - DO NOT make things up when all knowledge base content is irrelevant to the question. \n - Answers need to consider chat history.\n - Request about customer's contact information like, Wechat number, LINE number, twitter, discord, etc,. , when knowledge base content can't answer his question. So, product expert could contact him soon to solve his problem.\n\n Knowledge base content is as following:\n {input}\n The above is the content of knowledge base.",
"temperature": 0.1,
"top_p": 0.3
}

View File

@ -70,8 +70,8 @@
"to": "QWeather:DeepKiwisTeach"
},
"2. finance": {
"description": "Question is about finace/economic information, stock market, economic news.",
"examples": "昨日涨幅大于5%的军工股?\nStocks have MACD buyin signals\nWhen is the next interest rate cut by the Federal Reserve?\n国家救市都有哪些举措?",
"description": "Question is about finance/economic information, stock market, economic news.",
"examples": "Stocks have MACD buy signals\nWhen is the next interest rate cut by the Federal Reserve?\n",
"to": "Concentrator:TrueGeckosSlide"
},
"3. medical": {
@ -268,7 +268,7 @@
"message_history_window_size": 12,
"parameters": [],
"presence_penalty": 0.4,
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Provide useful advice to your friend.\n- Tell jokes to make your firend happy.\n\nThe following is the weatcher information:\n{weather}",
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Provide useful advice to your friend.\n- Tell jokes to make your friend happy.\n\nThe following is the weather information:\n{weather}",
"temperature": 0.1,
"top_p": 0.3
}
@ -497,7 +497,7 @@
}
],
"presence_penalty": 0.4,
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Tell your friend the weather if there's weather information provided. If your friend did not provide region information, ask about where he/she is.\n\nThe following is the weatcher information:\n{weather}\n",
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Tell your friend the weather if there's weather information provided. If your friend did not provide region information, ask about where he/she is.\n\nThe following is the weather information:\n{weather}\n",
"temperature": 0.1,
"top_p": 0.3
}
@ -622,8 +622,8 @@
"to": "QWeather:DeepKiwisTeach"
},
"2. finance": {
"description": "Question is about finace/economic information, stock market, economic news.",
"examples": "昨日涨幅大于5%的军工股?\nStocks have MACD buyin signals\nWhen is the next interest rate cut by the Federal Reserve?\n国家救市都有哪些举措?",
"description": "Question is about finance/economic information, stock market, economic news.",
"examples": "Stocks have MACD buy signals\nWhen is the next interest rate cut by the Federal Reserve?\n",
"to": "Concentrator:TrueGeckosSlide"
},
"3. medical": {
@ -927,7 +927,7 @@
"parameters": [],
"presencePenaltyEnabled": true,
"presence_penalty": 0.4,
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Provide useful advice to your friend.\n- Tell jokes to make your firend happy.\n\nThe following is the weatcher information:\n{weather}",
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Provide useful advice to your friend.\n- Tell jokes to make your friend happy.\n\nThe following is the weather information:\n{weather}",
"temperature": 0.1,
"temperatureEnabled": true,
"topPEnabled": true,
@ -1011,7 +1011,7 @@
"top_p": 0.3
},
"label": "Generate",
"name": "tranlate to Chinese"
"name": "translate to Chinese"
},
"dragging": false,
"height": 86,
@ -1276,7 +1276,7 @@
],
"presencePenaltyEnabled": true,
"presence_penalty": 0.4,
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Tell your friend the weather if there's weather information provided. If your friend did not provide region information, ask about where he/she is.\n\nThe following is the weatcher information:\n{weather}\n",
"prompt": "Role: Youre warm-hearted lovely young girl, 22 years old, located at Shanghai in China. Your name is R. Who are talking to you is your very good old friend of yours.\n\nTask: \n- Chat with the friend.\n- Ask question and care about them.\n- Tell your friend the weather if there's weather information provided. If your friend did not provide region information, ask about where he/she is.\n\nThe following is the weather information:\n{weather}\n",
"temperature": 0.1,
"temperatureEnabled": true,
"topPEnabled": true,

View File

@ -476,7 +476,7 @@
"text": "Translation Agent: Agentic translation using reflection workflow\n\nThis is inspired by Andrew NG's project: https://github.com/andrewyng/translation-agent\n\n1. Prompt an LLM to translate a text into the target language;\n2. Have the LLM reflect on the translation and provide constructive suggestions for improvement;\n3. Use these suggestions to improve the translation."
},
"label": "Note",
"name": "Breif"
"name": "Brief"
},
"dragHandle": ".note-drag-handle",
"dragging": false,

View File

@ -534,7 +534,7 @@
{
"data": {
"form": {
"text": "A prompt sumerize content from search result from PubMed and Q&A dataset."
"text": "A prompt summarize content from search result from PubMed and Q&A dataset."
},
"label": "Note",
"name": "N: LLM"

View File

@ -347,7 +347,7 @@
}
],
"presence_penalty": 0.4,
"prompt": "You are an SEO expert who writes in a direct, practical, educational style that is factual rather than storytelling or narrative, focusing on explaining to {audience} the \"how\" and \"what is\" and \u201cwhy\u201d rather than narrating to the audience. \n - Please write at a sixth grade reading level. \n - ONLY output in Markdown format.\n - Use positive, present tense expressions and avoid using complex words and sentence structures that lack narrative, such as \"reveal\" and \"dig deep.\"\n - Next, please continue writing articles related to our topic with a concise title, {title_0}{title} {keywords_0}{keywords}. \n - Please AVOID repeating what has already been written and do not use the same sentence structure. \n - JUST write the body of the article based on the outline.\n - DO NOT include introduction, title.\n - DO NOT miss anything mentioned in artical outline, except introduction and title.\n - Please use the information I provide to create in-depth, interesting and unique content. Also, incorporate the references and data points I provided earlier into the article to increase its value to the reader.\n - MUST be in language of \"{keywords_0} {title_0}\".\n\n<article_outline>\n{outline}\n\n<article_body>",
"prompt": "You are an SEO expert who writes in a direct, practical, educational style that is factual rather than storytelling or narrative, focusing on explaining to {audience} the \"how\" and \"what is\" and \u201cwhy\u201d rather than narrating to the audience. \n - Please write at a sixth grade reading level. \n - ONLY output in Markdown format.\n - Use positive, present tense expressions and avoid using complex words and sentence structures that lack narrative, such as \"reveal\" and \"dig deep.\"\n - Next, please continue writing articles related to our topic with a concise title, {title_0}{title} {keywords_0}{keywords}. \n - Please AVOID repeating what has already been written and do not use the same sentence structure. \n - JUST write the body of the article based on the outline.\n - DO NOT include introduction, title.\n - DO NOT miss anything mentioned in article outline, except introduction and title.\n - Please use the information I provide to create in-depth, interesting and unique content. Also, incorporate the references and data points I provided earlier into the article to increase its value to the reader.\n - MUST be in language of \"{keywords_0} {title_0}\".\n\n<article_outline>\n{outline}\n\n<article_body>",
"query": [],
"temperature": 0.1,
"top_p": 0.3

View File

@ -440,7 +440,7 @@
{
"data": {
"form": {
"text": "DDL(Data Definition Language).\n\nSearches for relevent database creation statements.\n\nIt should bind with a KB to which DDL is dumped in.\nYou could use 'General' as parsing method and ';' as delimiter."
"text": "DDL(Data Definition Language).\n\nSearches for relevant database creation statements.\n\nIt should bind with a KB to which DDL is dumped in.\nYou could use 'General' as parsing method and ';' as delimiter."
},
"label": "Note",
"name": "N: DDL"

View File

@ -577,7 +577,7 @@
"text": "Based on the keywords, searches on Wikipedia and returns the found content."
},
"label": "Note",
"name": "N: Wiukipedia"
"name": "N: Wikipedia"
},
"dragging": false,
"height": 128,

View File

@ -43,6 +43,7 @@ if __name__ == '__main__':
else:
print(ans["content"])
if DEBUG: print(canvas.path)
if DEBUG:
print(canvas.path)
question = input("\n==================== User =====================\n> ")
canvas.add_user_input(question)

View File

@ -0,0 +1,2 @@
from beartype.claw import beartype_this_package
beartype_this_package()

View File

@ -45,7 +45,7 @@ from agent.canvas import Canvas
from functools import partial
@manager.route('/new_token', methods=['POST'])
@manager.route('/new_token', methods=['POST']) # noqa: F821
@login_required
def new_token():
req = request.json
@ -75,7 +75,7 @@ def new_token():
return server_error_response(e)
@manager.route('/token_list', methods=['GET'])
@manager.route('/token_list', methods=['GET']) # noqa: F821
@login_required
def token_list():
try:
@ -90,7 +90,7 @@ def token_list():
return server_error_response(e)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@validate_request("tokens", "tenant_id")
@login_required
def rm():
@ -104,7 +104,7 @@ def rm():
return server_error_response(e)
@manager.route('/stats', methods=['GET'])
@manager.route('/stats', methods=['GET']) # noqa: F821
@login_required
def stats():
try:
@ -135,14 +135,13 @@ def stats():
return server_error_response(e)
@manager.route('/new_conversation', methods=['GET'])
@manager.route('/new_conversation', methods=['GET']) # noqa: F821
def set_conversation():
token = request.headers.get('Authorization').split()[1]
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
try:
if objs[0].source == "agent":
e, cvs = UserCanvasService.get_by_id(objs[0].dialog_id)
@ -176,7 +175,7 @@ def set_conversation():
return server_error_response(e)
@manager.route('/completion', methods=['POST'])
@manager.route('/completion', methods=['POST']) # noqa: F821
@validate_request("conversation_id", "messages")
def completion():
token = request.headers.get('Authorization').split()[1]
@ -188,7 +187,8 @@ def completion():
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(message="Conversation not found!")
if "quote" not in req: req["quote"] = False
if "quote" not in req:
req["quote"] = False
msg = []
for m in req["messages"]:
@ -197,7 +197,8 @@ def completion():
if m["role"] == "assistant" and not msg:
continue
msg.append(m)
if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
if not msg[-1].get("id"):
msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
def fillin_conv(ans):
@ -340,7 +341,7 @@ def completion():
return server_error_response(e)
@manager.route('/conversation/<conversation_id>', methods=['GET'])
@manager.route('/conversation/<conversation_id>', methods=['GET']) # noqa: F821
# @login_required
def get(conversation_id):
token = request.headers.get('Authorization').split()[1]
@ -371,7 +372,7 @@ def get(conversation_id):
return server_error_response(e)
@manager.route('/document/upload', methods=['POST'])
@manager.route('/document/upload', methods=['POST']) # noqa: F821
@validate_request("kb_name")
def upload():
token = request.headers.get('Authorization').split()[1]
@ -483,7 +484,7 @@ def upload():
return get_json_result(data=doc_result.to_json())
@manager.route('/document/upload_and_parse', methods=['POST'])
@manager.route('/document/upload_and_parse', methods=['POST']) # noqa: F821
@validate_request("conversation_id")
def upload_parse():
token = request.headers.get('Authorization').split()[1]
@ -506,7 +507,7 @@ def upload_parse():
return get_json_result(data=doc_ids)
@manager.route('/list_chunks', methods=['POST'])
@manager.route('/list_chunks', methods=['POST']) # noqa: F821
# @login_required
def list_chunks():
token = request.headers.get('Authorization').split()[1]
@ -546,7 +547,7 @@ def list_chunks():
return get_json_result(data=res)
@manager.route('/list_kb_docs', methods=['POST'])
@manager.route('/list_kb_docs', methods=['POST']) # noqa: F821
# @login_required
def list_kb_docs():
token = request.headers.get('Authorization').split()[1]
@ -586,7 +587,7 @@ def list_kb_docs():
return server_error_response(e)
@manager.route('/document/infos', methods=['POST'])
@manager.route('/document/infos', methods=['POST']) # noqa: F821
@validate_request("doc_ids")
def docinfos():
token = request.headers.get('Authorization').split()[1]
@ -600,7 +601,7 @@ def docinfos():
return get_json_result(data=list(docs.dicts()))
@manager.route('/document', methods=['DELETE'])
@manager.route('/document', methods=['DELETE']) # noqa: F821
# @login_required
def document_rm():
token = request.headers.get('Authorization').split()[1]
@ -659,7 +660,7 @@ def document_rm():
return get_json_result(data=True)
@manager.route('/completion_aibotk', methods=['POST'])
@manager.route('/completion_aibotk', methods=['POST']) # noqa: F821
@validate_request("Authorization", "conversation_id", "word")
def completion_faq():
import base64
@ -674,11 +675,13 @@ def completion_faq():
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(message="Conversation not found!")
if "quote" not in req: req["quote"] = True
if "quote" not in req:
req["quote"] = True
msg = []
msg.append({"role": "user", "content": req["word"]})
if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
if not msg[-1].get("id"):
msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
def fillin_conv(ans):
@ -799,7 +802,7 @@ def completion_faq():
return server_error_response(e)
@manager.route('/retrieval', methods=['POST'])
@manager.route('/retrieval', methods=['POST']) # noqa: F821
@validate_request("kb_id", "question")
def retrieval():
token = request.headers.get('Authorization').split()[1]

View File

@ -13,10 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
import traceback
from functools import partial
from flask import request, Response
from flask_login import login_required, current_user
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
@ -25,15 +23,16 @@ from api.utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
from agent.canvas import Canvas
from peewee import MySQLDatabase, PostgresqlDatabase
from api.db.db_models import APIToken
@manager.route('/templates', methods=['GET'])
@manager.route('/templates', methods=['GET']) # noqa: F821
@login_required
def templates():
return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.get_all()])
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def canvas_list():
return get_json_result(data=sorted([c.to_dict() for c in \
@ -41,7 +40,7 @@ def canvas_list():
)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@validate_request("canvas_ids")
@login_required
def rm():
@ -54,18 +53,19 @@ def rm():
return get_json_result(data=True)
@manager.route('/set', methods=['POST'])
@manager.route('/set', methods=['POST']) # noqa: F821
@validate_request("dsl", "title")
@login_required
def save():
req = request.json
req["user_id"] = current_user.id
if not isinstance(req["dsl"], str): req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
if not isinstance(req["dsl"], str):
req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
req["dsl"] = json.loads(req["dsl"])
if "id" not in req:
if UserCanvasService.query(user_id=current_user.id, title=req["title"].strip()):
return server_error_response(ValueError("Duplicated title."))
return get_data_error_result(f"{req['title'].strip()} already exists.")
req["id"] = get_uuid()
if not UserCanvasService.save(**req):
return get_data_error_result(message="Fail to save canvas.")
@ -78,7 +78,7 @@ def save():
return get_json_result(data=req)
@manager.route('/get/<canvas_id>', methods=['GET'])
@manager.route('/get/<canvas_id>', methods=['GET']) # noqa: F821
@login_required
def get(canvas_id):
e, c = UserCanvasService.get_by_id(canvas_id)
@ -86,8 +86,22 @@ def get(canvas_id):
return get_data_error_result(message="canvas not found.")
return get_json_result(data=c.to_dict())
@manager.route('/getsse/<canvas_id>', methods=['GET']) # type: ignore # noqa: F821
def getsse(canvas_id):
token = request.headers.get('Authorization').split()
if len(token) != 2:
return get_data_error_result(message='Authorization is not valid!"')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_data_error_result(message='Token is not valid!"')
e, c = UserCanvasService.get_by_id(canvas_id)
if not e:
return get_data_error_result(message="canvas not found.")
return get_json_result(data=c.to_dict())
@manager.route('/completion', methods=['POST'])
@manager.route('/completion', methods=['POST']) # noqa: F821
@validate_request("id")
@login_required
def run():
@ -153,7 +167,8 @@ def run():
return resp
for answer in canvas.run(stream=False):
if answer.get("running_status"): continue
if answer.get("running_status"):
continue
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
@ -163,7 +178,7 @@ def run():
return get_json_result(data={"answer": final_ans["content"], "reference": final_ans.get("reference", [])})
@manager.route('/reset', methods=['POST'])
@manager.route('/reset', methods=['POST']) # noqa: F821
@validate_request("id")
@login_required
def reset():
@ -186,7 +201,51 @@ def reset():
return server_error_response(e)
@manager.route('/test_db_connect', methods=['POST'])
@manager.route('/input_elements', methods=['GET']) # noqa: F821
@login_required
def input_elements():
cvs_id = request.args.get("id")
cpn_id = request.args.get("component_id")
try:
e, user_canvas = UserCanvasService.get_by_id(cvs_id)
if not e:
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=cvs_id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
return get_json_result(data=canvas.get_component_input_elements(cpn_id))
except Exception as e:
return server_error_response(e)
@manager.route('/debug', methods=['POST']) # noqa: F821
@validate_request("id", "component_id", "params")
@login_required
def debug():
req = request.json
for p in req["params"]:
assert p.get("key")
try:
e, user_canvas = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
canvas.get_component(req["component_id"])["obj"]._param.debug_inputs = req["params"]
df = canvas.get_component(req["component_id"])["obj"].debug()
return get_json_result(data=df.to_dict(orient="records"))
except Exception as e:
return server_error_response(e)
@manager.route('/test_db_connect', methods=['POST']) # noqa: F821
@validate_request("db_type", "database", "username", "host", "port", "password")
@login_required
def test_db_connect():
@ -198,8 +257,26 @@ def test_db_connect():
elif req["db_type"] == 'postgresql':
db = PostgresqlDatabase(req["database"], user=req["username"], host=req["host"], port=req["port"],
password=req["password"])
db.connect()
elif req["db_type"] == 'mssql':
import pyodbc
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={req['host']},{req['port']};"
f"DATABASE={req['database']};"
f"UID={req['username']};"
f"PWD={req['password']};"
)
db = pyodbc.connect(connection_string)
cursor = db.cursor()
cursor.execute("SELECT 1")
cursor.close()
else:
return server_error_response("Unsupported database type.")
if req["db_type"] != 'mssql':
db.connect()
db.close()
return get_json_result(data="Database Connection Successful!")
except Exception as e:
return server_error_response(e)
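A hypothetical client-side sketch of the new `/debug` endpoint (the URL prefix, component id, and session handling are assumptions; the request must be authenticated as the canvas owner):
```python
import requests

payload = {
    "id": "<canvas_id>",
    "component_id": "Generate:0",  # hypothetical component id
    "params": [{"key": "user", "value": "What is RAGFlow?"}],
}
# The session must carry a logged-in cookie; the /v1/canvas prefix is assumed.
resp = requests.post("http://IP_OF_YOUR_MACHINE/v1/canvas/debug", json=payload)
print(resp.json())  # records returned by the component's debug() run
```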

View File

@ -31,11 +31,11 @@ from api.utils.api_utils import server_error_response, get_data_error_result, va
from api.db.services.document_service import DocumentService
from api import settings
from api.utils.api_utils import get_json_result
import hashlib
import xxhash
import re
@manager.route('/list', methods=['POST'])
@manager.route('/list', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id")
def list_chunk():
@ -68,9 +68,10 @@ def list_chunk():
"doc_id": sres.field[id]["doc_id"],
"docnm_kwd": sres.field[id]["docnm_kwd"],
"important_kwd": sres.field[id].get("important_kwd", []),
"question_kwd": sres.field[id].get("question_kwd", []),
"image_id": sres.field[id].get("img_id", ""),
"available_int": sres.field[id].get("available_int", 1),
"positions": json.loads(sres.field[id].get("position_list", "[]")),
"available_int": int(sres.field[id].get("available_int", 1)),
"positions": sres.field[id].get("position_int", []),
}
assert isinstance(d["positions"], list)
assert len(d["positions"]) == 0 or (isinstance(d["positions"][0], list) and len(d["positions"][0]) == 5)
@ -83,7 +84,7 @@ def list_chunk():
return server_error_response(e)
@manager.route('/get', methods=['GET'])
@manager.route('/get', methods=['GET']) # noqa: F821
@login_required
def get():
chunk_id = request.args["chunk_id"]
@ -112,10 +113,10 @@ def get():
return server_error_response(e)
@manager.route('/set', methods=['POST'])
@manager.route('/set', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id", "chunk_id", "content_with_weight",
"important_kwd")
"important_kwd", "question_kwd")
def set():
req = request.json
d = {
@ -125,6 +126,8 @@ def set():
d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
d["important_kwd"] = req["important_kwd"]
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_kwd"]))
d["question_kwd"] = req["question_kwd"]
d["question_tks"] = rag_tokenizer.tokenize("\n".join(req["question_kwd"]))
if "available_int" in req:
d["available_int"] = req["available_int"]
@ -152,7 +155,7 @@ def set():
d = beAdoc(d, arr[0], arr[1], not any(
[rag_tokenizer.is_chinese(t) for t in q + a]))
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v, c = embd_mdl.encode([doc.name, req["content_with_weight"] if not d["question_kwd"] else "\n".join(d["question_kwd"])])
v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
d["q_%d_vec" % len(v)] = v.tolist()
settings.docStoreConn.update({"id": req["chunk_id"]}, d, search.index_name(tenant_id), doc.kb_id)
@ -161,7 +164,7 @@ def set():
return server_error_response(e)
@manager.route('/switch', methods=['POST'])
@manager.route('/switch', methods=['POST']) # noqa: F821
@login_required
@validate_request("chunk_ids", "available_int", "doc_id")
def switch():
@ -181,7 +184,7 @@ def switch():
return server_error_response(e)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
@validate_request("chunk_ids", "doc_id")
def rm():
@ -200,19 +203,19 @@ def rm():
return server_error_response(e)
@manager.route('/create', methods=['POST'])
@manager.route('/create', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id", "content_with_weight")
def create():
req = request.json
md5 = hashlib.md5()
md5.update((req["content_with_weight"] + req["doc_id"]).encode("utf-8"))
chunck_id = md5.hexdigest()
chunck_id = xxhash.xxh64((req["content_with_weight"] + req["doc_id"]).encode("utf-8")).hexdigest()
d = {"id": chunck_id, "content_ltks": rag_tokenizer.tokenize(req["content_with_weight"]),
"content_with_weight": req["content_with_weight"]}
d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
d["important_kwd"] = req.get("important_kwd", [])
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req.get("important_kwd", [])))
d["question_kwd"] = req.get("question_kwd", [])
d["question_tks"] = rag_tokenizer.tokenize("\n".join(req.get("question_kwd", [])))
d["create_time"] = str(datetime.datetime.now()).replace("T", " ")[:19]
d["create_timestamp_flt"] = datetime.datetime.now().timestamp()
@ -222,16 +225,23 @@ def create():
return get_data_error_result(message="Document not found!")
d["kb_id"] = [doc.kb_id]
d["docnm_kwd"] = doc.name
d["title_tks"] = rag_tokenizer.tokenize(doc.name)
d["doc_id"] = doc.id
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(message="Tenant not found!")
e, kb = KnowledgebaseService.get_by_id(doc.kb_id)
if not e:
return get_data_error_result(message="Knowledgebase not found!")
if kb.pagerank:
d["pagerank_fea"] = kb.pagerank
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING.value, embd_id)
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v, c = embd_mdl.encode([doc.name, req["content_with_weight"] if not d["question_kwd"] else "\n".join(d["question_kwd"])])
v = 0.1 * v[0] + 0.9 * v[1]
d["q_%d_vec" % len(v)] = v.tolist()
settings.docStoreConn.insert([d], search.index_name(tenant_id), doc.kb_id)
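
Both the /set and /create hunks now embed the joined question keywords in place of the raw content whenever question_kwd is non-empty, then blend the title vector and the body vector 1:9. A sketch of that blending step, assuming (as the `v, c = embd_mdl.encode([...])` unpacking suggests) the model returns one vector per input text plus a token count:

import numpy as np

def blend(title_vec: np.ndarray, body_vec: np.ndarray) -> list:
    # 10% document title, 90% chunk body (or joined questions), as above.
    return (0.1 * title_vec + 0.9 * body_vec).tolist()

print(blend(np.ones(4), np.zeros(4)))  # [0.1, 0.1, 0.1, 0.1]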
@ -243,7 +253,7 @@ def create():
return server_error_response(e)
@manager.route('/retrieval_test', methods=['POST'])
@manager.route('/retrieval_test', methods=['POST']) # noqa: F821
@login_required
@validate_request("kb_id", "question")
def retrieval_test():
@ -302,7 +312,7 @@ def retrieval_test():
return server_error_response(e)
@manager.route('/knowledge_graph', methods=['GET'])
@manager.route('/knowledge_graph', methods=['GET']) # noqa: F821
@login_required
def knowledge_graph():
doc_id = request.args["doc_id"]

View File

@ -17,12 +17,15 @@ import json
import re
import traceback
from copy import deepcopy
from api.db.db_models import APIToken
from api.db.services.conversation_service import ConversationService, structure_answer
from api.db.services.user_service import UserTenantService
from flask import request, Response
from flask_login import login_required, current_user
from api.db import LLMType
from api.db.services.dialog_service import DialogService, ConversationService, chat, ask
from api.db.services.dialog_service import DialogService, chat, ask
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle, TenantService, TenantLLMService
from api import settings
@ -30,8 +33,7 @@ from api.utils.api_utils import get_json_result
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from graphrag.mind_map_extractor import MindMapExtractor
@manager.route('/set', methods=['POST'])
@manager.route('/set', methods=['POST']) # noqa: F821
@login_required
def set_conversation():
req = request.json
@ -72,29 +74,71 @@ def set_conversation():
return server_error_response(e)
@manager.route('/get', methods=['GET'])
@manager.route('/get', methods=['GET']) # noqa: F821
@login_required
def get():
conv_id = request.args["conversation_id"]
try:
e, conv = ConversationService.get_by_id(conv_id)
if not e:
return get_data_error_result(message="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
avatar = None
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
dialog = DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id)
if dialog and len(dialog) > 0:
avatar = dialog[0].icon
break
else:
return get_json_result(
data=False, message='Only owner of conversation authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
def get_value(d, k1, k2):
return d.get(k1, d.get(k2))
for ref in conv.reference:
if isinstance(ref, list):
continue
ref["chunks"] = [{
"id": get_value(ck, "chunk_id", "id"),
"content": get_value(ck, "content", "content_with_weight"),
"document_id": get_value(ck, "doc_id", "document_id"),
"document_name": get_value(ck, "docnm_kwd", "document_name"),
"dataset_id": get_value(ck, "kb_id", "dataset_id"),
"image_id": get_value(ck, "image_id", "img_id"),
"positions": get_value(ck, "positions", "position_int"),
} for ck in ref.get("chunks", [])]
conv = conv.to_dict()
conv["avatar"]=avatar
return get_json_result(data=conv)
except Exception as e:
return server_error_response(e)
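
Stored references may carry either the internal field names (chunk_id, content_with_weight, ...) or the already-renamed SDK names (id, content, ...), so the nested get_value helper reads whichever spelling is present; the same helper reappears in /completion below. A standalone sketch with the key pairs taken from the handler above:

def get_value(d, k1, k2):
    # Return the value under k1 if present, otherwise under k2.
    return d.get(k1, d.get(k2))

def normalize_chunk(ck):
    return {
        "id": get_value(ck, "chunk_id", "id"),
        "content": get_value(ck, "content", "content_with_weight"),
        "document_id": get_value(ck, "doc_id", "document_id"),
        "document_name": get_value(ck, "docnm_kwd", "document_name"),
        "dataset_id": get_value(ck, "kb_id", "dataset_id"),
        "image_id": get_value(ck, "image_id", "img_id"),
        "positions": get_value(ck, "positions", "position_int"),
    }

print(normalize_chunk({"chunk_id": "c1", "content_with_weight": "text"})["id"])  # c1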
@manager.route('/getsse/<dialog_id>', methods=['GET']) # type: ignore # noqa: F821
def getsse(dialog_id):
token = request.headers.get('Authorization').split()
if len(token) != 2:
return get_data_error_result(message='Authorization is not valid!')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_data_error_result(message='Token is not valid!')
try:
e, conv = DialogService.get_by_id(dialog_id)
if not e:
return get_data_error_result(message="Dialog not found!")
conv = conv.to_dict()
conv["avatar"]= conv["icon"]
del conv["icon"]
return get_json_result(data=conv)
except Exception as e:
return server_error_response(e)
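
The new getsse route skips @login_required and instead authenticates with the beta token of an API key, so an embedding page can fetch a dialog's public profile. A hypothetical client call, assuming a local instance and placeholder IDs (the /v1/conversation prefix is an assumption about how this app is mounted):

import requests

resp = requests.get(
    "http://localhost:9380/v1/conversation/getsse/<dialog_id>",  # hypothetical URL
    headers={"Authorization": "Bearer <beta-token>"},  # the handler only reads the second token
)
print(resp.json())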
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
def rm():
conv_ids = request.json["conversation_ids"]
@ -117,7 +161,7 @@ def rm():
return server_error_response(e)
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_convsersation():
dialog_id = request.args["dialog_id"]
@ -130,13 +174,14 @@ def list_convsersation():
dialog_id=dialog_id,
order_by=ConversationService.model.create_time,
reverse=True)
convs = [d.to_dict() for d in convs]
return get_json_result(data=convs)
except Exception as e:
return server_error_response(e)
@manager.route('/completion', methods=['POST'])
@manager.route('/completion', methods=['POST']) # noqa: F821
@login_required
@validate_request("conversation_id", "messages")
def completion():
@ -162,24 +207,31 @@ def completion():
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
else:
def get_value(d, k1, k2):
return d.get(k1, d.get(k2))
for ref in conv.reference:
if isinstance(ref, list):
continue
ref["chunks"] = [{
"id": get_value(ck, "chunk_id", "id"),
"content": get_value(ck, "content", "content_with_weight"),
"document_id": get_value(ck, "doc_id", "document_id"),
"document_name": get_value(ck, "docnm_kwd", "document_name"),
"dataset_id": get_value(ck, "kb_id", "dataset_id"),
"image_id": get_value(ck, "image_id", "img_id"),
"positions": get_value(ck, "positions", "position_int"),
} for ck in ref.get("chunks", [])]
if not conv.reference:
conv.reference = []
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"],
"id": message_id, "prompt": ans.get("prompt", "")}
ans["id"] = message_id
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, True, **req):
fillin_conv(ans)
ans = structure_answer(conv, ans, message_id, conv.id)
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
@ -200,8 +252,7 @@ def completion():
else:
answer = None
for ans in chat(dia, msg, **req):
answer = ans
fillin_conv(ans)
answer = structure_answer(conv, ans, message_id, req["conversation_id"])
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_json_result(data=answer)
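
With streaming enabled, the handler emits "data: <json>\n\n" frames and, after the refactor, persists each partial answer through structure_answer. A minimal client-side sketch for consuming that stream, assuming a placeholder endpoint, a logged-in session, and the convention that a final frame whose data is true terminates the stream:

import json
import requests

payload = {"conversation_id": "<id>", "messages": [{"role": "user", "content": "hi"}]}
with requests.post("http://localhost:9380/v1/conversation/completion",  # hypothetical URL
                   json=payload, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue  # skip the blank separators between SSE frames
        frame = json.loads(line[len("data:"):])
        if frame["data"] is True:
            break  # terminating sentinel frame
        print(frame["data"]["answer"])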
@ -209,7 +260,7 @@ def completion():
return server_error_response(e)
@manager.route('/tts', methods=['POST'])
@manager.route('/tts', methods=['POST']) # noqa: F821
@login_required
def tts():
req = request.json
@ -243,7 +294,7 @@ def tts():
return resp
@manager.route('/delete_msg', methods=['POST'])
@manager.route('/delete_msg', methods=['POST']) # noqa: F821
@login_required
@validate_request("conversation_id", "message_id")
def delete_msg():
@ -266,7 +317,7 @@ def delete_msg():
return get_json_result(data=conv)
@manager.route('/thumbup', methods=['POST'])
@manager.route('/thumbup', methods=['POST']) # noqa: F821
@login_required
@validate_request("conversation_id", "message_id")
def thumbup():
@ -281,17 +332,19 @@ def thumbup():
if req["message_id"] == msg.get("id", "") and msg.get("role", "") == "assistant":
if up_down:
msg["thumbup"] = True
if "feedback" in msg: del msg["feedback"]
if "feedback" in msg:
del msg["feedback"]
else:
msg["thumbup"] = False
if feedback: msg["feedback"] = feedback
if feedback:
msg["feedback"] = feedback
break
ConversationService.update_by_id(conv["id"], conv)
return get_json_result(data=conv)
@manager.route('/ask', methods=['POST'])
@manager.route('/ask', methods=['POST']) # noqa: F821
@login_required
@validate_request("question", "kb_ids")
def ask_about():
@ -317,7 +370,7 @@ def ask_about():
return resp
@manager.route('/mindmap', methods=['POST'])
@manager.route('/mindmap', methods=['POST']) # noqa: F821
@login_required
@validate_request("question", "kb_ids")
def mindmap():
@ -339,7 +392,7 @@ def mindmap():
return get_json_result(data=mind_map)
@manager.route('/related_questions', methods=['POST'])
@manager.route('/related_questions', methods=['POST']) # noqa: F821
@login_required
@validate_request("question")
def related_questions():

View File

@ -26,21 +26,21 @@ from api.utils import get_uuid
from api.utils.api_utils import get_json_result
@manager.route('/set', methods=['POST'])
@manager.route('/set', methods=['POST']) # noqa: F821
@login_required
def set_dialog():
req = request.json
dialog_id = req.get("dialog_id")
name = req.get("name", "New Dialog")
description = req.get("description", "A helpful Dialog")
description = req.get("description", "A helpful dialog")
icon = req.get("icon", "")
top_n = req.get("top_n", 6)
top_k = req.get("top_k", 1024)
rerank_id = req.get("rerank_id", "")
if not rerank_id: req["rerank_id"] = ""
if not rerank_id:
req["rerank_id"] = ""
similarity_threshold = req.get("similarity_threshold", 0.1)
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
if vector_similarity_weight is None: vector_similarity_weight = 0.3
llm_setting = req.get("llm_setting", {})
default_prompt = {
"system": """你是一个智能助手,请总结知识库的内容来回答问题,请列举知识库中的数据详细回答。当所有知识库内容都与问题无关时,你的回答必须包括“知识库中未找到您要的答案!”这句话。回答需要考虑聊天历史。
@ -123,7 +123,7 @@ def set_dialog():
return server_error_response(e)
@manager.route('/get', methods=['GET'])
@manager.route('/get', methods=['GET']) # noqa: F821
@login_required
def get():
dialog_id = request.args["dialog_id"]
@ -149,7 +149,7 @@ def get_kb_names(kb_ids):
return ids, nms
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_dialogs():
try:
@ -166,7 +166,7 @@ def list_dialogs():
return server_error_response(e)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
@validate_request("dialog_ids")
def rm():

View File

@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License
#
import json
import os.path
import pathlib
import re
@ -22,19 +21,25 @@ import flask
from flask import request
from flask_login import login_required, current_user
from api.db.db_models import Task, File
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.task_service import TaskService, queue_tasks
from api.db.services.user_service import UserTenantService
from deepdoc.parser.html_parser import RAGFlowHtmlParser
from rag.nlp import search
from api.db import FileType, TaskStatus, ParserType, FileSource
from api.db.db_models import File, Task
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.task_service import queue_tasks
from api.db.services.user_service import UserTenantService
from api.db.services import duplicate_name
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.db import FileType, TaskStatus, ParserType, FileSource
from api.db.services.task_service import TaskService
from api.db.services.document_service import DocumentService, doc_upload_and_parse
from api.utils.api_utils import (
server_error_response,
get_data_error_result,
validate_request,
)
from api.utils import get_uuid
from api import settings
from api.utils.api_utils import get_json_result
from rag.utils.storage_factory import STORAGE_IMPL
@ -43,7 +48,7 @@ from api.utils.web_utils import html2pdf, is_valid_url
from api.constants import IMG_BASE64_PREFIX
@manager.route('/upload', methods=['POST'])
@manager.route('/upload', methods=['POST']) # noqa: F821
@login_required
@validate_request("kb_id")
def upload():
@ -72,7 +77,7 @@ def upload():
return get_json_result(data=True)
@manager.route('/web_crawl', methods=['POST'])
@manager.route('/web_crawl', methods=['POST']) # noqa: F821
@login_required
@validate_request("kb_id", "name", "url")
def web_crawl():
@ -90,7 +95,8 @@ def web_crawl():
raise LookupError("Can't find this knowledgebase!")
blob = html2pdf(url)
if not blob: return server_error_response(ValueError("Download failure."))
if not blob:
return server_error_response(ValueError("Download failure."))
root_folder = FileService.get_root_folder(current_user.id)
pf_id = root_folder["id"]
@ -138,7 +144,7 @@ def web_crawl():
return get_json_result(data=True)
@manager.route('/create', methods=['POST'])
@manager.route('/create', methods=['POST']) # noqa: F821
@login_required
@validate_request("name", "kb_id")
def create():
@ -174,7 +180,7 @@ def create():
return server_error_response(e)
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_docs():
kb_id = request.args.get("kb_id")
@ -209,7 +215,7 @@ def list_docs():
return server_error_response(e)
@manager.route('/infos', methods=['POST'])
@manager.route('/infos', methods=['POST']) # noqa: F821
@login_required
def docinfos():
req = request.json
@ -225,7 +231,7 @@ def docinfos():
return get_json_result(data=list(docs.dicts()))
@manager.route('/thumbnails', methods=['GET'])
@manager.route('/thumbnails', methods=['GET']) # noqa: F821
# @login_required
def thumbnails():
doc_ids = request.args.get("doc_ids").split(",")
@ -245,7 +251,7 @@ def thumbnails():
return server_error_response(e)
@manager.route('/change_status', methods=['POST'])
@manager.route('/change_status', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id", "status")
def change_status():
@ -284,13 +290,14 @@ def change_status():
return server_error_response(e)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id")
def rm():
req = request.json
doc_ids = req["doc_id"]
if isinstance(doc_ids, str): doc_ids = [doc_ids]
if isinstance(doc_ids, str):
doc_ids = [doc_ids]
for doc_id in doc_ids:
if not DocumentService.accessible4deletion(doc_id, current_user.id):
@ -315,6 +322,7 @@ def rm():
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
TaskService.filter_delete([Task.doc_id == doc_id])
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
message="Database error (Document removal)!")
@ -333,7 +341,7 @@ def rm():
return get_json_result(data=True)
@manager.route('/run', methods=['POST'])
@manager.route('/run', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_ids", "run")
def run():
@ -348,23 +356,23 @@ def run():
try:
for id in req["doc_ids"]:
info = {"run": str(req["run"]), "progress": 0}
if str(req["run"]) == TaskStatus.RUNNING.value:
if str(req["run"]) == TaskStatus.RUNNING.value and req.get("delete", False):
info["progress_msg"] = ""
info["chunk_num"] = 0
info["token_num"] = 0
DocumentService.update_by_id(id, info)
# if str(req["run"]) == TaskStatus.CANCEL.value:
tenant_id = DocumentService.get_tenant_id(id)
if not tenant_id:
return get_data_error_result(message="Tenant not found!")
e, doc = DocumentService.get_by_id(id)
if not e:
return get_data_error_result(message="Document not found!")
if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
settings.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), doc.kb_id)
if req.get("delete", False):
TaskService.filter_delete([Task.doc_id == id])
if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
settings.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), doc.kb_id)
if str(req["run"]) == TaskStatus.RUNNING.value:
TaskService.filter_delete([Task.doc_id == id])
e, doc = DocumentService.get_by_id(id)
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
@ -376,7 +384,7 @@ def run():
return server_error_response(e)
@manager.route('/rename', methods=['POST'])
@manager.route('/rename', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id", "name")
def rename():
@ -417,7 +425,7 @@ def rename():
return server_error_response(e)
@manager.route('/get/<doc_id>', methods=['GET'])
@manager.route('/get/<doc_id>', methods=['GET']) # noqa: F821
# @login_required
def get(doc_id):
try:
@ -442,7 +450,7 @@ def get(doc_id):
return server_error_response(e)
@manager.route('/change_parser', methods=['POST'])
@manager.route('/change_parser', methods=['POST']) # noqa: F821
@login_required
@validate_request("doc_id", "parser_id")
def change_parser():
@ -493,10 +501,13 @@ def change_parser():
return server_error_response(e)
@manager.route('/image/<image_id>', methods=['GET'])
@manager.route('/image/<image_id>', methods=['GET']) # noqa: F821
# @login_required
def get_image(image_id):
try:
arr = image_id.split("-")
if len(arr) != 2:
return get_data_error_result(message="Image not found.")
bkt, nm = image_id.split("-")
response = flask.make_response(STORAGE_IMPL.get(bkt, nm))
response.headers.set('Content-Type', 'image/JPEG')
@ -505,7 +516,7 @@ def get_image(image_id):
return server_error_response(e)
@manager.route('/upload_and_parse', methods=['POST'])
@manager.route('/upload_and_parse', methods=['POST']) # noqa: F821
@login_required
@validate_request("conversation_id")
def upload_and_parse():
@ -524,7 +535,7 @@ def upload_and_parse():
return get_json_result(data=doc_ids)
@manager.route('/parse', methods=['POST'])
@manager.route('/parse', methods=['POST']) # noqa: F821
@login_required
def parse():
url = request.json.get("url") if request.json else ""
@ -548,7 +559,7 @@ def parse():
})
driver = Chrome(options=options)
driver.get(url)
res_headers = [r.response.headers for r in driver.requests]
res_headers = [r.response.headers for r in driver.requests if r and r.response]
if len(res_headers) > 1:
sections = RAGFlowHtmlParser().parser_txt(driver.page_source)
driver.quit()
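
driver.requests in this hunk comes from selenium-wire's Chrome wrapper (plain Selenium exposes no such attribute), and the added `if r and r.response` guard skips requests that never received a response instead of crashing on a None. A reduced sketch of the same capture, assuming selenium-wire and a local chromedriver are installed:

from selenium.webdriver.chrome.options import Options
from seleniumwire import webdriver  # pip install selenium-wire

options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    # Only requests that actually completed carry a response object.
    headers = [r.response.headers for r in driver.requests if r and r.response]
    print(len(headers))
finally:
    driver.quit()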

View File

@ -28,7 +28,7 @@ from api import settings
from api.utils.api_utils import get_json_result
@manager.route('/convert', methods=['POST'])
@manager.route('/convert', methods=['POST']) # noqa: F821
@login_required
@validate_request("file_ids", "kb_ids")
def convert():
@ -92,7 +92,7 @@ def convert():
return server_error_response(e)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
@validate_request("file_ids")
def rm():

View File

@ -34,7 +34,7 @@ from api.utils.file_utils import filename_type
from rag.utils.storage_factory import STORAGE_IMPL
@manager.route('/upload', methods=['POST'])
@manager.route('/upload', methods=['POST']) # noqa: F821
@login_required
# @validate_request("parent_id")
def upload():
@ -120,7 +120,7 @@ def upload():
return server_error_response(e)
@manager.route('/create', methods=['POST'])
@manager.route('/create', methods=['POST']) # noqa: F821
@login_required
@validate_request("name")
def create():
@ -160,7 +160,7 @@ def create():
return server_error_response(e)
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_files():
pf_id = request.args.get("parent_id")
@ -192,7 +192,7 @@ def list_files():
return server_error_response(e)
@manager.route('/root_folder', methods=['GET'])
@manager.route('/root_folder', methods=['GET']) # noqa: F821
@login_required
def get_root_folder():
try:
@ -202,7 +202,7 @@ def get_root_folder():
return server_error_response(e)
@manager.route('/parent_folder', methods=['GET'])
@manager.route('/parent_folder', methods=['GET']) # noqa: F821
@login_required
def get_parent_folder():
file_id = request.args.get("file_id")
@ -217,7 +217,7 @@ def get_parent_folder():
return server_error_response(e)
@manager.route('/all_parent_folder', methods=['GET'])
@manager.route('/all_parent_folder', methods=['GET']) # noqa: F821
@login_required
def get_all_parent_folders():
file_id = request.args.get("file_id")
@ -235,7 +235,7 @@ def get_all_parent_folders():
return server_error_response(e)
@manager.route('/rm', methods=['POST'])
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
@validate_request("file_ids")
def rm():
@ -284,7 +284,7 @@ def rm():
return server_error_response(e)
@manager.route('/rename', methods=['POST'])
@manager.route('/rename', methods=['POST']) # noqa: F821
@login_required
@validate_request("file_id", "name")
def rename():
@ -322,15 +322,20 @@ def rename():
return server_error_response(e)
@manager.route('/get/<file_id>', methods=['GET'])
# @login_required
@manager.route('/get/<file_id>', methods=['GET']) # noqa: F821
@login_required
def get(file_id):
try:
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(message="Document not found!")
b, n = File2DocumentService.get_storage_address(file_id=file_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
blob = STORAGE_IMPL.get(file.parent_id, file.location)
if not blob:
b, n = File2DocumentService.get_storage_address(file_id=file_id)
blob = STORAGE_IMPL.get(b, n)
response = flask.make_response(blob)
ext = re.search(r"\.([^.]+)$", file.name)
if ext:
if file.type == FileType.VISUAL.value:
@ -345,7 +350,7 @@ def get(file_id):
return server_error_response(e)
@manager.route('/mv', methods=['POST'])
@manager.route('/mv', methods=['POST']) # noqa: F821
@login_required
@validate_request("src_file_ids", "dest_file_id")
def move():

View File

@ -21,7 +21,7 @@ from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.user_service import TenantService, UserTenantService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request, not_allowed_parameters
from api.utils import get_uuid
from api.db import StatusEnum, FileSource
from api.db.services.knowledgebase_service import KnowledgebaseService
@ -32,7 +32,7 @@ from rag.nlp import search
from api.constants import DATASET_NAME_LIMIT
@manager.route('/create', methods=['post'])
@manager.route('/create', methods=['post']) # noqa: F821
@login_required
@validate_request("name")
def create():
@ -67,9 +67,10 @@ def create():
return server_error_response(e)
@manager.route('/update', methods=['post'])
@manager.route('/update', methods=['post']) # noqa: F821
@login_required
@validate_request("kb_id", "name", "description", "permission", "parser_id")
@not_allowed_parameters("id", "tenant_id", "created_by", "create_time", "update_time", "create_date", "update_date", "created_by")
def update():
req = request.json
req["name"] = req["name"].strip()
@ -101,6 +102,15 @@ def update():
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_data_error_result()
if kb.pagerank != req.get("pagerank", 0):
if req.get("pagerank", 0) > 0:
settings.docStoreConn.update({"kb_id": kb.id}, {"pagerank_fea": req["pagerank"]},
search.index_name(kb.tenant_id), kb.id)
else:
# Elasticsearch requires pagerank_fea be non-zero!
settings.docStoreConn.update({"exist": "pagerank_fea"}, {"remove": "pagerank_fea"},
search.index_name(kb.tenant_id), kb.id)
e, kb = KnowledgebaseService.get_by_id(kb.id)
if not e:
return get_data_error_result(
@ -111,7 +121,7 @@ def update():
return server_error_response(e)
@manager.route('/detail', methods=['GET'])
@manager.route('/detail', methods=['GET']) # noqa: F821
@login_required
def detail():
kb_id = request.args["kb_id"]
@ -134,7 +144,7 @@ def detail():
return server_error_response(e)
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_kbs():
keywords = request.args.get("keywords", "")
@ -151,7 +161,7 @@ def list_kbs():
return server_error_response(e)
@manager.route('/rm', methods=['post'])
@manager.route('/rm', methods=['post']) # noqa: F821
@login_required
@validate_request("kb_id")
def rm():

View File

@ -28,7 +28,7 @@ from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel
import requests
@manager.route('/factories', methods=['GET'])
@manager.route('/factories', methods=['GET']) # noqa: F821
@login_required
def factories():
try:
@ -50,7 +50,7 @@ def factories():
return server_error_response(e)
@manager.route('/set_api_key', methods=['POST'])
@manager.route('/set_api_key', methods=['POST']) # noqa: F821
@login_required
@validate_request("llm_factory", "api_key")
def set_api_key():
@ -129,7 +129,7 @@ def set_api_key():
return get_json_result(data=True)
@manager.route('/add_llm', methods=['POST'])
@manager.route('/add_llm', methods=['POST']) # noqa: F821
@login_required
@validate_request("llm_factory")
def add_llm():
@ -216,7 +216,7 @@ def add_llm():
base_url=llm["api_base"])
try:
arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0 or tc == 0:
if len(arr[0]) == 0:
raise Exception("Fail")
except Exception as e:
msg += f"\nFail to access embedding model({llm['llm_name']})." + str(e)
@ -242,7 +242,7 @@ def add_llm():
)
try:
arr, tc = mdl.similarity("Hello~ Ragflower!", ["Hi, there!", "Ohh, my friend!"])
if len(arr) == 0 or tc == 0:
if len(arr) == 0:
raise Exception("Not known.")
except Exception as e:
msg += f"\nFail to access model({llm['llm_name']})." + str(
@ -255,9 +255,7 @@ def add_llm():
)
try:
img_url = (
"https://upload.wikimedia.org/wikipedia/comm"
"ons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/256"
"0px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
"https://www.8848seo.cn/zb_users/upload/2022/07/20220705101240_99378.jpg"
)
res = requests.get(img_url)
if res.status_code == 200:
@ -292,7 +290,7 @@ def add_llm():
return get_json_result(data=True)
@manager.route('/delete_llm', methods=['POST'])
@manager.route('/delete_llm', methods=['POST']) # noqa: F821
@login_required
@validate_request("llm_factory", "llm_name")
def delete_llm():
@ -303,7 +301,7 @@ def delete_llm():
return get_json_result(data=True)
@manager.route('/delete_factory', methods=['POST'])
@manager.route('/delete_factory', methods=['POST']) # noqa: F821
@login_required
@validate_request("llm_factory")
def delete_factory():
@ -313,7 +311,7 @@ def delete_factory():
return get_json_result(data=True)
@manager.route('/my_llms', methods=['GET'])
@manager.route('/my_llms', methods=['GET']) # noqa: F821
@login_required
def my_llms():
try:
@ -334,7 +332,7 @@ def my_llms():
return server_error_response(e)
@manager.route('/list', methods=['GET'])
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_app():
self_deploied = ["Youdao", "FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio"]
@ -351,8 +349,10 @@ def list_app():
llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms])
for o in objs:
if not o.api_key: continue
if o.llm_name + "@" + o.llm_factory in llm_set: continue
if not o.api_key:
continue
if o.llm_name + "@" + o.llm_factory in llm_set:
continue
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True})
res = {}

api/apps/sdk/agent.py (new file, 39 lines)
View File

@ -0,0 +1,39 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from api.db.services.canvas_service import UserCanvasService
from api.utils.api_utils import get_error_data_result, token_required
from api.utils.api_utils import get_result
from flask import request
@manager.route('/agents', methods=['GET']) # noqa: F821
@token_required
def list_agents(tenant_id):
id = request.args.get("id")
title = request.args.get("title")
if id or title:
canvas = UserCanvasService.query(id=id, title=title, user_id=tenant_id)
if not canvas:
return get_error_data_result("The agent doesn't exist.")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "update_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
canvas = UserCanvasService.get_list(tenant_id,page_number,items_per_page,orderby,desc,id,title)
return get_result(data=canvas)
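
list_agents answers a lookup when id or title is given and otherwise pages through the tenant's canvases, defaulting desc to True unless the query string says "False"/"false". A hypothetical call, assuming the SDK routes are mounted under /api/v1 and a placeholder API key:

import requests

resp = requests.get(
    "http://localhost:9380/api/v1/agents",  # hypothetical base URL
    headers={"Authorization": "Bearer <api-key>"},
    params={"page": 1, "page_size": 30, "orderby": "update_time", "desc": "true"},
)
print(resp.json()["data"])  # canvases owned by the token's tenant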

View File

@ -26,7 +26,7 @@ from api.utils.api_utils import get_result
@manager.route('/chats', methods=['POST'])
@manager.route('/chats', methods=['POST']) # noqa: F821
@token_required
def create(tenant_id):
req=request.json
@ -82,7 +82,8 @@ def create(tenant_id):
req["top_k"] = req.get("top_k", 1024)
req["rerank_id"] = req.get("rerank_id", "")
if req.get("rerank_id"):
if not TenantLLMService.query(tenant_id=tenant_id,llm_name=req.get("rerank_id"),model_type="rerank"):
value_rerank_model = ["BAAI/bge-reranker-v2-m3","maidalun1020/bce-reranker-base_v1"]
if req["rerank_id"] not in value_rerank_model and not TenantLLMService.query(tenant_id=tenant_id,llm_name=req.get("rerank_id"),model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_id')} doesn't exist")
if not req.get("llm_id"):
req["llm_id"] = tenant.llm_id
@ -104,9 +105,12 @@ def create(tenant_id):
"parameters": [
{"key": "knowledge", "optional": False}
],
"empty_response": "Sorry! No relevant content was found in the knowledge base!"
"empty_response": "Sorry! No relevant content was found in the knowledge base!",
"quote":True,
"tts":False,
"refine_multiturn":True
}
key_list_2 = ["system", "prologue", "parameters", "empty_response"]
key_list_2 = ["system", "prologue", "parameters", "empty_response","quote","tts","refine_multiturn"]
if "prompt_config" not in req:
req['prompt_config'] = {}
for key in key_list_2:
@ -147,7 +151,7 @@ def create(tenant_id):
res["avatar"] = res.pop("icon")
return get_result(data=res)
@manager.route('/chats/<chat_id>', methods=['PUT'])
@manager.route('/chats/<chat_id>', methods=['PUT']) # noqa: F821
@token_required
def update(tenant_id,chat_id):
if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
@ -158,10 +162,10 @@ def update(tenant_id,chat_id):
req["do_refer"]=req.pop("show_quotation")
if "dataset_ids" in req:
if not ids:
return get_error_data_result("`datasets` can't be empty")
return get_error_data_result("`dataset_ids` can't be empty")
if ids:
for kb_id in ids:
kbs = KnowledgebaseService.accessible(kb_id=chat_id, user_id=tenant_id)
kbs = KnowledgebaseService.accessible(kb_id=kb_id, user_id=tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {kb_id}")
kbs = KnowledgebaseService.query(id=kb_id)
@ -185,9 +189,6 @@ def update(tenant_id,chat_id):
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_error_data_result(message="Tenant not found!")
if req.get("rerank_model"):
if not TenantLLMService.query(tenant_id=tenant_id,llm_name=req.get("rerank_model"),model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_model')} doesn't exist")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
@ -207,6 +208,10 @@ def update(tenant_id,chat_id):
req["prompt_config"] = req.pop("prompt")
e, res = DialogService.get_by_id(chat_id)
res = res.to_json()
if req.get("rerank_id"):
value_rerank_model = ["BAAI/bge-reranker-v2-m3","maidalun1020/bce-reranker-base_v1"]
if req["rerank_id"] not in value_rerank_model and not TenantLLMService.query(tenant_id=tenant_id,llm_name=req.get("rerank_id"),model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_id')} doesn't exist")
if "name" in req:
if not req.get("name"):
return get_error_data_result(message="`name` is not empty.")
@ -235,7 +240,7 @@ def update(tenant_id,chat_id):
return get_result()
@manager.route('/chats', methods=['DELETE'])
@manager.route('/chats', methods=['DELETE']) # noqa: F821
@token_required
def delete(tenant_id):
req = request.json
@ -257,14 +262,15 @@ def delete(tenant_id):
DialogService.update_by_id(id, temp_dict)
return get_result()
@manager.route('/chats', methods=['GET'])
@manager.route('/chats', methods=['GET']) # noqa: F821
@token_required
def list_chat(tenant_id):
id = request.args.get("id")
name = request.args.get("name")
chat = DialogService.query(id=id,name=name,status=StatusEnum.VALID.value,tenant_id=tenant_id)
if not chat:
return get_error_data_result(message="The chat doesn't exist")
if id or name:
chat = DialogService.query(id=id, name=name, status=StatusEnum.VALID.value, tenant_id=tenant_id)
if not chat:
return get_error_data_result(message="The chat doesn't exist")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")

View File

@ -34,7 +34,7 @@ from api.utils.api_utils import (
)
@manager.route("/datasets", methods=["POST"])
@manager.route("/datasets", methods=["POST"]) # noqa: F821
@token_required
def create(tenant_id):
"""
@ -190,7 +190,7 @@ def create(tenant_id):
return get_result(data=renamed_data)
@manager.route("/datasets", methods=["DELETE"])
@manager.route("/datasets", methods=["DELETE"]) # noqa: F821
@token_required
def delete(tenant_id):
"""
@ -260,7 +260,7 @@ def delete(tenant_id):
return get_result(code=settings.RetCode.SUCCESS)
@manager.route("/datasets/<dataset_id>", methods=["PUT"])
@manager.route("/datasets/<dataset_id>", methods=["PUT"]) # noqa: F821
@token_required
def update(tenant_id, dataset_id):
"""
@ -429,7 +429,7 @@ def update(tenant_id, dataset_id):
return get_result(code=settings.RetCode.SUCCESS)
@manager.route("/datasets", methods=["GET"])
@manager.route("/datasets", methods=["GET"]) # noqa: F821
@token_required
def list(tenant_id):
"""

View File

@ -22,7 +22,7 @@ from api import settings
from api.utils.api_utils import validate_request, build_error_result, apikey_required
@manager.route('/dify/retrieval', methods=['POST'])
@manager.route('/dify/retrieval', methods=['POST']) # noqa: F821
@apikey_required
@validate_request("knowledge_id", "query")
def retrieval(tenant_id):

View File

@ -22,7 +22,7 @@ from rag.nlp import rag_tokenizer
from api.db import LLMType, ParserType
from api.db.services.llm_service import TenantLLMService
from api import settings
import hashlib
import xxhash
import re
from api.utils.api_utils import token_required
from api.db.db_models import Task
@ -41,12 +41,32 @@ from api.utils.api_utils import construct_json_result, get_parser_config
from rag.nlp import search
from rag.utils import rmSpace
from rag.utils.storage_factory import STORAGE_IMPL
import os
from pydantic import BaseModel, Field, validator
MAXIMUM_OF_UPLOADING_FILES = 256
@manager.route("/datasets/<dataset_id>/documents", methods=["POST"])
class Chunk(BaseModel):
id: str = ""
content: str = ""
document_id: str = ""
docnm_kwd: str = ""
important_keywords: list = Field(default_factory=list)
questions: list = Field(default_factory=list)
question_tks: str = ""
image_id: str = ""
available: bool = True
positions: list[list[int]] = Field(default_factory=list)
@validator('positions')
def validate_positions(cls, value):
for sublist in value:
if len(sublist) != 5:
raise ValueError("Each sublist in positions must have a length of 5")
return value
@manager.route("/datasets/<dataset_id>/documents", methods=["POST"]) # noqa: F821
@token_required
def upload(dataset_id, tenant_id):
"""
@ -154,7 +174,7 @@ def upload(dataset_id, tenant_id):
return get_result(data=renamed_doc_list)
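
The Chunk model introduced above acts purely as a response validator: every renamed chunk is round-tripped through it, so malformed positions (anything but 5-element sublists) fail loudly instead of reaching the client. A self-contained usage sketch under pydantic v1 semantics, which the @validator import implies, using a trimmed copy of the model:

from pydantic import BaseModel, Field, ValidationError, validator

class ChunkSketch(BaseModel):  # trimmed copy of the Chunk model above
    positions: list = Field(default_factory=list)

    @validator("positions")
    def validate_positions(cls, value):
        for sublist in value:
            if len(sublist) != 5:
                raise ValueError("Each sublist in positions must have a length of 5")
        return value

print(ChunkSketch(positions=[[1, 10, 20, 30, 40]]).positions)  # accepted
try:
    ChunkSketch(positions=[[1, 2, 3]])  # wrong length
except ValidationError as e:
    print("rejected:", e.errors()[0]["msg"])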
@manager.route("/datasets/<dataset_id>/documents/<document_id>", methods=["PUT"])
@manager.route("/datasets/<dataset_id>/documents/<document_id>", methods=["PUT"]) # noqa: F821
@token_required
def update_doc(tenant_id, dataset_id, document_id):
"""
@ -297,7 +317,7 @@ def update_doc(tenant_id, dataset_id, document_id):
return get_result()
@manager.route("/datasets/<dataset_id>/documents/<document_id>", methods=["GET"])
@manager.route("/datasets/<dataset_id>/documents/<document_id>", methods=["GET"]) # noqa: F821
@token_required
def download(tenant_id, dataset_id, document_id):
"""
@ -361,7 +381,7 @@ def download(tenant_id, dataset_id, document_id):
)
@manager.route("/datasets/<dataset_id>/documents", methods=["GET"])
@manager.route("/datasets/<dataset_id>/documents", methods=["GET"]) # noqa: F821
@token_required
def list_docs(dataset_id, tenant_id):
"""
@ -495,7 +515,7 @@ def list_docs(dataset_id, tenant_id):
return get_result(data={"total": tol, "docs": renamed_doc_list})
@manager.route("/datasets/<dataset_id>/documents", methods=["DELETE"])
@manager.route("/datasets/<dataset_id>/documents", methods=["DELETE"]) # noqa: F821
@token_required
def delete(tenant_id, dataset_id):
"""
@ -587,7 +607,7 @@ def delete(tenant_id, dataset_id):
return get_result()
@manager.route("/datasets/<dataset_id>/chunks", methods=["POST"])
@manager.route("/datasets/<dataset_id>/chunks", methods=["POST"]) # noqa: F821
@token_required
def parse(tenant_id, dataset_id):
"""
@ -654,7 +674,7 @@ def parse(tenant_id, dataset_id):
return get_result()
@manager.route("/datasets/<dataset_id>/chunks", methods=["DELETE"])
@manager.route("/datasets/<dataset_id>/chunks", methods=["DELETE"]) # noqa: F821
@token_required
def stop_parsing(tenant_id, dataset_id):
"""
@ -712,7 +732,7 @@ def stop_parsing(tenant_id, dataset_id):
return get_result()
@manager.route("/datasets/<dataset_id>/documents/<document_id>/chunks", methods=["GET"])
@manager.route("/datasets/<dataset_id>/documents/<document_id>/chunks", methods=["GET"]) # noqa: F821
@token_required
def list_chunks(tenant_id, dataset_id, document_id):
"""
@ -844,24 +864,11 @@ def list_chunks(tenant_id, dataset_id, document_id):
"doc_id": sres.field[id]["doc_id"],
"docnm_kwd": sres.field[id]["docnm_kwd"],
"important_kwd": sres.field[id].get("important_kwd", []),
"question_kwd": sres.field[id].get("question_kwd", []),
"img_id": sres.field[id].get("img_id", ""),
"available_int": sres.field[id].get("available_int", 1),
"positions": sres.field[id].get("position_int", "").split("\t"),
"positions": sres.field[id].get("position_int", []),
}
if len(d["positions"]) % 5 == 0:
poss = []
for i in range(0, len(d["positions"]), 5):
poss.append(
[
float(d["positions"][i]),
float(d["positions"][i + 1]),
float(d["positions"][i + 2]),
float(d["positions"][i + 3]),
float(d["positions"][i + 4]),
]
)
d["positions"] = poss
origin_chunks.append(d)
if req.get("id"):
if req.get("id") == id:
@ -879,6 +886,7 @@ def list_chunks(tenant_id, dataset_id, document_id):
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"question_kwd": "questions",
"img_id": "image_id",
"available_int": "available",
}
@ -891,10 +899,11 @@ def list_chunks(tenant_id, dataset_id, document_id):
if renamed_chunk["available"] == 1:
renamed_chunk["available"] = True
res["chunks"].append(renamed_chunk)
_ = Chunk(**renamed_chunk) # validate the chunk
return get_result(data=res)
@manager.route(
@manager.route( # noqa: F821
"/datasets/<dataset_id>/documents/<document_id>/chunks", methods=["POST"]
)
@token_required
@ -974,14 +983,16 @@ def add_chunk(tenant_id, dataset_id, document_id):
if not req.get("content"):
return get_error_data_result(message="`content` is required")
if "important_keywords" in req:
if type(req["important_keywords"]) != list:
if not isinstance(req["important_keywords"], list):
return get_error_data_result(
"`important_keywords` is required to be a list"
)
md5 = hashlib.md5()
md5.update((req["content"] + document_id).encode("utf-8"))
chunk_id = md5.hexdigest()
if "questions" in req:
if not isinstance(req["questions"], list):
return get_error_data_result(
"`questions` is required to be a list"
)
chunk_id = xxhash.xxh64((req["content"] + document_id).encode("utf-8")).hexdigest()
d = {
"id": chunk_id,
"content_ltks": rag_tokenizer.tokenize(req["content"]),
@ -992,6 +1003,10 @@ def add_chunk(tenant_id, dataset_id, document_id):
d["important_tks"] = rag_tokenizer.tokenize(
" ".join(req.get("important_keywords", []))
)
d["question_kwd"] = req.get("questions", [])
d["question_tks"] = rag_tokenizer.tokenize(
"\n".join(req.get("questions", []))
)
d["create_time"] = str(datetime.datetime.now()).replace("T", " ")[:19]
d["create_timestamp_flt"] = datetime.datetime.now().timestamp()
d["kb_id"] = dataset_id
@ -1001,7 +1016,7 @@ def add_chunk(tenant_id, dataset_id, document_id):
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id
)
v, c = embd_mdl.encode([doc.name, req["content"]])
v, c = embd_mdl.encode([doc.name, req["content"] if not d["question_kwd"] else "\n".join(d["question_kwd"])])
v = 0.1 * v[0] + 0.9 * v[1]
d["q_%d_vec" % len(v)] = v.tolist()
settings.docStoreConn.insert([d], search.index_name(tenant_id), dataset_id)
@ -1013,6 +1028,7 @@ def add_chunk(tenant_id, dataset_id, document_id):
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"question_kwd": "questions",
"kb_id": "dataset_id",
"create_timestamp_flt": "create_timestamp",
"create_time": "create_time",
@ -1023,11 +1039,12 @@ def add_chunk(tenant_id, dataset_id, document_id):
if key in key_mapping:
new_key = key_mapping.get(key, key)
renamed_chunk[new_key] = value
_ = Chunk(**renamed_chunk) # validate the chunk
return get_result(data={"chunk": renamed_chunk})
# return get_result(data={"chunk_id": chunk_id})
@manager.route(
@manager.route( # noqa: F821
"datasets/<dataset_id>/documents/<document_id>/chunks", methods=["DELETE"]
)
@token_required
@ -1087,7 +1104,7 @@ def rm_chunk(tenant_id, dataset_id, document_id):
return get_result(message=f"deleted {chunk_number} chunks")
@manager.route(
@manager.route( # noqa: F821
"/datasets/<dataset_id>/documents/<document_id>/chunks/<chunk_id>", methods=["PUT"]
)
@token_required
@ -1166,8 +1183,13 @@ def update_chunk(tenant_id, dataset_id, document_id, chunk_id):
if "important_keywords" in req:
if not isinstance(req["important_keywords"], list):
return get_error_data_result("`important_keywords` should be a list")
d["important_kwd"] = req.get("important_keywords")
d["important_kwd"] = req.get("important_keywords", [])
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_keywords"]))
if "questions" in req:
if not isinstance(req["questions"], list):
return get_error_data_result("`questions` should be a list")
d["question_kwd"] = req.get("questions")
d["question_tks"] = rag_tokenizer.tokenize("\n".join(req["questions"]))
if "available" in req:
d["available_int"] = int(req["available"])
embd_id = DocumentService.get_embd_id(document_id)
@ -1185,14 +1207,14 @@ def update_chunk(tenant_id, dataset_id, document_id, chunk_id):
d, arr[0], arr[1], not any([rag_tokenizer.is_chinese(t) for t in q + a])
)
v, c = embd_mdl.encode([doc.name, d["content_with_weight"]])
v, c = embd_mdl.encode([doc.name, d["content_with_weight"] if not d.get("question_kwd") else "\n".join(d["question_kwd"])])
v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
d["q_%d_vec" % len(v)] = v.tolist()
settings.docStoreConn.update({"id": chunk_id}, d, search.index_name(tenant_id), dataset_id)
return get_result()
@manager.route("/retrieval", methods=["POST"])
@manager.route("/retrieval", methods=["POST"]) # noqa: F821
@token_required
def retrieval_test(tenant_id):
"""
@ -1353,6 +1375,7 @@ def retrieval_test(tenant_id):
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"question_kwd": "questions",
"docnm_kwd": "document_keyword",
}
rename_chunk = {}

View File

@ -15,17 +15,19 @@
#
import re
import json
from copy import deepcopy
from uuid import uuid4
from api.db import LLMType
from flask import request, Response
from api.db.services.conversation_service import ConversationService, iframe_completion
from api.db.services.conversation_service import completion as rag_completion
from api.db.services.canvas_service import completion as agent_completion
from api.db.services.dialog_service import ask
from agent.canvas import Canvas
from api.db import StatusEnum
from api.db.db_models import API4Conversation
from api.db.db_models import APIToken
from api.db.services.api_service import API4ConversationService
from api.db.services.canvas_service import UserCanvasService
from api.db.services.dialog_service import DialogService, ConversationService, chat
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils import get_uuid
from api.utils.api_utils import get_error_data_result
@ -33,9 +35,9 @@ from api.utils.api_utils import get_result, token_required
from api.db.services.llm_service import LLMBundle
@manager.route('/chats/<chat_id>/sessions', methods=['POST'])
@manager.route('/chats/<chat_id>/sessions', methods=['POST']) # noqa: F821
@token_required
def create(tenant_id,chat_id):
def create(tenant_id, chat_id):
req = request.json
req["dialog_id"] = chat_id
dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
@ -45,7 +47,8 @@ def create(tenant_id,chat_id):
"id": get_uuid(),
"dialog_id": req["dialog_id"],
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": "Hi! I am your assistantcan I help you?"}]
"message": [{"role": "assistant", "content": dia[0].prompt_config.get("prologue")}],
"user_id": req.get("user_id", "")
}
if not conv.get("name"):
return get_error_data_result(message="`name` can not be empty.")
@ -60,39 +63,56 @@ def create(tenant_id,chat_id):
return get_result(data=conv)
@manager.route('/agents/<agent_id>/sessions', methods=['POST'])
@manager.route('/agents/<agent_id>/sessions', methods=['POST']) # noqa: F821
@token_required
def create_agent_session(tenant_id, agent_id):
req = request.json
e, cvs = UserCanvasService.get_by_id(agent_id)
if not e:
return get_error_data_result("Agent not found.")
if cvs.user_id != tenant_id:
return get_error_data_result(message="You do not own the agent.")
if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
return get_error_data_result("You cannot access the agent.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
canvas = Canvas(cvs.dsl, tenant_id)
canvas.reset()
query = canvas.get_preset_param()
if query:
for ele in query:
if not ele["optional"]:
if not req.get(ele["key"]):
return get_error_data_result(f"`{ele['key']}` is required")
ele["value"] = req[ele["key"]]
if ele["optional"]:
if req.get(ele["key"]):
ele["value"] = req[ele['key']]
else:
if "value" in ele:
ele.pop("value")
cvs.dsl = json.loads(str(canvas))
conv = {
"id": get_uuid(),
"dialog_id": cvs.id,
"user_id": req.get("usr_id","") if isinstance(req, dict) else "",
"user_id": req.get("user_id", "") if isinstance(req, dict) else "",
"message": [{"role": "assistant", "content": canvas.get_prologue()}],
"source": "agent"
"source": "agent",
"dsl": cvs.dsl
}
API4ConversationService.save(**conv)
conv["agent_id"] = conv.pop("dialog_id")
return get_result(data=conv)
@manager.route('/chats/<chat_id>/sessions/<session_id>', methods=['PUT'])
@manager.route('/chats/<chat_id>/sessions/<session_id>', methods=['PUT']) # noqa: F821
@token_required
def update(tenant_id,chat_id,session_id):
def update(tenant_id, chat_id, session_id):
req = request.json
req["dialog_id"] = chat_id
conv_id = session_id
conv = ConversationService.query(id=conv_id,dialog_id=chat_id)
conv = ConversationService.query(id=conv_id, dialog_id=chat_id)
if not conv:
return get_error_data_result(message="Session does not exist")
if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
@ -108,276 +128,69 @@ def update(tenant_id,chat_id,session_id):
return get_result()
@manager.route('/chats/<chat_id>/completions', methods=['POST'])
@manager.route('/chats/<chat_id>/completions', methods=['POST']) # noqa: F821
@token_required
def completion(tenant_id, chat_id):
def chat_completion(tenant_id, chat_id):
req = request.json
if not req.get("session_id"):
conv = {
"id": get_uuid(),
"dialog_id": chat_id,
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": "Hi! I am your assistantcan I help you?"}]
}
if not conv.get("name"):
return get_error_data_result(message="`name` can not be empty.")
ConversationService.save(**conv)
e, conv = ConversationService.get_by_id(conv["id"])
session_id=conv.id
else:
session_id = req.get("session_id")
if not req.get("question"):
return get_error_data_result(message="Please input your question.")
conv = ConversationService.query(id=session_id,dialog_id=chat_id)
if not conv:
return get_error_data_result(message="Session does not exist")
conv = conv[0]
if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="You do not own the chat")
msg = []
question = {
"content": req.get("question"),
"role": "user",
"id": str(uuid4())
}
conv.message.append(question)
for m in conv.message:
if m["role"] == "system": continue
if m["role"] == "assistant" and not msg: continue
msg.append(m)
message_id = msg[-1].get("id")
e, dia = DialogService.get_by_id(conv.dialog_id)
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
reference = ans["reference"]
temp_reference = deepcopy(ans["reference"])
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(temp_reference)
else:
conv.reference[-1] = temp_reference
conv.message[-1] = {"role": "assistant", "content": ans["answer"],
"id": message_id, "prompt": ans.get("prompt", "")}
if "chunks" in reference:
chunks = reference.get("chunks")
chunk_list = []
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"dataset_id": chunk["kb_id"],
"image_id": chunk.get("image_id", ""),
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk.get("positions", []),
}
chunk_list.append(new_chunk)
reference["chunks"] = chunk_list
ans["id"] = message_id
ans["session_id"]=session_id
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, **req):
fillin_conv(ans)
yield "data:" + json.dumps({"code": 0, "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e),"reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "data": True}, ensure_ascii=False) + "\n\n"
if not req or not req.get("session_id"):
req = {"question": ""}
if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
return get_error_data_result(f"You don't own the chat {chat_id}")
if req.get("session_id"):
if not ConversationService.query(id=req["session_id"], dialog_id=chat_id):
return get_error_data_result(f"You don't own the session {req['session_id']}")
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
resp = Response(rag_completion(tenant_id, chat_id, **req), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
else:
answer = None
for ans in chat(dia, msg, **req):
for ans in rag_completion(tenant_id, chat_id, **req):
answer = ans
fillin_conv(ans)
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_result(data=answer)
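
After the refactor the route body shrinks to validation plus wrapping the shared rag_completion generator in an SSE response; the extra headers matter because "X-Accel-Buffering: no" stops nginx from buffering the stream and "Cache-control: no-cache" keeps proxies from replaying stale frames. A generic sketch of that wrapping, with a stand-in generator:

import json
from flask import Flask, Response

app = Flask(__name__)

def fake_completion():  # stand-in for rag_completion(tenant_id, chat_id, **req)
    for token in ("Hel", "lo"):
        yield "data:" + json.dumps({"code": 0, "data": {"answer": token}}) + "\n\n"

@app.route("/demo")
def demo():
    resp = Response(fake_completion(), mimetype="text/event-stream")
    resp.headers.add_header("Cache-control", "no-cache")
    resp.headers.add_header("Connection", "keep-alive")
    resp.headers.add_header("X-Accel-Buffering", "no")  # let tokens flush through nginx
    resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
    return resp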
@manager.route('/agents/<agent_id>/completions', methods=['POST'])  # noqa: F821
@token_required
def agent_completions(tenant_id, agent_id):
    req = request.json
    cvs = UserCanvasService.query(user_id=tenant_id, id=agent_id)
    if not cvs:
        return get_error_data_result(f"You don't own the agent {agent_id}")
    if req.get("session_id"):
        dsl = cvs[0].dsl
        if not isinstance(dsl, str):
            dsl = json.dumps(dsl)
        canvas = Canvas(dsl, tenant_id)
        if canvas.get_preset_param():
            req["question"] = ""
        conv = API4ConversationService.query(id=req["session_id"], dialog_id=agent_id)
        if not conv:
            return get_error_data_result(f"You don't own the session {req['session_id']}")
    else:
        req["question"] = ""
    if req.get("stream", True):
        resp = Response(agent_completion(tenant_id, agent_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp
    try:
        for answer in agent_completion(tenant_id, agent_id, **req):
            return get_result(data=answer)
    except Exception as e:
        return get_error_data_result(str(e))
@manager.route('/chats/<chat_id>/sessions', methods=['GET'])  # noqa: F821
@token_required
def list_session(tenant_id, chat_id):
if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
return get_error_data_result(message=f"You don't own the assistant {chat_id}.")
id = request.args.get("id")
@ -385,11 +198,12 @@ def list_session(chat_id,tenant_id):
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")
user_id = request.args.get("user_id")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
convs = ConversationService.get_list(chat_id, page_number, items_per_page, orderby, desc, id, name, user_id)
if not convs:
return get_result(data=[])
for conv in convs:
@ -410,16 +224,64 @@ def list_session(chat_id,tenant_id):
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"dataset_id": chunk["kb_id"],
"image_id": chunk["image_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
"id": chunk.get("chunk_id", chunk.get("id")),
"content": chunk.get("content_with_weight", chunk.get("content")),
"document_id": chunk.get("doc_id", chunk.get("document_id")),
"document_name": chunk.get("docnm_kwd", chunk.get("document_name")),
"dataset_id": chunk.get("kb_id", chunk.get("dataset_id")),
"image_id": chunk.get("image_id", chunk.get("img_id")),
"positions": chunk.get("positions", chunk.get("position_int")),
}
chunk_list.append(new_chunk)
chunk_num += 1
messages[message_num]["reference"] = chunk_list
message_num += 1
del conv["reference"]
return get_result(data=convs)
@manager.route('/agents/<agent_id>/sessions', methods=['GET']) # noqa: F821
@token_required
def list_agent_session(tenant_id, agent_id):
if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
return get_error_data_result(message=f"You don't own the agent {agent_id}.")
id = request.args.get("id")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "update_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id)
if not convs:
return get_result(data=[])
for conv in convs:
conv['messages'] = conv.pop("message")
infos = conv["messages"]
for info in infos:
if "prompt" in info:
info.pop("prompt")
conv["agent_id"] = conv.pop("dialog_id")
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk.get("chunk_id", chunk.get("id")),
"content": chunk.get("content_with_weight", chunk.get("content")),
"document_id": chunk.get("doc_id", chunk.get("document_id")),
"document_name": chunk.get("docnm_kwd", chunk.get("document_name")),
"dataset_id": chunk.get("kb_id", chunk.get("dataset_id")),
"image_id": chunk.get("image_id", chunk.get("img_id")),
"positions": chunk.get("positions", chunk.get("position_int")),
}
chunk_list.append(new_chunk)
chunk_num += 1
@ -429,9 +291,9 @@ def list_session(chat_id,tenant_id):
return get_result(data=convs)
@manager.route('/chats/<chat_id>/sessions', methods=["DELETE"])
@manager.route('/chats/<chat_id>/sessions', methods=["DELETE"]) # noqa: F821
@token_required
def delete(tenant_id,chat_id):
def delete(tenant_id, chat_id):
if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="You don't own the chat")
req = request.json
@ -439,22 +301,23 @@ def delete(tenant_id,chat_id):
if not req:
ids = None
else:
ids=req.get("ids")
ids = req.get("ids")
if not ids:
conv_list = []
for conv in convs:
conv_list.append(conv.id)
else:
conv_list = ids
for id in conv_list:
conv = ConversationService.query(id=id, dialog_id=chat_id)
if not conv:
return get_error_data_result(message="The chat doesn't own the session")
ConversationService.delete_by_id(id)
return get_result()
@manager.route('/sessions/ask', methods=['POST']) # noqa: F821
@token_required
def ask_about(tenant_id):
req = request.json
@ -462,17 +325,18 @@ def ask_about(tenant_id):
return get_error_data_result("`question` is required.")
if not req.get("dataset_ids"):
return get_error_data_result("`dataset_ids` is required.")
if not isinstance(req.get("dataset_ids"),list):
if not isinstance(req.get("dataset_ids"), list):
return get_error_data_result("`dataset_ids` should be a list.")
req["kb_ids"]=req.pop("dataset_ids")
req["kb_ids"] = req.pop("dataset_ids")
for kb_id in req["kb_ids"]:
if not KnowledgebaseService.accessible(kb_id, tenant_id):
return get_error_data_result(f"You don't own the dataset {kb_id}.")
kbs = KnowledgebaseService.query(id=kb_id)
kb = kbs[0]
if kb.chunk_num == 0:
return get_error_data_result(f"The dataset {kb_id} doesn't own parsed file")
uid = tenant_id
def stream():
nonlocal req, uid
try:
@ -492,7 +356,7 @@ def ask_about(tenant_id):
return resp
@manager.route('/sessions/related_questions', methods=['POST']) # noqa: F821
@token_required
def related_questions(tenant_id):
req = request.json
@ -529,3 +393,57 @@ Keywords: {question}
Related search terms:
"""}], {"temperature": 0.9})
return get_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])
@manager.route('/chatbots/<dialog_id>/completions', methods=['POST']) # noqa: F821
def chatbot_completions(dialog_id):
req = request.json
token = request.headers.get('Authorization').split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!"')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Token is not valid!"')
if "quote" not in req:
req["quote"] = False
if req.get("stream", True):
resp = Response(iframe_completion(dialog_id, **req), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
for answer in iframe_completion(dialog_id, **req):
return get_result(data=answer)
@manager.route('/agentbots/<agent_id>/completions', methods=['POST']) # noqa: F821
def agent_bot_completions(agent_id):
req = request.json
token = request.headers.get('Authorization').split()
if len(token) != 2:
return get_error_data_result(message='Authorization is not valid!"')
token = token[1]
objs = APIToken.query(beta=token)
if not objs:
return get_error_data_result(message='Token is not valid!"')
if "quote" not in req:
req["quote"] = False
if req.get("stream", True):
resp = Response(agent_completion(objs[0].tenant_id, agent_id, **req), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
for answer in agent_completion(objs[0].tenant_id, agent_id, **req):
return get_result(data=answer)
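Both bot routes above authenticate with the `beta` token rather than a login session. A minimal call sketch (the URL prefix, host, and token value are assumptions; the header is split on whitespace, so the "Bearer" scheme shown works):

import requests

resp = requests.post(
    "http://localhost:9380/api/v1/agentbots/<agent_id>/completions",  # assumed mount point
    headers={"Authorization": "Bearer <BETA_TOKEN>"},  # beta token issued alongside the API token
    json={"question": "Hello", "stream": False},
)
print(resp.json())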

View File

@ -38,7 +38,7 @@ from timeit import default_timer as timer
from rag.utils.redis_conn import REDIS_CONN
@manager.route("/version", methods=["GET"])
@manager.route("/version", methods=["GET"]) # noqa: F821
@login_required
def version():
"""
@ -61,7 +61,7 @@ def version():
return get_json_result(data=get_ragflow_version())
@manager.route("/status", methods=["GET"])
@manager.route("/status", methods=["GET"]) # noqa: F821
@login_required
def status():
"""
@ -170,7 +170,7 @@ def status():
return get_json_result(data=res)
@manager.route("/new_token", methods=["POST"])
@manager.route("/new_token", methods=["POST"]) # noqa: F821
@login_required
def new_token():
"""
@ -205,6 +205,7 @@ def new_token():
obj = {
"tenant_id": tenant_id,
"token": generate_confirmation_token(tenant_id),
"beta": generate_confirmation_token(generate_confirmation_token(tenant_id)).replace("ragflow-", "")[:32],
"create_time": current_timestamp(),
"create_date": datetime_format(datetime.now()),
"update_time": None,
@ -219,7 +220,7 @@ def new_token():
return server_error_response(e)
@manager.route("/token_list", methods=["GET"])
@manager.route("/token_list", methods=["GET"]) # noqa: F821
@login_required
def token_list():
"""
@ -255,13 +256,19 @@ def token_list():
if not tenants:
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
objs = APITokenService.query(tenant_id=tenant_id)
objs = [o.to_dict() for o in objs]
for o in objs:
if not o["beta"]:
o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace("ragflow-", "")[:32]
APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o)
return get_json_result(data=objs)
except Exception as e:
return server_error_response(e)
@manager.route("/token/<token>", methods=["DELETE"])
@manager.route("/token/<token>", methods=["DELETE"]) # noqa: F821
@login_required
def rm(token):
"""

View File

@ -26,7 +26,7 @@ from api.utils import get_uuid, delta_seconds
from api.utils.api_utils import get_json_result, validate_request, server_error_response, get_data_error_result
@manager.route("/<tenant_id>/user/list", methods=["GET"])
@manager.route("/<tenant_id>/user/list", methods=["GET"]) # noqa: F821
@login_required
def user_list(tenant_id):
if current_user.id != tenant_id:
@ -44,7 +44,7 @@ def user_list(tenant_id):
return server_error_response(e)
@manager.route('/<tenant_id>/user', methods=['POST']) # noqa: F821
@login_required
@validate_request("email")
def create(tenant_id):
@ -55,32 +55,36 @@ def create(tenant_id):
code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
usrs = UserService.query(email=req["email"])
if not usrs:
invite_user_email = req["email"]
invite_users = UserService.query(email=invite_user_email)
if not invite_users:
return get_data_error_result(message="User not found.")
user_id_to_invite = invite_users[0].id
user_tenants = UserTenantService.query(user_id=user_id_to_invite, tenant_id=tenant_id)
if user_tenants:
user_tenant_role = user_tenants[0].role
if user_tenant_role == UserTenantRole.NORMAL:
return get_data_error_result(message=f"{invite_user_email} is already in the team.")
if user_tenant_role == UserTenantRole.OWNER:
return get_data_error_result(message=f"{invite_user_email} is the owner of the team.")
return get_data_error_result(message=f"{invite_user_email} is in the team, but the role: {user_tenant_role} is invalid.")
UserTenantService.save(
id=get_uuid(),
user_id=user_id_to_invite,
tenant_id=tenant_id,
invited_by=current_user.id,
role=UserTenantRole.INVITE,
status=StatusEnum.VALID.value)
usr = invite_users[0].to_dict()
usr = {k: v for k, v in usr.items() if k in ["id", "avatar", "email", "nickname"]}
return get_json_result(data=usr)
@manager.route('/<tenant_id>/user/<user_id>', methods=['DELETE']) # noqa: F821
@login_required
def rm(tenant_id, user_id):
if current_user.id != tenant_id and current_user.id != user_id:
@ -96,7 +100,7 @@ def rm(tenant_id, user_id):
return server_error_response(e)
@manager.route("/list", methods=["GET"])
@manager.route("/list", methods=["GET"]) # noqa: F821
@login_required
def tenant_list():
try:
@ -108,7 +112,7 @@ def tenant_list():
return server_error_response(e)
@manager.route("/agree/<tenant_id>", methods=["PUT"])
@manager.route("/agree/<tenant_id>", methods=["PUT"]) # noqa: F821
@login_required
def agree(tenant_id):
try:

View File

@ -44,7 +44,7 @@ from api.db.services.file_service import FileService
from api.utils.api_utils import get_json_result, construct_response
@manager.route("/login", methods=["POST", "GET"])
@manager.route("/login", methods=["POST", "GET"]) # noqa: F821
def login():
"""
User login endpoint.
@ -115,7 +115,7 @@ def login():
)
@manager.route("/github_callback", methods=["GET"])
@manager.route("/github_callback", methods=["GET"]) # noqa: F821
def github_callback():
"""
GitHub OAuth callback endpoint.
@ -200,7 +200,7 @@ def github_callback():
return redirect("/?auth=%s" % user.get_id())
@manager.route("/feishu_callback", methods=["GET"])
@manager.route("/feishu_callback", methods=["GET"]) # noqa: F821
def feishu_callback():
"""
Feishu OAuth callback endpoint.
@ -330,12 +330,12 @@ def user_info_from_github(access_token):
headers=headers,
).json()
user_info["email"] = next(
(email for email in email_info if email["primary"] == True), None
(email for email in email_info if email["primary"]), None
)["email"]
return user_info
@manager.route("/logout", methods=["GET"])
@manager.route("/logout", methods=["GET"]) # noqa: F821
@login_required
def log_out():
"""
@ -357,7 +357,7 @@ def log_out():
return get_json_result(data=True)
@manager.route("/setting", methods=["POST"])
@manager.route("/setting", methods=["POST"]) # noqa: F821
@login_required
def setting_user():
"""
@ -429,7 +429,7 @@ def setting_user():
)
@manager.route("/info", methods=["GET"])
@manager.route("/info", methods=["GET"]) # noqa: F821
@login_required
def user_profile():
"""
@ -531,7 +531,7 @@ def user_register(user_id, user):
return UserService.query(email=user["email"])
@manager.route("/register", methods=["POST"])
@manager.route("/register", methods=["POST"]) # noqa: F821
@validate_request("nickname", "email", "password")
def user_add():
"""
@ -617,7 +617,7 @@ def user_add():
)
@manager.route("/tenant_info", methods=["GET"])
@manager.route("/tenant_info", methods=["GET"]) # noqa: F821
@login_required
def tenant_info():
"""
@ -655,7 +655,7 @@ def tenant_info():
return server_error_response(e)
@manager.route("/set_tenant_info", methods=["POST"])
@manager.route("/set_tenant_info", methods=["POST"]) # noqa: F821
@login_required
@validate_request("tenant_id", "asr_id", "embd_id", "img2txt_id", "llm_id")
def set_tenant_info():

View File

@ -31,7 +31,6 @@ from peewee import (
)
from playhouse.pool import PooledMySQLDatabase, PooledPostgresqlDatabase
from api.db import SerializedType, ParserType
from api import settings
from api import utils
@ -130,7 +129,7 @@ def is_continuous_field(cls: typing.Type) -> bool:
for p in cls.__bases__:
if p in CONTINUOUS_FIELD_TYPE:
return True
elif p is not Field and p is not object:
if is_continuous_field(p):
return True
else:
@ -703,6 +702,7 @@ class Knowledgebase(DataBaseModel):
default=ParserType.NAIVE.value,
index=True)
parser_config = JSONField(null=False, default={"pages": [[1, 1000000]]})
pagerank = IntegerField(default=0, index=False)
status = CharField(
max_length=1,
null=True,
@ -854,6 +854,8 @@ class Task(DataBaseModel):
help_text="process message",
default="")
retry_count = IntegerField(default=0)
digest = TextField(null=True, help_text="task digest", default="")
chunk_ids = LongTextField(null=True, help_text="chunk ids", default="")
class Dialog(DataBaseModel):
@ -923,6 +925,7 @@ class Conversation(DataBaseModel):
name = CharField(max_length=255, null=True, help_text="converastion name", index=True)
message = JSONField(null=True)
reference = JSONField(null=True, default=[])
user_id = CharField(max_length=255, null=True, help_text="user_id", index=True)
class Meta:
db_table = "conversation"
@ -933,6 +936,7 @@ class APIToken(DataBaseModel):
token = CharField(max_length=255, null=False, index=True)
dialog_id = CharField(max_length=32, null=False, index=True)
source = CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True)
beta = CharField(max_length=255, null=True, index=True)
class Meta:
db_table = "api_token"
@ -947,7 +951,7 @@ class API4Conversation(DataBaseModel):
reference = JSONField(null=True, default=[])
tokens = IntegerField(default=0)
source = CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True)
dsl = JSONField(null=True, default={})
duration = FloatField(default=0, index=True)
round = IntegerField(default=0, index=True)
thumb_up = IntegerField(default=0, index=True)
@ -1066,7 +1070,45 @@ def migrate_db():
pass
try:
migrate(
migrator.add_column("tenant_llm","max_tokens",IntegerField(default=8192,index=True))
migrator.add_column("tenant_llm", "max_tokens", IntegerField(default=8192, index=True))
)
except Exception:
pass
try:
migrate(
migrator.add_column("api_4_conversation", "dsl", JSONField(null=True, default={}))
)
except Exception:
pass
try:
migrate(
migrator.add_column("knowledgebase", "pagerank", IntegerField(default=0, index=False))
)
except Exception:
pass
try:
migrate(
migrator.add_column("api_token", "beta", CharField(max_length=255, null=True, index=True))
)
except Exception:
pass
try:
migrate(
migrator.add_column("task", "digest", TextField(null=True, help_text="task digest", default=""))
)
except Exception:
pass
try:
migrate(
migrator.add_column("task", "chunk_ids", LongTextField(null=True, help_text="chunk ids", default=""))
)
except Exception:
pass
try:
migrate(
migrator.add_column("conversation", "user_id",
CharField(max_length=255, null=True, help_text="user_id", index=True))
)
except Exception:
pass
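The pattern above is deliberately idempotent: each add_column runs in its own try/except, so re-running migrate_db() against an already-migrated schema simply skips columns that exist. A condensed sketch of the same pattern (the helper name is illustrative, not from the source; migrate/migrator are the peewee playhouse objects used above):

def try_add_column(migrator, table, column, field):
    # Attempt one column addition independently; a failure (e.g. a duplicate
    # column on an existing deployment) must not block later migrations.
    try:
        migrate(migrator.add_column(table, column, field))
    except Exception:
        pass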

View File

@ -170,7 +170,7 @@ def add_graph_templates():
cnvs = json.load(open(os.path.join(dir, fnm), "r"))
try:
CanvasTemplateService.save(**cnvs)
except Exception:
CanvasTemplateService.update_by_id(cnvs["id"], cnvs)
except Exception:
logging.exception("Add graph templates error: ")

View File

@ -15,13 +15,14 @@
#
import pathlib
import re
from .user_service import UserService as UserService
def duplicate_name(query_func, **kwargs):
fnm = kwargs["name"]
objs = query_func(**kwargs)
if not objs:
return fnm
ext = pathlib.Path(fnm).suffix #.jpg
nm = re.sub(r"%s$"%ext, "", fnm)
r = re.search(r"\(([0-9]+)\)$", nm)
@ -31,8 +32,8 @@ def duplicate_name(query_func, **kwargs):
nm = re.sub(r"\([0-9]+\)$", "", nm)
c += 1
nm = f"{nm}({c})"
if ext: nm += f"{ext}"
if ext:
nm += f"{ext}"
kwargs["name"] = nm
return duplicate_name(query_func, **kwargs)

View File

@ -39,6 +39,25 @@ class APITokenService(CommonService):
class API4ConversationService(CommonService):
model = API4Conversation
@classmethod
@DB.connection_context()
def get_list(cls, dialog_id, tenant_id,
page_number, items_per_page,
orderby, desc, id, user_id=None):
sessions = cls.model.select().where(cls.model.dialog_id == dialog_id)
if id:
sessions = sessions.where(cls.model.id == id)
if user_id:
sessions = sessions.where(cls.model.user_id == user_id)
if desc:
sessions = sessions.order_by(cls.model.getter_by(orderby).desc())
else:
sessions = sessions.order_by(cls.model.getter_by(orderby).asc())
sessions = sessions.where(cls.model.user_id == tenant_id)
sessions = sessions.paginate(page_number, items_per_page)
return list(sessions.dicts())
@classmethod
@DB.connection_context()
def append_message(cls, id, conversation):
@ -48,7 +67,8 @@ class API4ConversationService(CommonService):
@classmethod
@DB.connection_context()
def stats(cls, tenant_id, from_date, to_date, source=None):
if len(to_date) == 10: to_date += " 23:59:59"
if len(to_date) == 10:
to_date += " 23:59:59"
return cls.model.select(
cls.model.create_date.truncate("day").alias("dt"),
peewee.fn.COUNT(

View File

@ -13,10 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import traceback
from uuid import uuid4
from agent.canvas import Canvas
from api.db.db_models import DB, CanvasTemplate, UserCanvas, API4Conversation
from api.db.services.api_service import API4ConversationService
from api.db.services.common_service import CommonService
from api.db.services.conversation_service import structure_answer
from api.utils import get_uuid
class CanvasTemplateService(CommonService):
@ -25,3 +30,137 @@ class CanvasTemplateService(CommonService):
class UserCanvasService(CommonService):
model = UserCanvas
@classmethod
@DB.connection_context()
def get_list(cls, tenant_id,
page_number, items_per_page, orderby, desc, id, title):
agents = cls.model.select()
if id:
agents = agents.where(cls.model.id == id)
if title:
agents = agents.where(cls.model.title == title)
agents = agents.where(cls.model.user_id == tenant_id)
if desc:
agents = agents.order_by(cls.model.getter_by(orderby).desc())
else:
agents = agents.order_by(cls.model.getter_by(orderby).asc())
agents = agents.paginate(page_number, items_per_page)
return list(agents.dicts())
def completion(tenant_id, agent_id, question, session_id=None, stream=True, **kwargs):
e, cvs = UserCanvasService.get_by_id(agent_id)
assert e, "Agent not found."
assert cvs.user_id == tenant_id, "You do not own the agent."
    if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
canvas = Canvas(cvs.dsl, tenant_id)
canvas.reset()
message_id = str(uuid4())
if not session_id:
query = canvas.get_preset_param()
if query:
for ele in query:
if not ele["optional"]:
if not kwargs.get(ele["key"]):
assert False, f"`{ele['key']}` is required"
ele["value"] = kwargs[ele["key"]]
if ele["optional"]:
if kwargs.get(ele["key"]):
ele["value"] = kwargs[ele['key']]
else:
if "value" in ele:
ele.pop("value")
cvs.dsl = json.loads(str(canvas))
        session_id = get_uuid()
conv = {
"id": session_id,
"dialog_id": cvs.id,
"user_id": kwargs.get("usr_id", "") if isinstance(kwargs, dict) else "",
"message": [{"role": "assistant", "content": canvas.get_prologue()}],
"source": "agent",
"dsl": cvs.dsl
}
API4ConversationService.save(**conv)
if query:
yield "data:" + json.dumps({"code": 0,
"message": "",
"data": {
"session_id": session_id,
"answer": canvas.get_prologue(),
"reference": [],
"param": canvas.get_preset_param()
}
},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
return
else:
conv = API4Conversation(**conv)
else:
e, conv = API4ConversationService.get_by_id(session_id)
assert e, "Session not found!"
canvas = Canvas(json.dumps(conv.dsl), tenant_id)
canvas.messages.append({"role": "user", "content": question, "id": message_id})
canvas.add_user_input(question)
if not conv.message:
conv.message = []
conv.message.append({
"role": "user",
"content": question,
"id": message_id
})
if not conv.reference:
conv.reference = []
conv.reference.append({"chunks": [], "doc_aggs": []})
final_ans = {"reference": [], "content": ""}
if stream:
try:
for ans in canvas.run(stream=stream):
if ans.get("running_status"):
yield "data:" + json.dumps({"code": 0, "message": "",
"data": {"answer": ans["content"],
"running_status": True}},
ensure_ascii=False) + "\n\n"
continue
for k in ans.keys():
final_ans[k] = ans[k]
ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
ans = structure_answer(conv, ans, message_id, session_id)
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
ensure_ascii=False) + "\n\n"
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
canvas.history.append(("assistant", final_ans["content"]))
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
conv.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
except Exception as e:
traceback.print_exc()
conv.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
else:
for answer in canvas.run(stream=False):
if answer.get("running_status"):
continue
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
conv.dsl = json.loads(str(canvas))
result = {"answer": final_ans["content"], "reference": final_ans.get("reference", [])}
result = structure_answer(conv, result, message_id, session_id)
API4ConversationService.append_message(conv.id, conv.to_dict())
yield result
break
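A minimal sketch of consuming this generator (tenant_id, agent_id, and session_id are placeholders, and an existing session is assumed so the preset-parameter bootstrap path is not taken):

# stream=True: each yielded item is an SSE frame string, 'data:{...}\n\n',
# ready to forward to the HTTP client unchanged.
for frame in completion(tenant_id, agent_id, question="Hi", session_id=session_id, stream=True):
    print(frame, end="")

# stream=False: a single structured result dict is yielded.
result = next(completion(tenant_id, agent_id, question="Hi", session_id=session_id, stream=False))
print(result["answer"], result["reference"])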

View File

@ -115,7 +115,7 @@ class CommonService:
try:
obj = cls.model.query(id=pid)[0]
return True, obj
except Exception:
return False, None
@classmethod

View File

@ -0,0 +1,232 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from uuid import uuid4
from api.db import StatusEnum
from api.db.db_models import Conversation, DB
from api.db.services.api_service import API4ConversationService
from api.db.services.common_service import CommonService
from api.db.services.dialog_service import DialogService, chat
from api.utils import get_uuid
import json
class ConversationService(CommonService):
model = Conversation
@classmethod
@DB.connection_context()
def get_list(cls, dialog_id, page_number, items_per_page, orderby, desc, id, name, user_id=None):
sessions = cls.model.select().where(cls.model.dialog_id == dialog_id)
if id:
sessions = sessions.where(cls.model.id == id)
if name:
sessions = sessions.where(cls.model.name == name)
if user_id:
sessions = sessions.where(cls.model.user_id == user_id)
if desc:
sessions = sessions.order_by(cls.model.getter_by(orderby).desc())
else:
sessions = sessions.order_by(cls.model.getter_by(orderby).asc())
sessions = sessions.paginate(page_number, items_per_page)
return list(sessions.dicts())
def structure_answer(conv, ans, message_id, session_id):
reference = ans["reference"]
if not isinstance(reference, dict):
reference = {}
ans["reference"] = {}
def get_value(d, k1, k2):
return d.get(k1, d.get(k2))
chunk_list = [{
"id": get_value(chunk, "chunk_id", "id"),
"content": get_value(chunk, "content", "content_with_weight"),
"document_id": get_value(chunk, "doc_id", "document_id"),
"document_name": get_value(chunk, "docnm_kwd", "document_name"),
"dataset_id": get_value(chunk, "kb_id", "dataset_id"),
"image_id": get_value(chunk, "image_id", "img_id"),
"positions": get_value(chunk, "positions", "position_int"),
} for chunk in reference.get("chunks", [])]
reference["chunks"] = chunk_list
ans["id"] = message_id
ans["session_id"] = session_id
if not conv:
return ans
if not conv.message:
conv.message = []
if not conv.message or conv.message[-1].get("role", "") != "assistant":
conv.message.append({"role": "assistant", "content": ans["answer"], "id": message_id})
else:
conv.message[-1] = {"role": "assistant", "content": ans["answer"], "id": message_id}
if conv.reference:
conv.reference[-1] = reference
return ans
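Illustrative use of the normalization above (the values are made up): internal retrieval keys such as chunk_id, content_with_weight, and docnm_kwd are exposed under the public names. Passing conv=None exercises only the key mapping:

chunk = {"chunk_id": "c1", "content_with_weight": "some text",
         "doc_id": "d1", "docnm_kwd": "spec.pdf", "kb_id": "kb1",
         "image_id": "", "positions": [[1, 0, 0, 0, 0]]}
ans = structure_answer(None, {"answer": "", "reference": {"chunks": [chunk]}},
                       "msg-id", "sess-id")
assert ans["reference"]["chunks"][0]["id"] == "c1"
assert ans["reference"]["chunks"][0]["document_name"] == "spec.pdf"
assert ans["session_id"] == "sess-id"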
def completion(tenant_id, chat_id, question, name="New session", session_id=None, stream=True, **kwargs):
assert name, "`name` can not be empty."
dia = DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value)
assert dia, "You do not own the chat."
if not session_id:
session_id = get_uuid()
conv = {
"id": session_id,
"dialog_id": chat_id,
"name": name,
"message": [{"role": "assistant", "content": dia[0].prompt_config.get("prologue")}],
"user_id": kwargs.get("user_id", "")
}
ConversationService.save(**conv)
yield "data:" + json.dumps({"code": 0, "message": "",
"data": {
"answer": conv["message"][0]["content"],
"reference": {},
"audio_binary": None,
"id": None,
"session_id": session_id
}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
return
conv = ConversationService.query(id=session_id, dialog_id=chat_id)
if not conv:
raise LookupError("Session does not exist")
conv = conv[0]
msg = []
question = {
"content": question,
"role": "user",
"id": str(uuid4())
}
conv.message.append(question)
for m in conv.message:
if m["role"] == "system":
continue
if m["role"] == "assistant" and not msg:
continue
msg.append(m)
message_id = msg[-1].get("id")
e, dia = DialogService.get_by_id(conv.dialog_id)
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
if stream:
try:
for ans in chat(dia, msg, True, **kwargs):
ans = structure_answer(conv, ans, message_id, session_id)
yield "data:" + json.dumps({"code": 0, "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "data": True}, ensure_ascii=False) + "\n\n"
else:
answer = None
for ans in chat(dia, msg, False, **kwargs):
answer = structure_answer(conv, ans, message_id, session_id)
ConversationService.update_by_id(conv.id, conv.to_dict())
break
yield answer
def iframe_completion(dialog_id, question, session_id=None, stream=True, **kwargs):
e, dia = DialogService.get_by_id(dialog_id)
assert e, "Dialog not found"
if not session_id:
session_id = get_uuid()
conv = {
"id": session_id,
"dialog_id": dialog_id,
"user_id": kwargs.get("user_id", ""),
"message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}]
}
API4ConversationService.save(**conv)
yield "data:" + json.dumps({"code": 0, "message": "",
"data": {
"answer": conv["message"][0]["content"],
"reference": {},
"audio_binary": None,
"id": None,
"session_id": session_id
}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
return
e, conv = API4ConversationService.get_by_id(session_id)
assert e, "Session not found!"
if not conv.message:
conv.message = []
messages = conv.message
question = {
"role": "user",
"content": question,
"id": str(uuid4())
}
messages.append(question)
msg = []
for m in messages:
if m["role"] == "system":
continue
if m["role"] == "assistant" and not msg:
continue
msg.append(m)
if not msg[-1].get("id"):
msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
if not conv.reference:
conv.reference = []
conv.reference.append({"chunks": [], "doc_aggs": []})
if stream:
try:
for ans in chat(dia, msg, True, **kwargs):
ans = structure_answer(conv, ans, message_id, session_id)
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
ensure_ascii=False) + "\n\n"
API4ConversationService.append_message(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
else:
answer = None
for ans in chat(dia, msg, False, **kwargs):
answer = structure_answer(conv, ans, message_id, session_id)
API4ConversationService.append_message(conv.id, conv.to_dict())
break
yield answer

View File

@ -18,12 +18,13 @@ import binascii
import os
import json
import re
from collections import defaultdict
from copy import deepcopy
from timeit import default_timer as timer
import datetime
from datetime import timedelta
from api.db import LLMType, ParserType, StatusEnum
from api.db.db_models import Dialog, DB
from api.db.services.common_service import CommonService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMService, TenantLLMService, LLMBundle
@ -40,14 +41,14 @@ class DialogService(CommonService):
@classmethod
@DB.connection_context()
def get_list(cls, tenant_id,
page_number, items_per_page, orderby, desc, id, name):
chats = cls.model.select()
if id:
chats = chats.where(cls.model.id == id)
if name:
chats = chats.where(cls.model.name == name)
chats = chats.where(
(cls.model.tenant_id == tenant_id)
& (cls.model.status == StatusEnum.VALID.value)
)
if desc:
@ -60,27 +61,6 @@ class DialogService(CommonService):
return list(chats.dicts())
def message_fit_in(msg, max_length=4000):
def count():
nonlocal msg
@ -106,21 +86,21 @@ def message_fit_in(msg, max_length=4000):
return c, msg
ll = num_tokens_from_string(msg_[0]["content"])
l = num_tokens_from_string(msg_[-1]["content"])
if ll / (ll + l) > 0.8:
ll2 = num_tokens_from_string(msg_[-1]["content"])
if ll / (ll + ll2) > 0.8:
m = msg_[0]["content"]
m = encoder.decode(encoder.encode(m)[:max_length - ll2])
msg[0]["content"] = m
return max_length, msg
m = msg_[1]["content"]
m = encoder.decode(encoder.encode(m)[:max_length - ll2])
msg[1]["content"] = m
return max_length, msg
def llm_id2llm_type(llm_id):
llm_id = llm_id.split("@")[0]
llm_id, _ = TenantLLMService.split_model_name_and_factory(llm_id)
fnm = os.path.join(get_project_base_directory(), "conf")
llm_factories = json.load(open(os.path.join(fnm, "llm_factories.json"), "r"))
for llm_factory in llm_factories["factory_llm_infos"]:
@ -129,31 +109,65 @@ def llm_id2llm_type(llm_id):
return llm["model_type"].strip(",")[-1]
def kb_prompt(kbinfos, max_tokens):
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
used_token_count = 0
chunks_num = 0
for i, c in enumerate(knowledges):
used_token_count += num_tokens_from_string(c)
chunks_num += 1
if max_tokens * 0.97 < used_token_count:
knowledges = knowledges[:i]
break
doc2chunks = defaultdict(list)
for i, ck in enumerate(kbinfos["chunks"]):
if i >= chunks_num:
break
doc2chunks[ck["docnm_kwd"]].append(ck["content_with_weight"])
knowledges = []
for nm, chunks in doc2chunks.items():
txt = f"Document: {nm} \nContains the following relevant fragments:\n"
for i, chunk in enumerate(chunks, 1):
txt += f"{i}. {chunk}\n"
knowledges.append(txt)
return knowledges
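A quick illustration of the packing kb_prompt performs (the chunks are made up): fragments from the same document are grouped under one header, subject to the max_tokens budget:

kbinfos = {"chunks": [
    {"docnm_kwd": "handbook.pdf", "content_with_weight": "First fragment."},
    {"docnm_kwd": "handbook.pdf", "content_with_weight": "Second fragment."},
]}
print(kb_prompt(kbinfos, max_tokens=4096)[0])
# Document: handbook.pdf
# Contains the following relevant fragments:
# 1. First fragment.
# 2. Second fragment.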
def chat(dialog, messages, stream=True, **kwargs):
assert messages[-1]["role"] == "user", "The last content of this conversation is not from user."
chat_start_ts = timer()
# Get llm model name and model provider name
llm_id, model_provider = TenantLLMService.split_model_name_and_factory(dialog.llm_id)
    # Get llm model instance by model and provider name
llm = LLMService.query(llm_name=llm_id) if not model_provider else LLMService.query(llm_name=llm_id, fid=model_provider)
if not llm:
# Model name is provided by tenant, but not system built-in
llm = TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=llm_id) if not model_provider else \
TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=llm_id, llm_factory=model_provider)
if not llm:
raise LookupError("LLM(%s) not found" % dialog.llm_id)
max_tokens = 8192
else:
max_tokens = llm[0].max_tokens
check_llm_ts = timer()
kbs = KnowledgebaseService.get_by_ids(dialog.kb_ids)
embedding_list = list(set([kb.embd_id for kb in kbs]))
if len(embedding_list) != 1:
yield {"answer": "**ERROR**: Knowledge bases use different embedding models.", "reference": []}
return {"answer": "**ERROR**: Knowledge bases use different embedding models.", "reference": []}
embedding_model_name = embedding_list[0]
is_knowledge_graph = all([kb.parser_id == ParserType.KG for kb in kbs])
retriever = settings.retrievaler if not is_knowledge_graph else settings.kg_retrievaler
questions = [m["content"] for m in messages if m["role"] == "user"][-3:]
attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else None
@ -163,15 +177,21 @@ def chat(dialog, messages, stream=True, **kwargs):
if "doc_ids" in m:
attachments.extend(m["doc_ids"])
create_retriever_ts = timer()
embd_mdl = LLMBundle(dialog.tenant_id, LLMType.EMBEDDING, embedding_model_name)
if not embd_mdl:
raise LookupError("Embedding model(%s) not found" % embd_nms[0])
raise LookupError("Embedding model(%s) not found" % embedding_model_name)
bind_embedding_ts = timer()
if llm_id2llm_type(dialog.llm_id) == "image2text":
chat_mdl = LLMBundle(dialog.tenant_id, LLMType.IMAGE2TEXT, dialog.llm_id)
else:
chat_mdl = LLMBundle(dialog.tenant_id, LLMType.CHAT, dialog.llm_id)
bind_llm_ts = timer()
prompt_config = dialog.prompt_config
field_map = KnowledgebaseService.get_field_map(dialog.kb_ids)
tts_mdl = None
@ -198,32 +218,35 @@ def chat(dialog, messages, stream=True, **kwargs):
questions = [full_question(dialog.tenant_id, dialog.llm_id, messages)]
else:
questions = questions[-1:]
refine_question_ts = timer()
rerank_mdl = None
if dialog.rerank_id:
rerank_mdl = LLMBundle(dialog.tenant_id, LLMType.RERANK, dialog.rerank_id)
for _ in range(len(questions) // 2):
questions.append(questions[-1])
bind_reranker_ts = timer()
generate_keyword_ts = bind_reranker_ts
if "knowledge" not in [p["key"] for p in prompt_config["parameters"]]:
kbinfos = {"total": 0, "chunks": [], "doc_aggs": []}
else:
if prompt_config.get("keyword", False):
questions[-1] += keyword_extraction(chat_mdl, questions[-1])
generate_keyword_ts = timer()
tenant_ids = list(set([kb.tenant_id for kb in kbs]))
kbinfos = retr.retrieval(" ".join(questions), embd_mdl, tenant_ids, dialog.kb_ids, 1, dialog.top_n,
dialog.similarity_threshold,
dialog.vector_similarity_weight,
doc_ids=attachments,
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
kbinfos = retriever.retrieval(" ".join(questions), embd_mdl, tenant_ids, dialog.kb_ids, 1, dialog.top_n,
dialog.similarity_threshold,
dialog.vector_similarity_weight,
doc_ids=attachments,
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
retrieval_ts = timer()
knowledges = kb_prompt(kbinfos, max_tokens)
logging.debug(
"{}->{}".format(" ".join(questions), "\n->".join(knowledges)))
if not knowledges and prompt_config.get("empty_response"):
empty_res = prompt_config["empty_response"]
@ -247,21 +270,25 @@ def chat(dialog, messages, stream=True, **kwargs):
max_tokens - used_token_count)
def decorate_answer(answer):
nonlocal prompt_config, knowledges, kwargs, kbinfos, prompt, retrieval_ts
finish_chat_ts = timer()
refs = []
if knowledges and (prompt_config.get("quote", True) and kwargs.get("quote", True)):
answer, idx = retriever.insert_citations(answer,
[ck["content_ltks"]
for ck in kbinfos["chunks"]],
[ck["vector"]
for ck in kbinfos["chunks"]],
embd_mdl,
tkweight=1 - dialog.vector_similarity_weight,
vtweight=dialog.vector_similarity_weight)
idx = set([kbinfos["chunks"][int(i)]["doc_id"] for i in idx])
recall_docs = [
d for d in kbinfos["doc_aggs"] if d["doc_id"] in idx]
if not recall_docs: recall_docs = kbinfos["doc_aggs"]
if not recall_docs:
recall_docs = kbinfos["doc_aggs"]
kbinfos["doc_aggs"] = recall_docs
refs = deepcopy(kbinfos)
@ -270,11 +297,21 @@ def chat(dialog, messages, stream=True, **kwargs):
del c["vector"]
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
done_tm = timer()
prompt += "\n\n### Elapsed\n - Refine Question: %.1f ms\n - Keywords: %.1f ms\n - Retrieval: %.1f ms\n - LLM: %.1f ms" % (
(refineQ_tm - st) * 1000, (keyword_tm - refineQ_tm) * 1000, (retrieval_tm - keyword_tm) * 1000,
(done_tm - retrieval_tm) * 1000)
answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
finish_chat_ts = timer()
total_time_cost = (finish_chat_ts - chat_start_ts) * 1000
check_llm_time_cost = (check_llm_ts - chat_start_ts) * 1000
create_retriever_time_cost = (create_retriever_ts - check_llm_ts) * 1000
bind_embedding_time_cost = (bind_embedding_ts - create_retriever_ts) * 1000
bind_llm_time_cost = (bind_llm_ts - bind_embedding_ts) * 1000
refine_question_time_cost = (refine_question_ts - bind_llm_ts) * 1000
bind_reranker_time_cost = (bind_reranker_ts - refine_question_ts) * 1000
generate_keyword_time_cost = (generate_keyword_ts - bind_reranker_ts) * 1000
retrieval_time_cost = (retrieval_ts - generate_keyword_ts) * 1000
generate_result_time_cost = (finish_chat_ts - retrieval_ts) * 1000
prompt = f"{prompt}\n\n - Total: {total_time_cost:.1f}ms\n - Check LLM: {check_llm_time_cost:.1f}ms\n - Create retriever: {create_retriever_time_cost:.1f}ms\n - Bind embedding: {bind_embedding_time_cost:.1f}ms\n - Bind LLM: {bind_llm_time_cost:.1f}ms\n - Tune question: {refine_question_time_cost:.1f}ms\n - Bind reranker: {bind_reranker_time_cost:.1f}ms\n - Generate keyword: {generate_keyword_time_cost:.1f}ms\n - Retrieval: {retrieval_time_cost:.1f}ms\n - Generate answer: {generate_result_time_cost:.1f}ms"
return {"answer": answer, "reference": refs, "prompt": prompt}
if stream:
@ -301,15 +338,15 @@ def chat(dialog, messages, stream=True, **kwargs):
def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
sys_prompt = "你是一个DBA。你需要这对以下表的字段结构根据用户的问题列表写出最后一个问题对应的SQL。"
user_promt = """
表名:{}
数据库表字段说明如下:
sys_prompt = "You are a Database Administrator. You need to check the fields of the following tables based on the user's list of questions and write the SQL corresponding to the last question."
user_prompt = """
Table name: {};
Table of database fields are as follows:
{}
问题如下:
Question are as follows:
{}
请写出SQL, 且只要SQL不要有其他说明及文字。
Please write the SQL, only SQL, without any other explanations or text.
""".format(
index_name(tenant_id),
"\n".join([f"{k}: {v}" for k, v in field_map.items()]),
@ -318,10 +355,10 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
tried_times = 0
def get_table():
nonlocal sys_prompt, user_prompt, question, tried_times
sql = chat_mdl.chat(sys_prompt, [{"role": "user", "content": user_prompt}], {
"temperature": 0.06})
logging.debug(f"{question} ==> {user_promt} get SQL: {sql}")
logging.debug(f"{question} ==> {user_prompt} get SQL: {sql}")
sql = re.sub(r"[\r\n]+", " ", sql.lower())
sql = re.sub(r".*select ", "select ", sql.lower())
sql = re.sub(r" +", " ", sql)
@ -349,21 +386,23 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
if tbl is None:
return None
if tbl.get("error") and tried_times <= 2:
user_promt = """
表名:{}
数据库表字段说明如下:
user_prompt = """
Table name: {};
Table of database fields are as follows:
{}
Question are as follows:
{}
Please write the SQL, only SQL, without any other explanations or text.
The SQL error you provided last time is as follows:
{}
问题如下:
Error issued by database as follows:
{}
你上一次给出的错误SQL如下
{}
后台报错如下:
{}
请纠正SQL中的错误再写一遍且只要SQL不要有其他说明及文字。
Please correct the error and write SQL again, only SQL, without any other explanations or text.
""".format(
index_name(tenant_id),
"\n".join([f"{k}: {v}" for k, v in field_map.items()]),
@ -378,21 +417,21 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
docid_idx = set([ii for ii, c in enumerate(
tbl["columns"]) if c["name"] == "doc_id"])
doc_name_idx = set([ii for ii, c in enumerate(
tbl["columns"]) if c["name"] == "docnm_kwd"])
column_idx = [ii for ii in range(
len(tbl["columns"])) if ii not in (docid_idx | doc_name_idx)]
# compose Markdown table
columns = "|" + "|".join([re.sub(r"(/.*|[^]+)", "", field_map.get(tbl["columns"][i]["name"],
tbl["columns"][i]["name"])) for i in
column_idx]) + ("|Source|" if docid_idx and docid_idx else "|")
line = "|" + "|".join(["------" for _ in range(len(clmn_idx))]) + \
line = "|" + "|".join(["------" for _ in range(len(column_idx))]) + \
("|------|" if docid_idx and docid_idx else "")
rows = ["|" +
"|".join([rmSpace(str(r[i])) for i in clmn_idx]).replace("None", " ") +
"|".join([rmSpace(str(r[i])) for i in column_idx]).replace("None", " ") +
"|" for r in tbl["rows"]]
rows = [r for r in rows if re.sub(r"[ |]+", "", r)]
if quota:
@ -401,24 +440,24 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
rows = "\n".join([r + f" ##{ii}$$ |" for ii, r in enumerate(rows)])
rows = re.sub(r"T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+Z)?\|", "|", rows)
if not docid_idx or not doc_name_idx:
logging.warning("SQL missing field: " + sql)
return {
"answer": "\n".join([clmns, line, rows]),
"answer": "\n".join([columns, line, rows]),
"reference": {"chunks": [], "doc_aggs": []},
"prompt": sys_prompt
}
docid_idx = list(docid_idx)[0]
doc_name_idx = list(doc_name_idx)[0]
doc_aggs = {}
for r in tbl["rows"]:
if r[docid_idx] not in doc_aggs:
doc_aggs[r[docid_idx]] = {"doc_name": r[docnm_idx], "count": 0}
doc_aggs[r[docid_idx]] = {"doc_name": r[doc_name_idx], "count": 0}
doc_aggs[r[docid_idx]]["count"] += 1
return {
"answer": "\n".join([clmns, line, rows]),
"reference": {"chunks": [{"doc_id": r[docid_idx], "docnm_kwd": r[docnm_idx]} for r in tbl["rows"]],
"answer": "\n".join([columns, line, rows]),
"reference": {"chunks": [{"doc_id": r[docid_idx], "docnm_kwd": r[doc_name_idx]} for r in tbl["rows"]],
"doc_aggs": [{"doc_id": did, "doc_name": d["doc_name"], "count": d["count"]} for did, d in
doc_aggs.items()]},
"prompt": sys_prompt
@ -437,13 +476,15 @@ def relevant(tenant_id, llm_id, question, contents: list):
Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.
No other words needed except 'yes' or 'no'.
"""
if not contents:
return False
contents = "Documents: \n" + " - ".join(contents)
contents = f"Question: {question}\n" + contents
if num_tokens_from_string(contents) >= chat_mdl.max_length - 4:
contents = encoder.decode(encoder.encode(contents)[:chat_mdl.max_length - 4])
ans = chat_mdl.chat(prompt, [{"role": "user", "content": contents}], {"temperature": 0.01})
if ans.lower().find("yes") >= 0: return True
if ans.lower().find("yes") >= 0:
return True
return False
@ -485,8 +526,10 @@ Requirements:
]
_, msg = message_fit_in(msg, chat_mdl.max_length)
kwd = chat_mdl.chat(prompt, msg[1:], {"temperature": 0.2})
if isinstance(kwd, tuple):
kwd = kwd[0]
if kwd.find("**ERROR**") >= 0:
return ""
return kwd
@ -512,8 +555,10 @@ Requirements:
]
_, msg = message_fit_in(msg, chat_mdl.max_length)
kwd = chat_mdl.chat(prompt, msg[1:], {"temperature": 0.2})
if isinstance(kwd, tuple):
kwd = kwd[0]
if kwd.find("**ERROR**") >= 0:
return ""
return kwd
@ -524,7 +569,8 @@ def full_question(tenant_id, llm_id, messages):
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, llm_id)
conv = []
for m in messages:
if m["role"] not in ["user", "assistant"]: continue
if m["role"] not in ["user", "assistant"]:
continue
conv.append("{}: {}".format(m["role"].upper(), m["content"]))
conv = "\n".join(conv)
today = datetime.date.today().isoformat()
@ -585,7 +631,8 @@ Output: What's the weather in Rochester on {tomorrow}?
def tts(tts_mdl, text):
if not tts_mdl or not text:
return
bin = b""
for chunk in tts_mdl.tts(text):
bin += chunk
@ -594,26 +641,17 @@ def tts(tts_mdl, text):
def ask(question, kb_ids, tenant_id):
kbs = KnowledgebaseService.get_by_ids(kb_ids)
embedding_list = list(set([kb.embd_id for kb in kbs]))
is_knowledge_graph = all([kb.parser_id == ParserType.KG for kb in kbs])
retriever = settings.retrievaler if not is_knowledge_graph else settings.kg_retrievaler
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embedding_list[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT)
max_tokens = chat_mdl.max_length
tenant_ids = list(set([kb.tenant_id for kb in kbs]))
kbinfos = retriever.retrieval(question, embd_mdl, tenant_ids, kb_ids, 1, 12, 0.1, 0.3, aggs=False)
knowledges = kb_prompt(kbinfos, max_tokens)
prompt = """
Role: You're a smart assistant. Your name is Miss R.
Task: Summarize the information from knowledge bases and answer user's question.
@ -623,29 +661,30 @@ def ask(question, kb_ids, tenant_id):
- Answer with markdown format text.
- Answer in language of user's question.
- DO NOT make things up, especially for numbers.
### Information from knowledge bases
%s
The above is information from knowledge bases.
"""%"\n".join(knowledges)
""" % "\n".join(knowledges)
msg = [{"role": "user", "content": question}]
def decorate_answer(answer):
nonlocal knowledges, kbinfos, prompt
answer, idx = retriever.insert_citations(answer,
[ck["content_ltks"]
for ck in kbinfos["chunks"]],
[ck["vector"]
for ck in kbinfos["chunks"]],
embd_mdl,
tkweight=0.7,
vtweight=0.3)
idx = set([kbinfos["chunks"][int(i)]["doc_id"] for i in idx])
recall_docs = [
d for d in kbinfos["doc_aggs"] if d["doc_id"] in idx]
if not recall_docs: recall_docs = kbinfos["doc_aggs"]
if not recall_docs:
recall_docs = kbinfos["doc_aggs"]
kbinfos["doc_aggs"] = recall_docs
refs = deepcopy(kbinfos)
for c in refs["chunks"]:
@ -661,4 +700,3 @@ def ask(question, kb_ids, tenant_id):
answer = ans
yield {"answer": answer, "reference": {}}
yield decorate_answer(answer)
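The rewrite above replaces ask()'s hand-rolled token budgeting with the shared kb_prompt helper. A minimal sketch of the budgeting it subsumes, assuming num_tokens_from_string is the same counter the removed loop used (the real kb_prompt may also add headers or other formatting):

def kb_prompt_sketch(kbinfos, max_tokens):
    # Keep whole chunks until ~97% of the context budget is spent,
    # mirroring the loop this commit deleted from ask().
    knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
    kept, used = [], 0
    for chunk in knowledges:
        used += num_tokens_from_string(chunk)
        if used > max_tokens * 0.97:
            break
        kept.append(chunk)
    return kept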


@ -14,7 +14,7 @@
# limitations under the License.
#
import logging
import hashlib
import xxhash
import json
import random
import re
@ -282,6 +282,31 @@ class DocumentService(CommonService):
return
return docs[0]["embd_id"]
@classmethod
@DB.connection_context()
def get_chunking_config(cls, doc_id):
configs = (
cls.model.select(
cls.model.id,
cls.model.kb_id,
cls.model.parser_id,
cls.model.parser_config,
Knowledgebase.language,
Knowledgebase.embd_id,
Tenant.id.alias("tenant_id"),
Tenant.img2txt_id,
Tenant.asr_id,
Tenant.llm_id,
)
.join(Knowledgebase, on=(cls.model.kb_id == Knowledgebase.id))
.join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id))
.where(cls.model.id == doc_id)
)
configs = configs.dicts()
if not configs:
return None
return configs[0]
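A hedged usage sketch of the new helper; the document id is hypothetical:

config = DocumentService.get_chunking_config("b2f1d0c9a8e7")  # hypothetical doc id
if config is None:
    raise LookupError("document not found")
# One dict merges document, knowledgebase and tenant settings:
parser = config["parser_id"]      # e.g. "naive" or "table"
language = config["language"]     # from the knowledgebase
llm = config["llm_id"]            # from the owning tenant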
@classmethod
@DB.connection_context()
def get_doc_id_by_doc_name(cls, doc_name):
@ -319,6 +344,8 @@ class DocumentService(CommonService):
old[k] = v
dfs_update(d.parser_config, config)
if not config.get("raptor") and d.parser_config.get("raptor"):
del d.parser_config["raptor"]
cls.update_by_id(id, {"parser_config": d.parser_config})
@classmethod
@ -407,17 +434,25 @@ class DocumentService(CommonService):
def queue_raptor_tasks(doc):
chunking_config = DocumentService.get_chunking_config(doc["id"])
hasher = xxhash.xxh64()
for field in sorted(chunking_config.keys()):
hasher.update(str(chunking_config[field]).encode("utf-8"))
def new_task():
nonlocal doc
return {
"id": get_uuid(),
"doc_id": doc["id"],
"from_page": 0,
"to_page": -1,
"from_page": 100000000,
"to_page": 100000000,
"progress_msg": "Start to do RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval)."
}
task = new_task()
for field in ["doc_id", "from_page", "to_page"]:
hasher.update(str(task.get(field, "")).encode("utf-8"))
task["digest"] = hasher.hexdigest()
bulk_insert_into_db(Task, [task], True)
task["type"] = "raptor"
assert REDIS_CONN.queue_product(SVR_QUEUE_NAME, message=task), "Can't access Redis. Please check the Redis' status."
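The digest introduced here makes a task's identity reproducible: identical chunking settings and page span hash to the same 64-bit value, which is what the chunk-reuse logic later in this diff compares. Condensed into a standalone helper:

import xxhash

def compute_task_digest(chunking_config: dict, task: dict) -> str:
    hasher = xxhash.xxh64()
    # configuration first, in a stable (sorted) field order ...
    for field in sorted(chunking_config.keys()):
        hasher.update(str(chunking_config[field]).encode("utf-8"))
    # ... then the fields that identify this particular task
    for field in ["doc_id", "from_page", "to_page"]:
        hasher.update(str(task.get(field, "")).encode("utf-8"))
    return hasher.hexdigest()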
@ -425,11 +460,12 @@ def queue_raptor_tasks(doc):
def doc_upload_and_parse(conversation_id, file_objs, user_id):
from rag.app import presentation, picture, naive, audio, email
from api.db.services.dialog_service import ConversationService, DialogService
from api.db.services.dialog_service import DialogService
from api.db.services.file_service import FileService
from api.db.services.llm_service import LLMBundle
from api.db.services.user_service import TenantService
from api.db.services.api_service import API4ConversationService
from api.db.services.conversation_service import ConversationService
e, conv = ConversationService.get_by_id(conversation_id)
if not e:
@ -482,10 +518,7 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
for ck in th.result():
d = deepcopy(doc)
d.update(ck)
md5 = hashlib.md5()
md5.update((ck["content_with_weight"] +
str(d["doc_id"])).encode("utf-8"))
d["id"] = md5.hexdigest()
d["id"] = xxhash.xxh64((ck["content_with_weight"] + str(d["doc_id"])).encode("utf-8")).hexdigest()
d["create_time"] = str(datetime.now()).replace("T", " ")[:19]
d["create_timestamp_flt"] = datetime.now().timestamp()
if not d.get("image"):
@ -532,7 +565,8 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
try:
mind_map = json.dumps(mindmap([c["content_with_weight"] for c in docs if c["doc_id"] == doc_id]).output,
ensure_ascii=False, indent=2)
if len(mind_map) < 32: raise Exception("Few content: " + mind_map)
if len(mind_map) < 32:
raise Exception("Few content: " + mind_map)
cks.append({
"id": get_uuid(),
"doc_id": doc_id,


@ -20,7 +20,7 @@ from api.db.db_models import DB
from api.db.db_models import File, File2Document
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.utils import current_timestamp, datetime_format, get_uuid
from api.utils import current_timestamp, datetime_format
class File2DocumentService(CommonService):
@ -63,7 +63,7 @@ class File2DocumentService(CommonService):
def update_by_file_id(cls, file_id, obj):
obj["update_time"] = current_timestamp()
obj["update_date"] = datetime_format(datetime.now())
num = cls.model.update(obj).where(cls.model.id == file_id).execute()
# num = cls.model.update(obj).where(cls.model.id == file_id).execute()
e, obj = cls.get_by_id(cls.model.id)
return obj


@ -85,7 +85,8 @@ class FileService(CommonService):
.join(Document, on=(File2Document.document_id == Document.id))
.join(Knowledgebase, on=(Knowledgebase.id == Document.kb_id))
.where(cls.model.id == file_id))
if not kbs: return []
if not kbs:
return []
kbs_info_list = []
for kb in list(kbs.dicts()):
kbs_info_list.append({"kb_id": kb['id'], "kb_name": kb['name']})
@ -304,7 +305,8 @@ class FileService(CommonService):
@classmethod
@DB.connection_context()
def add_file_from_kb(cls, doc, kb_folder_id, tenant_id):
for _ in File2DocumentService.get_by_document_id(doc["id"]): return
for _ in File2DocumentService.get_by_document_id(doc["id"]):
return
file = {
"id": get_uuid(),
"parent_id": kb_folder_id,
@ -343,6 +345,8 @@ class FileService(CommonService):
MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(kb.tenant_id) >= MAX_FILE_NUM_PER_USER:
raise RuntimeError("Exceed the maximum file number of a free user!")
if len(file.filename) >= 128:
raise RuntimeError("Exceed the maximum length of file name!")
filename = duplicate_name(
DocumentService.query,


@ -104,7 +104,8 @@ class KnowledgebaseService(CommonService):
cls.model.token_num,
cls.model.chunk_num,
cls.model.parser_id,
cls.model.parser_config]
cls.model.parser_config,
cls.model.pagerank]
kbs = cls.model.select(*fields).join(Tenant, on=(
(Tenant.id == cls.model.tenant_id) & (Tenant.status == StatusEnum.VALID.value))).where(
(cls.model.id == kb_id),


@ -13,8 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import logging
import os
from api.db.services.user_service import TenantService
from api.utils.file_utils import get_project_base_directory
from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel, Seq2txtModel, TTSModel
from api.db import LLMType
from api.db.db_models import DB
@ -36,11 +40,11 @@ class TenantLLMService(CommonService):
@classmethod
@DB.connection_context()
def get_api_key(cls, tenant_id, model_name):
arr = model_name.split("@")
if len(arr) < 2:
objs = cls.query(tenant_id=tenant_id, llm_name=model_name)
mdlnm, fid = TenantLLMService.split_model_name_and_factory(model_name)
if not fid:
objs = cls.query(tenant_id=tenant_id, llm_name=mdlnm)
else:
objs = cls.query(tenant_id=tenant_id, llm_name=arr[0], llm_factory=arr[1])
objs = cls.query(tenant_id=tenant_id, llm_name=mdlnm, llm_factory=fid)
if not objs:
return
return objs[0]
@ -61,6 +65,25 @@ class TenantLLMService(CommonService):
return list(objs)
@staticmethod
def split_model_name_and_factory(model_name):
arr = model_name.split("@")
if len(arr) < 2:
return model_name, None
if len(arr) > 2:
return "@".join(arr[0:-1]), arr[-1]
# model name must be xxx@yyy
try:
model_factories = json.load(open(os.path.join(get_project_base_directory(), "conf/llm_factories.json"), "r"))["factory_llm_infos"]
model_providers = set([f["name"] for f in model_factories])
if arr[-1] not in model_providers:
return model_name, None
return arr[0], arr[-1]
except Exception as e:
logging.exception(f"TenantLLMService.split_model_name_and_factory got exception: {e}")
return model_name, None
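Example behavior of the helper (factory suffixes are checked against conf/llm_factories.json, so the outcomes below assume BAAI is listed there, which the fallback list later in this file suggests):

TenantLLMService.split_model_name_and_factory("bge-large-zh-v1.5@BAAI")
# -> ("bge-large-zh-v1.5", "BAAI")        known factory suffix is split off
TenantLLMService.split_model_name_and_factory("glm-4")
# -> ("glm-4", None)                      no "@" at all
TenantLLMService.split_model_name_and_factory("my-model@UnknownVendor")
# -> ("my-model@UnknownVendor", None)     unknown suffix, name kept intact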
@classmethod
@DB.connection_context()
def model_instance(cls, tenant_id, llm_type,
@ -85,19 +108,18 @@ class TenantLLMService(CommonService):
assert False, "LLM type error"
model_config = cls.get_api_key(tenant_id, mdlnm)
tmp = mdlnm.split("@")
fid = None if len(tmp) < 2 else tmp[1]
mdlnm = tmp[0]
if model_config: model_config = model_config.to_dict()
mdlnm, fid = TenantLLMService.split_model_name_and_factory(mdlnm)
if model_config:
model_config = model_config.to_dict()
if not model_config:
if llm_type in [LLMType.EMBEDDING, LLMType.RERANK]:
llm = LLMService.query(llm_name=mdlnm) if not fid else LLMService.query(llm_name=mdlnm, fid=fid)
if llm and llm[0].fid in ["Youdao", "FastEmbed", "BAAI"]:
model_config = {"llm_factory": llm[0].fid, "api_key":"", "llm_name": mdlnm, "api_base": ""}
model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": mdlnm, "api_base": ""}
if not model_config:
if mdlnm == "flag-embedding":
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "",
"llm_name": llm_name, "api_base": ""}
"llm_name": llm_name, "api_base": ""}
else:
if not mdlnm:
raise LookupError(f"Type of {llm_type} model is not set.")
@ -168,16 +190,23 @@ class TenantLLMService(CommonService):
else:
assert False, "LLM type error"
llm_name = mdlnm.split("@")[0] if "@" in mdlnm else mdlnm
llm_name, llm_factory = TenantLLMService.split_model_name_and_factory(mdlnm)
num = 0
try:
for u in cls.query(tenant_id=tenant_id, llm_name=llm_name):
num += cls.model.update(used_tokens=u.used_tokens + used_tokens)\
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == llm_name)\
if llm_factory:
tenant_llms = cls.query(tenant_id=tenant_id, llm_name=llm_name, llm_factory=llm_factory)
else:
tenant_llms = cls.query(tenant_id=tenant_id, llm_name=llm_name)
if not tenant_llms:
return num
else:
tenant_llm = tenant_llms[0]
num = cls.model.update(used_tokens=tenant_llm.used_tokens + used_tokens) \
.where(cls.model.tenant_id == tenant_id, cls.model.llm_factory == tenant_llm.llm_factory, cls.model.llm_name == llm_name) \
.execute()
except Exception as e:
pass
except Exception:
logging.exception("TenantLLMService.increase_usage got exception")
return num
@classmethod
@ -204,14 +233,14 @@ class LLMBundle(object):
for lm in LLMService.query(llm_name=llm_name):
self.max_length = lm.max_tokens
break
def encode(self, texts: list, batch_size=32):
emd, used_tokens = self.mdl.encode(texts, batch_size)
def encode(self, texts: list):
embeddings, used_tokens = self.mdl.encode(texts)
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens):
logging.error(
"LLMBundle.encode can't update token usage for {}/EMBEDDING used_tokens: {}".format(self.tenant_id, used_tokens))
return emd, used_tokens
return embeddings, used_tokens
def encode_queries(self, query: str):
emd, used_tokens = self.mdl.encode_queries(query)
@ -247,20 +276,21 @@ class LLMBundle(object):
def tts(self, text):
for chunk in self.mdl.tts(text):
if isinstance(chunk,int):
if isinstance(chunk, int):
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, chunk, self.llm_name):
logging.error(
"LLMBundle.tts can't update token usage for {}/TTS".format(self.tenant_id))
self.tenant_id, self.llm_type, chunk, self.llm_name):
logging.error(
"LLMBundle.tts can't update token usage for {}/TTS".format(self.tenant_id))
return
yield chunk
yield chunk
def chat(self, system, history, gen_conf):
txt, used_tokens = self.mdl.chat(system, history, gen_conf)
if isinstance(txt, int) and not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens, self.llm_name):
logging.error(
"LLMBundle.chat can't update token usage for {}/CHAT llm_name: {}, used_tokens: {}".format(self.tenant_id, self.llm_name, used_tokens))
"LLMBundle.chat can't update token usage for {}/CHAT llm_name: {}, used_tokens: {}".format(self.tenant_id, self.llm_name,
used_tokens))
return txt
def chat_streamly(self, system, history, gen_conf):
@ -269,6 +299,7 @@ class LLMBundle(object):
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, txt, self.llm_name):
logging.error(
"LLMBundle.chat_streamly can't update token usage for {}/CHAT llm_name: {}, content: {}".format(self.tenant_id, self.llm_name, txt))
"LLMBundle.chat_streamly can't update token usage for {}/CHAT llm_name: {}, content: {}".format(self.tenant_id, self.llm_name,
txt))
return
yield txt


@ -15,6 +15,9 @@
#
import os
import random
import xxhash
import bisect
from datetime import datetime
from api.db.db_utils import bulk_insert_into_db
from deepdoc.parser import PdfParser
@ -29,6 +32,18 @@ from deepdoc.parser.excel_parser import RAGFlowExcelParser
from rag.settings import SVR_QUEUE_NAME
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.redis_conn import REDIS_CONN
from api import settings
from rag.nlp import search
def trim_header_by_lines(text: str, max_length) -> str:
len_text = len(text)
if len_text <= max_length:
return text
for i in range(len_text):
if text[i] == '\n' and len_text - i <= max_length:
return text[i + 1:]
return text
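trim_header_by_lines drops the oldest lines from the head of a message once it exceeds max_length characters, always cutting just after a newline so no line is split in half. For example:

msg = "step 1\nstep 2\nstep 3"           # 20 characters
trim_header_by_lines(msg, 14)            # -> "step 2\nstep 3" (oldest line trimmed)
trim_header_by_lines(msg, 100)           # -> unchanged, already within the limit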
class TaskService(CommonService):
@ -53,93 +68,144 @@ class TaskService(CommonService):
Knowledgebase.tenant_id,
Knowledgebase.language,
Knowledgebase.embd_id,
Knowledgebase.pagerank,
Tenant.img2txt_id,
Tenant.asr_id,
Tenant.llm_id,
cls.model.update_time]
docs = cls.model.select(*fields) \
.join(Document, on=(cls.model.doc_id == Document.id)) \
.join(Knowledgebase, on=(Document.kb_id == Knowledgebase.id)) \
.join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id)) \
.where(cls.model.id == task_id)
cls.model.update_time,
]
docs = (
cls.model.select(*fields)
.join(Document, on=(cls.model.doc_id == Document.id))
.join(Knowledgebase, on=(Document.kb_id == Knowledgebase.id))
.join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id))
.where(cls.model.id == task_id)
)
docs = list(docs.dicts())
if not docs: return None
if not docs:
return None
msg = "\nTask has been received."
prog = random.random() / 10.
msg = f"\n{datetime.now().strftime('%H:%M:%S')} Task has been received."
prog = random.random() / 10.0
if docs[0]["retry_count"] >= 3:
msg = "\nERROR: Task is abandoned after 3 times attempts."
prog = -1
cls.model.update(progress_msg=cls.model.progress_msg + msg,
progress=prog,
retry_count=docs[0]["retry_count"]+1
).where(
cls.model.id == docs[0]["id"]).execute()
cls.model.update(
progress_msg=cls.model.progress_msg + msg,
progress=prog,
retry_count=docs[0]["retry_count"] + 1,
).where(cls.model.id == docs[0]["id"]).execute()
if docs[0]["retry_count"] >= 3: return None
if docs[0]["retry_count"] >= 3:
return None
return docs[0]
@classmethod
@DB.connection_context()
def get_tasks(cls, doc_id: str):
fields = [
cls.model.id,
cls.model.from_page,
cls.model.progress,
cls.model.digest,
cls.model.chunk_ids,
]
tasks = (
cls.model.select(*fields).order_by(cls.model.from_page.asc(), cls.model.create_time.desc())
.where(cls.model.doc_id == doc_id)
)
tasks = list(tasks.dicts())
if not tasks:
return None
return tasks
@classmethod
@DB.connection_context()
def update_chunk_ids(cls, id: str, chunk_ids: str):
cls.model.update(chunk_ids=chunk_ids).where(cls.model.id == id).execute()
@classmethod
@DB.connection_context()
def get_ongoing_doc_name(cls):
with DB.lock("get_task", -1):
docs = cls.model.select(*[Document.id, Document.kb_id, Document.location, File.parent_id]) \
.join(Document, on=(cls.model.doc_id == Document.id)) \
.join(File2Document, on=(File2Document.document_id == Document.id), join_type=JOIN.LEFT_OUTER) \
.join(File, on=(File2Document.file_id == File.id), join_type=JOIN.LEFT_OUTER) \
.where(
docs = (
cls.model.select(
*[Document.id, Document.kb_id, Document.location, File.parent_id]
)
.join(Document, on=(cls.model.doc_id == Document.id))
.join(
File2Document,
on=(File2Document.document_id == Document.id),
join_type=JOIN.LEFT_OUTER,
)
.join(
File,
on=(File2Document.file_id == File.id),
join_type=JOIN.LEFT_OUTER,
)
.where(
Document.status == StatusEnum.VALID.value,
Document.run == TaskStatus.RUNNING.value,
~(Document.type == FileType.VIRTUAL.value),
cls.model.progress < 1,
cls.model.create_time >= current_timestamp() - 1000 * 600
cls.model.create_time >= current_timestamp() - 1000 * 600,
)
)
docs = list(docs.dicts())
if not docs: return []
if not docs:
return []
return list(set([(d["parent_id"] if d["parent_id"] else d["kb_id"], d["location"]) for d in docs]))
return list(
set(
[
(
d["parent_id"] if d["parent_id"] else d["kb_id"],
d["location"],
)
for d in docs
]
)
)
@classmethod
@DB.connection_context()
def do_cancel(cls, id):
try:
task = cls.model.get_by_id(id)
_, doc = DocumentService.get_by_id(task.doc_id)
return doc.run == TaskStatus.CANCEL.value or doc.progress < 0
except Exception:
pass
return False
task = cls.model.get_by_id(id)
_, doc = DocumentService.get_by_id(task.doc_id)
return doc.run == TaskStatus.CANCEL.value or doc.progress < 0
@classmethod
@DB.connection_context()
def update_progress(cls, id, info):
if os.environ.get("MACOS"):
if info["progress_msg"]:
cls.model.update(progress_msg=cls.model.progress_msg + "\n" + info["progress_msg"]).where(
cls.model.id == id).execute()
task = cls.model.get_by_id(id)
progress_msg = trim_header_by_lines(task.progress_msg + "\n" + info["progress_msg"], 1000)
cls.model.update(progress_msg=progress_msg).where(cls.model.id == id).execute()
if "progress" in info:
cls.model.update(progress=info["progress"]).where(
cls.model.id == id).execute()
cls.model.id == id
).execute()
return
with DB.lock("update_progress", -1):
if info["progress_msg"]:
cls.model.update(progress_msg=cls.model.progress_msg + "\n" + info["progress_msg"]).where(
cls.model.id == id).execute()
task = cls.model.get_by_id(id)
progress_msg = trim_header_by_lines(task.progress_msg + "\n" + info["progress_msg"], 1000)
cls.model.update(progress_msg=progress_msg).where(cls.model.id == id).execute()
if "progress" in info:
cls.model.update(progress=info["progress"]).where(
cls.model.id == id).execute()
cls.model.id == id
).execute()
def queue_tasks(doc: dict, bucket: str, name: str):
def new_task():
return {
"id": get_uuid(),
"doc_id": doc["id"]
}
tsks = []
return {"id": get_uuid(), "doc_id": doc["id"], "progress": 0.0, "from_page": 0, "to_page": 100000000}
parse_task_array = []
if doc["type"] == FileType.PDF.value:
file_bin = STORAGE_IMPL.get(bucket, name)
@ -159,7 +225,7 @@ def queue_tasks(doc: dict, bucket: str, name: str):
task = new_task()
task["from_page"] = p
task["to_page"] = min(p + page_size, e)
tsks.append(task)
parse_task_array.append(task)
elif doc["parser_id"] == "table":
file_bin = STORAGE_IMPL.get(bucket, name)
@ -168,12 +234,61 @@ def queue_tasks(doc: dict, bucket: str, name: str):
task = new_task()
task["from_page"] = i
task["to_page"] = min(i + 3000, rn)
tsks.append(task)
parse_task_array.append(task)
else:
tsks.append(new_task())
parse_task_array.append(new_task())
bulk_insert_into_db(Task, tsks, True)
chunking_config = DocumentService.get_chunking_config(doc["id"])
for task in parse_task_array:
hasher = xxhash.xxh64()
for field in sorted(chunking_config.keys()):
hasher.update(str(chunking_config[field]).encode("utf-8"))
for field in ["doc_id", "from_page", "to_page"]:
hasher.update(str(task.get(field, "")).encode("utf-8"))
task_digest = hasher.hexdigest()
task["digest"] = task_digest
task["progress"] = 0.0
prev_tasks = TaskService.get_tasks(doc["id"])
ck_num = 0
if prev_tasks:
for task in parse_task_array:
ck_num += reuse_prev_task_chunks(task, prev_tasks, chunking_config)
TaskService.filter_delete([Task.doc_id == doc["id"]])
chunk_ids = []
for task in prev_tasks:
if task["chunk_ids"]:
chunk_ids.extend(task["chunk_ids"].split())
if chunk_ids:
settings.docStoreConn.delete({"id": chunk_ids}, search.index_name(chunking_config["tenant_id"]),
chunking_config["kb_id"])
DocumentService.update_by_id(doc["id"], {"chunk_num": ck_num})
bulk_insert_into_db(Task, parse_task_array, True)
DocumentService.begin2parse(doc["id"])
for t in tsks:
assert REDIS_CONN.queue_product(SVR_QUEUE_NAME, message=t), "Can't access Redis. Please check the Redis' status."
unfinished_task_array = [task for task in parse_task_array if task["progress"] < 1.0]
for unfinished_task in unfinished_task_array:
assert REDIS_CONN.queue_product(
SVR_QUEUE_NAME, message=unfinished_task
), "Can't access Redis. Please check the Redis' status."
def reuse_prev_task_chunks(task: dict, prev_tasks: list[dict], chunking_config: dict):
idx = bisect.bisect_left(prev_tasks, (task.get("from_page", 0), task.get("digest", "")),
key=lambda x: (x.get("from_page", 0), x.get("digest", "")))
if idx >= len(prev_tasks):
return 0
prev_task = prev_tasks[idx]
if prev_task["progress"] < 1.0 or prev_task["digest"] != task["digest"] or not prev_task["chunk_ids"]:
return 0
task["chunk_ids"] = prev_task["chunk_ids"]
task["progress"] = 1.0
if "from_page" in task and "to_page" in task:
task["progress_msg"] = f"Page({task['from_page']}~{task['to_page']}): "
else:
task["progress_msg"] = ""
task["progress_msg"] += "reused previous task's chunks."
prev_task["chunk_ids"] = ""
return len(task["chunk_ids"].split())
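End to end, re-parsing a document now works like this: queue_tasks recomputes each task's digest, looks up a finished previous task at the same (from_page, digest) via bisect, and on a hit carries the old chunk ids over instead of re-chunking. A hedged happy-path sketch with made-up ids:

prev_tasks = [  # as returned by TaskService.get_tasks(), ordered by from_page
    {"from_page": 0, "digest": "9f2c41d7a0b3e6c5", "progress": 1.0, "chunk_ids": "ck1 ck2 ck3"},
]
task = {"from_page": 0, "to_page": 12, "digest": "9f2c41d7a0b3e6c5"}  # same config => same digest

reused = reuse_prev_task_chunks(task, prev_tasks, chunking_config={})
assert reused == 3                        # three chunks carried over
assert task["progress"] == 1.0            # so queue_tasks will not re-queue it
assert prev_tasks[0]["chunk_ids"] == ""   # claimed, so queue_tasks won't delete them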


@ -22,7 +22,7 @@ from api.db import UserTenantRole
from api.db.db_models import DB, UserTenant
from api.db.db_models import User, Tenant
from api.db.services.common_service import CommonService
from api.utils import get_uuid, get_format_time, current_timestamp, datetime_format
from api.utils import get_uuid, current_timestamp, datetime_format
from api.db import StatusEnum


@ -18,17 +18,10 @@
# from beartype.claw import beartype_all # <-- you didn't sign up for this
# beartype_all(conf=BeartypeConf(violation_type=UserWarning)) # <-- emit warnings from all code
import logging
from api.utils.log_utils import initRootLogger
initRootLogger("ragflow_server")
for module in ["pdfminer"]:
module_logger = logging.getLogger(module)
module_logger.setLevel(logging.WARNING)
for module in ["peewee"]:
module_logger = logging.getLogger(module)
module_logger.handlers.clear()
module_logger.propagate = True
import logging
import os
import signal
import sys
@ -47,6 +40,7 @@ from api.db.db_models import init_database_tables as init_web_db
from api.db.init_data import init_web_data
from api.versions import get_ragflow_version
from api.utils import show_configs
from rag.settings import print_rag_settings
def update_progress():
@ -75,6 +69,7 @@ if __name__ == '__main__':
)
show_configs()
settings.init_settings()
print_rag_settings()
# init db
init_web_db()


@ -163,9 +163,10 @@ def init_settings():
global DOC_ENGINE, docStoreConn, retrievaler, kg_retrievaler
DOC_ENGINE = os.environ.get('DOC_ENGINE', "elasticsearch")
if DOC_ENGINE == "elasticsearch":
lower_case_doc_engine = DOC_ENGINE.lower()
if lower_case_doc_engine == "elasticsearch":
docStoreConn = rag.utils.es_conn.ESConnection()
elif DOC_ENGINE == "infinity":
elif lower_case_doc_engine == "infinity":
docStoreConn = rag.utils.infinity_conn.InfinityConnection()
else:
raise Exception(f"Not supported doc engine: {DOC_ENGINE}")


@ -24,6 +24,7 @@ import time
import uuid
import requests
import logging
import copy
from enum import Enum, IntEnum
import importlib
from Cryptodome.PublicKey import RSA
@ -65,6 +66,10 @@ CONFIGS = read_config()
def show_configs():
msg = f"Current configs, from {conf_realpath(SERVICE_CONF)}:"
for k, v in CONFIGS.items():
if isinstance(v, dict):
if "password" in v:
v = copy.deepcopy(v)
v["password"] = "*" * 8
msg += f"\n\t{k}: {v}"
logging.info(msg)
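Nested password fields are now masked before the configuration is logged; the section name below is hypothetical:

CONFIGS = {"mysql": {"host": "localhost", "password": "s3cret"}}
show_configs()
# logs:  mysql: {'host': 'localhost', 'password': '********'}
# CONFIGS itself is untouched: the section is deep-copied before masking.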


@ -36,7 +36,6 @@ from werkzeug.http import HTTP_STATUS_CODES
from api.db.db_models import APIToken
from api import settings
from api import settings
from api.utils import CustomJSONEncoder, get_uuid
from api.utils import json_dumps
from api.constants import REQUEST_WAIT_SEC, REQUEST_MAX_WAIT_SEC
@ -174,6 +173,18 @@ def validate_request(*args, **kwargs):
return wrapper
def not_allowed_parameters(*params):
def decorator(f):
def wrapper(*args, **kwargs):
input_arguments = flask_request.json or flask_request.form.to_dict()
for param in params:
if param in input_arguments:
return get_json_result(
code=settings.RetCode.ARGUMENT_ERROR, message=f"Parameter {param} isn't allowed")
return f(*args, **kwargs)
return wrapper
return decorator
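A hedged usage sketch of the new decorator (the route and field names are illustrative):

@manager.route("/dialog/update", methods=["POST"])   # hypothetical endpoint
@not_allowed_parameters("id", "tenant_id")
def update_dialog():
    # Reaching this body means neither "id" nor "tenant_id" appeared in the
    # request's JSON/form payload; otherwise an ARGUMENT_ERROR was returned.
    ...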
def is_localhost(ip):
return ip in {'127.0.0.1', '::1', '[::1]', 'localhost'}
@ -272,7 +283,10 @@ def construct_error_response(e):
def token_required(func):
@wraps(func)
def decorated_function(*args, **kwargs):
authorization_list=flask_request.headers.get('Authorization').split()
authorization_str=flask_request.headers.get('Authorization')
if not authorization_str:
return get_json_result(data=False,message="`Authorization` can't be empty")
authorization_list=authorization_str.split()
if len(authorization_list) < 2:
return get_json_result(data=False,message="Please check your authorization format.")
token = authorization_list[1]


@ -28,7 +28,7 @@ def get_project_base_directory():
)
return PROJECT_BASE
def initRootLogger(logfile_basename: str, log_level: int = logging.INFO, log_format: str = "%(asctime)-15s %(levelname)-8s %(process)d %(message)s"):
def initRootLogger(logfile_basename: str, log_format: str = "%(asctime)-15s %(levelname)-8s %(process)d %(message)s"):
logger = logging.getLogger()
if logger.hasHandlers():
return
@ -36,19 +36,40 @@ def initRootLogger(logfile_basename: str, log_level: int = logging.INFO, log_for
log_path = os.path.abspath(os.path.join(get_project_base_directory(), "logs", f"{logfile_basename}.log"))
os.makedirs(os.path.dirname(log_path), exist_ok=True)
logger.setLevel(log_level)
formatter = logging.Formatter(log_format)
handler1 = RotatingFileHandler(log_path, maxBytes=10*1024*1024, backupCount=5)
handler1.setLevel(log_level)
handler1.setFormatter(formatter)
logger.addHandler(handler1)
handler2 = logging.StreamHandler()
handler2.setLevel(log_level)
handler2.setFormatter(formatter)
logger.addHandler(handler2)
logging.captureWarnings(True)
msg = f"{logfile_basename} log path: {log_path}"
LOG_LEVELS = os.environ.get("LOG_LEVELS", "")
pkg_levels = {}
for pkg_name_level in LOG_LEVELS.split(","):
terms = pkg_name_level.split("=")
if len(terms) != 2:
continue
pkg_name, pkg_level = terms[0], terms[1]
pkg_name = pkg_name.strip()
pkg_level = logging.getLevelName(pkg_level.strip().upper())
if not isinstance(pkg_level, int):
pkg_level = logging.INFO
pkg_levels[pkg_name] = logging.getLevelName(pkg_level)
for pkg_name in ['peewee', 'pdfminer']:
if pkg_name not in pkg_levels:
pkg_levels[pkg_name] = logging.getLevelName(logging.WARNING)
if 'root' not in pkg_levels:
pkg_levels['root'] = logging.getLevelName(logging.INFO)
for pkg_name, pkg_level in pkg_levels.items():
pkg_logger = logging.getLogger(pkg_name)
pkg_logger.setLevel(pkg_level)
msg = f"{logfile_basename} log path: {log_path}, log levels: {pkg_levels}"
logger.info(msg)
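Log verbosity is now driven by a LOG_LEVELS environment variable of comma-separated name=level pairs; unparseable levels fall back to INFO, and peewee/pdfminer default to WARNING. For example:

import logging
import os

os.environ["LOG_LEVELS"] = "peewee=DEBUG,api=WARNING"
initRootLogger("ragflow_server")
logging.getLogger("peewee").debug("now visible")    # raised to DEBUG
logging.getLogger("api").info("suppressed")         # capped at WARNING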


@ -45,5 +45,5 @@ try:
pool = Pool(processes=1)
thread = pool.apply_async(download_nltk_data)
binary = thread.get(timeout=60)
except Exception as e:
except Exception:
print('\x1b[6;37;41m WARNING \x1b[0m' + "Downloading NLTK data failure.", flush=True)


@ -42,28 +42,11 @@ def get_ragflow_version() -> str:
def get_closest_tag_and_count():
try:
# Get the current commit hash
commit_id = (
subprocess.check_output(["git", "rev-parse", "--short", "HEAD"])
version_info = (
subprocess.check_output(["git", "describe", "--tags", "--match=v*", "--first-parent", "--always"])
.strip()
.decode("utf-8")
)
# Get the closest tag
closest_tag = (
subprocess.check_output(["git", "describe", "--tags", "--abbrev=0"])
.strip()
.decode("utf-8")
)
# Get the commit count since the closest tag
process = subprocess.Popen(
["git", "rev-list", "--count", f"{closest_tag}..HEAD"],
stdout=subprocess.PIPE,
)
commits_count, _ = process.communicate()
commits_count = int(commits_count.strip())
if commits_count == 0:
return closest_tag
else:
return f"{commit_id}({closest_tag}~{commits_count})"
return version_info
except Exception:
return "unknown"


@ -5,22 +5,27 @@
"create_time": {"type": "varchar", "default": ""},
"create_timestamp_flt": {"type": "float", "default": 0.0},
"img_id": {"type": "varchar", "default": ""},
"docnm_kwd": {"type": "varchar", "default": ""},
"title_tks": {"type": "varchar", "default": ""},
"title_sm_tks": {"type": "varchar", "default": ""},
"name_kwd": {"type": "varchar", "default": ""},
"important_kwd": {"type": "varchar", "default": ""},
"important_tks": {"type": "varchar", "default": ""},
"docnm_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"title_tks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"title_sm_tks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"name_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"important_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"important_tks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"question_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"question_tks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"content_with_weight": {"type": "varchar", "default": ""},
"content_ltks": {"type": "varchar", "default": ""},
"content_sm_ltks": {"type": "varchar", "default": ""},
"page_num_list": {"type": "varchar", "default": ""},
"top_list": {"type": "varchar", "default": ""},
"position_list": {"type": "varchar", "default": ""},
"content_ltks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"content_sm_ltks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"authors_tks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"authors_sm_tks": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"page_num_int": {"type": "varchar", "default": ""},
"top_int": {"type": "varchar", "default": ""},
"position_int": {"type": "varchar", "default": ""},
"weight_int": {"type": "integer", "default": 0},
"weight_flt": {"type": "float", "default": 0.0},
"rank_int": {"type": "integer", "default": 0},
"available_int": {"type": "integer", "default": 1},
"knowledge_graph_kwd": {"type": "varchar", "default": ""},
"entities_kwd": {"type": "varchar", "default": ""}
"knowledge_graph_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"entities_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace"},
"pagerank_fea": {"type": "integer", "default": 0}
}


@ -525,6 +525,18 @@
"tags": "TEXT EMBEDDING",
"max_tokens": 8196,
"model_type": "embedding"
},
{
"llm_name": "jina-reranker-v2-base-multilingual",
"tags": "RE-RANK,8k",
"max_tokens": 8196,
"model_type": "rerank"
},
{
"llm_name": "jina-embeddings-v3",
"tags": "TEXT EMBEDDING",
"max_tokens": 8196,
"model_type": "embedding"
}
]
},
@ -606,26 +618,32 @@
},
{
"llm_name": "open-mistral-7b",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "ministral-8b-latest",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "ministral-3b-latest",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "mistral-large-latest",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "mistral-small-latest",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "chat"
},
{
"llm_name": "mistral-medium-latest",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
@ -634,11 +652,29 @@
"max_tokens": 32000,
"model_type": "chat"
},
{
"llm_name": "mistral-nemo",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "mistral-embed",
"tags": "LLM,CHAT,8k",
"max_tokens": 8192,
"model_type": "embedding"
},
{
"llm_name": "pixtral-large-latest",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "image2text"
},
{
"llm_name": "pixtral-12b",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "image2text"
}
]
},
@ -916,6 +952,12 @@
"max_tokens": 30720,
"model_type": "chat"
},
{
"llm_name": "gemini-2.0-flash-exp",
"tags": "LLM,CHAT,1024K",
"max_tokens": 1048576,
"model_type": "chat"
},
{
"llm_name": "gemini-1.0-pro-vision-latest",
"tags": "LLM,IMAGE2TEXT,12K",
@ -972,6 +1014,18 @@
"max_tokens": 131072,
"model_type": "chat"
},
{
"llm_name": "llama-3.3-70b-versatile",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "llama-3.3-70b-specdec",
"tags": "LLM,CHAT,8k",
"max_tokens": 8192,
"model_type": "chat"
},
{
"llm_name": "mixtral-8x7b-32768",
"tags": "LLM,CHAT,5k",
@ -2432,6 +2486,18 @@
"max_tokens": 4000,
"model_type": "embedding"
},
{
"llm_name": "voyage-3",
"tags": "TEXT EMBEDDING,32000",
"max_tokens": 32000,
"model_type": "embedding"
},
{
"llm_name": "voyage-3-lite",
"tags": "TEXT EMBEDDING,32000",
"max_tokens": 32000,
"model_type": "embedding"
},
{
"llm_name": "rerank-1",
"tags": "RE-RANK, 8000",
@ -2443,6 +2509,18 @@
"tags": "RE-RANK, 4000",
"max_tokens": 4000,
"model_type": "rerank"
},
{
"llm_name": "rerank-2",
"tags": "RE-RANK, 16000",
"max_tokens": 16000,
"model_type": "rerank"
},
{
"llm_name": "rerank-2-lite",
"tags": "RE-RANK, 8000",
"max_tokens": 8000,
"model_type": "rerank"
}
]
},


@ -140,13 +140,21 @@
}
},
{
"string": {
"rank_feature": {
"match": "*_fea",
"mapping": {
"type": "rank_feature"
}
}
},
{
"rank_features": {
"match": "*_feas",
"mapping": {
"type": "rank_features"
}
}
},
{
"dense_vector": {
"match": "*_512_vec",


@ -0,0 +1,2 @@
from beartype.claw import beartype_this_package
beartype_this_package()
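beartype_this_package() turns every type hint in the package into a runtime check at import time. Roughly, for an annotated function defined inside this package:

# hypothetical annotated helper inside the beartype'd package
def head(lines: list[str], n: int) -> list[str]:
    return lines[:n]

head(["a", "b", "c"], 2)   # fine
head("not a list", 2)      # raises a beartype violation instead of failing later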


@ -18,4 +18,16 @@ from .ppt_parser import RAGFlowPptParser as PptParser
from .html_parser import RAGFlowHtmlParser as HtmlParser
from .json_parser import RAGFlowJsonParser as JsonParser
from .markdown_parser import RAGFlowMarkdownParser as MarkdownParser
from .txt_parser import RAGFlowTxtParser as TxtParser
from .txt_parser import RAGFlowTxtParser as TxtParser
__all__ = [
"PdfParser",
"PlainParser",
"DocxParser",
"ExcelParser",
"PptParser",
"HtmlParser",
"JsonParser",
"MarkdownParser",
"TxtParser",
]


@ -29,7 +29,8 @@ class RAGFlowExcelParser:
for sheetname in wb.sheetnames:
ws = wb[sheetname]
rows = list(ws.rows)
if not rows: continue
if not rows:
continue
tb_rows_0 = "<tr>"
for t in list(rows[0]):
@ -40,7 +41,9 @@ class RAGFlowExcelParser:
tb = ""
tb += f"<table><caption>{sheetname}</caption>"
tb += tb_rows_0
for r in list(rows[1 + chunk_i * chunk_rows:1 + (chunk_i + 1) * chunk_rows]):
for r in list(
rows[1 + chunk_i * chunk_rows : 1 + (chunk_i + 1) * chunk_rows]
):
tb += "<tr>"
for i, c in enumerate(r):
if c.value is None:
@ -62,20 +65,21 @@ class RAGFlowExcelParser:
for sheetname in wb.sheetnames:
ws = wb[sheetname]
rows = list(ws.rows)
if not rows:continue
if not rows:
continue
ti = list(rows[0])
for r in list(rows[1:]):
l = []
fields = []
for i, c in enumerate(r):
if not c.value:
continue
t = str(ti[i].value) if i < len(ti) else ""
t += ("" if t else "") + str(c.value)
l.append(t)
l = "; ".join(l)
fields.append(t)
line = "; ".join(fields)
if sheetname.lower().find("sheet") < 0:
l += " ——" + sheetname
res.append(l)
line += " ——" + sheetname
res.append(line)
return res
@staticmethod
@ -86,7 +90,7 @@ class RAGFlowExcelParser:
for sheetname in wb.sheetnames:
ws = wb[sheetname]
total += len(list(ws.rows))
return total
return total
if fnm.split(".")[-1].lower() in ["csv", "txt"]:
encoding = find_codec(binary)


@ -36,7 +36,7 @@ class RAGFlowHtmlParser:
@classmethod
def parser_txt(cls, txt):
if type(txt) != str:
if not isinstance(txt, str):
raise TypeError("txt type should be str!")
html_doc = readability.Document(txt)
title = html_doc.title()


@ -4,6 +4,7 @@
import json
from typing import Any
from rag.nlp import find_codec
class RAGFlowJsonParser:
def __init__(
@ -22,7 +23,7 @@ class RAGFlowJsonParser:
txt = binary.decode(encoding, errors="ignore")
json_data = json.loads(txt)
chunks = self.split_json(json_data, True)
sections = [json.dumps(l, ensure_ascii=False) for l in chunks if l]
sections = [json.dumps(line, ensure_ascii=False) for line in chunks if line]
return sections
@staticmethod
@ -53,7 +54,7 @@ class RAGFlowJsonParser:
def _json_split(
self,
data: dict[str, Any],
data,
current_path: list[str] | None,
chunks: list[dict] | None,
) -> list[dict]:
@ -86,15 +87,16 @@ class RAGFlowJsonParser:
def split_json(
self,
json_data: dict[str, Any],
json_data,
convert_lists: bool = False,
) -> list[dict]:
"""Splits JSON into a list of JSON chunks"""
if convert_lists:
chunks = self._json_split(self._list_to_dict_preprocessing(json_data))
preprocessed_data = self._list_to_dict_preprocessing(json_data)
chunks = self._json_split(preprocessed_data, None, None)
else:
chunks = self._json_split(json_data)
chunks = self._json_split(json_data, None, None)
# Remove the last chunk if it's empty
if not chunks[-1]:


@ -21,7 +21,6 @@ import re
import pdfplumber
from PIL import Image
import numpy as np
from timeit import default_timer as timer
from pypdf import PdfReader as pdf2_read
from api import settings
@ -152,7 +151,7 @@ class RAGFlowPdfParser:
max(len(up["text"]), len(down["text"])),
len(tks_all) - len(tks_up) - len(tks_down),
len(tks_down) - len(tks_up),
tks_down[-1] == tks_up[-1],
tks_down[-1] == tks_up[-1] if tks_down and tks_up else False,
max(down["in_row"], up["in_row"]),
abs(down["in_row"] - up["in_row"]),
len(tks_down) == 1 and rag_tokenizer.tag(tks_down[0]).find("n") >= 0,
@ -752,7 +751,7 @@ class RAGFlowPdfParser:
"x1": np.max([b["x1"] for b in bxs]),
"bottom": np.max([b["bottom"] for b in bxs]) - ht
}
louts = [l for l in self.page_layout[pn] if l["type"] == ltype]
louts = [layout for layout in self.page_layout[pn] if layout["type"] == ltype]
ii = Recognizer.find_overlapped(b, louts, naive=True)
if ii is not None:
b = louts[ii]
@ -763,7 +762,8 @@ class RAGFlowPdfParser:
"layoutno", "")))
left, top, right, bott = b["x0"], b["top"], b["x1"], b["bottom"]
if right < left: right = left + 1
if right < left:
right = left + 1
poss.append((pn + self.page_from, left, right, top, bott))
return self.page_images[pn] \
.crop((left * ZM, top * ZM,
@ -845,7 +845,8 @@ class RAGFlowPdfParser:
top = bx["top"] - self.page_cum_height[pn[0] - 1]
bott = bx["bottom"] - self.page_cum_height[pn[0] - 1]
page_images_cnt = len(self.page_images)
if pn[-1] - 1 >= page_images_cnt: return ""
if pn[-1] - 1 >= page_images_cnt:
return ""
while bott * ZM > self.page_images[pn[-1] - 1].size[1]:
bott -= self.page_images[pn[-1] - 1].size[1] / ZM
pn.append(pn[-1] + 1)
@ -889,7 +890,6 @@ class RAGFlowPdfParser:
nonlocal mh, pw, lines, widths
lines.append(line)
widths.append(width(line))
width_mean = np.mean(widths)
mmj = self.proj_match(
line["text"]) or line.get(
"layout_type",
@ -948,7 +948,6 @@ class RAGFlowPdfParser:
self.page_cum_height = [0]
self.page_layout = []
self.page_from = page_from
st = timer()
try:
self.pdf = pdfplumber.open(fnm) if isinstance(
fnm, str) else pdfplumber.open(BytesIO(fnm))
@ -956,8 +955,12 @@ class RAGFlowPdfParser:
enumerate(self.pdf.pages[page_from:page_to])]
self.page_images_x2 = [p.to_image(resolution=72 * zoomin * 2).annotated for i, p in
enumerate(self.pdf.pages[page_from:page_to])]
self.page_chars = [[{**c, 'top': c['top'], 'bottom': c['bottom']} for c in page.dedupe_chars().chars if self._has_color(c)] for page in
self.pdf.pages[page_from:page_to]]
try:
self.page_chars = [[{**c, 'top': c['top'], 'bottom': c['bottom']} for c in page.dedupe_chars().chars if self._has_color(c)] for page in self.pdf.pages[page_from:page_to]]
except Exception as e:
logging.warning(f"Failed to extract characters for pages {page_from}-{page_to}: {str(e)}")
self.page_chars = [[] for _ in range(page_to - page_from)] # If failed to extract, using empty list instead.
self.total_page = len(self.pdf.pages)
except Exception:
logging.exception("RAGFlowPdfParser __images__")
@ -990,7 +993,7 @@ class RAGFlowPdfParser:
else:
self.is_english = False
st = timer()
# st = timer()
for i, img in enumerate(self.page_images_x2):
chars = self.page_chars[i] if not self.is_english else []
self.mean_height.append(
@ -1024,8 +1027,8 @@ class RAGFlowPdfParser:
self.page_cum_height = np.cumsum(self.page_cum_height)
assert len(self.page_cum_height) == len(self.page_images) + 1
if len(self.boxes) == 0 and zoomin < 9: self.__images__(fnm, zoomin * 3, page_from,
page_to, callback)
if len(self.boxes) == 0 and zoomin < 9:
self.__images__(fnm, zoomin * 3, page_from, page_to, callback)
def __call__(self, fnm, need_image=True, zoomin=3, return_html=False):
self.__images__(fnm, zoomin)
@ -1164,7 +1167,7 @@ class PlainParser(object):
if not self.outlines:
logging.warning("Miss outlines")
return [(l, "") for l in lines], []
return [(line, "") for line in lines], []
def crop(self, ck, need_position):
raise NotImplementedError


@ -10,7 +10,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from io import BytesIO
from pptx import Presentation
@ -53,9 +53,12 @@ class RAGFlowPptParser(object):
texts = []
for shape in sorted(
slide.shapes, key=lambda x: ((x.top if x.top is not None else 0) // 10, x.left)):
txt = self.__extract(shape)
if txt:
texts.append(txt)
try:
txt = self.__extract(shape)
if txt:
texts.append(txt)
except Exception as e:
logging.exception(e)
txts.append("\n".join(texts))
return txts


@ -15,21 +15,42 @@ import datetime
def refactor(cv):
for n in ["raw_txt", "parser_name", "inference", "ori_text", "use_time", "time_stat"]:
if n in cv and cv[n] is not None: del cv[n]
for n in [
"raw_txt",
"parser_name",
"inference",
"ori_text",
"use_time",
"time_stat",
]:
if n in cv and cv[n] is not None:
del cv[n]
cv["is_deleted"] = 0
if "basic" not in cv: cv["basic"] = {}
if cv["basic"].get("photo2"): del cv["basic"]["photo2"]
if "basic" not in cv:
cv["basic"] = {}
if cv["basic"].get("photo2"):
del cv["basic"]["photo2"]
for n in ["education", "work", "certificate", "project", "language", "skill", "training"]:
if n not in cv or cv[n] is None: continue
if type(cv[n]) == type({}): cv[n] = [v for _, v in cv[n].items()]
if type(cv[n]) != type([]):
for n in [
"education",
"work",
"certificate",
"project",
"language",
"skill",
"training",
]:
if n not in cv or cv[n] is None:
continue
if isinstance(cv[n], dict):
cv[n] = [v for _, v in cv[n].items()]
if not isinstance(cv[n], list):
del cv[n]
continue
vv = []
for v in cv[n]:
if "external" in v and v["external"] is not None: del v["external"]
if "external" in v and v["external"] is not None:
del v["external"]
vv.append(v)
cv[n] = {str(i): vv[i] for i in range(len(vv))}
@ -42,24 +63,44 @@ def refactor(cv):
cv["basic"][t] = cv["basic"][n]
del cv["basic"][n]
work = sorted([v for _, v in cv.get("work", {}).items()], key=lambda x: x.get("start_time", ""))
edu = sorted([v for _, v in cv.get("education", {}).items()], key=lambda x: x.get("start_time", ""))
work = sorted(
[v for _, v in cv.get("work", {}).items()],
key=lambda x: x.get("start_time", ""),
)
edu = sorted(
[v for _, v in cv.get("education", {}).items()],
key=lambda x: x.get("start_time", ""),
)
if work:
cv["basic"]["work_start_time"] = work[0].get("start_time", "")
cv["basic"]["management_experience"] = 'Y' if any(
[w.get("management_experience", '') == 'Y' for w in work]) else 'N'
cv["basic"]["management_experience"] = (
"Y"
if any([w.get("management_experience", "") == "Y" for w in work])
else "N"
)
cv["basic"]["annual_salary"] = work[-1].get("annual_salary_from", "0")
for n in ["annual_salary_from", "annual_salary_to", "industry_name", "position_name", "responsibilities",
"corporation_type", "scale", "corporation_name"]:
for n in [
"annual_salary_from",
"annual_salary_to",
"industry_name",
"position_name",
"responsibilities",
"corporation_type",
"scale",
"corporation_name",
]:
cv["basic"][n] = work[-1].get(n, "")
if edu:
for n in ["school_name", "discipline_name"]:
if n in edu[-1]: cv["basic"][n] = edu[-1][n]
if n in edu[-1]:
cv["basic"][n] = edu[-1][n]
cv["basic"]["updated_at"] = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
if "contact" not in cv: cv["contact"] = {}
if not cv["contact"].get("name"): cv["contact"]["name"] = cv["basic"].get("name", "")
return cv
if "contact" not in cv:
cv["contact"] = {}
if not cv["contact"].get("name"):
cv["contact"]["name"] = cv["basic"].get("name", "")
return cv


@ -21,13 +21,18 @@ from . import regions
current_file_path = os.path.dirname(os.path.abspath(__file__))
GOODS = pd.read_csv(os.path.join(current_file_path, "res/corp_baike_len.csv"), sep="\t", header=0).fillna(0)
GOODS = pd.read_csv(
os.path.join(current_file_path, "res/corp_baike_len.csv"), sep="\t", header=0
).fillna(0)
GOODS["cid"] = GOODS["cid"].astype(str)
GOODS = GOODS.set_index(["cid"])
CORP_TKS = json.load(open(os.path.join(current_file_path, "res/corp.tks.freq.json"), "r"))
CORP_TKS = json.load(
open(os.path.join(current_file_path, "res/corp.tks.freq.json"), "r")
)
GOOD_CORP = json.load(open(os.path.join(current_file_path, "res/good_corp.json"), "r"))
CORP_TAG = json.load(open(os.path.join(current_file_path, "res/corp_tag.json"), "r"))
def baike(cid, default_v=0):
global GOODS
try:
@ -39,27 +44,41 @@ def baike(cid, default_v=0):
def corpNorm(nm, add_region=True):
global CORP_TKS
if not nm or type(nm)!=type(""):return ""
if not nm or not isinstance(nm, str):
return ""
nm = rag_tokenizer.tradi2simp(rag_tokenizer.strQ2B(nm)).lower()
nm = re.sub(r"&amp;", "&", nm)
nm = re.sub(r"[\(\)\+'\"\t \*\\【】-]+", " ", nm)
nm = re.sub(r"([—-]+.*| +co\..*|corp\..*| +inc\..*| +ltd.*)", "", nm, 10000, re.IGNORECASE)
nm = re.sub(r"(计算机|技术|(技术|科技|网络)*有限公司|公司|有限|研发中心|中国|总部)$", "", nm, 10000, re.IGNORECASE)
if not nm or (len(nm)<5 and not regions.isName(nm[0:2])):return nm
nm = re.sub(
r"([—-]+.*| +co\..*|corp\..*| +inc\..*| +ltd.*)", "", nm, 10000, re.IGNORECASE
)
nm = re.sub(
r"(计算机|技术|(技术|科技|网络)*有限公司|公司|有限|研发中心|中国|总部)$",
"",
nm,
10000,
re.IGNORECASE,
)
if not nm or (len(nm) < 5 and not regions.isName(nm[0:2])):
return nm
tks = rag_tokenizer.tokenize(nm).split()
reg = [t for i,t in enumerate(tks) if regions.isName(t) and (t != "中国" or i > 0)]
reg = [t for i, t in enumerate(tks) if regions.isName(t) and (t != "中国" or i > 0)]
nm = ""
for t in tks:
if regions.isName(t) or t in CORP_TKS:continue
if re.match(r"[0-9a-zA-Z\\,.]+", t) and re.match(r".*[0-9a-zA-Z\,.]+$", nm):nm += " "
if regions.isName(t) or t in CORP_TKS:
continue
if re.match(r"[0-9a-zA-Z\\,.]+", t) and re.match(r".*[0-9a-zA-Z\,.]+$", nm):
nm += " "
nm += t
r = re.search(r"^([^a-z0-9 \(\)&]{2,})[a-z ]{4,}$", nm.strip())
if r:nm = r.group(1)
if r:
nm = r.group(1)
r = re.search(r"^([a-z ]{3,})[^a-z0-9 \(\)&]{2,}$", nm.strip())
if r:nm = r.group(1)
return nm.strip() + (("" if not reg else "(%s)"%reg[0]) if add_region else "")
if r:
nm = r.group(1)
return nm.strip() + (("" if not reg else "(%s)" % reg[0]) if add_region else "")
def rmNoise(n):
@ -67,33 +86,40 @@ def rmNoise(n):
n = re.sub(r"[,. &()]+", "", n)
return n
GOOD_CORP = set([corpNorm(rmNoise(c), False) for c in GOOD_CORP])
for c,v in CORP_TAG.items():
for c, v in CORP_TAG.items():
cc = corpNorm(rmNoise(c), False)
if not cc:
logging.debug(c)
CORP_TAG = {corpNorm(rmNoise(c), False):v for c,v in CORP_TAG.items()}
CORP_TAG = {corpNorm(rmNoise(c), False): v for c, v in CORP_TAG.items()}
def is_good(nm):
global GOOD_CORP
if nm.find("外派")>=0:return False
if nm.find("外派") >= 0:
return False
nm = rmNoise(nm)
nm = corpNorm(nm, False)
for n in GOOD_CORP:
if re.match(r"[0-9a-zA-Z]+$", n):
if n == nm: return True
elif nm.find(n)>=0:return True
if n == nm:
return True
elif nm.find(n) >= 0:
return True
return False
def corp_tag(nm):
global CORP_TAG
nm = rmNoise(nm)
nm = corpNorm(nm, False)
for n in CORP_TAG.keys():
if re.match(r"[0-9a-zA-Z., ]+$", n):
if n == nm: return CORP_TAG[n]
elif nm.find(n)>=0:
if len(n)<3 and len(nm)/len(n)>=2:continue
if n == nm:
return CORP_TAG[n]
elif nm.find(n) >= 0:
if len(n) < 3 and len(nm) / len(n) >= 2:
continue
return CORP_TAG[n]
return []


@ -11,27 +11,31 @@
# limitations under the License.
#
TBL = {"94":"EMBA",
"6":"MBA",
"95":"MPA",
"92":"专升本",
"4":"",
"90":"",
"91":"",
"86":"",
"3":"博士",
"10":"博士",
"1":"本科",
"2":"硕士",
"87":"职高",
"89":""
TBL = {
"94": "EMBA",
"6": "MBA",
"95": "MPA",
"92": "升本",
"4": "",
"90": "",
"91": "",
"86": "初中",
"3": "博士",
"10": "博士后",
"1": "本科",
"2": "硕士",
"87": "",
"89": "高中",
}
TBL_ = {v:k for k,v in TBL.items()}
TBL_ = {v: k for k, v in TBL.items()}
def get_name(id):
return TBL.get(str(id), "")
def get_id(nm):
if not nm:return ""
if not nm:
return ""
return TBL_.get(nm.upper().strip(), "")

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -16,8 +16,11 @@ import json
import re
import copy
import pandas as pd
current_file_path = os.path.dirname(os.path.abspath(__file__))
TBL = pd.read_csv(os.path.join(current_file_path, "res/schools.csv"), sep="\t", header=0).fillna("")
TBL = pd.read_csv(
os.path.join(current_file_path, "res/schools.csv"), sep="\t", header=0
).fillna("")
TBL["name_en"] = TBL["name_en"].map(lambda x: x.lower().strip())
GOOD_SCH = json.load(open(os.path.join(current_file_path, "res/good_sch.json"), "r"))
GOOD_SCH = set([re.sub(r"[,. &()]+", "", c) for c in GOOD_SCH])
@ -26,14 +29,15 @@ GOOD_SCH = set([re.sub(r"[,. &()]+", "", c) for c in GOOD_SCH])
def loadRank(fnm):
global TBL
TBL["rank"] = 1000000
with open(fnm, "r", encoding='utf-8') as f:
with open(fnm, "r", encoding="utf-8") as f:
while True:
l = f.readline()
if not l:break
l = l.strip("\n").split(",")
line = f.readline()
if not line:
break
line = line.strip("\n").split(",")
try:
nm,rk = l[0].strip(),int(l[1])
#assert len(TBL[((TBL.name_cn == nm) | (TBL.name_en == nm))]),f"<{nm}>"
nm, rk = line[0].strip(), int(line[1])
# assert len(TBL[((TBL.name_cn == nm) | (TBL.name_en == nm))]),f"<{nm}>"
TBL.loc[((TBL.name_cn == nm) | (TBL.name_en == nm)), "rank"] = rk
except Exception:
pass
@ -44,27 +48,35 @@ loadRank(os.path.join(current_file_path, "res/school.rank.csv"))
def split(txt):
tks = []
for t in re.sub(r"[ \t]+", " ",txt).split():
if tks and re.match(r".*[a-zA-Z]$", tks[-1]) and \
re.match(r"[a-zA-Z]", t) and tks:
for t in re.sub(r"[ \t]+", " ", txt).split():
if (
tks
and re.match(r".*[a-zA-Z]$", tks[-1])
and re.match(r"[a-zA-Z]", t)
and tks
):
tks[-1] = tks[-1] + " " + t
else:tks.append(t)
else:
tks.append(t)
return tks
def select(nm):
global TBL
if not nm:return
if isinstance(nm, list):nm = str(nm[0])
if not nm:
return
if isinstance(nm, list):
nm = str(nm[0])
nm = split(nm)[0]
nm = str(nm).lower().strip()
nm = re.sub(r"[(][^()]+[)]", "", nm.lower())
nm = re.sub(r"(^the |[,.&();;·]+|^(英国|美国|瑞士))", "", nm)
nm = re.sub(r"大学.*学院", "大学", nm)
tbl = copy.deepcopy(TBL)
tbl["hit_alias"] = tbl["alias"].map(lambda x:nm in set(x.split("+")))
res = tbl[((tbl.name_cn == nm) | (tbl.name_en == nm) | (tbl.hit_alias == True))]
if res.empty:return
tbl["hit_alias"] = tbl["alias"].map(lambda x: nm in set(x.split("+")))
res = tbl[((tbl.name_cn == nm) | (tbl.name_en == nm) | tbl.hit_alias)]
if res.empty:
return
return json.loads(res.to_json(orient="records"))[0]
@ -74,4 +86,3 @@ def is_good(nm):
nm = re.sub(r"[(][^()]+[)]", "", nm.lower())
nm = re.sub(r"[''`‘’“”,. &();]+", "", nm)
return nm in GOOD_SCH


@ -25,7 +25,8 @@ from xpinyin import Pinyin
from contextlib import contextmanager
class TimeoutException(Exception): pass
class TimeoutException(Exception):
pass
@contextmanager
@ -50,8 +51,10 @@ def rmHtmlTag(line):
def highest_degree(dg):
if not dg: return ""
if type(dg) == type(""): dg = [dg]
if not dg:
return ""
if isinstance(dg, str):
dg = [dg]
m = {"初中": 0, "高中": 1, "中专": 2, "大专": 3, "专升本": 4, "本科": 5, "硕士": 6, "博士": 7, "博士后": 8}
return sorted([(d, m.get(d, -1)) for d in dg], key=lambda x: x[1] * -1)[0][0]
@ -68,10 +71,12 @@ def forEdu(cv):
for ii, n in enumerate(sorted(cv["education_obj"], key=lambda x: x.get("start_time", "3"))):
e = {}
if n.get("end_time"):
if n["end_time"] > edu_end_dt: edu_end_dt = n["end_time"]
if n["end_time"] > edu_end_dt:
edu_end_dt = n["end_time"]
try:
dt = n["end_time"]
if re.match(r"[0-9]{9,}", dt): dt = turnTm2Dt(dt)
if re.match(r"[0-9]{9,}", dt):
dt = turnTm2Dt(dt)
y, m, d = getYMD(dt)
ed_dt.append(str(y))
e["end_dt_kwd"] = str(y)
@ -80,7 +85,8 @@ def forEdu(cv):
if n.get("start_time"):
try:
dt = n["start_time"]
if re.match(r"[0-9]{9,}", dt): dt = turnTm2Dt(dt)
if re.match(r"[0-9]{9,}", dt):
dt = turnTm2Dt(dt)
y, m, d = getYMD(dt)
st_dt.append(str(y))
e["start_dt_kwd"] = str(y)
@ -89,13 +95,20 @@ def forEdu(cv):
r = schools.select(n.get("school_name", ""))
if r:
if str(r.get("type", "")) == "1": fea.append("211")
if str(r.get("type", "")) == "2": fea.append("211")
if str(r.get("is_abroad", "")) == "1": fea.append("留学")
if str(r.get("is_double_first", "")) == "1": fea.append("双一流")
if str(r.get("is_985", "")) == "1": fea.append("985")
if str(r.get("is_world_known", "")) == "1": fea.append("海外知名")
if r.get("rank") and cv["school_rank_int"] > r["rank"]: cv["school_rank_int"] = r["rank"]
if str(r.get("type", "")) == "1":
fea.append("211")
if str(r.get("type", "")) == "2":
fea.append("211")
if str(r.get("is_abroad", "")) == "1":
fea.append("留学")
if str(r.get("is_double_first", "")) == "1":
fea.append("双一流")
if str(r.get("is_985", "")) == "1":
fea.append("985")
if str(r.get("is_world_known", "")) == "1":
fea.append("海外知名")
if r.get("rank") and cv["school_rank_int"] > r["rank"]:
cv["school_rank_int"] = r["rank"]
if n.get("school_name") and isinstance(n["school_name"], str):
sch.append(re.sub(r"(211|985|重点大学|[,&;-])", "", n["school_name"]))
@ -106,22 +119,25 @@ def forEdu(cv):
maj.append(n["discipline_name"])
e["major_kwd"] = n["discipline_name"]
if not n.get("degree") and "985" in fea and not first_fea: n["degree"] = "1"
if not n.get("degree") and "985" in fea and not first_fea:
n["degree"] = "1"
if n.get("degree"):
d = degrees.get_name(n["degree"])
if d: e["degree_kwd"] = d
if d == "本科" and ("专科" in deg or "专升本" in deg or "中专" in deg or "大专" in deg or re.search(r"(成人|自考|自学考试)",
n.get(
"school_name",
""))): d = "专升本"
if d: deg.append(d)
if d:
e["degree_kwd"] = d
if d == "本科" and ("专科" in deg or "专升本" in deg or "中专" in deg or "大专" in deg or re.search(r"(成人|自考|自学考试)", n.get("school_name",""))):
d = "专升本"
if d:
deg.append(d)
# for first degree
if not fdeg and d in ["中专", "专升本", "专科", "本科", "大专"]:
fdeg = [d]
if n.get("school_name"): fsch = [n["school_name"]]
if n.get("discipline_name"): fmaj = [n["discipline_name"]]
if n.get("school_name"):
fsch = [n["school_name"]]
if n.get("discipline_name"):
fmaj = [n["discipline_name"]]
first_fea = copy.deepcopy(fea)
edu_nst.append(e)
@ -140,16 +156,26 @@ def forEdu(cv):
else:
cv["sch_rank_kwd"].append("一般学校")
if edu_nst: cv["edu_nst"] = edu_nst
if fea: cv["edu_fea_kwd"] = list(set(fea))
if first_fea: cv["edu_first_fea_kwd"] = list(set(first_fea))
if maj: cv["major_kwd"] = maj
if fsch: cv["first_school_name_kwd"] = fsch
if fdeg: cv["first_degree_kwd"] = fdeg
if fmaj: cv["first_major_kwd"] = fmaj
if st_dt: cv["edu_start_kwd"] = st_dt
if ed_dt: cv["edu_end_kwd"] = ed_dt
if ed_dt: cv["edu_end_int"] = max([int(t) for t in ed_dt])
if edu_nst:
cv["edu_nst"] = edu_nst
if fea:
cv["edu_fea_kwd"] = list(set(fea))
if first_fea:
cv["edu_first_fea_kwd"] = list(set(first_fea))
if maj:
cv["major_kwd"] = maj
if fsch:
cv["first_school_name_kwd"] = fsch
if fdeg:
cv["first_degree_kwd"] = fdeg
if fmaj:
cv["first_major_kwd"] = fmaj
if st_dt:
cv["edu_start_kwd"] = st_dt
if ed_dt:
cv["edu_end_kwd"] = ed_dt
if ed_dt:
cv["edu_end_int"] = max([int(t) for t in ed_dt])
if deg:
if "本科" in deg and "专科" in deg:
deg.append("专升本")
@ -158,8 +184,10 @@ def forEdu(cv):
cv["highest_degree_kwd"] = highest_degree(deg)
if edu_end_dt:
try:
if re.match(r"[0-9]{9,}", edu_end_dt): edu_end_dt = turnTm2Dt(edu_end_dt)
if edu_end_dt.strip("\n") == "至今": edu_end_dt = cv.get("updated_at_dt", str(datetime.date.today()))
if re.match(r"[0-9]{9,}", edu_end_dt):
edu_end_dt = turnTm2Dt(edu_end_dt)
if edu_end_dt.strip("\n") == "至今":
edu_end_dt = cv.get("updated_at_dt", str(datetime.date.today()))
y, m, d = getYMD(edu_end_dt)
cv["work_exp_flt"] = min(int(str(datetime.date.today())[0:4]) - int(y), cv.get("work_exp_flt", 1000))
except Exception as e:
@ -171,7 +199,8 @@ def forEdu(cv):
or not cv.get("degree_kwd"):
for c in sch:
if schools.is_good(c):
if "tag_kwd" not in cv: cv["tag_kwd"] = []
if "tag_kwd" not in cv:
cv["tag_kwd"] = []
cv["tag_kwd"].append("好学校")
cv["tag_kwd"].append("好学历")
break
@ -180,28 +209,39 @@ def forEdu(cv):
any([d.lower() in ["硕士", "博士", "mba", "博士"] for d in cv.get("degree_kwd", [])])) \
or all([d.lower() in ["硕士", "博士", "mba", "博士后"] for d in cv.get("degree_kwd", [])]) \
or any([d in ["mba", "emba", "博士后"] for d in cv.get("degree_kwd", [])]):
if "tag_kwd" not in cv: cv["tag_kwd"] = []
if "好学历" not in cv["tag_kwd"]: cv["tag_kwd"].append("好学历")
if "tag_kwd" not in cv:
cv["tag_kwd"] = []
if "好学历" not in cv["tag_kwd"]:
cv["tag_kwd"].append("好学历")
if cv.get("major_kwd"): cv["major_tks"] = rag_tokenizer.tokenize(" ".join(maj))
if cv.get("school_name_kwd"): cv["school_name_tks"] = rag_tokenizer.tokenize(" ".join(sch))
if cv.get("first_school_name_kwd"): cv["first_school_name_tks"] = rag_tokenizer.tokenize(" ".join(fsch))
if cv.get("first_major_kwd"): cv["first_major_tks"] = rag_tokenizer.tokenize(" ".join(fmaj))
if cv.get("major_kwd"):
cv["major_tks"] = rag_tokenizer.tokenize(" ".join(maj))
if cv.get("school_name_kwd"):
cv["school_name_tks"] = rag_tokenizer.tokenize(" ".join(sch))
if cv.get("first_school_name_kwd"):
cv["first_school_name_tks"] = rag_tokenizer.tokenize(" ".join(fsch))
if cv.get("first_major_kwd"):
cv["first_major_tks"] = rag_tokenizer.tokenize(" ".join(fmaj))
return cv
def forProj(cv):
if not cv.get("project_obj"): return cv
if not cv.get("project_obj"):
return cv
pro_nms, desc = [], []
for i, n in enumerate(
sorted(cv.get("project_obj", []), key=lambda x: str(x.get("updated_at", "")) if type(x) == type({}) else "",
sorted(cv.get("project_obj", []), key=lambda x: str(x.get("updated_at", "")) if isinstance(x, dict) else "",
reverse=True)):
if n.get("name"): pro_nms.append(n["name"])
if n.get("describe"): desc.append(str(n["describe"]))
if n.get("responsibilities"): desc.append(str(n["responsibilities"]))
if n.get("achivement"): desc.append(str(n["achivement"]))
if n.get("name"):
pro_nms.append(n["name"])
if n.get("describe"):
desc.append(str(n["describe"]))
if n.get("responsibilities"):
desc.append(str(n["responsibilities"]))
if n.get("achivement"):
desc.append(str(n["achivement"]))
if pro_nms:
# cv["pro_nms_tks"] = rag_tokenizer.tokenize(" ".join(pro_nms))
@@ -233,15 +273,16 @@ def forWork(cv):
work_st_tm = ""
corp_tags = []
for i, n in enumerate(
sorted(cv.get("work_obj", []), key=lambda x: str(x.get("start_time", "")) if type(x) == type({}) else "",
sorted(cv.get("work_obj", []), key=lambda x: str(x.get("start_time", "")) if isinstance(x, dict) else "",
reverse=True)):
if type(n) == type(""):
if isinstance(n, str):
try:
n = json_loads(n)
except Exception:
continue
if n.get("start_time") and (not work_st_tm or n["start_time"] < work_st_tm): work_st_tm = n["start_time"]
if n.get("start_time") and (not work_st_tm or n["start_time"] < work_st_tm):
work_st_tm = n["start_time"]
for c in flds:
if not n.get(c) or str(n[c]) == '0':
fea[c].append("")
@@ -262,14 +303,18 @@ def forWork(cv):
fea[c].append(rmHtmlTag(str(n[c]).lower()))
y, m, d = getYMD(n.get("start_time"))
- if not y or not m: continue
+ if not y or not m:
+     continue
st = "%s-%02d-%02d" % (y, int(m), int(d))
latest_job_tm = st
y, m, d = getYMD(n.get("end_time"))
- if (not y or not m) and i > 0: continue
- if not y or not m or int(y) > 2022: y, m, d = getYMD(str(n.get("updated_at", "")))
- if not y or not m: continue
+ if (not y or not m) and i > 0:
+     continue
+ if not y or not m or int(y) > 2022:
+     y, m, d = getYMD(str(n.get("updated_at", "")))
+ if not y or not m:
+     continue
ed = "%s-%02d-%02d" % (y, int(m), int(d))
try:
@@ -279,22 +324,28 @@ def forWork(cv):
if n.get("scale"):
r = re.search(r"^([0-9]+)", str(n["scale"]))
- if r: scales.append(int(r.group(1)))
+ if r:
+     scales.append(int(r.group(1)))
if goodcorp:
if "tag_kwd" not in cv: cv["tag_kwd"] = []
if "tag_kwd" not in cv:
cv["tag_kwd"] = []
cv["tag_kwd"].append("好公司")
if goodcorp_:
if "tag_kwd" not in cv: cv["tag_kwd"] = []
if "tag_kwd" not in cv:
cv["tag_kwd"] = []
cv["tag_kwd"].append("好公司(曾)")
if corp_tags:
if "tag_kwd" not in cv: cv["tag_kwd"] = []
if "tag_kwd" not in cv:
cv["tag_kwd"] = []
cv["tag_kwd"].extend(corp_tags)
cv["corp_tag_kwd"] = [c for c in corp_tags if re.match(r"(综合|行业)", c)]
- if latest_job_tm: cv["latest_job_dt"] = latest_job_tm
- if fea["corporation_id"]: cv["corporation_id"] = fea["corporation_id"]
+ if latest_job_tm:
+     cv["latest_job_dt"] = latest_job_tm
+ if fea["corporation_id"]:
+     cv["corporation_id"] = fea["corporation_id"]
if fea["position_name"]:
cv["position_name_tks"] = rag_tokenizer.tokenize(fea["position_name"][0])
@@ -317,18 +368,23 @@ def forWork(cv):
cv["responsibilities_ltks"] = rag_tokenizer.tokenize(fea["responsibilities"][0])
cv["resp_ltks"] = rag_tokenizer.tokenize(" ".join(fea["responsibilities"][1:]))
if fea["subordinates_count"]: fea["subordinates_count"] = [int(i) for i in fea["subordinates_count"] if
if fea["subordinates_count"]:
fea["subordinates_count"] = [int(i) for i in fea["subordinates_count"] if
re.match(r"[^0-9]+$", str(i))]
if fea["subordinates_count"]: cv["max_sub_cnt_int"] = np.max(fea["subordinates_count"])
if fea["subordinates_count"]:
cv["max_sub_cnt_int"] = np.max(fea["subordinates_count"])
if type(cv.get("corporation_id")) == type(1): cv["corporation_id"] = [str(cv["corporation_id"])]
if not cv.get("corporation_id"): cv["corporation_id"] = []
if isinstance(cv.get("corporation_id"), int):
cv["corporation_id"] = [str(cv["corporation_id"])]
if not cv.get("corporation_id"):
cv["corporation_id"] = []
for i in cv.get("corporation_id", []):
cv["baike_flt"] = max(corporations.baike(i), cv["baike_flt"] if "baike_flt" in cv else 0)
if work_st_tm:
try:
if re.match(r"[0-9]{9,}", work_st_tm): work_st_tm = turnTm2Dt(work_st_tm)
if re.match(r"[0-9]{9,}", work_st_tm):
work_st_tm = turnTm2Dt(work_st_tm)
y, m, d = getYMD(work_st_tm)
cv["work_exp_flt"] = min(int(str(datetime.date.today())[0:4]) - int(y), cv.get("work_exp_flt", 1000))
except Exception as e:
@@ -339,28 +395,37 @@ def forWork(cv):
cv["dua_flt"] = np.mean(duas)
cv["cur_dua_int"] = duas[0]
cv["job_num_int"] = len(duas)
- if scales: cv["scale_flt"] = np.max(scales)
+ if scales:
+     cv["scale_flt"] = np.max(scales)
return cv
def turnTm2Dt(b):
- if not b: return
+ if not b:
+     return
b = str(b).strip()
if re.match(r"[0-9]{10,}", b): b = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(b[:10])))
if re.match(r"[0-9]{10,}", b):
b = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(b[:10])))
return b
def getYMD(b):
y, m, d = "", "", "01"
- if not b: return (y, m, d)
+ if not b:
+     return (y, m, d)
b = turnTm2Dt(b)
if re.match(r"[0-9]{4}", b): y = int(b[:4])
if re.match(r"[0-9]{4}", b):
y = int(b[:4])
r = re.search(r"[0-9]{4}.?([0-9]{1,2})", b)
- if r: m = r.group(1)
+ if r:
+     m = r.group(1)
r = re.search(r"[0-9]{4}.?[0-9]{,2}.?([0-9]{1,2})", b)
- if r: d = r.group(1)
- if not d or int(d) == 0 or int(d) > 31: d = "1"
- if not m or int(m) > 12 or int(m) < 1: m = "1"
+ if r:
+     d = r.group(1)
+ if not d or int(d) == 0 or int(d) > 31:
+     d = "1"
+ if not m or int(m) > 12 or int(m) < 1:
+     m = "1"
return (y, m, d)
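Taken together, turnTm2Dt and getYMD normalize the mix of timestamp formats that show up in resumes. A minimal standalone sketch of the behavior (sample inputs are illustrative; the rendered time depends on the local timezone):

    import re
    import time

    def turn_tm_2_dt(b):
        # Mirrors turnTm2Dt above: 10+ digit strings are treated as Unix
        # timestamps in seconds and rendered as "YYYY-MM-DD HH:MM:SS".
        if not b:
            return
        b = str(b).strip()
        if re.match(r"[0-9]{10,}", b):
            b = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(b[:10])))
        return b

    print(turn_tm_2_dt("1703347200"))  # e.g. "2023-12-24 00:00:00" in UTC+8
    print(turn_tm_2_dt("2021.07"))     # "2021.07": getYMD then clamps the missing day to "1"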
@@ -369,7 +434,8 @@ def birth(cv):
cv["integerity_flt"] *= 0.9
return cv
y, m, d = getYMD(cv["birth"])
- if not m or not y: return cv
+ if not m or not y:
+     return cv
b = "%s-%02d-%02d" % (y, int(m), int(d))
cv["birth_dt"] = b
cv["birthday_kwd"] = "%02d%02d" % (int(m), int(d))
@@ -380,7 +446,8 @@ def birth(cv):
def parse(cv):
for k in cv.keys():
- if cv[k] == '\\N': cv[k] = ''
+ if cv[k] == '\\N':
+     cv[k] = ''
# cv = cv.asDict()
tks_fld = ["address", "corporation_name", "discipline_name", "email", "expect_city_names",
"expect_industry_name", "expect_position_name", "industry_name", "industry_names", "name",
@@ -402,9 +469,12 @@ def parse(cv):
rmkeys = []
for k in cv.keys():
- if cv[k] is None: rmkeys.append(k)
- if (type(cv[k]) == type([]) or type(cv[k]) == type("")) and len(cv[k]) == 0: rmkeys.append(k)
- for k in rmkeys: del cv[k]
+ if cv[k] is None:
+     rmkeys.append(k)
+ if (isinstance(cv[k], list) or isinstance(cv[k], str)) and len(cv[k]) == 0:
+     rmkeys.append(k)
+ for k in rmkeys:
+     del cv[k]
integerity = 0.
flds_num = 0.
@@ -414,7 +484,8 @@ def parse(cv):
flds_num += len(flds)
for f in flds:
v = str(cv.get(f, ""))
- if len(v) > 0 and v != '0' and v != '[]': integerity += 1
+ if len(v) > 0 and v != '0' and v != '[]':
+     integerity += 1
hasValues(tks_fld)
hasValues(small_tks_fld)
@@ -433,7 +504,8 @@ def parse(cv):
(r"[ \(\)人/·0-9-]+", ""),
(r".*(元|规模|于|=|北京|上海|至今|中国|工资|州|shanghai|强|餐饮|融资|职).*", "")]:
cv["corporation_type"] = re.sub(p, r, cv["corporation_type"], 1000, re.IGNORECASE)
if len(cv["corporation_type"]) < 2: del cv["corporation_type"]
if len(cv["corporation_type"]) < 2:
del cv["corporation_type"]
if cv.get("political_status"):
for p, r in [
@@ -441,9 +513,11 @@ def parse(cv):
(r".*(无党派|公民).*", "群众"),
(r".*团员.*", "团员")]:
cv["political_status"] = re.sub(p, r, cv["political_status"])
if not re.search(r"[党团群]", cv["political_status"]): del cv["political_status"]
if not re.search(r"[党团群]", cv["political_status"]):
del cv["political_status"]
if cv.get("phone"): cv["phone"] = re.sub(r"^0*86([0-9]{11})", r"\1", re.sub(r"[^0-9]+", "", cv["phone"]))
if cv.get("phone"):
cv["phone"] = re.sub(r"^0*86([0-9]{11})", r"\1", re.sub(r"[^0-9]+", "", cv["phone"]))
keys = list(cv.keys())
for k in keys:
@@ -454,9 +528,11 @@ def parse(cv):
cv[k] = [a for _, a in cv[k].items()]
nms = []
for n in cv[k]:
- if type(n) != type({}) or "name" not in n or not n.get("name"): continue
+ if not isinstance(n, dict) or "name" not in n or not n.get("name"):
+     continue
n["name"] = re.sub(r"(442|\t )", "", n["name"]).strip().lower()
if not n["name"]: continue
if not n["name"]:
continue
nms.append(n["name"])
if nms:
t = k[:-4]
@@ -469,15 +545,18 @@ def parse(cv):
# tokenize fields
if k in tks_fld:
cv[f"{k}_tks"] = rag_tokenizer.tokenize(cv[k])
- if k in small_tks_fld: cv[f"{k}_sm_tks"] = rag_tokenizer.tokenize(cv[f"{k}_tks"])
+ if k in small_tks_fld:
+     cv[f"{k}_sm_tks"] = rag_tokenizer.tokenize(cv[f"{k}_tks"])
# keyword fields
- if k in kwd_fld: cv[f"{k}_kwd"] = [n.lower()
-                                    for n in re.split(r"[\t,;. ]",
-                                                      re.sub(r"([^a-zA-Z])[ ]+([^a-zA-Z ])", r"\1\2", cv[k])
-                                                      ) if n]
+ if k in kwd_fld:
+     cv[f"{k}_kwd"] = [n.lower()
+                       for n in re.split(r"[\t,;. ]",
+                                         re.sub(r"([^a-zA-Z])[ ]+([^a-zA-Z ])", r"\1\2", cv[k])
+                                         ) if n]
- if k in num_fld and cv.get(k): cv[f"{k}_int"] = cv[k]
+ if k in num_fld and cv.get(k):
+     cv[f"{k}_int"] = cv[k]
cv["email_kwd"] = cv.get("email_tks", "").replace(" ", "")
# for name field
@@ -501,10 +580,12 @@ def parse(cv):
cv["name_py_pref0_tks"] = ""
cv["name_py_pref_tks"] = ""
for py in PY.get_pinyins(nm[:20], ''):
- for i in range(2, len(py) + 1): cv["name_py_pref_tks"] += " " + py[:i]
+ for i in range(2, len(py) + 1):
+     cv["name_py_pref_tks"] += " " + py[:i]
for py in PY.get_pinyins(nm[:20], ' '):
py = py.split()
- for i in range(1, len(py) + 1): cv["name_py_pref0_tks"] += " " + "".join(py[:i])
+ for i in range(1, len(py) + 1):
+     cv["name_py_pref0_tks"] += " " + "".join(py[:i])
cv["name_kwd"] = name
cv["name_pinyin_kwd"] = PY.get_pinyins(nm[:20], ' ')[:3]
@@ -526,22 +607,30 @@ def parse(cv):
cv["updated_at_dt"] = cv["updated_at"].strftime('%Y-%m-%d %H:%M:%S')
else:
y, m, d = getYMD(str(cv.get("updated_at", "")))
if not y: y = "2012"
if not m: m = "01"
if not d: d = "01"
if not y:
y = "2012"
if not m:
m = "01"
if not d:
d = "01"
cv["updated_at_dt"] = "%s-%02d-%02d 00:00:00" % (y, int(m), int(d))
# long text tokenize
if cv.get("responsibilities"): cv["responsibilities_ltks"] = rag_tokenizer.tokenize(rmHtmlTag(cv["responsibilities"]))
if cv.get("responsibilities"):
cv["responsibilities_ltks"] = rag_tokenizer.tokenize(rmHtmlTag(cv["responsibilities"]))
# for yes or no field
fea = []
for f, y, n in is_fld:
- if f not in cv: continue
- if cv[f] == '': fea.append(y)
- if cv[f] == '': fea.append(n)
+ if f not in cv:
+     continue
+ if cv[f] == '':
+     fea.append(y)
+ if cv[f] == '':
+     fea.append(n)
- if fea: cv["tag_kwd"] = fea
+ if fea:
+     cv["tag_kwd"] = fea
cv = forEdu(cv)
cv = forProj(cv)
@@ -550,9 +639,11 @@ def parse(cv):
cv["corp_proj_sch_deg_kwd"] = [c for c in cv.get("corp_tag_kwd", [])]
for i in range(len(cv["corp_proj_sch_deg_kwd"])):
for j in cv.get("sch_rank_kwd", []): cv["corp_proj_sch_deg_kwd"][i] += "+" + j
for j in cv.get("sch_rank_kwd", []):
cv["corp_proj_sch_deg_kwd"][i] += "+" + j
for i in range(len(cv["corp_proj_sch_deg_kwd"])):
if cv.get("highest_degree_kwd"): cv["corp_proj_sch_deg_kwd"][i] += "+" + cv["highest_degree_kwd"]
if cv.get("highest_degree_kwd"):
cv["corp_proj_sch_deg_kwd"][i] += "+" + cv["highest_degree_kwd"]
try:
if not cv.get("work_exp_flt") and cv.get("work_start_time"):
@@ -565,17 +656,21 @@ def parse(cv):
cv["work_exp_flt"] = int(str(datetime.date.today())[0:4]) - int(y)
except Exception as e:
logging.exception("parse {} ==> {}".format(e, cv.get("work_start_time")))
if "work_exp_flt" not in cv and cv.get("work_experience", 0): cv["work_exp_flt"] = int(cv["work_experience"]) / 12.
if "work_exp_flt" not in cv and cv.get("work_experience", 0):
cv["work_exp_flt"] = int(cv["work_experience"]) / 12.
keys = list(cv.keys())
for k in keys:
if not re.search(r"_(fea|tks|nst|dt|int|flt|ltks|kwd|id)$", k): del cv[k]
if not re.search(r"_(fea|tks|nst|dt|int|flt|ltks|kwd|id)$", k):
del cv[k]
for k in cv.keys():
if not re.search("_(kwd|id)$", k) or type(cv[k]) != type([]): continue
if not re.search("_(kwd|id)$", k) or not isinstance(cv[k], list):
continue
cv[k] = list(set([re.sub("(市)$", "", str(n)) for n in cv[k] if n not in ['中国', '0']]))
keys = [k for k in cv.keys() if re.search(r"_feas*$", k)]
for k in keys:
- if cv[k] <= 0: del cv[k]
+ if cv[k] <= 0:
+     del cv[k]
cv["tob_resume_id"] = str(cv["tob_resume_id"])
cv["id"] = cv["tob_resume_id"]
@@ -592,5 +687,6 @@ def dealWithInt64(d):
if isinstance(d, list):
d = [dealWithInt64(t) for t in d]
- if isinstance(d, np.integer): d = int(d)
+ if isinstance(d, np.integer):
+     d = int(d)
return d
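dealWithInt64 recursively unwraps numpy scalars, which aggregations like np.max and np.mean leave behind in fields such as max_sub_cnt_int and scale_flt. Assuming the parsed dict is later JSON-serialized (a step this diff does not show), that unwrapping is what keeps serialization from failing:

    import json
    import numpy as np

    doc = {"job_num_int": np.int64(3)}
    try:
        json.dumps(doc)
    except TypeError as err:
        print(err)              # Object of type int64 is not JSON serializable
    doc["job_num_int"] = int(doc["job_num_int"])  # what dealWithInt64 does
    print(json.dumps(doc))      # {"job_num_int": 3}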

View File

@@ -51,6 +51,7 @@ class RAGFlowTxtParser:
dels = [d for d in dels if d]
dels = "|".join(dels)
secs = re.split(r"(%s)" % dels, txt)
- for sec in secs: add_chunk(sec)
+ for sec in secs:
+     add_chunk(sec)
return [[c, ""] for c in cks]

View File

@@ -15,7 +15,7 @@ import pdfplumber
from .ocr import OCR
from .recognizer import Recognizer
- from .layout_recognizer import LayoutRecognizer
+ from .layout_recognizer import LayoutRecognizer4YOLOv10 as LayoutRecognizer
from .table_structure_recognizer import TableStructureRecognizer
@@ -47,7 +47,7 @@ def init_in_out(args):
try:
images.append(Image.open(fnm))
outputs.append(os.path.split(fnm)[-1])
- except Exception as e:
+ except Exception:
traceback.print_exc()
if os.path.isdir(args.inputs):
@@ -56,6 +56,16 @@ def init_in_out(args):
else:
images_and_outputs(args.inputs)
- for i in range(len(outputs)): outputs[i] = os.path.join(args.output_dir, outputs[i])
+ for i in range(len(outputs)):
+     outputs[i] = os.path.join(args.output_dir, outputs[i])
return images, outputs
+ __all__ = [
+     "OCR",
+     "Recognizer",
+     "LayoutRecognizer",
+     "TableStructureRecognizer",
+     "init_in_out",
+ ]
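The new __all__ pins the package's wildcard-import surface: a star import now binds exactly the five listed names, regardless of what the module imports internally.

    from deepdoc.vision import *   # binds OCR, Recognizer, LayoutRecognizer, TableStructureRecognizer, init_in_out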

View File

@@ -14,11 +14,14 @@ import os
import re
from collections import Counter
from copy import deepcopy
+ import cv2
import numpy as np
from huggingface_hub import snapshot_download
from api.utils.file_utils import get_project_base_directory
from deepdoc.vision import Recognizer
+ from deepdoc.vision.operators import nms
class LayoutRecognizer(Recognizer):
@@ -42,7 +45,7 @@ class LayoutRecognizer(Recognizer):
get_project_base_directory(),
"rag/res/deepdoc")
super().__init__(self.labels, domain, model_dir)
- except Exception as e:
+ except Exception:
model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc",
local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
local_dir_use_symlinks=False)
@@ -77,7 +80,7 @@ class LayoutRecognizer(Recognizer):
"page_number": pn,
} for b in lts if float(b["score"]) >= 0.8 or b["type"] not in self.garbage_layouts]
lts = self.sort_Y_firstly(lts, np.mean(
[l["bottom"] - l["top"] for l in lts]) / 2)
[lt["bottom"] - lt["top"] for lt in lts]) / 2)
lts = self.layouts_cleanup(bxs, lts)
page_layout.append(lts)
@@ -149,3 +152,88 @@ class LayoutRecognizer(Recognizer):
ocr_res = [b for b in ocr_res if b["text"].strip() not in garbag_set]
return ocr_res, page_layout
+ class LayoutRecognizer4YOLOv10(LayoutRecognizer):
+     labels = [
+         "title",
+         "Text",
+         "Reference",
+         "Figure",
+         "Figure caption",
+         "Table",
+         "Table caption",
+         "Table caption",
+         "Equation",
+         "Figure caption",
+     ]
+
+     def __init__(self, domain):
+         domain = "layout"
+         super().__init__(domain)
+         self.auto = False
+         self.scaleFill = False
+         self.scaleup = True
+         self.stride = 32
+         self.center = True
+
+     def preprocess(self, image_list):
+         inputs = []
+         new_shape = self.input_shape  # height, width
+         for img in image_list:
+             shape = img.shape[:2]  # current shape [height, width]
+             # Scale ratio (new / old)
+             r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
+             # Compute padding
+             new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
+             dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
+             dw /= 2  # divide padding into 2 sides
+             dh /= 2
+             ww, hh = new_unpad
+             img = np.array(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)).astype(np.float32)
+             img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
+             top, bottom = int(round(dh - 0.1)) if self.center else 0, int(round(dh + 0.1))
+             left, right = int(round(dw - 0.1)) if self.center else 0, int(round(dw + 0.1))
+             img = cv2.copyMakeBorder(
+                 img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=(114, 114, 114)
+             )  # add border
+             img /= 255.0
+             img = img.transpose(2, 0, 1)
+             img = img[np.newaxis, :, :, :].astype(np.float32)
+             inputs.append({self.input_names[0]: img, "scale_factor": [shape[1] / ww, shape[0] / hh, dw, dh]})
+         return inputs
+
+     def postprocess(self, boxes, inputs, thr):
+         thr = 0.08
+         boxes = np.squeeze(boxes)
+         scores = boxes[:, 4]
+         boxes = boxes[scores > thr, :]
+         scores = scores[scores > thr]
+         if len(boxes) == 0:
+             return []
+         class_ids = boxes[:, -1].astype(int)
+         boxes = boxes[:, :4]
+         boxes[:, 0] -= inputs["scale_factor"][2]
+         boxes[:, 2] -= inputs["scale_factor"][2]
+         boxes[:, 1] -= inputs["scale_factor"][3]
+         boxes[:, 3] -= inputs["scale_factor"][3]
+         input_shape = np.array([inputs["scale_factor"][0], inputs["scale_factor"][1], inputs["scale_factor"][0],
+                                 inputs["scale_factor"][1]])
+         boxes = np.multiply(boxes, input_shape, dtype=np.float32)
+         unique_class_ids = np.unique(class_ids)
+         indices = []
+         for class_id in unique_class_ids:
+             class_indices = np.where(class_ids == class_id)[0]
+             class_boxes = boxes[class_indices, :]
+             class_scores = scores[class_indices]
+             class_keep_boxes = nms(class_boxes, class_scores, 0.45)
+             indices.extend(class_indices[class_keep_boxes])
+         return [{
+             "type": self.label_list[class_ids[i]].lower(),
+             "bbox": [float(t) for t in boxes[i].tolist()],
+             "score": float(scores[i])
+         } for i in indices]
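The preprocess method above is standard YOLO letterboxing: scale the image to fit the model input, then pad both sides with gray (114) to reach the target shape. The geometry isolated as a sketch (the 640x640 target is an assumption; the real value comes from self.input_shape):

    def letterbox_geometry(shape, new_shape=(640, 640)):
        # shape is (height, width) of the source image.
        r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])           # scale ratio
        new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))     # (w, h) after resize
        dw = (new_shape[1] - new_unpad[0]) / 2                             # padding per side, x
        dh = (new_shape[0] - new_unpad[1]) / 2                             # padding per side, y
        return r, new_unpad, dw, dh

    print(letterbox_geometry((720, 1280)))  # (0.5, (640, 360), 0.0, 140.0)

postprocess reverses the same geometry: it subtracts the stored dw/dh, rescales by the per-axis scale factors, and then applies per-class NMS at an IoU threshold of 0.45 before mapping class ids back to the labels list.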

View File

@@ -18,8 +18,10 @@ import os
from huggingface_hub import snapshot_download
from api.utils.file_utils import get_project_base_directory
- from .operators import *
+ from .operators import * # noqa: F403
import math
import numpy as np
import cv2
import onnxruntime as ort
from .postprocess import build_post_process
@@ -484,7 +486,7 @@ class OCR(object):
"rag/res/deepdoc")
self.text_detector = TextDetector(model_dir)
self.text_recognizer = TextRecognizer(model_dir)
- except Exception as e:
+ except Exception:
model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc",
local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
local_dir_use_symlinks=False)
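Both LayoutRecognizer and OCR drop the unused exception binding around the same fallback: try the bundled model directory first, and only pull InfiniFlow/deepdoc from Hugging Face when loading fails. A simplified sketch of that pattern (the path check here stands in for the try/except around model loading, and BASE_DIR for get_project_base_directory()):

    import os
    from huggingface_hub import snapshot_download

    BASE_DIR = os.getcwd()  # stand-in for get_project_base_directory()

    def resolve_model_dir() -> str:
        model_dir = os.path.join(BASE_DIR, "rag/res/deepdoc")
        if os.path.isdir(model_dir):
            return model_dir
        return snapshot_download(repo_id="InfiniFlow/deepdoc",
                                 local_dir=model_dir,
                                 local_dir_use_symlinks=False)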

Some files were not shown because too many files have changed in this diff.