Compare commits

..

289 Commits

Author SHA1 Message Date
92a4a095c9 fix: Fixed an issue where quotes in messages could not be displayed #2677 (#2683)
### What problem does this PR solve?

fix: Fixed an issue where quotes in messages could not be displayed
#2677

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-30 12:40:12 +08:00
2368d738ab fix: Search page search results are cleared after output #2677 (#2678)
### What problem does this PR solve?

fix: Search page search results are cleared after output #2677

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 11:00:03 +08:00
833e3a08cd update poetry lock (#2676)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 10:59:47 +08:00
7a73fec2e5 upgrade opencv-python-headless (#2674)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 09:28:38 +08:00
2f8e0e66ef change opencv version (#2673)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 09:13:11 +08:00
5b4b252895 Fixed huggingface url (#2667)
### What problem does this PR solve?
Fixed huggingface url. Close #2665

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 20:38:11 +08:00
9081150c2c Translated Korean README (#2666)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 20:03:25 +08:00
cb295ec106 Translated Japanese README (#2664)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 19:27:48 +08:00
4f5210352c added back oc9 (#2663)
### What problem does this PR solve?

added back oc9

### Type of change

- [x] Refactoring
2024-09-29 18:32:48 +08:00
f98ec9034f Fix docker file bugs (#2662)
### What problem does this PR solve?

Fix docker file bugs

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-29 18:24:24 +08:00
4b8ecba32b Updated CN readme (#2661)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 17:27:15 +08:00
892166ec24 document preparation for release (#2660)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-29 16:29:02 +08:00
a411330b09 Add build image and launch from source in README (#2658)
### What problem does this PR solve?

Move the build image and launch from source back to README.

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-09-29 16:28:07 +08:00
5a8ae4a289 fix: Filter the timePeriod options based on the userType parameter #1739 (#2657)
### What problem does this PR solve?

fix: Filter the timePeriod options based on the userType parameter #1739

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-29 15:40:20 +08:00
3f16377412 change url of local llm deploy guide (#2659)
### What problem does this PR solve?


### Type of change

- [x] Other (please describe): I made a mistake with a URL and now I need to change it
2024-09-29 15:39:05 +08:00
d3b37b0b70 fix: Fixed the issue where the error message was not displayed when uploading a file that was too large #1782 (#2654)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-29 15:22:05 +08:00
01db00b587 Updated component description (#2651)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 14:53:52 +08:00
25f07e8e29 fix template error (#2653)
### What problem does this PR solve?

#2478

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 14:47:06 +08:00
daa65199e8 trivial (#2650)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 13:20:02 +08:00
fc867cb959 rename get_txt to get_text (#2649)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 12:47:09 +08:00
fb694143ee refine general purpose chat bot (#2648)
### What problem does this PR solve?

#2478

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 12:20:44 +08:00
a8280d9fd2 Add doc for dev image (#2641)
Add doc for dev image

### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-29 10:51:46 +08:00
aea553c3a8 Add get_txt function (#2639)
### What problem does this PR solve?

Add get_txt function to reduce duplicate code
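
A minimal sketch of what such a shared helper could look like, assuming it reads a document either from disk or from already-loaded bytes; the actual `get_txt` in the repo may differ (e.g., it may detect encodings):

```python
def get_text(fnm, binary=None):
    # Centralizes the "read a document as text" logic that was previously
    # duplicated across parsers.
    if binary is not None:
        return binary.decode("utf-8", errors="ignore")
    with open(fnm, encoding="utf-8") as f:
        return f.read()
```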

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-29 10:29:56 +08:00
57237634f1 Refactoring large integers to improve readability (#2636)
### What problem does this PR solve?

Refactoring large integers
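
The change most likely uses Python's underscore digit separators (PEP 515); the constant below is an illustrative example, not necessarily one the commit touched:

```python
# Before: the magnitude is hard to read at a glance.
MAX_CONTENT_LENGTH = 134217728
# After: underscore separators keep the same value but make it obvious.
MAX_CONTENT_LENGTH = 134_217_728  # 128 * 1024 * 1024
```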

### Type of change

- [x] Refactoring
2024-09-29 10:17:42 +08:00
604061c4a5 Fix mutable default argument (#2635)
### What problem does this PR solve?

Default values of Python function parameters must not be mutable: the default object is created once at definition time, so modifying the parameter inside the function permanently changes the default value.
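
A minimal illustration of the pitfall (function and parameter names are hypothetical):

```python
# Buggy: the default list is created once, at function definition time,
# so every call that mutates it also mutates the shared default.
def append_item(item, items=[]):
    items.append(item)
    return items

append_item("a")  # ['a']
append_item("b")  # ['a', 'b'] -- state leaked between calls

# Fix: use None as a sentinel and create a fresh list on each call.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```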

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 10:16:00 +08:00
c103dd2746 change chunk.status to chunk.available (#2646)
### What problem does this PR solve?

#1102

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 10:13:07 +08:00
e82e8fde13 Fix logger error (#2643)
### What problem does this PR solve?

Fix logger error: AttributeError: 'Logger' object has no attribute
'basicConfig'
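
For illustration, the mix-up behind that traceback and the usual fix (the logger name below is hypothetical):

```python
import logging

logger = logging.getLogger("task_executor")  # hypothetical logger name
# Buggy: basicConfig is a module-level function of `logging`,
# not a method on Logger instances.
# logger.basicConfig(level=logging.INFO)  # AttributeError
# Fix: configure logging through the module itself.
logging.basicConfig(level=logging.INFO)
logger.info("logging configured")
```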

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 09:49:59 +08:00
a44ed9626a handle nits in task_executor (#2637)
### What problem does this PR solve?

- fix typo
- fix string format
- format import

### Type of change

- [x] Refactoring
2024-09-29 09:49:45 +08:00
ff9c11c970 fix: Fixed the issue where the conversation list was not displayed on the conversation page #2625 (#2638)
### What problem does this PR solve?

fix: Fixed the issue where the conversation list was not displayed on
the conversation page #2625

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 09:43:23 +08:00
674d342761 refine get_input (#2630)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 20:20:36 +08:00
a246e5644b feat: Add top_n to DeepLForm #1739 (#2629)
### What problem does this PR solve?

feat: Add top_n to DeepLForm #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-27 19:22:33 +08:00
96f56a3c43 add huggingface model (#2624)
### What problem does this PR solve?

#2469

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-27 19:15:38 +08:00
1b2f66fc11 Added doc on dev-slim (#2627)
Added doc on dev-slim

### Type of change

- [x] Documentation Update
- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-27 19:15:27 +08:00
ca2de896c7 fix: Fixed an issue where the first message would be displayed when sending the second message #2625 (#2626)
### What problem does this PR solve?

fix: Fixed an issue where the first message would be displayed when
sending the second message #2625

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-27 18:20:19 +08:00
34abcf7704 style: fix typo and format code (#2618)
### What problem does this PR solve?

- Fix typo
- Remove unused import
- Format code

### Type of change

- [x] Other (please describe): typo and format
2024-09-27 13:17:25 +08:00
4c0b79c4f6 remove repeat func (#2619)
### What problem does this PR solve?

- remove repeat func

### Type of change

- [x] Other (please describe): remove repeat func
2024-09-27 13:15:26 +08:00
e11a74eed5 Update Yichat base_url (#2620)
### What problem does this PR solve?

Update Yichat base_url

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-27 12:55:58 +08:00
297b2d0ac9 force eml file to be parsed by EMAIL (#2615)
### What problem does this PR solve?
#2613
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 10:29:30 +08:00
b16f16e19e Bug fix - email processing can now be run from the API (#2613)
### What problem does this PR solve?

If an .eml file is uploaded, the General method is always chosen for
email processing, even if parsing_method is defined in the request. This
change fixes that issue.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Adam Kobus <adam.kobus@gitlab.eleader.biz>
2024-09-27 10:24:46 +08:00
35598c04ce fix generate bug (#2614)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 10:22:13 +08:00
09d1f7f333 Support agent for aibot (#2609)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 18:06:56 +08:00
240450ea52 Remove WenCai imageurl and update investment_advisor prompt (#2606)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-09-26 17:27:53 +08:00
1de3032650 fix AzureOpenAI issue (#2608)
### What problem does this PR solve?

#1599

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-26 17:25:16 +08:00
41548bf019 Added two developer guides and removed 'build docker image' and 'launch service from source' from README (#2590)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-26 16:15:57 +08:00
b68d349bd6 Fix: rerank_model and pdf_parser bugs | Update: session API (#2601)
### What problem does this PR solve?

Fix: rerank_model and pdf_parser bugs | Update: session API
#2575
#2559
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-26 16:05:25 +08:00
f6bfe4d970 feat: Add component Concentrator #1739 (#2604)
### What problem does this PR solve?

feat: Add component Concentrator #1739
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 14:47:28 +08:00
cb2ae708f3 Fix soft link. Close #2587 (#2602)
Fix soft link

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-26 14:33:38 +08:00
d7f26786d4 Update dsl_examples and Fix component concentrator (#2597)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 11:58:50 +08:00
b05fab14f7 Add component Concentrator (#2586)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-25 18:44:31 +08:00
e6da0c7c7b deprecate init a super user (#2589)
### What problem does this PR solve?
#2295

### Type of change

- [x] Refactoring
2024-09-25 18:30:27 +08:00
ef89e3ebea remove onnx copy command from dockerfile (#2584)
### What problem does this PR solve?

#2564

### Type of change

- [x] Refactoring
2024-09-25 17:14:59 +08:00
8ede1c7bf5 trivial (#2582)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-25 16:26:44 +08:00
6363d58e98 Add template investment_advisor (#2580)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-25 16:22:06 +08:00
c262011393 revert error in Dockerfile (#2581)
### What problem does this PR solve?
#2295

### Type of change


- [x] Refactoring
2024-09-25 16:10:29 +08:00
dda1367ab2 make it lighten (#2577)
### What problem does this PR solve?

#2295

### Type of change

- [x] Refactoring
2024-09-25 13:38:40 +08:00
e4c9cf2264 feat: If the model is not set, a pop-up window will remind the user #2295 (#2574)
### What problem does this PR solve?

feat: If the model is not set, a pop-up window will remind the user
#2295

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-25 11:16:00 +08:00
e3b3ec3f79 multi-arch-build (#2571)
### What problem does this PR solve?

Build multi-arch docker image `infiniflow/ragflow:poetry` on
`linux/amd64` and `linux/arm64`.

### Type of change

- [x] Refactoring
2024-09-25 10:37:20 +08:00
08d5637770 Fix tokenizer bug (#2573)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-25 10:30:49 +08:00
7bb28ca2bd add lighten control (#2567)
### What problem does this PR solve?

#2295

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 19:22:01 +08:00
9251fb39af feat: Delete Model Provider #2376 (#2565)
### What problem does this PR solve?

feat: Delete Model Provider #2376

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-24 19:10:06 +08:00
91dbce30bd feat: Add component Jin10 #1739 (#2563)
### What problem does this PR solve?

feat: Add component Jin10  #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 18:54:09 +08:00
949a999478 feat: Add component YahooFinance #1739 (#2561)
### What problem does this PR solve?

feat: Add component YahooFinance #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 16:46:41 +08:00
d40041cc82 refine multi-turn chat in agent (#2560)
### What problem does this PR solve?

#2484

### Type of change

- [x] Performance Improvement
- [ ] Other (please describe):
2024-09-24 16:20:19 +08:00
832c90ac3e fix: Web code build fails on ARM machines #2554 (#2557)
### What problem does this PR solve?

fix: Web code build fails on ARM machines #2554

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-24 15:27:26 +08:00
7b3099b1a1 add an API of delete llm supplier (#2556)
### What problem does this PR solve?

#1853

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-24 15:24:15 +08:00
4681638974 Streaming output is supported, dialogue share is not (#2555)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-24 15:14:44 +08:00
ecf441c830 refine using rerank model (#2553)
### What problem does this PR solve?

#2552

### Type of change

- [x] Performance Improvement
2024-09-24 12:38:18 +08:00
d9c2a128a5 SparkTTS (#2535)
### What problem does this PR solve?

SparkTTS

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-24 12:15:12 +08:00
38e3475714 refine markdown prompt (#2551)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-24 12:04:16 +08:00
90644246d6 Updated README on debugging web and python (#2544)
### What problem does this PR solve?

Updated README on debugging web and python

### Type of change

- [x] Documentation Update
2024-09-24 11:46:03 +08:00
100c60017f fix component rewrite bug (#2549)
### What problem does this PR solve?

#2545

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-24 11:31:42 +08:00
51dd6d1f90 fix: Initial language is English, but the UI is in Chinese #2514 (#2541)
### What problem does this PR solve?

fix: Initial language is English, but the UI is in Chinese #2514

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 16:28:27 +08:00
521ea6afcb feat: Refine retrieval of multi-turn conversation #2362 (#2539)
### What problem does this PR solve?

feat: Refine retrieval of multi-turn conversation #2362

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-23 15:26:11 +08:00
dd019e7ba1 feat: Configurable for excel, html table or row based text #2516 (#2538)
### What problem does this PR solve?

feat: Configurable for excel, html table or row based text #2516

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 14:58:51 +08:00
db1be22a2f fix: Merge models of the same category #2479 (#2536)
### What problem does this PR solve?

fix: Merge models of the same category #2479

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-23 14:07:00 +08:00
139268de6f Reverted replacing npm with yarn (#2531)
Reverted replacing npm with yarn

### Type of change

- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 11:08:31 +08:00
f6ceb43e36 fix: Add model by ollama in model provider page, user can't choose the model in chat window. #2479 (#2529)
### What problem does this PR solve?

fix: Add model by ollama in model provider page, user can't choose the
model in chat window. #2479

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-23 10:53:18 +08:00
d8a43416f5 Rework Dockerfile.scratch (#2525)
### What problem does this PR solve?

Rework Dockerfile.scratch
- Multiple stage Dockerfile
- Removed conda
- Replaced pip with poetry
- Added missing dependencies and fixed package version conflicts
- Added deepdoc models

### Type of change

- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 10:00:44 +08:00
4a6a2a0f1b refine xinference (#2521)
### What problem does this PR solve?

#1588

### Type of change

- [x] Refactoring
2024-09-20 18:37:01 +08:00
9bbef8216d refine retrieval of multi-turn conversation (#2520)
### What problem does this PR solve?

#2362 #2484

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Performance Improvement
2024-09-20 17:25:55 +08:00
78856703c4 make excel parsing configurable (#2517)
### What problem does this PR solve?

#2516

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-20 15:33:38 +08:00
099c37ba95 rm key set in xinference (#2511)
### What problem does this PR solve?

#2492

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-20 10:55:52 +08:00
a44f1f735d fix self deployed llm lost (#2510)
### What problem does this PR solve?

#2509 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-20 10:41:25 +08:00
ae6f68e625 Update README_zh.md (#2491)
The core image swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:dev is 19.1 GB in size, not 9 GB.

### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-20 10:22:47 +08:00
5dd19c6a57 remove setting-system/index.tsx error import (#2507)
### What problem does this PR solve?


After the merge of commit ca0c22f3184b9229e7e86de699842bb3b0e502c2, the
ragflow/web code would not run. This commit fixes that problem.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-20 10:21:48 +08:00
5968f148bc refactor add LLM (#2508)
### What problem does this PR solve?

#2487

### Type of change

- [x] Refactoring
2024-09-20 10:20:35 +08:00
4f962d6bff BugFix: Fixed api_key generation error for VolcEngine (#2502)
BugFix: Fixed an api_key generation error for VolcEngine caused by Python's
f-string syntax

### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 海贼宅 <stu_xyx@163.com>
2024-09-20 10:03:43 +08:00
ddb8be9219 Web: Display the icon of the currently used storage. (#2504)
https://github.com/infiniflow/ragflow/issues/2503


### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Before:

<img width="611" alt="image"
src="https://github.com/user-attachments/assets/02a3a1ee-7bfb-4fe0-9b15-11ced69cc8a3">

After:

<img width="796" alt="image"
src="https://github.com/user-attachments/assets/371136af-8d16-47aa-909b-26609d3ad60e">

<img width="557" alt="image"
src="https://github.com/user-attachments/assets/9268362f-2b6a-4ea1-9fe7-659f7292e5e1">
2024-09-20 09:49:16 +08:00
422c229e52 Storage: Rename all the variables about get file to storage from minio. (#2497)
https://github.com/infiniflow/ragflow/issues/2496

### What problem does this PR solve?


### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-19 19:19:27 +08:00
b5d1d2fec4 refine TTS (#2500)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-19 19:15:16 +08:00
d545633a6c OpenAITTS (#2493)
### What problem does this PR solve?

OpenAITTS

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-19 16:55:18 +08:00
af0b4b0828 fix(Add model api): Add VolcEngine to create api_key format error (#2490)
### What problem does this PR solve?


Adding VolcEngine produced an api_key format error: when constructing the
JSON string, there was an extra "," at the end, which caused a formatting
error. This commit fixes the problem.
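
A minimal illustration of the failure mode described above; the `volc_ak`/`volc_sk`/`ep_id` key names are hypothetical placeholders, not necessarily the repo's actual fields:

```python
import json

ep_id = "ep-12345"  # hypothetical endpoint id

# Buggy: a trailing comma inside the hand-built f-string yields invalid JSON.
bad = f'{{"volc_ak": "ak", "volc_sk": "sk", "ep_id": "{ep_id}",}}'
# json.loads(bad)  # raises json.decoder.JSONDecodeError

# Safer: build a dict and let json.dumps handle the syntax.
good = json.dumps({"volc_ak": "ak", "volc_sk": "sk", "ep_id": ep_id})
print(json.loads(good)["ep_id"])  # ep-12345
```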


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-19 15:10:49 +08:00
6c6380d27a update document sdk (#2485)
### Type of change
#2485
- [x] Performance Improvement
2024-09-19 12:52:35 +08:00
2324b88579 fix parser for pptx of which files are from filemanager (#2482)
### What problem does this PR solve?

#2467

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 19:13:37 +08:00
2b0dc01a88 rename some attributes in document sdk (#2481)
### What problem does this PR solve?

#1102

### Type of change

- [x] Performance Improvement

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-18 18:46:37 +08:00
01acc3fd5a fix duplicated llm name between different suppliers (#2477)
### What problem does this PR solve?

#2465

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 16:09:22 +08:00
2484e26cb5 fix superuser password not base64 encoded (#2475)
### What problem does this PR solve?

Fixes the _superuser_ `admin@ragflow.io` not being accessible due to how
entered passwords are used. Unless this is expected behavior?
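
A sketch of the encoding mismatch, under the assumption that the client base64-encodes typed passwords before submission (the flow below is illustrative, not the repo's exact code):

```python
import base64

typed = "admin"
# What the client submits after base64-encoding the typed password.
sent_by_client = base64.b64encode(typed.encode()).decode()  # "YWRtaW4="
# If the seeded superuser password is checked against the raw string, login
# fails; both sides must apply (or strip) the base64 step consistently.
assert base64.b64decode(sent_by_client).decode() == typed
```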

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 14:30:45 +08:00
7195742ca5 rename create_timestamp_flt to create_timestamp_float (#2473)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-18 12:50:05 +08:00
62cb5f1bac update document sdk (#2445)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-18 11:08:19 +08:00
e7dd487779 fix ppt file from filemanager error (#2470)
### What problem does this PR solve?

#2467

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 09:22:14 +08:00
e41268efc6 Add Multi-Language Descriptions for 'Switch' Component and Update Message Assistant Placeholder (#2450)
### What problem does this PR solve?

_This PR addresses the need to describe the "Switch" component across
different languages and corrects a misleading description for a
placeholder message not exclusively tied to a specific assistant type.
By providing clearer and more accurate descriptions, this PR aims to
improve user understanding and usability of the Switch component and the
"Message Resume Assistant..." placeholder in a multilingual context._

### Explanation of Changes

1. **Added Descriptions for "Switch" Component**: 
- Descriptions were added for the "Switch" component in three different
locales:
- **English (EN)**: Provides a concise description of what the "Switch"
component does, focusing on its ability to evaluate conditions and
direct the flow of execution.
- **Simplified Chinese (ZH)**: Translated the English description into
Simplified Chinese to cater to users who prefer this locale.
- **Traditional Chinese (ZH-Traditional)**: Added a Traditional Chinese
version of the description to support users in regions that use
Traditional Chinese.
   
2. **Corrected "Message Resume Assistant..." to "Message the
Assistant..."**:
- Updated the description from "Message Resume Assistant..." to "Message
the Assistant..." in the English locale. This correction makes the
description more generic and accurate, reflecting the placeholder's
broader functionality, which is not limited to Resume Assistants. It now
clearly communicates that the placeholder can be used with various types
of assistants, not just those related to resumes.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-15 16:16:10 +08:00
2f33ec7ad0 feat: When voice is turned on, the page will not display an empty reply message when the answer is empty #1877 (#2447)
### What problem does this PR solve?

feat: When voice is turned on, the page will not display an empty reply
message when the answer is empty #1877

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-14 18:39:13 +08:00
3b1375ef99 feat: If the tts model is not set, the Text to Speech switch is not allowed to be turned on #1877 (#2446)
### What problem does this PR solve?

feat: If the tts model is not set, the Text to Speech switch is not
allowed to be turned on #1877

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-14 17:45:29 +08:00
2c05e6e6bd Update and rename agentic_rag_introduction.md to agent_introduction.md (#2443)
### What problem does this PR solve?

#2441 

### Type of change


- [x] Documentation Update
2024-09-14 17:36:57 +08:00
8ccc696723 Update _category_.json (#2442)
### What problem does this PR solve?

#2441 

### Type of change

- [x] Documentation Update
2024-09-14 17:36:35 +08:00
1621313c0f feat: After the voice in the new conversation window is played, jump to the tab of the conversation #1877 (#2440)
### What problem does this PR solve?

feat: After the voice in the new conversation window is played, jump to
the tab of the conversation #1877

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-14 17:19:04 +08:00
b94c15ef1e prepare document for release (#2438)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-14 16:09:42 +08:00
8a16c8cc44 fix duplicate function name (#2437)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-14 16:04:02 +08:00
b12a437a30 feat: Supports text output and sound output #1877 (#2436)
### What problem does this PR solve?

feat: Supports text output and sound output #1877

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-14 15:58:02 +08:00
deeb950e1c feat: Add html to the description text of the parsing method general #336 (#2432)
### What problem does this PR solve?

feat: Add html to the description text of the parsing method general
#336

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-14 15:18:34 +08:00
6a0702f55f feat: Display mindmap in drawer #2247 (#2430)
### What problem does this PR solve?

feat: Display mindmap in drawer #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-14 14:42:36 +08:00
3044cb85fd fix batch size error for qianwen embedding (#2431)
### What problem does this PR solve?

#2402

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-14 14:40:57 +08:00
d3262ca378 refine the warning message for rewrite component (#2429)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-14 14:17:03 +08:00
99a7c0fb97 update sdk document and chunk (#2421)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-14 13:24:21 +08:00
7e75b9d778 fix parsing spaces in russian language PDFs (#1987) (#2427)
### What problem does this PR solve?

[#1987](https://github.com/infiniflow/ragflow/issues/1987)

When scanning PDF files character by character, the parser excluded spaces if the string did not match the regex. Text from [Russian documents](https://github.com/user-attachments/files/16659706/dogovor_oferta.pdf) needs spaces, but it does not match the regex because it uses a different alphabet. That's why such PDFs were parsed incorrectly and were almost unusable as a source. Fixed that by adding the Russian alphabet to the regex.

There might be problems with other languages that use different alphabets. I additionally tested a [PDF in Spanish](https://www.scusd.edu/sites/main/files/file-attachments/howtohelpyourchildsucceedinschoolspanish.pdf?1338307816), and the old [a-zA-Z...] regex parses it correctly with spaces.
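
A hypothetical reduction of the regex change described above (the character classes are illustrative, not the parser's exact pattern):

```python
import re

# Spaces survived only when neighboring characters matched the class,
# so Cyrillic text lost its spaces under the Latin-only pattern.
latin_only = re.compile(r"[a-zA-Z0-9]")
with_cyrillic = re.compile(r"[a-zA-Zа-яА-ЯёЁ0-9]")

ch = "д"
print(bool(latin_only.match(ch)))     # False -> space around it was dropped
print(bool(with_cyrillic.match(ch)))  # True  -> space is preserved
```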

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-14 13:14:39 +08:00
a467f31238 minor (#2422)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-14 09:34:35 +08:00
54342ae0a2 boost highlight performance (#2419)
### What problem does this PR solve?

#2415

### Type of change

- [x] Performance Improvement
2024-09-13 18:10:32 +08:00
bdcf195b20 Initial draft of Create a General-purpose chatbot (#2411)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-13 17:21:03 +08:00
3f571a13c2 fix empty children in mindmap (#2418)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-13 17:19:47 +08:00
9d4bb5767c make highlight friendly to English (#2417)
### What problem does this PR solve?

#2415

### Type of change

- [x] Performance Improvement
2024-09-13 17:03:51 +08:00
5e7b93e802 add updates for README (#2413)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-09-13 14:31:04 +08:00
ec4def9a44 feat: When the mindmap data is empty, it will not be displayed on the search page #2247 (#2414)
### What problem does this PR solve?

feat: When the mindmap data is empty, it will not be displayed on the
search page #2247
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-13 14:30:51 +08:00
2bd71d722b feat: Modify the style of the answer card on the search page #2247 (#2412)
### What problem does this PR solve?

feat: Modify the style of the answer card on the search page #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-13 12:31:31 +08:00
8f2c0176b4 feat: Use Spin to wrap the chunk list on the search page #2247 (#2409)
### What problem does this PR solve?

feat: Use Spin to wrap the chunk list on the search page #2247
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-13 11:38:09 +08:00
b261b6aac0 fix pip install error (#2407)
### What problem does this PR solve?

#2356

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-13 10:06:54 +08:00
cbdf54cf36 feat: Click on the chunk on the search page to locate the corresponding file location #2247 (#2399)
### What problem does this PR solve?

feat: Click on the chunk on the search page to locate the corresponding
file location #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-13 08:54:26 +08:00
db0606e064 feat: Wrap the searched chunk with a Popover #2247 (#2398)
### What problem does this PR solve?

feat: Wrap the searched chunk with a Popover #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 19:15:44 +08:00
cfae63d107 Add RAGFlow benchmark (#2387)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 19:01:00 +08:00
88f8c8ed86 Fix volcengine yfinance confliction (#2386)
### What problem does this PR solve?

#2379 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-12 19:00:35 +08:00
4158697fe6 feat: Add component AkShare #1739 (#2390)
### What problem does this PR solve?

 feat: Add component AkShare #1739

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 17:58:05 +08:00
5f9cb16a3c feat: Add component WenCai #1739 (#2388)
### What problem does this PR solve?

feat: Add component WenCai #1739

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 17:51:43 +08:00
4730145696 debug backend API for TAB 'search' (#2389)
### What problem does this PR solve?
#2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 17:51:20 +08:00
68d0210e92 feat: Use Tree to display the knowledge base list on the search page #2247 (#2385)
### What problem does this PR solve?

feat: Use Tree to display the knowledge base list on the search page
#2247
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 17:23:32 +08:00
f8e9a0590f Common: Support PostgreSQL database as the metadata db. (#2357)
https://github.com/infiniflow/ragflow/issues/2356

### What problem does this PR solve?

As title

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-09-12 15:12:39 +08:00
ba834aee26 Add a default value for do_refer in Dialog (#2383)
### What problem does this PR solve?

Add a default value for do_refer in Dialog

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-12 15:11:57 +08:00
983540614e feat: Cover the entire search page with a background image #2247 (#2381)
### What problem does this PR solve?

feat: Cover the entire search page with a background image #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 14:20:04 +08:00
6722b3d558 update sdk document (#2374)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-12 14:19:45 +08:00
6000c3e304 feat: Catching errors with IndentedTree #2247 (#2380)
### What problem does this PR solve?

feat: Catching errors with IndentedTree #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-12 13:34:33 +08:00
333608a1d4 add search TAB backend api (#2375)
### What problem does this PR solve?
 #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-11 19:49:18 +08:00
8052cbc70e feat: Retrieval chunks by page #2247 (#2373)
### What problem does this PR solve?

feat: Retrieval chunks by page #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-11 19:48:11 +08:00
b0e0e1fdd0 fix json error (#2372)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-11 19:10:49 +08:00
8e3228d461 feat: Catch errors in getting mindmap #2247 (#2368)
### What problem does this PR solve?

feat: Catch errors in getting mindmap #2247

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-11 16:19:14 +08:00
f789098e9f feat: Proxy the api address to the local nginx address #2350 (#2366)
### What problem does this PR solve?

feat: Proxy the api address to the local nginx address #2350

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-11 15:17:52 +08:00
d6e6c530d7 fix OpenRouter add bug and the way to add OpenRouter model (#2364)
### What problem does this PR solve?

#2359  fix OpenRouter add bug and the way to add OpenRouter model

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-11 15:10:25 +08:00
22c5affacc Update templates: Text2sql and DB Description (#2361)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-09-11 12:25:57 +08:00
35b7d17d97 fix SILICONFLOW embedding error (#2363)
### What problem does this PR solve?

#2335  fix SILICONFLOW embedding error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-11 12:17:44 +08:00
1fc14ff6d4 SDK for session (#2354)
### What problem does this PR solve?

Includes SDK methods for creating, updating, getting, and listing
sessions, and for dialogues
#1102 
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-11 12:03:55 +08:00
7fad48f42c fix: empty or contains only empty strings. (#2347)
### What problem does this PR solve?
The Empty_response setting was left empty. In that case, the expected
behavior is that the LLM generates a response when no references can be
found in the knowledge base.
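
A minimal sketch of the intended control flow (all names are illustrative):

```python
def build_answer(question, references, empty_response, llm_chat):
    # Only return the canned Empty_response when it is configured and
    # non-blank; otherwise fall back to asking the LLM.
    if not references:
        if empty_response and empty_response.strip():
            return empty_response
    return llm_chat(question, references)
```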

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)


![image](https://github.com/user-attachments/assets/9c382b1d-40f6-43b0-848c-fa6863f9a253)

![image](https://github.com/user-attachments/assets/032d2001-97a2-4faa-91bf-c9c57caf2070)

Co-authored-by: Theta Wang (ncu) <chunshan.connect@gmail.com>
2024-09-11 09:32:12 +08:00
77988fe3c2 fix: 123.60.95.134 redirect attack #2350 (#2352)
### What problem does this PR solve?
fix: 123.60.95.134 redirect attack #2350


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-10 17:25:15 +08:00
cb00f36f62 fix categorize error (#2348)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-10 16:03:30 +08:00
7edb4ad7dc fix: Make markdown support line breaks #2315 (#2343)
### What problem does this PR solve?

fix: Make markdown support line breaks #2315

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-10 12:15:02 +08:00
66c54e75f3 add default model types (#2342)
### What problem does this PR solve?


### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-10 11:39:44 +08:00
f60dfffb4b add model types to factories API (#2341)
### What problem does this PR solve?

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-10 11:26:01 +08:00
f1ad778250 feat: Dynamically change the background image on the search homepage every day #2247 (#2338)
### What problem does this PR solve?

feat: Dynamically change the background image on the search homepage
every day #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-10 11:25:12 +08:00
7c8f159751 Fixed a docusaurus display issue (#2340)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-10 11:24:59 +08:00
c57cc0769b Added a brief introduction to Agentic RAG (#2331)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-09 20:16:28 +08:00
869df1f704 minor (#2328)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-09-09 19:30:17 +08:00
42eeb38247 feat: Add RetrievalDocuments to SearchPage #2247 (#2327)
### What problem does this PR solve?
feat: Add RetrievalDocuments to SearchPage #2247
feat: Click on the link in the reference to display the pdf drawer #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 19:20:16 +08:00
7241c73c7a Add benchmark ndcg@10 (#2326)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 19:20:00 +08:00
336a639164 SDK for session (#2312)
### What problem does this PR solve?

SDK for session
#1102 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-09 17:18:08 +08:00
ceae4df889 feat: The search box is displayed globally when the page is loaded for the first time #2247 (#2325)
### What problem does this PR solve?

feat: The search box is displayed globally when the page is loaded for
the first time #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 16:37:04 +08:00
884dcbcb7e updates dead link (#2324)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-09-09 16:36:52 +08:00
4b57177523 fix error for files from filemanager (#2323)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-09 14:27:22 +08:00
4130519599 feat: Hide search tab #2247 (#2322)
### What problem does this PR solve?

feat: Hide search tab #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 14:06:27 +08:00
0c73f77c4d Update .env (#2319)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-09-09 14:02:48 +08:00
fbe68034aa feat: Click on the relevant question tag to continue searching for answers #2247 (#2320)
### What problem does this PR solve?

feat: Click on the relevant question tag to continue searching for
answers #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 14:02:08 +08:00
22acd0ac67 fix minio error (#2321)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-09 14:01:25 +08:00
4cf122c6db Doc updates for newly updates (#2317)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-09 13:35:50 +08:00
6a77c94365 Update .env For CVE-2024-37288 (#2318)
fix: es CVE-2024-37288

https://discuss.elastic.co/t/kibana-8-15-1-security-update-esa-2024-27-esa-2024-28/366119

### What problem does this PR solve?

### Type of change
- [x] Performance Improvement
2024-09-09 13:34:08 +08:00
80656309f7 fix azure-openai add bug (#2314)
### What problem does this PR solve?

#2236  fix azure-openai add bug

### Type of change


- [x] Bug Fix

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-09 12:10:45 +08:00
9f7d187ab3 add elapsed time of conversation (#2316)
### What problem does this PR solve?

#2315

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 12:08:50 +08:00
63da2cb7d5 fix SILICONFLOW rerank error (#2313)
### What problem does this PR solve?

#2231  fix SILICONFLOW rerank error

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-09 11:41:37 +08:00
cb69c742b0 add support for TongyiQwen tts (#2311)
### What problem does this PR solve?

add support for TongyiQwen tts
#1853

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-09 11:01:43 +08:00
2ac72899ef reduce interval of task executor heart beat (#2308)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
- [ ] Other (please describe):
2024-09-09 10:19:10 +08:00
473f9892fb Updated component descriptions (#2293)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-09-09 10:16:16 +08:00
fe4b2bf969 feat: Show chat tab #2247 (#2307)
### What problem does this PR solve?

feat: Show chat tab #2247
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-09 10:04:22 +08:00
c18b78b261 add a lib (#2306)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-09 09:48:12 +08:00
8dd3adc443 Storage: Support the s3, azure blob as the object storage of ragflow. (#2278)
### What problem does this PR solve?

issue: https://github.com/infiniflow/ragflow/issues/2277


### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-09 09:41:14 +08:00
e85fea31a8 feat: Fetch mind map in search page #2247 (#2292)
### What problem does this PR solve?
feat: Fetch mind map in search page #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-06 19:56:17 +08:00
1aba978de2 Add component AkShare (#2290)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-06 19:09:41 +08:00
7e0b3d19d6 Add component TuShare (#2288)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-06 18:33:35 +08:00
788ca41d9e feat: Added md.svg #345 (#2289)
### What problem does this PR solve?

feat: Added md.svg #345

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-06 18:32:06 +08:00
6b23308f26 Added kibana (#2286)
Added Kibana to make Elasticsearch management easier.
PR #1710 added this.
PR #1714 reverted it.
This PR adds it again and fixes some bugs.

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-06 16:02:44 +08:00
925dd2aa85 feat: Search for the answers you want based on the selected knowledge base #2247 (#2287)
### What problem does this PR solve?

feat: Search for the answers you want based on the selected knowledge
base #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-06 15:42:55 +08:00
b5a2711c05 Fix agent retrieval nothing (#2283)
### What problem does this PR solve?

Fix agent retrieval nothing

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-06 15:02:41 +08:00
c6e723f2ee Fix graphrag : "role" user (#2273)
### What problem does this PR solve?

#2270 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-06 10:04:01 +08:00
fd3e55cfcf Add component Jin10 (#2271)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 19:45:05 +08:00
6ae0da92cb feat: send question with retrieval api #2247 (#2272)
### What problem does this PR solve?
feat: send question with retrieval api #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 19:32:55 +08:00
9377192859 Add component WenCai (#2269)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 17:50:21 +08:00
42671e08f1 Fine tweaks to template descriptions (#2264)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-09-05 16:16:03 +08:00
b2f87a9f8f feat: Add sidebar to SearchPage #2247 (#2267)
### What problem does this PR solve?

feat: Add sidebar to SearchPage #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 16:04:04 +08:00
878dca26bb SDK for Assistant (#2266)
### What problem does this PR solve?

SDK for Assistant
#1102 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
2024-09-05 15:08:02 +08:00
445576ec88 feat: Set the global scroll bar style #2247 (#2265)
### What problem does this PR solve?

feat: Set the global scroll bar style #2247
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 14:47:48 +08:00
04de0c4cef update agent\templates\medical_consultation.json (#2260)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-05 12:15:12 +08:00
7e65df87dd feat: Add Sider to SearchPage #2247 (#2262)
### What problem does this PR solve?

feat: Add Sider to SearchPage #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 12:12:44 +08:00
7c98cb5075 feat: Make agent template support HTML #1842 (#2259)
### What problem does this PR solve?

feat: Make agent template support HTML #1842
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 10:49:52 +08:00
6df0f44e71 Bump flask-cors from 4.0.1 to 5.0.0 (#2251)
Bumps [flask-cors](https://github.com/corydolphin/flask-cors) from 4.0.1
to 5.0.0.
Release notes (sourced from [flask-cors's releases](https://github.com/corydolphin/flask-cors/releases)):

**5.0.0**
- Breaking: Change default to disable private network access by @corydolphin in corydolphin/flask-cors#368. This effectively resolves https://github.com/advisories/GHSA-hxwh-jpp2-84pm and https://osv.dev/vulnerability/PYSEC-2024-71
- Full Changelog: https://github.com/corydolphin/flask-cors/compare/4.0.2...5.0.0

**4.0.2**
- Bump requests from 2.31.0 to 2.32.0 in /docs by @dependabot in corydolphin/flask-cors#358
- Backwards Compatible Fix for CVE-2024-6221 by @adrianosela in corydolphin/flask-cors#363
- Add unit tests for Private-Network by @corydolphin in corydolphin/flask-cors#367
- New contributors: @dependabot (corydolphin/flask-cors#358) and @adrianosela (corydolphin/flask-cors#363) made their first contributions
- Full Changelog: https://github.com/corydolphin/flask-cors/compare/4.0.1...4.0.2

Changelog: sourced from [flask-cors's CHANGELOG.md](https://github.com/corydolphin/flask-cors/blob/main/CHANGELOG.md).

Commits:
- c851476 V5: Breaking: Change default to disable private network access (corydolphin/flask-cors#368)
- 561ed26 Add unit tests for Private-Network (corydolphin/flask-cors#367)
- 7ae310c Backwards Compatible Fix for CVE-2024-6221 (corydolphin/flask-cors#363)
- f25c6b2 --- (corydolphin/flask-cors#358)
- Full diff: https://github.com/corydolphin/flask-cors/compare/4.0.1...5.0.0

[Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=flask-cors&package-manager=pip&previous-version=4.0.1&new-version=5.0.0)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:34:22 +08:00
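The 5.0.0 breaking change above disables Private Network Access responses by default, so apps upgrading from 4.0.1 that relied on the old behavior must opt back in explicitly. A minimal sketch, assuming the `allow_private_network` flag introduced alongside the CVE-2024-6221 fix:

```python
# Sketch only: opting back into Private Network Access under
# flask-cors 5.x, where the default flipped to disabled.
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)

# The 5.0.0 default (off) is the safer choice; enable it only for
# endpoints that must answer private-network preflight requests.
CORS(app, allow_private_network=True)
```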
c998ad7a18 Bump nltk from 3.8.1 to 3.9 (#2250)
Bumps [nltk](https://github.com/nltk/nltk) from 3.8.1 to 3.9.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/nltk/nltk/blob/develop/ChangeLog">nltk's
changelog</a>.</em></p>
<blockquote>
<p>Version 3.9.1 2024-08-19</p>
<ul>
<li>Fixed bug that prevented wordnet from loading</li>
</ul>
<p>Version 3.9 2024-08-18</p>
<ul>
<li>Fix security vulnerability CVE-2024-39705 (breaking change)</li>
<li>Replace pickled models (punkt, chunker, taggers) by new pickle-free
&quot;_tab&quot; packages</li>
<li>No longer sort WordNet synsets and relations (sort in calling
function when required)</li>
<li>Add Python 3.12 support</li>
<li>Many other minor fixes</li>
</ul>
<p>Thanks to the following contributors to 3.8.2:
Tom Aarsen, Cat Lee Ball, Veralara Bernhard, Carlos Brandt, Konstantin
Chernyshev, Michael Higgins,
Eric Kafe, Vivek Kalyan, David Lukes, Rob Malouf, purificant, Alex
Rudnick, Liling Tan, Akihiro Yamazaki.</p>
<p>Version 3.8.1 2023-01-02</p>
<ul>
<li>Resolve RCE vulnerability in localhost WordNet Browser (<a
href="https://redirect.github.com/nltk/nltk/issues/3100">#3100</a>)</li>
<li>Remove unused tool scripts (<a
href="https://redirect.github.com/nltk/nltk/issues/3099">#3099</a>)</li>
<li>Resolve XSS vulnerability in localhost WordNet Browser (<a
href="https://redirect.github.com/nltk/nltk/issues/3096">#3096</a>)</li>
<li>Add Python 3.11 support (<a
href="https://redirect.github.com/nltk/nltk/issues/3090">#3090</a>)</li>
</ul>
<p>Thanks to the following contributors to 3.8.1:
Francis Bond, John Vandenberg, Tom Aarsen</p>
<p>Version 3.8 2022-12-12</p>
<ul>
<li>Refactor dispersion plot (<a
href="https://redirect.github.com/nltk/nltk/issues/3082">#3082</a>)</li>
<li>Provide type hints for LazyCorpusLoader variables (<a
href="https://redirect.github.com/nltk/nltk/issues/3081">#3081</a>)</li>
<li>Throw warning when LanguageModel is initialized with incorrect
vocabulary (<a
href="https://redirect.github.com/nltk/nltk/issues/3080">#3080</a>)</li>
<li>Fix WordNet's all_synsets() function (<a
href="https://redirect.github.com/nltk/nltk/issues/3078">#3078</a>)</li>
<li>Resolve TreebankWordDetokenizer inconsistency with end-of-string
contractions (<a
href="https://redirect.github.com/nltk/nltk/issues/3070">#3070</a>)</li>
<li>Support both iso639-3 codes and BCP-47 language tags (<a
href="https://redirect.github.com/nltk/nltk/issues/3060">#3060</a>)</li>
<li>Avoid DeprecationWarning in Regexp tokenizer (<a
href="https://redirect.github.com/nltk/nltk/issues/3055">#3055</a>)</li>
<li>Fix many doctests, add doctests to CI (<a
href="https://redirect.github.com/nltk/nltk/issues/3054">#3054</a>, <a
href="https://redirect.github.com/nltk/nltk/issues/3050">#3050</a>, <a
href="https://redirect.github.com/nltk/nltk/issues/3048">#3048</a>)</li>
<li>Fix bool field not being read in VerbNet (<a
href="https://redirect.github.com/nltk/nltk/issues/3044">#3044</a>)</li>
<li>Greatly improve time efficiency of SyllableTokenizer when tokenizing
numbers (<a
href="https://redirect.github.com/nltk/nltk/issues/3042">#3042</a>)</li>
<li>Fix encodings of Polish udhr corpus reader (<a
href="https://redirect.github.com/nltk/nltk/issues/3038">#3038</a>)</li>
<li>Allow TweetTokenizer to tokenize emoji flag sequences (<a
href="https://redirect.github.com/nltk/nltk/issues/3034">#3034</a>)</li>
<li>Prevent LazyModule from increasing the size of
nltk.<strong>dict</strong> (<a
href="https://redirect.github.com/nltk/nltk/issues/3033">#3033</a>)</li>
<li>Fix CoreNLPServer non-default port issue (<a
href="https://redirect.github.com/nltk/nltk/issues/3031">#3031</a>)</li>
<li>Add &quot;acion&quot; suffix to the Spanish SnowballStemmer (<a
href="https://redirect.github.com/nltk/nltk/issues/3030">#3030</a>)</li>
<li>Allow loading WordNet without OMW (<a
href="https://redirect.github.com/nltk/nltk/issues/3026">#3026</a>)</li>
<li>Use input() in nltk.chat.chatbot() for Jupyter support (<a
href="https://redirect.github.com/nltk/nltk/issues/3022">#3022</a>)</li>
<li>Fix edit_distance_align() in distance.py (<a
href="https://redirect.github.com/nltk/nltk/issues/3017">#3017</a>)</li>
<li>Tackle performance and accuracy regression of sentence tokenizer
since NLTK 3.6.6 (<a
href="https://redirect.github.com/nltk/nltk/issues/3014">#3014</a>)</li>
<li>Add the Iota operator to semantic logic (<a
href="https://redirect.github.com/nltk/nltk/issues/3010">#3010</a>)</li>
<li>Resolve critical errors in WordNet app (<a
href="https://redirect.github.com/nltk/nltk/issues/3008">#3008</a>)</li>
<li>Resolve critical error in CHILDES Corpus (<a
href="https://redirect.github.com/nltk/nltk/issues/2998">#2998</a>)</li>
<li>Make WordNet information_content() accept adjective satellites (<a
href="https://redirect.github.com/nltk/nltk/issues/2995">#2995</a>)</li>
<li>Add &quot;strict=True&quot; parameter to CoreNLP (<a
href="https://redirect.github.com/nltk/nltk/issues/2993">#2993</a>, <a
href="https://redirect.github.com/nltk/nltk/issues/3043">#3043</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="24936a2d0c"><code>24936a2</code></a>
Bump version to 3.9</li>
<li><a
href="c222897403"><code>c222897</code></a>
Merge branch 'develop' of <a
href="https://github.com/nltk/nltk">https://github.com/nltk/nltk</a>
into develop</li>
<li><a
href="34c3a4ad4e"><code>34c3a4a</code></a>
Merge branch 'develop' of <a
href="https://github.com/nltk/nltk">https://github.com/nltk/nltk</a>
into develop</li>
<li><a
href="253dd3acd1"><code>253dd3a</code></a>
add black</li>
<li><a
href="c43727fad6"><code>c43727f</code></a>
Update version</li>
<li><a
href="7137405da3"><code>7137405</code></a>
Merge pull request <a
href="https://redirect.github.com/nltk/nltk/issues/3066">#3066</a> from
asishm/bugfix-lambda-closure-leak</li>
<li><a
href="369cb9f85d"><code>369cb9f</code></a>
Merge pull request <a
href="https://redirect.github.com/nltk/nltk/issues/3245">#3245</a> from
ekaf/hotfix-closuredup</li>
<li><a
href="501c70e20a"><code>501c70e</code></a>
Merge branch 'develop' into hotfix-closuredup</li>
<li><a
href="bf05dc4cf2"><code>bf05dc4</code></a>
Merge pull request <a
href="https://redirect.github.com/nltk/nltk/issues/3306">#3306</a> from
ekaf/py3_compat</li>
<li><a
href="66539c7cc7"><code>66539c7</code></a>
Sorted output in unit/test_wordnet.py</li>
<li>Additional commits viewable in <a
href="https://github.com/nltk/nltk/compare/3.8.1...3.9">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nltk&package-manager=pip&previous-version=3.8.1&new-version=3.9)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:34:11 +08:00
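Because 3.9 swaps the pickled punkt/tagger models for pickle-free `_tab` packages (see the changelog above), resource downloads need to target the new names. A minimal sketch, assuming NLTK >= 3.9:

```python
# Sketch only: the pickle-free "punkt_tab" package replaces the
# pickled "punkt" model in NLTK >= 3.9.
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt_tab")  # was: nltk.download("punkt")
nltk.download("wordnet")

print(word_tokenize("RAGFlow chunks documents before indexing."))
```

The same rename shows up later in this diff, where the Dockerfile's downloader line fetches `punkt_tab` alongside `punkt` and `wordnet`.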
1dcc416c70 Bump cryptography from 42.0.5 to 43.0.1 (#2253)
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.5
to 43.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>43.0.1 - 2024-09-03</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.3.2.</li>
</ul>
<p>43.0.0 - 2024-07-20</p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for OpenSSL less
than 1.1.1e has been
removed.  Users on older version of OpenSSL will need to upgrade.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for
LibreSSL &lt; 3.8.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.3.1.</li>
<li>Updated the minimum supported Rust version (MSRV) to 1.65.0, from
1.63.0.</li>

<li>:func:<code>~cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key</code>
now enforces a minimum RSA key size of 1024-bit. Note that 1024-bit is
still
considered insecure, users should generally use a key size of
2048-bits.</li>

<li>:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.serialize_certificates</code>
now emits ASN.1 that more closely follows the recommendations in
:rfc:<code>2315</code>.</li>
<li>Added new :doc:<code>/hazmat/decrepit/index</code> module which
contains outdated and
insecure cryptographic primitives.

:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.CAST5</code>,

:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.SEED</code>,

:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.IDEA</code>,
and

:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.Blowfish</code>,
which were
deprecated in 37.0.0, have been added to this module. They will be
removed
from the <code>cipher</code> module in 45.0.0.</li>
<li>Moved
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.TripleDES</code>
and
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.ARC4</code>
into
:doc:<code>/hazmat/decrepit/index</code> and deprecated them in the
<code>cipher</code> module.
They will be removed from the <code>cipher</code> module in 48.0.0.</li>
<li>Added support for deterministic
:class:<code>~cryptography.hazmat.primitives.asymmetric.ec.ECDSA</code>
(:rfc:<code>6979</code>)</li>
<li>Added support for client certificate verification to the
:mod:<code>X.509 path validation
&lt;cryptography.x509.verification&gt;</code> APIs in the
form of
:class:<code>~cryptography.x509.verification.ClientVerifier</code>,
:class:<code>~cryptography.x509.verification.VerifiedClient</code>, and
<code>PolicyBuilder</code>

:meth:<code>~cryptography.x509.verification.PolicyBuilder.build_client_verifier</code>.</li>
<li>Added Certificate

:attr:<code>~cryptography.x509.Certificate.public_key_algorithm_oid</code>
and Certificate Signing Request

:attr:<code>~cryptography.x509.CertificateSigningRequest.public_key_algorithm_oid</code>
to determine the
:class:<code>~cryptography.hazmat._oid.PublicKeyAlgorithmOID</code>
Object Identifier of the public key found inside the certificate.</li>
<li>Added
:attr:<code>~cryptography.x509.InvalidityDate.invalidity_date_utc</code>,
a
timezone-aware alternative to the naïve <code>datetime</code> attribute

:attr:<code>~cryptography.x509.InvalidityDate.invalidity_date</code>.</li>
<li>Added support for parsing empty DN string in</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a773387828"><code>a773387</code></a>
bump for 43.0.1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/11533">#11533</a>)</li>
<li><a
href="0393fef575"><code>0393fef</code></a>
Backport setuptools version ban (<a
href="https://redirect.github.com/pyca/cryptography/issues/11526">#11526</a>)</li>
<li><a
href="6687bab97a"><code>6687bab</code></a>
Bump openssl from 0.10.65 to 0.10.66 in /src/rust (<a
href="https://redirect.github.com/pyca/cryptography/issues/11320">#11320</a>)
(<a
href="https://redirect.github.com/pyca/cryptography/issues/11324">#11324</a>)</li>
<li><a
href="ebf14f2edc"><code>ebf14f2</code></a>
bump for 43.0.0 and update changelog (<a
href="https://redirect.github.com/pyca/cryptography/issues/11311">#11311</a>)</li>
<li><a
href="42788a0353"><code>42788a0</code></a>
Fix exchange with keys that had Q automatically computed (<a
href="https://redirect.github.com/pyca/cryptography/issues/11309">#11309</a>)</li>
<li><a
href="2dbdfb8f39"><code>2dbdfb8</code></a>
don't assign unused name (<a
href="https://redirect.github.com/pyca/cryptography/issues/11310">#11310</a>)</li>
<li><a
href="ccc66e6cdf"><code>ccc66e6</code></a>
Bump openssl from 0.10.64 to 0.10.65 in /src/rust (<a
href="https://redirect.github.com/pyca/cryptography/issues/11308">#11308</a>)</li>
<li><a
href="4310c8727b"><code>4310c87</code></a>
Bump sphinxcontrib-qthelp from 1.0.7 to 1.0.8 (<a
href="https://redirect.github.com/pyca/cryptography/issues/11307">#11307</a>)</li>
<li><a
href="f66a9c4b4f"><code>f66a9c4</code></a>
Bump sphinxcontrib-htmlhelp from 2.0.5 to 2.0.6 (<a
href="https://redirect.github.com/pyca/cryptography/issues/11306">#11306</a>)</li>
<li><a
href="a8fcf18ee0"><code>a8fcf18</code></a>
Bump openssl-sys from 0.9.102 to 0.9.103 in /src/rust (<a
href="https://redirect.github.com/pyca/cryptography/issues/11305">#11305</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/42.0.5...43.0.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=42.0.5&new-version=43.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:34:01 +08:00
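Of the changes above, the new RSA floor is the one most likely to bite callers: `generate_private_key` now rejects keys under 1024 bits, and the changelog still recommends 2048 bits as the practical minimum. A minimal sketch against the public API:

```python
# Sketch only: generating an RSA key that satisfies the new
# >= 1024-bit floor in cryptography 43.x (2048 bits recommended).
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
assert key.key_size == 2048
```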
8c075f8287 Bump aiohttp from 3.9.4 to 3.10.2 (#2254)
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.9.4 to
3.10.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/aio-libs/aiohttp/releases">aiohttp's
releases</a>.</em></p>
<blockquote>
<h2>3.10.2</h2>
<h2>Bug fixes</h2>
<ul>
<li>
<p>Fixed server checks for circular symbolic links to be compatible with
Python 3.13 -- by :user:<code>steverep</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8565">#8565</a>.</p>
</li>
<li>
<p>Fixed request body not being read when ignoring an Upgrade request --
by :user:<code>Dreamsorcerer</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8597">#8597</a>.</p>
</li>
<li>
<p>Fixed an edge case where shutdown would wait for timeout when the
handler was already completed -- by
:user:<code>Dreamsorcerer</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8611">#8611</a>.</p>
</li>
<li>
<p>Fixed connecting to <code>npipe://</code>, <code>tcp://</code>, and
<code>unix://</code> urls -- by :user:<code>bdraco</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8632">#8632</a>.</p>
</li>
<li>
<p>Fixed WebSocket ping tasks being prematurely garbage collected -- by
:user:<code>bdraco</code>.</p>
<p>There was a small risk that WebSocket ping tasks would be prematurely
garbage collected because the event loop only holds a weak reference to
the task. The garbage collection risk has been fixed by holding a strong
reference to the task. Additionally, the task is now scheduled eagerly
with Python 3.12+ to increase the chance it can be completed immediately
and avoid having to hold any references to the task.</p>
<p><em>Related issues and pull requests on GitHub:</em>
<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8641">#8641</a>.</p>
</li>
<li>
<p>Fixed incorrectly following symlinks for compressed file variants --
by :user:<code>steverep</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em></p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst">aiohttp's
changelog</a>.</em></p>
<blockquote>
<h1>3.10.2 (2024-08-08)</h1>
<h2>Bug fixes</h2>
<ul>
<li>
<p>Fixed server checks for circular symbolic links to be compatible with
Python 3.13 -- by :user:<code>steverep</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
:issue:<code>8565</code>.</p>
</li>
<li>
<p>Fixed request body not being read when ignoring an Upgrade request --
by :user:<code>Dreamsorcerer</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
:issue:<code>8597</code>.</p>
</li>
<li>
<p>Fixed an edge case where shutdown would wait for timeout when the
handler was already completed -- by
:user:<code>Dreamsorcerer</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
:issue:<code>8611</code>.</p>
</li>
<li>
<p>Fixed connecting to <code>npipe://</code>, <code>tcp://</code>, and
<code>unix://</code> urls -- by :user:<code>bdraco</code>.</p>
<p><em>Related issues and pull requests on GitHub:</em>
:issue:<code>8632</code>.</p>
</li>
<li>
<p>Fixed WebSocket ping tasks being prematurely garbage collected -- by
:user:<code>bdraco</code>.</p>
<p>There was a small risk that WebSocket ping tasks would be prematurely
garbage collected because the event loop only holds a weak reference to
the task. The garbage collection risk has been fixed by holding a strong
reference to the task. Additionally, the task is now scheduled eagerly
with Python 3.12+ to increase the chance it can be completed immediately
and avoid having to hold any references to the task.</p>
<p><em>Related issues and pull requests on GitHub:</em>
:issue:<code>8641</code>.</p>
</li>
<li>
<p>Fixed incorrectly following symlinks for compressed file variants --
by :user:<code>steverep</code>.</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="491106e65a"><code>491106e</code></a>
Release 3.10.2 (<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8655">#8655</a>)</li>
<li><a
href="ce2e975881"><code>ce2e975</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8652">#8652</a>/b0536ae6
backport][3.10] Do not follow symlinks for compressed file...</li>
<li><a
href="6a778061eb"><code>6a77806</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8636">#8636</a>/51d872e
backport][3.10] Remove Request.wait_for_disconnection() met...</li>
<li><a
href="1f92213c3e"><code>1f92213</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8642">#8642</a>/e4942771
backport][3.10] Fix response to circular symlinks with Pyt...</li>
<li><a
href="2ef14a6631"><code>2ef14a6</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8641">#8641</a>/0a88bab
backport][3.10] Fix WebSocket ping tasks being prematurely ...</li>
<li><a
href="68e84968de"><code>68e8496</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8608">#8608</a>/c4acabc
backport][3.10] Fix timer handle churn in websocket heartbe...</li>
<li><a
href="72f41aab59"><code>72f41aa</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8632">#8632</a>/b2691f2
backport][3.10] Fix connecting to npipe://, tcp://, and uni...</li>
<li><a
href="bf83dbe19e"><code>bf83dbe</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8634">#8634</a>/c7293e19
backport][3.10] Backport <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8620">#8620</a>
as improvements to various ...</li>
<li><a
href="4815765a6b"><code>4815765</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8597">#8597</a>/c99a1e27
backport][3.10] Fix reading of body when ignoring an upgra...</li>
<li><a
href="266608d2e4"><code>266608d</code></a>
[PR <a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8611">#8611</a>/1fcef940
backport][3.10] Fix handler waiting on shutdown (<a
href="https://redirect.github.com/aio-libs/aiohttp/issues/8627">#8627</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/aio-libs/aiohttp/compare/v3.9.4...v3.10.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiohttp&package-manager=pip&previous-version=3.9.4&new-version=3.10.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:33:47 +08:00
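The WebSocket ping fix above hinges on a general asyncio gotcha: the event loop holds only a weak reference to scheduled tasks, so a fire-and-forget task can be garbage collected mid-flight. A minimal sketch of the defensive pattern (the pattern itself, not aiohttp's internal fix):

```python
# Sketch only: keep a strong reference to background tasks so the
# event loop's weak reference is not the only thing keeping them alive.
import asyncio

background_tasks: set[asyncio.Task] = set()

async def ping() -> None:
    await asyncio.sleep(1.0)

async def main() -> None:
    task = asyncio.create_task(ping())
    background_tasks.add(task)                      # strong reference
    task.add_done_callback(background_tasks.discard)
    await asyncio.sleep(2.0)  # ping completes and is discarded

asyncio.run(main())
```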
9b90a44323 feat: Add SearchPage #2247 (#2248)
### What problem does this PR solve?

feat: Add SearchPage #2247

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-05 09:33:05 +08:00
426fdafb66 Add component yahoo finance (#2244)
### What problem does this PR solve?


### Type of change

- [ ] New Feature (non-breaking change which adds functionality)
2024-09-04 19:51:07 +08:00
02fb7a88e3 fix issue wrong agent prologue for api (#2246)
### What problem does this PR solve?

#2242
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-04 19:11:55 +08:00
0fe19f3fbc fix QWenSeq2txt bug (#2245)
### What problem does this PR solve?

#2243

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-04 18:25:43 +08:00
9b4cceb3f7 Fix component qweather (#2240)
### What problem does this PR solve?

#2239 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-04 17:53:11 +08:00
65255f2a8e Add Authorization checks (#2235)
### What problem does this PR solve?

Add Authorization checks

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
2024-09-04 11:53:45 +08:00
9dd380d474 feat: Comment out tts item #2088 (#2232)
### What problem does this PR solve?

feat: Comment out tts item #2088
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-04 11:11:52 +08:00
0164856343 Add Authorization checks (#2221)
### What problem does this PR solve?

Add Authorization checks
#2203

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-04 10:36:15 +08:00
4f05803690 Fix bug : bad escape \x at position xxx (line xx, column xx) in agent (#2226)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-03 19:56:06 +08:00
abc32803cc add stream chat with TTS (#2228)
### What problem does this PR solve?



### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-03 19:49:14 +08:00
07de36ec86 feat: Supports pronunciation while outputting text #2088 (#2227)
### What problem does this PR solve?

feat: Supports pronunciation while outputting text #2088

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-03 19:47:50 +08:00
87a998e9e5 fix tts add bug (#2224)
### What problem does this PR solve?

fix tts add bug

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-03 18:40:20 +08:00
0aafa281a5 Add Authorization checks (#2218)
### What problem does this PR solve?

Add Authorization checks
#2203

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
2024-09-03 16:28:46 +08:00
2871455e4e fix zhipuCV bug (#2215)
### What problem does this PR solve?

#2198  fix zhipuCV bug

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-03 15:11:53 +08:00
f09b204ae4 fix error response disformat usage (#2213)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-03 14:57:58 +08:00
5a2c542ce2 make term similarity robust (#2212)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-03 14:30:07 +08:00
4d9e9f0dbb Add Authorization checks (#2209)
### What problem does this PR solve?

Add Authorization checks 
#2203

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-03 13:45:02 +08:00
6d232f1bdb enable 3 char words to finegrind tokenize (#2210)
### What problem does this PR solve?


### Type of change


- [x] Performance Improvement
2024-09-03 13:37:32 +08:00
21179a9be9 Minor editorial updates (#2207)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-09-03 11:18:10 +08:00
9081bc969a feat: Add tts Switch to chat configuration modal #2088 (#2206)
### What problem does this PR solve?

feat: Add tts Switch to chat configuration modal  #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-03 10:36:49 +08:00
e949594579 feat: add tenant api of create & delete user (#2204)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-03 10:36:22 +08:00
1a1888ed22 feat: Play audio #2088 (#2200)
### What problem does this PR solve?
feat: Play audio #2088


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-09-03 09:55:19 +08:00
97e4eccf03 Update api.md (#2196)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-02 18:54:33 +08:00
b10eb8d085 fix re sub bug (#2199)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-02 18:53:30 +08:00
1d2c081710 Fix component PubMed (#2195)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-02 18:49:09 +08:00
ad09d4bb24 fix tts interface error (#2197)
### What problem does this PR solve?

fix tts interface error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-02 18:40:57 +08:00
b9c383612d fix folder name suffix checking (#2194)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-02 16:21:57 +08:00
ab9efb3c23 Fix component PubMed (#2192)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-02 16:19:41 +08:00
922f79e757 Fixed a broken link (#2190)
To fix a broken link

### Type of change

- [x] Documentation Update
2024-09-02 14:31:31 +08:00
c04686d426 fix: After sending the message for the first time, the returned content is not streamed out #2067 #1832 (#2191)
### What problem does this PR solve?

fix: After sending the message for the first time, the returned content
is not streamed out #2067 #1832

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-02 14:31:00 +08:00
9a85f83569 feat: add api of tenant app (#2177)
### What problem does this PR solve?

add api of tenant app

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2024-09-02 12:08:16 +08:00
5decdde182 add support for Google Cloud (#2175)
### What problem does this PR solve?

#1853 add support for Google Cloud

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-09-02 12:06:41 +08:00
def18308d0 fix: Copied API link error #2188 (#2189)
### What problem does this PR solve?

fix: Copied API link error #2188

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-02 11:00:27 +08:00
fc6d8ee77f ignore when save image fail (#2178)
### What problem does this PR solve?


### Type of change
- [x] Performance Improvement
2024-08-30 18:41:31 +08:00
5400467da1 feat: Select derived messages from backend #2088 (#2176)
### What problem does this PR solve?

feat: Select derived messages from backend #2088

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-08-30 17:53:30 +08:00
2c771fb0b4 Complete DataSet SDK implementation (#2171)
### What problem does this PR solve?

Complete DataSet SDK implementation
#1102

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
2024-08-30 16:54:22 +08:00
667632ba00 feat: Hide the delete button on the agent page #2088 (#2167)
### What problem does this PR solve?

feat: Hide the delete button on the agent page #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-29 19:06:18 +08:00
a82f092dac feat: Regenerate chat message #2088 (#2166)
### What problem does this PR solve?

feat: Regenerate chat message #2088
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-08-29 18:37:18 +08:00
742d0f0ea9 re-generate for conversation (#2165)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-29 18:32:58 +08:00
69bbf8e9c5 fix anthropic bug (#2161)
### What problem does this PR solve?

#2159  fix anthropic bug

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-08-29 17:13:58 +08:00
12975cf128 Fix some security vulnerabilities. (#2160)
### What problem does this PR solve?

Fix some security vulnerabilities

### Type of change

- [x] Performance Improvement

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-08-29 16:21:32 +08:00
99993e5026 add support for Voyage AI (#2159)
### What problem does this PR solve?

#1853  #2138 add support for Voyage AI

### Type of change
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-29 16:14:49 +08:00
15b78bd894 Fix the issue about No module named 'graspologic' #2157 (#2158)
### What problem does this PR solve?

Fix the issue #2157 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-29 15:54:25 +08:00
f8a479bf88 feat: Delete message by id #2088 (#2155)
### What problem does this PR solve?

feat: Delete message by id #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-29 14:32:04 +08:00
f87e7242cd complete implementation of dataset SDK (#2147)
### What problem does this PR solve?

Complete implementation of dataset SDK.
#1102

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Feiue <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-29 14:31:31 +08:00
fc1ac3a962 fix delete message error (#2153)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-29 14:07:14 +08:00
212bb8e601 add retry count to task (#2152)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-08-29 13:31:41 +08:00
06abef66ef add support for Anthropic (#2148)
### What problem does this PR solve?

#1853  add support for Anthropic

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-29 13:30:06 +08:00
0abc01311b Update Dockerfile.arm (#2150)
added NLTK lib install

### What problem does this PR solve?


### Type of change
- [x] Performance Improvement
2024-08-29 13:03:43 +08:00
1eb6286339 feat: Send message with uuid #2088 (#2149)
### What problem does this PR solve?

feat: Send message with uuid #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-29 11:24:27 +08:00
4bd6c3145c alter message id (#2146)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-28 19:23:53 +08:00
190e144a70 feat: Show prompt send to LLM for assistant answer #2098 (#2142)
### What problem does this PR solve?

feat: Show prompt send to LLM for assistant answer #2098
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-28 19:05:15 +08:00
527ebec2f5 Fix Logical operator (#2143)
### What problem does this PR solve?

#2120 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-28 19:04:48 +08:00
a0b7c78dca optimize text parser (#2144)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-08-28 18:11:19 +08:00
54f7c6ea8e feat: Submit Feedback #2088 (#2134)
### What problem does this PR solve?

feat: Submit Feedback #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-28 16:39:21 +08:00
f843dd05e5 Fix exeSQL component output (#2141)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-28 16:39:10 +08:00
3abc9be1c2 fix(graphrag): variable reference error (#2136)
### What problem does this PR solve?

fix: Use wrong variable in graph rag step.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)


Co-authored-by: 陈晓强 <chenxiaoqiang@cvte.com>
2024-08-28 16:35:42 +08:00
e627ee9ea4 fix: build on ARM #2137 (#2139)
Fix building on ARM architecture.

### What problem does this PR solve?

Fix building on ARM architecture.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)


### Related issues

- https://github.com/infiniflow/ragflow/issues/2137
- https://github.com/infiniflow/ragflow/issues/610
- https://github.com/infiniflow/ragflow/issues/434
- https://github.com/infiniflow/ragflow/issues/253
2024-08-28 16:29:56 +08:00
6c1f1a9f53 remove alter log (#2140)
### What problem does this PR solve?


### Type of change
- [x] Refactoring
2024-08-28 16:29:23 +08:00
b51237be17 Fix Text2SQL (#2131)
### What problem does this PR solve?

Fix exeSQL component 
Update DB Assistant template 
Fix canvas Message window size

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-28 14:14:13 +08:00
5daed10136 make task resumable (#2132)
### What problem does this PR solve?

### Type of change


- [x] Performance Improvement
2024-08-28 14:06:27 +08:00
074d4f5031 fix: Delete the model.ts file of chat #1306 (#2129)
### What problem does this PR solve?
fix:  Delete the model.ts file of chat #1306

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-28 11:56:35 +08:00
e9f5468a49 fix the max token of Tongyi-Qianwen text-embedding-v3 model to 8k (#2118)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

fix the max token of Tongyi-Qianwen text-embedding-v3 model to 8k

close #2117 

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-08-28 10:14:19 +08:00
a2b4d0190c feat: Align the cards on the Model Providers page #2111 (#2125)
### What problem does this PR solve?

feat: Align the cards on the Model Providers page #2111

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-27 19:27:58 +08:00
c8097e97cb feat: Modify the modal style of creating an agent so that there are more templates in the field of view #2122 (#2123)
### What problem does this PR solve?

feat: Modify the modal style of creating an agent so that there are more
templates in the field of view #2122

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-27 18:49:08 +08:00
fc172b4a79 feat: Pop-up prompt message after modifying the dialog settings #2088 (#2114)
### What problem does this PR solve?

feat: Pop-up prompt message after modifying the dialog settings #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-27 15:24:28 +08:00
0bea7f21ae create and update dataset (#2110)
### What problem does this PR solve?

Added the ability to create and update dataset for SDK

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: root <root@xwg>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-27 15:23:50 +08:00
61d2a74b25 feat: Fetch conversation list by @tanstack/react-query and displays error message that task_executor does not exist #2088 (#2112)
### What problem does this PR solve?

feat: Fetch conversation list by @tanstack/react-query
feat: Displays error message that task_executor does not exist

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-27 14:45:17 +08:00
1d88b197fb add fish audio zh and zh-traditional (#2109)
### What problem does this PR solve?

add fish audio zh and zh-traditional

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-08-27 14:44:50 +08:00
b88c3897b9 add tts api (#2107)
### What problem does this PR solve?

add tts api 


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-27 13:15:54 +08:00
2da4e7aa46 add support for Tencent Cloud ASR (#2102)
### What problem does this PR solve?

add support for Tencent Cloud ASR

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-27 11:47:11 +08:00
cf038e099f update groq llm (#2103)
### What problem does this PR solve?

#2076   update groq llm.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-08-27 11:42:00 +08:00
88d52e335c fix no tts issue (#2101)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-26 18:06:50 +08:00
13785edaae Fix API key validation api/conversation (#2100)
### What problem does this PR solve?

#2081 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-26 17:29:44 +08:00
6d3e3e4e3c add prompt to message (#2099)
### What problem does this PR solve?

#2098

### Type of change
 
- [x] New Feature (non-breaking change which adds functionality)
2024-08-26 16:14:15 +08:00
6b7c028578 add support for TTS model (#2095)
### What problem does this PR solve?

add support for TTS model
#1853

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-08-26 15:19:43 +08:00
c3e344b0f1 fix callback function error (#2096)
### What problem does this PR solve?

#2085

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-26 14:12:52 +08:00
e9202999cb alter way of alerting info of empty task (#2094)
### What problem does this PR solve?

#2043

### Type of change

- [x] Refactoring
2024-08-26 13:45:26 +08:00
a6d85c6c2f Provide detailed error information. (#2093)
### What problem does this PR solve?

Most 'index failure' error is caused by ES.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-08-26 13:36:00 +08:00
7539d142a9 VolcEngine SDK V3 adaptation (#2082)
1) Configuration interface update
2) Back-end adaptation API update
Note: the vendor no longer officially supports the Skylark 1/2 series; all
models have been switched to the Doubao series


![image](https://github.com/user-attachments/assets/f6fd8782-0cdf-4c0b-ac8f-9eb130f667a5)

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

Co-authored-by: 海贼宅 <stu_xyx@163.com>
2024-08-26 13:34:29 +08:00
e953f01951 add thumb up api (#2092)
### What problem does this PR solve?

#2088

### Type of change
 
- [x] New Feature (non-breaking change which adds functionality)
2024-08-26 13:27:41 +08:00
eb20b60b13 add inferface for message thumbup (#2091)
### What problem does this PR solve?

#2088

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-08-26 12:58:19 +08:00
d48731ac8c add message id to conversions (#2090)
### What problem does this PR solve?

#2088 
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-08-26 12:05:15 +08:00
b4a5d83b44 feat: Add FeedbackModal #2088 (#2089)
### What problem does this PR solve?

feat: Add FeedbackModal #2088

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-26 11:53:56 +08:00
99af1cbeac Update license (#2086)
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-08-25 18:58:20 +08:00
63d0b39c5c Update quickstart.mdx (#2084)
### What problem does this PR solve?

[minor] Fixed a docusaurus display issue.

### Type of change

- [x] Documentation Update
2024-08-25 09:12:44 +08:00
863cec1bad prepare for sdk http api (#2075)
### What problem does this PR solve?

#1605

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-08-23 19:36:17 +08:00
e14e0ec695 create dataset (#2074)
### What problem does this PR solve?

You can use sdk to create a dataset

### Type of change

- [x] New Feature

---------

Co-authored-by: root <root@xwg>
2024-08-23 18:38:20 +08:00
6228b1bd53 fix: Filter out disabled values from the llm options #2072 (#2073)
### What problem does this PR solve?

fix: Filter out disabled values from the llm options #2072

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-08-23 17:01:35 +08:00
301 changed files with 25464 additions and 4913 deletions

View File

@@ -1,16 +1,10 @@
---
sidebar_position: 0
slug: /contribution_guidelines
---
# Contribution guidelines
Thanks for wanting to contribute to RAGFlow. This document offers guidelines and major considerations for submitting your contributions.
This document offers guidelines and major considerations for submitting your contributions to RAGFlow.
- To report a bug, file a [GitHub issue](https://github.com/infiniflow/ragflow/issues/new/choose) with us.
- For further questions, you can explore existing discussions or initiate a new one in [Discussions](https://github.com/orgs/infiniflow/discussions).
## What you can contribute
The list below mentions some contributions you can make, but it is not a complete list.
@@ -27,7 +21,7 @@ The list below mentions some contributions you can make, but it is not a complet
### General workflow
1. Fork our GitHub repository.
2. Clone your fork to your local machine:
2. Clone your fork to your local machine:
`git clone git@github.com:<yourname>/ragflow.git`
3. Create a local branch:
`git checkout -b my-branch`
@@ -39,14 +33,16 @@ The list below mentions some contributions you can make, but it is not a complet
### Before filing a PR
- Consider splitting a large PR into multiple smaller, standalone PRs to keep a traceable development history.
- Consider splitting a large PR into multiple smaller, standalone PRs to keep a traceable development history.
- Ensure that your PR addresses just one issue, or keep any unrelated changes small.
- Add test cases when contributing new features. They demonstrate that your code functions correctly and protect against potential issues from future changes.
### Describing your PR
### Describing your PR
- Ensure that your PR title is concise and clear, providing all the required information.
- Refer to a corresponding GitHub issue in your PR description if applicable.
- Refer to a corresponding GitHub issue in your PR description if applicable.
- Include sufficient design details for *breaking changes* or *API changes* in your description.
### Reviewing & merging a PR
- Ensure that your PR passes all Continuous Integration (CI) tests before merging it.
Ensure that your PR passes all Continuous Integration (CI) tests before merging it.

View File

@@ -1,23 +1,108 @@
FROM infiniflow/ragflow-base:v2.0
USER root
# base stage
FROM ubuntu:24.04 AS base
USER root
ENV LIGHTEN=0
WORKDIR /ragflow
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# if you located in China, you can use tsinghua mirror to speed up apt
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list.d/ubuntu.sources
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://install.python-poetry.org | python3 -
RUN curl -o libssl1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5_amd64.deb && dpkg -i libssl1.deb && rm -f libssl1.deb
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
# builder stage
FROM base AS builder
USER root
WORKDIR /ragflow
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y nodejs npm cargo && \
rm -rf /var/lib/apt/lists/*
COPY web web
RUN cd web && npm i --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" -eq 0 ]; then \
/root/.local/bin/poetry install --sync --no-cache --no-root --with=full; \
else \
/root/.local/bin/poetry install --sync --no-cache --no-root; \
fi
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
COPY web web
COPY api api
COPY conf conf
COPY deepdoc deepdoc
COPY rag rag
COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml poetry.toml poetry.lock ./
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar -cf - \
/huggingface.co/BAAI/bge-large-zh-v1.5 \
/huggingface.co/BAAI/bge-reranker-v2-m3 \
/huggingface.co/maidalun1020/bce-embedding-base_v1 \
/huggingface.co/maidalun1020/bce-reranker-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:/root/.local/bin:${PATH}"
# Download nltk data
RUN python3 -m nltk.downloader wordnet punkt punkt_tab
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
ADD docker/.env ./
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]

View File

@ -1,34 +0,0 @@
FROM python:3.11
USER root
WORKDIR /ragflow
COPY requirements_arm.txt /ragflow/requirements.txt
RUN pip install -i https://mirrors.aliyun.com/pypi/simple/ --default-timeout=1000 -r requirements.txt &&\
python -c "import nltk;nltk.download('punkt');nltk.download('wordnet')"
RUN apt-get update && \
apt-get install -y curl gnupg && \
rm -rf /var/lib/apt/lists/*
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash - && \
apt-get install -y --fix-missing nodejs nginx ffmpeg libsm6 libxext6 libgl1
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
ADD docker/.env ./
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

View File

@ -1,27 +0,0 @@
FROM infiniflow/ragflow-base:v2.0
USER root
WORKDIR /ragflow
## for cuda > 12.0
RUN pip uninstall -y onnxruntime-gpu
RUN pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

View File

@ -1,56 +0,0 @@
FROM ubuntu:22.04
USER root
WORKDIR /ragflow
RUN apt-get update && apt-get install -y wget curl build-essential libopenmpi-dev
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
bash ~/miniconda.sh -b -p /root/miniconda3 && \
rm ~/miniconda.sh && ln -s /root/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /root/miniconda3/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
ENV PATH /root/miniconda3/bin:$PATH
RUN conda create -y --name py11 python=3.11
ENV CONDA_DEFAULT_ENV py11
ENV CONDA_PREFIX /root/miniconda3/envs/py11
ENV PATH $CONDA_PREFIX/bin:$PATH
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN apt-get install -y nginx
ADD ./web ./web
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./requirements.txt ./requirements.txt
ADD ./agent ./agent
ADD ./graphrag ./graphrag
RUN apt install openmpi-bin openmpi-common libopenmpi-dev
ENV LD_LIBRARY_PATH /usr/lib/x86_64-linux-gnu/openmpi/lib:$LD_LIBRARY_PATH
RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
RUN cd ./web && npm i --force && npm run build
RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ -r ./requirements.txt
RUN apt-get update && \
apt-get install -y libglib2.0-0 libgl1-mesa-glx && \
rm -rf /var/lib/apt/lists/*
RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ ollama
RUN conda run -n py11 python -m nltk.downloader punkt
RUN conda run -n py11 python -m nltk.downloader wordnet
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

101
Dockerfile.slim Normal file
View File

@ -0,0 +1,101 @@
# base stage
FROM ubuntu:24.04 AS base
USER root
ENV LIGHTEN=1
WORKDIR /ragflow
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# If you are located in China, you can use the Tsinghua mirror to speed up apt
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list.d/ubuntu.sources
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://install.python-poetry.org | python3 -
RUN curl -o libssl1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5_amd64.deb && dpkg -i libssl1.deb && rm -f libssl1.deb
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
# builder stage
FROM base AS builder
USER root
WORKDIR /ragflow
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y nodejs npm cargo && \
rm -rf /var/lib/apt/lists/*
COPY web web
RUN cd web && npm i --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" -eq 0 ]; then \
/root/.local/bin/poetry install --sync --no-cache --no-root --with=full; \
else \
/root/.local/bin/poetry install --sync --no-cache --no-root; \
fi
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
COPY web web
COPY api api
COPY conf conf
COPY deepdoc deepdoc
COPY rag rag
COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml poetry.toml poetry.lock ./
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:/root/.local/bin:${PATH}"
# Download nltk data
RUN python3 -m nltk.downloader wordnet punkt punkt_tab
ENV PYTHONPATH=/ragflow/
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

183
README.md
View File

@ -18,7 +18,7 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.10.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.10.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -42,8 +42,9 @@
- 🔎 [System Architecture](#-system-architecture)
- 🎬 [Get Started](#-get-started)
- 🔧 [Configurations](#-configurations)
- 🛠️ [Build from source](#-build-from-source)
- 🛠️ [Launch service from source](#-launch-service-from-source)
- 🪛 [Build the docker image without embedding models](#-build-the-docker-image-without-embedding-models)
- 🪚 [Build the docker image including embedding models](#-build-the-docker-image-including-embedding-models)
- 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
- 📚 [Documentation](#-documentation)
- 📜 [Roadmap](#-roadmap)
- 🏄 [Community](#-community)
@ -66,24 +67,16 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-round conversations.
- 2024-09-13 Adds search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultant agent template.
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-08-02 Supports GraphRAG inspired by [graphrag](https://github.com/microsoft/graphrag) and mind map.
- 2024-07-23 Supports audio file parsing.
- 2024-07-21 Supports more LLMs (LocalAI, OpenRouter, StepFun, and Nvidia).
- 2024-07-18 Adds more components (Wikipedia, PubMed, Baidu, and Duckduckgo) to the graph.
- 2024-07-08 Supports workflow based on [Graph](./graph/README.md).
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method.
- 2024-06-27 Supports extracting images from Docx files.
- 2024-06-27 Supports extracting tables from Markdown files.
- 2024-06-06 Supports [Self-RAG](https://huggingface.co/papers/2310.11511), which is enabled by default in dialog settings.
- 2024-05-30 Integrates [BCE](https://github.com/netease-youdao/BCEmbedding) and [BGE](https://github.com/FlagOpen/FlagEmbedding) reranker models.
- 2024-07-08 Supports workflow based on [Graph](./agent/README.md).
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, extracting images from Docx files, extracting tables from Markdown files.
- 2024-05-23 Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
- 2024-05-15 Integrates OpenAI GPT-4o.
## 🌟 Key Features
@ -159,15 +152,12 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
```
3. Build the pre-built Docker images and start up the server:
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.10.0`, before running the following commands.
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_IMAGE` in **docker/.env** to the intended version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, before running the following commands.
```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```
> The core image is about 9 GB in size and may take a while to load.
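> For instance, to pin the v0.12.0 release instead of tracking *dev*, the relevant line in **docker/.env** would read (a sketch of that single entry):
>
> ```bash
> RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0
> ```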
@ -180,19 +170,19 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
_The following output confirms a successful launch of the system:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default HTTP serving port `80` can be omitted when using the default configurations.
@ -200,7 +190,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
_The show is now on!_
_The show is on!_
## 🔧 Configurations
@ -216,118 +206,89 @@ You must ensure that changes to the [.env](./docker/.env) file are in line with
To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80` to `<YOUR_SERVING_PORT>:80`.
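For example, mapping host port 8080 instead would leave the service's `ports` entry looking like this (a sketch; the surrounding compose keys are elided):
```
ports:
  - "8080:80"  # was "80:80"
```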
> Updates to all system configurations require a system reboot to take effect:
>
Updates to the above configurations require a reboot of all containers to take effect:
> ```bash
> $ docker-compose up -d
> $ docker-compose -f docker/docker-compose.yml up -d
> ```
## 🛠️ Build from source
## 🪛 Build the Docker image without embedding models
To build the Docker images from source:
This image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🛠️ Launch service from source
## 🪚 Build the Docker image including embedding models
To launch the service from source:
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
1. Clone the repository:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
## 🔨 Launch service from source for development
1. Install Poetry, or skip this step if it is already installed:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
curl -sSL https://install.python-poetry.org | python3 -
```
2. Create a virtual environment, ensuring that Anaconda or Miniconda is installed:
2. Clone the source code and install Python dependencies:
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
```
```bash
# If your CUDA version is higher than 12.0, run the following additional commands:
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
3. Copy the entry script and configure environment variables:
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
# Get the Python path:
$ which python
# Get the ragflow project path:
$ pwd
```
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
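A sketch of the resulting entries (field names assumed from the default **docker/service_conf.yaml**; verify against your copy):
```
es:
  hosts: 'http://es01:1200'
mysql:
  host: 'mysql'
  port: 5455
```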
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
# Adjust configurations according to your actual situation (the following two export commands are newly added):
# - Assign the result of `which python` to `PY`.
# - Assign the result of `pwd` to `PYTHONPATH`.
# - Comment out `LD_LIBRARY_PATH`, if it is configured.
# - Optional: Add Hugging Face mirror.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
export HF_ENDPOINT=https://hf-mirror.com
```
4. Launch the third-party services (MinIO, Elasticsearch, Redis, and MySQL):
5. Launch the backend service:
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
5. Check the configuration files, ensuring that:
- The settings in **docker/.env** match those in **conf/service_conf.yaml**.
- The IP addresses and ports for related services in **service_conf.yaml** match the local machine IP and ports exposed by the container.
6. Launch the RAGFlow backend service:
6. Install frontend dependencies:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
```
7. Launch the frontend service:
cd web
npm install --force
```
7. Configure the frontend by updating `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`.
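A sketch of the relevant fragment (the `/v1` path key and the other options are assumed from a typical **.umirc.ts**; adjust to your local file):
```
proxy: {
  '/v1': {
    target: 'http://127.0.0.1:9380',
    changeOrigin: true,
  },
},
```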
8. Launch the frontend service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
```
npm run dev
```
8. Deploy the frontend service:
_The following output confirms a successful launch of the system:_
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
```
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
@ -348,4 +309,4 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.

View File

@ -18,8 +18,8 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.10.0-brightgreen"
alt="docker pull infiniflow/ragflow:v0.10.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen"
alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -48,19 +48,16 @@
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-round conversations.
- 2024-09-13 Adds a search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultation template to the agent.
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-08-02 Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind map.
- 2024-07-23 Supports audio file parsing.
- 2024-07-21 Supports more LLM suppliers (LocalAI/OpenRouter/StepFun/Nvidia).
- 2024-07-18 Adds components (Wikipedia/PubMed/Baidu/Duckduckgo) to the graph.
- 2024-07-08 Supports workflow based on [Graph](./graph/README.md).
- 2024-06-27 Supports Markdown and Docx files in the Q&A parsing method.
- 2024-06-27 Supports extracting images from Docx files.
- 2024-06-27 Supports extracting tables from Markdown files.
- 2024-06-06 Supports [Self-RAG](https://huggingface.co/papers/2310.11511), which is checked by default in dialog settings.
- 2024-05-30 Integrates [BCE](https://github.com/netease-youdao/BCEmbedding) and [BGE](https://github.com/FlagOpen/FlagEmbedding) rerankers.
- 2024-07-08 Supports workflow based on [Graph](./agent/README.md).
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, extracting images from Docx files and tables from Markdown files.
- 2024-05-23 Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
- 2024-05-15 Integrates OpenAI GPT-4o.
## 🌟 Key Features
@ -143,7 +140,7 @@
$ docker compose up -d
```
> Running the above commands automatically downloads the dev version RAGFlow Docker image. To download and run a specific version of the Docker image, find the RAGFLOW_VERSION variable in the docker/.env file, change it to the corresponding version, for example RAGFLOW_VERSION=v0.10.0, and then run the above commands.
> Running the above commands automatically downloads the dev version RAGFlow Docker image. To download and run a specific version of the Docker image, find the RAGFLOW_IMAGE variable in the docker/.env file, change it to the corresponding version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, and then run the above commands.
> The core image is about 9 GB in size and may take a while to load.
@ -156,12 +153,11 @@
_The following output confirms a successful launch of the system:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -198,78 +194,83 @@
> $ docker-compose up -d
> ```
## 🛠️ Build from source
## 🪛 Build the Docker image from source (without embedding models)
To build the Docker image from source:
This Docker image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:v0.10.0 .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🛠️ Launch service from source
## 🪚 Build the Docker image from source (including embedding models)
To launch the service from source, follow these steps:
1. Clone the repository:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
```
2. Create a virtual environment (ensure that Anaconda or Miniconda is installed):
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
```
If your CUDA version is 12.0 or higher, run the following additional commands:
```bash
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
3. Copy the entry script and configure environment variables:
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
```
Get the Python path and the ragflow project path with the following commands:
```bash
$ which python
$ pwd
```
Set the output of `which python` as the value of `PY`, and the output of `pwd` as the value of `PYTHONPATH`.
If `LD_LIBRARY_PATH` is already configured, it can be commented out.
This Docker image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
```bash
# Adjust the configurations according to your actual situation (the following two exports are newly added):
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
# Optional: add a Hugging Face mirror
export HF_ENDPOINT=https://hf-mirror.com
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
4. Launch the basic services:
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
```
## 🔨 Launch service from source
5. Check the configuration files:
Ensure that the settings in **docker/.env** match those in **conf/service_conf.yaml**. The IP addresses and ports for related services in **service_conf.yaml** must be changed to the local machine's IP address and the ports exposed by the containers.
1. Install Poetry, or skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
6. Launch the services:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
```
2. Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Launch the backend service:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. Install frontend dependencies:
```bash
cd web
npm install --force
```
7. Configure the frontend by updating `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch the frontend service:
```bash
npm run dev
```
_The following screen indicates that the system has started successfully:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
@ -290,4 +291,4 @@ $ bash ./entrypoint.sh
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.

View File

@ -18,7 +18,7 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.10.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.10.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -49,31 +49,24 @@
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-round conversations.
- 2024-09-13 Adds a search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultation template to Agent.
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-08-02: Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind map.
- 2024-07-23: Supports audio file parsing.
- 2024-07-21: Supports more LLMs (LocalAI, OpenRouter, StepFun, Nvidia).
- 2024-07-08: Supports workflow based on [Graph](./agent/README.md).
- 2024-07-18: Adds more components (Wikipedia, PubMed, Baidu, Duckduckgo) to the graph.
- 2024-07-08: Supports workflow based on [Graph](./graph/README.md).
- 2024-06-27: Supports Markdown and Docx in the Q&A parsing method.
- 2024-06-27: Supports extracting images from Docx files.
- 2024-06-27: Supports extracting tables from Markdown files.
- 2024-06-06: Supports [Self-RAG](https://huggingface.co/papers/2310.11511), enabled by default in dialog settings.
- 2024-05-30: Integrates [BCE](https://github.com/netease-youdao/BCEmbedding) and [BGE](https://github.com/FlagOpen/FlagEmbedding) reranker models.
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, extracting images from Docx files and tables from Markdown files.
- 2024-05-23: Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
- 2024-05-15: Integrates OpenAI GPT-4o.
## 🌟 Key Features
@ -147,7 +140,7 @@
3. Build the pre-built Docker images and start the server:
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specific Docker version, update `RAGFLOW_VERSION` in the **docker/.env** file to the intended version, for example `RAGFLOW_VERSION=v0.10.0`, before running the following commands.
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specific Docker version, update `RAGFLOW_IMAGE` in the **docker/.env** file to the intended version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, before running the following commands.
```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
@ -166,19 +159,18 @@
_The following output confirms a successful launch of the system:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and log in to RAGFlow directly, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
> If you skip this confirmation step and log in to RAGFlow directly, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number), as the default HTTP serving port `80` can be omitted when using the default configurations.
@ -207,110 +199,83 @@
> $ docker-compose up -d
> ```
## 🛠️ Build from source
## 🪛 Build the Docker image from source (without embedding models)
To build the Docker image from source:
This Docker image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🪚 Build the Docker image from source (including embedding models)
## 🛠️ Launch service from source
This Docker image is approximately 9 GB in size. As it already includes embedding models, it relies on external LLM services only.
To launch the service from source:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
1. Clone the repository:
## 🔨 Launch service from source
1. Install Poetry, or skip this step if it is already installed:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
curl -sSL https://install.python-poetry.org | python3 -
```
2. Create a virtual environment, ensuring that Anaconda or Miniconda is installed:
2. Clone the source code and install Python dependencies:
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
```
```bash
# If your CUDA version is higher than 12.0, run the following additional commands:
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
3. Copy the entry script and configure environment variables:
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
# Get the Python path:
$ which python
# Get the RAGFlow project path:
$ pwd
```
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
# Adjust the configurations according to your actual situation (the following two export commands are newly added):
# - Assign the result of `which python` to `PY`.
# - Assign the result of `pwd` to `PYTHONPATH`.
# - Comment out `LD_LIBRARY_PATH`, if it is configured.
# - Optional: Add a Hugging Face mirror.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
export HF_ENDPOINT=https://hf-mirror.com
```
4. Launch the other services (MinIO, Elasticsearch, Redis, and MySQL):
5. Launch the backend service:
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
5. Check the configuration files, ensuring that:
- The settings in **docker/.env** match those in **conf/service_conf.yaml**.
- The IP addresses and ports for related services in **service_conf.yaml** match the local machine IP and the ports exposed by the containers.
6. Launch the RAGFlow backend service:
6. Install frontend dependencies:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
cd web
npm install --force
```
7. Update `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch the frontend service:
```bash
npm run dev
```
7. Launch the frontend service:
_The following interface indicates that the system has started successfully:_
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
```
8. Deploy the frontend service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
```
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
@ -331,4 +296,4 @@ $ docker compose up -d
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to participate, please review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to participate, please review our [Contribution Guidelines](./CONTRIBUTING.md) first.

View File

@ -18,7 +18,7 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.10.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.10.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -47,19 +47,15 @@
## 🔥 Latest Updates
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-09-29 Optimizes multi-round conversations.
- 2024-09-13 Adds a search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultation template to Agent.
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-08-02 Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind map.
- 2024-07-23 Supports audio file parsing.
- 2024-07-21 Supports more LLM suppliers (LocalAI/OpenRouter/StepFun/Nvidia).
- 2024-07-18 Supports operators (Wikipedia, PubMed, Baidu, and Duckduckgo) in Graph.
- 2024-07-08 Supports Agentic RAG: workflow based on [Graph](./graph/README.md).
- 2024-06-27 Supports Markdown and Docx files in the Q&A parsing method.
- 2024-06-27 Supports extracting images from Docx files.
- 2024-06-27 Supports extracting tables from Markdown files.
- 2024-06-06 Supports [Self-RAG](https://huggingface.co/papers/2310.11511), checked by default in dialog settings.
- 2024-05-30 Integrates [BCE](https://github.com/netease-youdao/BCEmbedding) and [BGE](https://github.com/FlagOpen/FlagEmbedding) reranker models.
- 2024-07-08 Supports Agentic RAG: workflow based on [Graph](./agent/README.md).
- 2024-06-27 Supports Markdown and Docx files in the Q&A parsing method, extracting images from Docx files and tables from Markdown files.
- 2024-05-23 Implements [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
- 2024-05-15 Integrates OpenAI GPT-4o.
## 🌟 Key Features
@ -139,12 +135,12 @@
```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose -f docker-compose-CN.yml up -d
$ docker compose -f docker-compose.yml up -d
```
> Note that running the above command automatically downloads the dev version RAGFlow Docker image. To download and run a specific version, find the RAGFLOW_VERSION variable in the docker/.env file, change it to the corresponding version, for example RAGFLOW_VERSION=v0.10.0, and then run the above command.
> Note that running the above command automatically downloads the dev version RAGFlow Docker image. To download and run a specific version, find the RAGFLOW_IMAGE variable in the docker/.env file, change it to the corresponding version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, and then run the above command.
> The core image is about 9 GB in size and may take a while to pull. Please be patient.
> The core image download size is 9 GB and may take a while to pull. Please be patient.
4. Confirm the server status again after the server starts up:
@ -155,19 +151,18 @@
_The following output confirms a successful launch of the server:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and log in to RAGFlow directly, your browser may prompt a `network anomaly` error, because RAGFlow may not be fully started.
> If you skip this confirmation step and log in to RAGFlow directly, your browser may prompt a `network abnormal` error, because RAGFlow may not be fully started.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> In the example above, you only need to enter http://IP_OF_YOUR_MACHINE: with unchanged configurations, the default HTTP serving port `80` can be omitted.
@ -183,122 +178,100 @@
- [.env](./docker/.env): Keeps the fundamental system environment variables, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- [service_conf.yaml](./docker/service_conf.yaml): Configures the backend services.
- [docker-compose-CN.yml](./docker/docker-compose-CN.yml): The system relies on this file to start up.
- [docker-compose.yml](./docker/docker-compose.yml): The system relies on this file to start up.
Ensure that the variable settings in the [.env](./docker/.env) file are consistent with the configurations in the [service_conf.yaml](./docker/service_conf.yaml) file!
If you cannot access the image site (hub.docker.com) or the model site (huggingface.co), update `RAGFLOW_IMAGE` and `HF_ENDPOINT` as per the comments in [.env](./docker/.env).
> The [./docker/README](./docker/README.md) file provides detailed information on environment variable settings and service configurations. Please **make sure** that the values of the environment variables listed in [./docker/README](./docker/README.md) are consistent with the system configurations in the [service_conf.yaml](./docker/service_conf.yaml) file.
To update the default HTTP serving port (80), go to [docker-compose-CN.yml](./docker/docker-compose-CN.yml) and change `80:80` to `<YOUR_SERVING_PORT>:80`.
To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80` to `<YOUR_SERVING_PORT>:80`.
> Updates to all system configurations require a system reboot to take effect:
>
> ```bash
> $ docker compose -f docker-compose-CN.yml up -d
> $ docker compose -f docker-compose.yml up -d
> ```
## 🛠️ Build and install the Docker image from source
## 🪛 Build the Docker image from source (without embedding models)
To install the Docker image from source:
This Docker image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:v0.10.0 .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🛠️ Launch service from source
## 🪚 Build the Docker image from source (including embedding models)
To launch the service from source, follow these steps:
1. Clone the repository:
This Docker image is approximately 9 GB in size. As it already includes embedding models, it relies on external LLM services only.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
2. Create a virtual environment (ensure that Anaconda or Miniconda is installed):
## 🔨 Launch service from source
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
```
If your CUDA version is higher than 12.0, run the following additional commands:
```bash
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
1. Install Poetry, or skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
3. Copy the entry script and configure environment variables:
2. Download the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
```
Get the Python path and the ragflow project path with the following commands:
```bash
$ which python
$ pwd
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Set the output of `which python` above as the value of `PY`, and the output of `pwd` as the value of `PYTHONPATH`.
Add the following line to `/etc/hosts` to resolve all the host addresses in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as configured in **docker/.env**.
If `LD_LIBRARY_PATH` is already configured in your environment, you can comment it out.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to the corresponding mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
```bash
# Adjust the configurations here according to your actual situation; the two exports are newly added
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
# Optional: add a Hugging Face mirror
export HF_ENDPOINT=https://hf-mirror.com
```
5. Launch the backend service:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
4. Launch the basic services:
6. Install frontend dependencies:
```bash
cd web
npm install --force
```
7. Configure the frontend by updating `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch the frontend service:
```bash
npm run dev
```
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
```
_The following interface indicates that the system has started successfully:_
5. Check the configuration files:
Ensure that the settings in **docker/.env** match those in **conf/service_conf.yaml**. The IP addresses and ports for related services in **service_conf.yaml** should be changed to the local machine's IP address and the ports exposed by the containers.
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
6. Launch the services:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
```
7. Launch the WebUI service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
```
8. Deploy the WebUI service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
```
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
@ -318,7 +291,7 @@ $ systemctl start nginx
## 🙌 Contributing
RAGFlow can only flourish through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to participate, please review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow can only flourish through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to participate, please review our [Contribution Guidelines](./CONTRIBUTING.md) first.
## 🤝 Business Cooperation

View File

@ -18,7 +18,7 @@ main
### Actual behavior
The restricted_loads function at [api/utils/__init__.py#L215](https://github.com/infiniflow/ragflow/blob/main/api/utils/__init__.py#L215) is still vulnerable, leading to code execution.
The main reson is that numpy module has a numpy.f2py.diagnose.run_command function directly execute commands, but the restricted_loads function allows users import functions in module numpy.
The main reason is that the numpy module has a numpy.f2py.diagnose.run_command function that directly executes commands, while the restricted_loads function allows users to import functions from the numpy module.
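To make the report concrete, here is a standalone sketch (not the project's code) of why a check that admits the whole `numpy` namespace is unsafe: `numpy.f2py.diagnose.run_command` is an importable module-level function that passes its argument to `os.system`.
```python
import io
import pickle


class NumpyAllowingUnpickler(pickle.Unpickler):
    """Hypothetical reduction of the flawed check: any numpy.* import passes."""

    def find_class(self, module, name):
        if module.split(".")[0] == "numpy":  # allowlist is too coarse
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")


class Exploit:
    def __reduce__(self):
        # A pickle payload only needs a reference to the allowed callable.
        from numpy.f2py.diagnose import run_command
        return run_command, ("id",)


payload = pickle.dumps(Exploit())
NumpyAllowingUnpickler(io.BytesIO(payload)).load()  # executes `id` via os.system
```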
### Steps to reproduce

View File

@ -260,7 +260,7 @@ class Canvas(ABC):
def get_history(self, window_size):
convs = []
for role, obj in self.history[window_size * -2:]:
for role, obj in self.history[(window_size + 1) * -1:]:
convs.append({"role": role, "content": (obj if role == "user" else
'\n'.join(pd.DataFrame(obj)['content']))})
return convs
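For context, a standalone sketch of what the slice change does: with `history` holding alternating `(role, obj)` pairs, the old expression always took `window_size` full rounds, while the new one takes the last `window_size + 1` entries.
```python
history = [("user", "q1"), ("assistant", "a1"),
           ("user", "q2"), ("assistant", "a2")]
window_size = 2
print(history[window_size * -2:])        # old: last 4 entries (2 full rounds)
print(history[(window_size + 1) * -1:])  # new: last 3 entries
```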
@ -300,3 +300,6 @@ class Canvas(ABC):
return pat + " => " + pat
return False
def get_prologue(self):
return self.components["begin"]["obj"]._param.prologue

View File

@ -9,6 +9,7 @@ from .relevant import Relevant, RelevantParam
from .message import Message, MessageParam
from .rewrite import RewriteQuestion, RewriteQuestionParam
from .keyword import KeywordExtract, KeywordExtractParam
from .concentrator import Concentrator, ConcentratorParam
from .baidu import Baidu, BaiduParam
from .duckduckgo import DuckDuckGo, DuckDuckGoParam
from .wikipedia import Wikipedia, WikipediaParam
@ -22,6 +23,12 @@ from .github import GitHub, GitHubParam
from .baidufanyi import BaiduFanyi, BaiduFanyiParam
from .qweather import QWeather, QWeatherParam
from .exesql import ExeSQL, ExeSQLParam
from .yahoofinance import YahooFinance, YahooFinanceParam
from .wencai import WenCai, WenCaiParam
from .jin10 import Jin10, Jin10Param
from .tushare import TuShare, TuShareParam
from .akshare import AkShare, AkShareParam
def component_class(class_name):
m = importlib.import_module("agent.component")

View File

@ -0,0 +1,56 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
import akshare as ak
from agent.component.base import ComponentBase, ComponentParamBase
class AkShareParam(ComponentParamBase):
"""
Define the AkShare component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
def check(self):
self.check_positive_integer(self.top_n, "Top N")
class AkShare(ComponentBase, ABC):
component_name = "AkShare"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = ",".join(ans["content"]) if "content" in ans else ""
if not ans:
return AkShare.be_output("")
try:
ak_res = []
stock_news_em_df = ak.stock_news_em(symbol=ans)
stock_news_em_df = stock_news_em_df.head(self._param.top_n)
ak_res = [{"content": '<a href="' + i["新闻链接"] + '">' + i["新闻标题"] + '</a>\n 新闻内容: ' + i[
"新闻内容"] + " \n发布时间:" + i["发布时间"] + " \n文章来源: " + i["文章来源"]} for index, i in stock_news_em_df.iterrows()]
except Exception as e:
return AkShare.be_output("**ERROR**: " + str(e))
if not ak_res:
return AkShare.be_output("")
return pd.DataFrame(ak_res)
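For reference, the AkShare call this component wraps can be tried on its own; a minimal sketch (assuming network access, with a stock code as the symbol; the Chinese column names are the ones indexed in the component above):
```python
import akshare as ak

# Fetch the latest news items for a stock code and keep the first 10,
# mirroring what AkShare._run does with top_n.
df = ak.stock_news_em(symbol="603777").head(10)
print(df[["新闻标题", "新闻链接"]])
```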

View File

@ -444,7 +444,7 @@ class ComponentBase(ABC):
if DEBUG: print(self.component_name, reversed_cpnts[::-1])
for u in reversed_cpnts[::-1]:
if self.get_component_name(u) in ["switch"]: continue
if self.get_component_name(u) in ["switch", "concentrator"]: continue
if self.component_name.lower() == "generate" and self.get_component_name(u) == "retrieval":
o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
if o is not None:
@ -472,7 +472,7 @@ class ComponentBase(ABC):
if "content" in df:
df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
return df
return pd.DataFrame()
return pd.DataFrame(self._canvas.get_history(3)[-1:])
def get_stream_input(self):
reversed_cpnts = []

View File

@ -82,6 +82,6 @@ class Categorize(Generate, ABC):
if ans.lower().find(c.lower()) >= 0:
return Categorize.be_output(self._param.category_description[c]["to"])
return Categorize.be_output(self._param.category_description.items()[-1][1]["to"])
return Categorize.be_output(list(self._param.category_description.items())[-1][1]["to"])
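The added `list(...)` is the substantive fix: in Python 3, `dict.items()` returns a view that does not support indexing, so the old fallback raised a `TypeError` instead of routing to the last category. For example:
```python
d = {"a": {"to": "x"}, "b": {"to": "y"}}
# d.items()[-1]                      # TypeError: 'dict_items' object is not subscriptable
print(list(d.items())[-1][1]["to"])  # 'y' — the behavior the fix restores
```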

View File

@ -0,0 +1,36 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class ConcentratorParam(ComponentParamBase):
"""
Define the Concentrator component parameters.
"""
def __init__(self):
super().__init__()
def check(self):
return True
class Concentrator(ComponentBase, ABC):
component_name = "Concentrator"
def _run(self, history, **kwargs):
return Concentrator.be_output("")

View File

@ -54,7 +54,7 @@ class ExeSQL(ComponentBase, ABC):
setattr(self, "_loop", 0)
if self._loop >= self._param.loop:
self._loop = 0
raise Exception("Maximum loop time exceeds. Can't query the correct data via sql statement.")
raise Exception("Maximum loop time exceeds. Can't query the correct data via SQL statement.")
self._loop += 1
ans = self.get_input()
@ -63,7 +63,7 @@ class ExeSQL(ComponentBase, ABC):
ans = re.sub(r';.*?SELECT ', '; SELECT ', ans, flags=re.IGNORECASE)
ans = re.sub(r';[^;]*$', r';', ans)
if not ans:
return ExeSQL.be_output("SQL statement not found!")
raise Exception("SQL statement not found!")
if self._param.db_type in ["mysql", "mariadb"]:
db = MySQLDatabase(self._param.database, user=self._param.username, host=self._param.host,
@ -75,13 +75,16 @@ class ExeSQL(ComponentBase, ABC):
try:
db.connect()
except Exception as e:
return ExeSQL.be_output("**Error**: \nDatabase Connection Failed! \n" + str(e))
raise Exception("Database Connection Failed! \n" + str(e))
sql_res = []
for single_sql in re.split(r';', ans):
for single_sql in re.split(r';', ans.replace(r"\n", " ")):
if not single_sql:
continue
try:
query = db.execute_sql(single_sql)
if query.rowcount == 0:
sql_res.append({"content": "\nTotal: " + str(query.rowcount) + "\n No record in the database!"})
continue
single_res = pd.DataFrame([i for i in query.fetchmany(size=self._param.top_n)])
single_res.columns = [i[0] for i in query.description]
sql_res.append({"content": "\nTotal: " + str(query.rowcount) + "\n" + single_res.to_markdown()})
@ -91,6 +94,6 @@ class ExeSQL(ComponentBase, ABC):
db.close()
if not sql_res:
return ExeSQL.be_output("No record in the database!")
return ExeSQL.be_output("")
return pd.DataFrame(sql_res)

View File

@ -112,8 +112,7 @@ class Generate(ComponentBase):
kwargs["input"] = input
for n, v in kwargs.items():
# prompt = re.sub(r"\{%s\}"%n, re.escape(str(v)), prompt)
prompt = re.sub(r"\{%s\}" % n, str(v), prompt)
prompt = re.sub(r"\{%s\}" % n, re.escape(str(v)), prompt)
downstreams = self._canvas.get_component(self._id)["downstream"]
if kwargs.get("stream") and len(downstreams) == 1 and self._canvas.get_component(downstreams[0])[
@ -123,13 +122,13 @@ class Generate(ComponentBase):
if "empty_response" in retrieval_res.columns and not "".join(retrieval_res["content"]):
res = {"content": "\n- ".join(retrieval_res["empty_response"]) if "\n- ".join(
retrieval_res["empty_response"]) else "Nothing found in knowledgebase!", "reference": []}
return Generate.be_output(res)
return pd.DataFrame([res])
ans = chat_mdl.chat(prompt, self._canvas.get_history(self._param.message_history_window_size),
self._param.gen_conf())
if self._param.cite and "content_ltks" in retrieval_res.columns and "vector" in retrieval_res.columns:
df = self.set_cite(retrieval_res, ans)
return pd.DataFrame(df)
res = self.set_cite(retrieval_res, ans)
return pd.DataFrame([res])
return Generate.be_output(ans)

130
agent/component/jin10.py Normal file
View File

@ -0,0 +1,130 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from abc import ABC
import pandas as pd
import requests
from agent.component.base import ComponentBase, ComponentParamBase
class Jin10Param(ComponentParamBase):
"""
Define the Jin10 component parameters.
"""
def __init__(self):
super().__init__()
self.type = "flash"
self.secret_key = "xxx"
self.flash_type = '1'
self.calendar_type = 'cj'
self.calendar_datatype = 'data'
self.symbols_type = 'GOODS'
self.symbols_datatype = 'symbols'
self.contain = ""
self.filter = ""
def check(self):
self.check_valid_value(self.type, "Type", ['flash', 'calendar', 'symbols', 'news'])
self.check_valid_value(self.flash_type, "Flash Type", ['1', '2', '3', '4', '5'])
self.check_valid_value(self.calendar_type, "Calendar Type", ['cj', 'qh', 'hk', 'us'])
self.check_valid_value(self.calendar_datatype, "Calendar DataType", ['data', 'event', 'holiday'])
self.check_valid_value(self.symbols_type, "Symbols Type", ['GOODS', 'FOREX', 'FUTURE', 'CRYPTO'])
self.check_valid_value(self.symbols_datatype, 'Symbols DataType', ['symbols', 'quotes'])
class Jin10(ComponentBase, ABC):
component_name = "Jin10"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return Jin10.be_output("")
jin10_res = []
headers = {'secret-key': self._param.secret_key}
try:
if self._param.type == "flash":
params = {
'category': self._param.flash_type,
'contain': self._param.contain,
'filter': self._param.filter
}
response = requests.get(
url='https://open-data-api.jin10.com/data-api/flash?category=' + self._param.flash_type,
headers=headers, data=json.dumps(params))
response = response.json()
for i in response['data']:
jin10_res.append({"content": i['data']['content']})
if self._param.type == "calendar":
params = {
'category': self._param.calendar_type
}
response = requests.get(
url='https://open-data-api.jin10.com/data-api/calendar/' + self._param.calendar_datatype + '?category=' + self._param.calendar_type,
headers=headers, data=json.dumps(params))
response = response.json()
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
if self._param.type == "symbols":
params = {
'type': self._param.symbols_type
}
if self._param.symbols_datatype == "quotes":
params['codes'] = 'BTCUSD'
response = requests.get(
url='https://open-data-api.jin10.com/data-api/' + self._param.symbols_datatype + '?type=' + self._param.symbols_type,
headers=headers, data=json.dumps(params))
response = response.json()
if self._param.symbols_datatype == "symbols":
for i in response['data']:
i['Commodity Code'] = i['c']
i['Stock Exchange'] = i['e']
i['Commodity Name'] = i['n']
i['Commodity Type'] = i['t']
del i['c'], i['e'], i['n'], i['t']
if self._param.symbols_datatype == "quotes":
for i in response['data']:
i['Selling Price'] = i['a']
i['Buying Price'] = i['b']
i['Commodity Code'] = i['c']
i['Stock Exchange'] = i['e']
i['Highest Price'] = i['h']
i['Yesterdays Closing Price'] = i['hc']
i['Lowest Price'] = i['l']
i['Opening Price'] = i['o']
i['Latest Price'] = i['p']
i['Market Quote Time'] = i['t']
del i['a'], i['b'], i['c'], i['e'], i['h'], i['hc'], i['l'], i['o'], i['p'], i['t']
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
if self._param.type == "news":
params = {
'contain': self._param.contain,
'filter': self._param.filter
}
response = requests.get(
url='https://open-data-api.jin10.com/data-api/news',
headers=headers, data=json.dumps(params))
response = response.json()
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
except Exception as e:
return Jin10.be_output("**ERROR**: " + str(e))
if not jin10_res:
return Jin10.be_output("")
return pd.DataFrame(jin10_res)

View File

@ -15,6 +15,7 @@
#
from abc import ABC
from Bio import Entrez
import re
import pandas as pd
import xml.etree.ElementTree as ET
from agent.settings import DEBUG
@ -47,12 +48,15 @@ class PubMed(ComponentBase, ABC):
try:
Entrez.email = self._param.email
pubmedids = Entrez.read(Entrez.esearch(db='pubmed', retmax=self._param.top_n, term=ans))['IdList']
pubmedcnt = ET.fromstring(
Entrez.efetch(db='pubmed', id=",".join(pubmedids), retmode="xml").read().decode("utf-8"))
pubmedcnt = ET.fromstring(re.sub(r'<(/?)b>|<(/?)i>', '', Entrez.efetch(db='pubmed', id=",".join(pubmedids),
retmode="xml").read().decode(
"utf-8")))
pubmed_res = [{"content": 'Title:' + child.find("MedlineCitation").find("Article").find(
"ArticleTitle").text + '\nUrl:<a href=" https://pubmed.ncbi.nlm.nih.gov/' + child.find(
"MedlineCitation").find("PMID").text + '">' + '</a>\n' + 'Abstract:' + child.find(
"MedlineCitation").find("Article").find("Abstract").find("AbstractText").text} for child in
"MedlineCitation").find("PMID").text + '">' + '</a>\n' + 'Abstract:' + (
child.find("MedlineCitation").find("Article").find("Abstract").find(
"AbstractText").text if child.find("MedlineCitation").find(
"Article").find("Abstract") else "No abstract available")} for child in
pubmedcnt.findall("PubmedArticle")]
except Exception as e:
return PubMed.be_output("**ERROR**: " + str(e))

View File

@ -51,7 +51,7 @@ class QWeatherParam(ComponentParamBase):
['zh', 'zh-hant', 'en', 'de', 'es', 'fr', 'it', 'ja', 'ko', 'ru', 'hi', 'th', 'ar', 'pt',
'bn', 'ms', 'nl', 'el', 'la', 'sv', 'id', 'pl', 'tr', 'cs', 'et', 'vi', 'fil', 'fi',
'he', 'is', 'nb'])
self.check_vaild_value(self.time_period, "Time period", ['now', '3d', '7d', '10d', '15d', '30d'])
self.check_valid_value(self.time_period, "Time period", ['now', '3d', '7d', '10d', '15d', '30d'])
class QWeather(ComponentBase, ABC):

View File

@ -54,8 +54,8 @@ class Retrieval(ComponentBase, ABC):
for role, cnt in history[::-1][:self._param.message_history_window_size]:
if role != "user":continue
query.append(cnt)
query = "\n".join(query)
# query = "\n".join(query)
query = query[0]
kbs = KnowledgebaseService.get_by_ids(self._param.kb_ids)
if not kbs:
raise ValueError("Can't find knowledgebases by {}".format(self._param.kb_ids))
@ -76,7 +76,8 @@ class Retrieval(ComponentBase, ABC):
if not kbinfos["chunks"]:
df = Retrieval.be_output("")
df["empty_response"] = self._param.empty_response
if self._param.empty_response and self._param.empty_response.strip():
df["empty_response"] = self._param.empty_response
return df
df = pd.DataFrame(kbinfos["chunks"])

View File

@ -54,7 +54,7 @@ class RewriteQuestion(Generate, ABC):
setattr(self, "_loop", 0)
if self._loop >= self._param.loop:
self._loop = 0
raise Exception("Maximum loop time exceeds. Can't find relevant information.")
raise Exception("Sorry! Nothing relevant found.")
self._loop += 1
q = "Question: "
for r, c in self._canvas.history[::-1]:
@ -65,6 +65,8 @@ class RewriteQuestion(Generate, ABC):
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": q}],
self._param.gen_conf())
self._canvas.history.pop()
self._canvas.history.append(("user", ans))
print(ans, ":::::::::::::::::::::::::::::::::")
return RewriteQuestion.be_output(ans)


@ -42,8 +42,6 @@ class SwitchParam(ComponentParamBase):
self.check_empty(self.conditions, "[Switch] conditions")
for cond in self.conditions:
if not cond["to"]: raise ValueError(f"[Switch] 'To' can not be empty!")
if cond["logical_operator"] not in ["and", "or"] and len(cond["items"]) > 1:
raise ValueError(f"[Switch] Please set logical_operator correctly!")
class Switch(ComponentBase, ABC):
@ -51,34 +49,15 @@ class Switch(ComponentBase, ABC):
def _run(self, history, **kwargs):
for cond in self._param.conditions:
if len(cond["items"]) == 1:
out = self._canvas.get_component(cond["items"][0]["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if self.process_operator(cpn_input, cond["items"][0]["operator"], cond["items"][0]["value"]):
return Switch.be_output(cond["to"])
continue
if cond["logical_operator"] == "and":
res = True
for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if not self.process_operator(cpn_input, item["operator"], item["value"]):
res = False
break
if res:
return Switch.be_output(cond["to"])
continue
res = False
res = []
for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if self.process_operator(cpn_input, item["operator"], item["value"]):
res = True
break
if res:
res.append(self.process_operator(cpn_input, item["operator"], item["value"]))
if cond["logical_operator"] != "and" and any(res):
return Switch.be_output(cond["to"])
if all(res):
return Switch.be_output(cond["to"])
return Switch.be_output(self._param.end_cpn_id)
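The rewrite above collapses the separate "and"/"or" branches into a single pass that collects one boolean per item and then combines them. A condensed sketch of the same decision (process_operator and the content lookup are assumed to behave as in Switch):

def condition_met(items, logical_operator, process_operator, content_of):
    results = [process_operator(content_of(item["cpn_id"]), item["operator"], item["value"])
               for item in items]
    # Any non-"and" operator succeeds on the first hit; "and" requires
    # every item to match.
    return any(results) if logical_operator != "and" else all(results)

Unlike the removed branches, the list form evaluates every item without short-circuiting, which is harmless here since each comparison is a cheap string operation on already-computed component outputs.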
@ -124,4 +103,4 @@ class Switch(ComponentBase, ABC):
except Exception as e:
return True if input <= value else False
raise ValueError('Unsupported operator: ' + operator)
raise ValueError('Unsupported operator: ' + operator)


@ -0,0 +1,72 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from abc import ABC
import pandas as pd
import time
import requests
from agent.component.base import ComponentBase, ComponentParamBase
class TuShareParam(ComponentParamBase):
"""
Define the TuShare component parameters.
"""
def __init__(self):
super().__init__()
self.token = "xxx"
self.src = "eastmoney"
self.start_date = "2024-01-01 09:00:00"
self.end_date = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
self.keyword = ""
def check(self):
self.check_valid_value(self.src, "Quick News Source",
["sina", "wallstreetcn", "10jqka", "eastmoney", "yuncaijing", "fenghuang", "jinrongjie"])
class TuShare(ComponentBase, ABC):
component_name = "TuShare"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = ",".join(ans["content"]) if "content" in ans else ""
if not ans:
return TuShare.be_output("")
try:
tus_res = []
params = {
"api_name": "news",
"token": self._param.token,
"params": {"src": self._param.src, "start_date": self._param.start_date,
"end_date": self._param.end_date}
}
response = requests.post(url="http://api.tushare.pro", data=json.dumps(params).encode('utf-8'))
response = response.json()
if response['code'] != 0:
return TuShare.be_output(response['msg'])
df = pd.DataFrame(response['data']['items'])
df.columns = response['data']['fields']
tus_res.append({"content": (df[df['content'].str.contains(self._param.keyword, case=False)]).to_markdown()})
except Exception as e:
return TuShare.be_output("**ERROR**: " + str(e))
if not tus_res:
return TuShare.be_output("")
return pd.DataFrame(tus_res)
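For reference, the request envelope TuShare expects can be exercised outside the component; a minimal sketch against the same endpoint (the token is a placeholder and the dates are arbitrary):

import json

import pandas as pd
import requests

payload = {
    "api_name": "news",
    "token": "demo-token",  # placeholder; substitute a real TuShare token
    "params": {"src": "eastmoney",
               "start_date": "2024-01-01 09:00:00",
               "end_date": "2024-01-02 09:00:00"},
}
resp = requests.post("http://api.tushare.pro", data=json.dumps(payload).encode("utf-8")).json()
if resp["code"] != 0:
    raise RuntimeError(resp["msg"])
news = pd.DataFrame(resp["data"]["items"], columns=resp["data"]["fields"])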

agent/component/wencai.py Normal file

@ -0,0 +1,80 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
import pywencai
from agent.component.base import ComponentBase, ComponentParamBase
class WenCaiParam(ComponentParamBase):
"""
Define the WenCai component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
self.query_type = "stock"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.query_type, "Query type",
['stock', 'zhishu', 'fund', 'hkstock', 'usstock', 'threeboard', 'conbond', 'insurance',
'futures', 'lccp',
'foreign_exchange'])
class WenCai(ComponentBase, ABC):
component_name = "WenCai"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = ",".join(ans["content"]) if "content" in ans else ""
if not ans:
return WenCai.be_output("")
try:
wencai_res = []
res = pywencai.get(query=ans, query_type=self._param.query_type, perpage=self._param.top_n)
if isinstance(res, pd.DataFrame):
wencai_res.append({"content": res.to_markdown()})
if isinstance(res, dict):
for item in res.items():
if isinstance(item[1], list):
wencai_res.append({"content": item[0] + "\n" + pd.DataFrame(item[1]).to_markdown()})
continue
if isinstance(item[1], str):
wencai_res.append({"content": item[0] + "\n" + item[1]})
continue
if isinstance(item[1], dict):
if "meta" in item[1].keys():
continue
wencai_res.append({"content": pd.DataFrame.from_dict(item[1], orient='index').to_markdown()})
continue
if isinstance(item[1], pd.DataFrame):
if "image_url" in item[1].columns:
continue
wencai_res.append({"content": item[1].to_markdown()})
continue
wencai_res.append({"content": item[0] + "\n" + str(item[1])})
except Exception as e:
return WenCai.be_output("**ERROR**: " + str(e))
if not wencai_res:
return WenCai.be_output("")
return pd.DataFrame(wencai_res)
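The isinstance ladder above exists because pywencai.get returns either a DataFrame or a dict whose values mix DataFrames, lists, strings and nested dicts; each shape is flattened into a markdown "content" row, while "meta" dicts and image tables are skipped. A usage sketch (wencai queries are usually written in Chinese; the string below is only a placeholder):

import pywencai

res = pywencai.get(query="top ten stocks by turnover", query_type="stock", perpage=10)
print(type(res))  # DataFrame or dict, hence the branching above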


@ -0,0 +1,83 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
import yfinance as yf
class YahooFinanceParam(ComponentParamBase):
"""
Define the YahooFinance component parameters.
"""
def __init__(self):
super().__init__()
self.info = True
self.history = False
self.count = False
self.financials = False
self.income_stmt = False
self.balance_sheet = False
self.cash_flow_statement = False
self.news = True
def check(self):
self.check_boolean(self.info, "get all stock info")
self.check_boolean(self.history, "get historical market data")
self.check_boolean(self.count, "show share count")
self.check_boolean(self.financials, "show financials")
self.check_boolean(self.income_stmt, "income statement")
self.check_boolean(self.balance_sheet, "balance sheet")
self.check_boolean(self.cash_flow_statement, "cash flow statement")
self.check_boolean(self.news, "show news")
class YahooFinance(ComponentBase, ABC):
component_name = "YahooFinance"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = "".join(ans["content"]) if "content" in ans else ""
if not ans:
return YahooFinance.be_output("")
yohoo_res = []
try:
msft = yf.Ticker(ans)
if self._param.info:
yohoo_res.append({"content": "info:\n" + pd.Series(msft.info).to_markdown() + "\n"})
if self._param.history:
yohoo_res.append({"content": "history:\n" + msft.history().to_markdown() + "\n"})
if self._param.financials:
yohoo_res.append({"content": "calendar:\n" + pd.DataFrame(msft.calendar).to_markdown() + "\n"})
if self._param.balance_sheet:
yohoo_res.append({"content": "balance sheet:\n" + msft.balance_sheet.to_markdown() + "\n"})
yohoo_res.append(
{"content": "quarterly balance sheet:\n" + msft.quarterly_balance_sheet.to_markdown() + "\n"})
if self._param.cash_flow_statement:
yohoo_res.append({"content": "cash flow statement:\n" + msft.cashflow.to_markdown() + "\n"})
yohoo_res.append(
{"content": "quarterly cash flow statement:\n" + msft.quarterly_cashflow.to_markdown() + "\n"})
if self._param.news:
yohoo_res.append({"content": "news:\n" + pd.DataFrame(msft.news).to_markdown() + "\n"})
except Exception as e:
print("**ERROR** " + str(e))
if not yohoo_res:
return YahooFinance.be_output("")
return pd.DataFrame(yohoo_res)
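A quick interactive check of the yfinance calls the component relies on; every attribute name appears in the hunk above, and "MSFT" is an arbitrary example ticker. As an aside, note that the financials branch above emits the ticker's calendar rather than its financials:

import yfinance as yf

ticker = yf.Ticker("MSFT")
print(ticker.info.get("shortName"))        # basic company info
print(ticker.history(period="5d").tail())  # recent daily bars
print(ticker.balance_sheet.head())         # annual balance sheet
print(len(ticker.news))                    # recent news items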


@ -1,7 +1,7 @@
{
"id": 6,
"title": "DB Assistant",
"description": "Database query assistant. It converts questions into SQL statements and queries them in the database. You need to provide 3 kinds of knowledge base: 1. DDL data in the database. 2. Sample text questions converted to SQL statements. 3. A description of the database contents, including but not limited to: tables, records, and so on. You will also need to set up database configuration information: like IP Port ...",
"description": "An advanced agent that converts user queries into SQL statements, executes the queries, and assesses and returns the results. You must prepare three knowledge bases: 1: DDL for your database; 2: Examples of user queries converted to SQL statements; 3: A comprehensive description of your database, including but not limited to tables and records. You are also required to configure the corresponding database.",
"canvas_type": "chatbot",
"dsl": {
"answer": [],
@ -63,7 +63,7 @@
}
],
"presence_penalty": 0.4,
"prompt": "## You are the Repair SQL Statement Helper, please modify the original SQL statement based on the SQL query error report.\n\n## The contents of the SQL query error report and the original SQL statement are as follows:\n{exesql_input}\n\n## Answer only the modified SQL statement. Please do not give any explanation, just answer the code.",
"prompt": "## You are the Repair SQL Statement Helper, please modify the original SQL statement based on the SQL query error report.\n\n## The contents of the SQL query error report and the original SQL statement are as follows:\n{exesql_input}\n\n## Answer only the modified SQL statement. Each SQL statement ends with semicolon and do not give any explanation, just answer the code.",
"temperature": 0.1,
"top_p": 0.3
}
@ -102,7 +102,7 @@
}
],
"presence_penalty": 0.4,
"prompt": "\n##The user provides a question and you provide SQL. You will only respond with SQL code and not with any explanations.\n\n##You may use the following DDL statements as a reference for what tables might be available. Use responses to past questions also to guide you: {ddl_input}.\n\n##You may use the following documentation as a reference for what tables might be available. Use responses to past questions also to guide you: {db_input}.\n\n##You may use the following SQL statements as a reference for what tables might be available. Use responses to past questions also to guide you: {sql_input}.\n\n##Respond with only SQL code. Do not answer with any explanations -- just the code.",
"prompt": "\n##The user provides a question and you provide SQL. You will only respond with SQL code and not with any explanations.\n\n##You may use the following DDL statements as a reference for what tables might be available. Use responses to past questions also to guide you: {ddl_input}.\n\n##You may use the following documentation as a reference for what tables might be available. Use responses to past questions also to guide you: {db_input}.\n\n##You may use the following SQL statements as a reference for what tables might be available. Use responses to past questions also to guide you: {sql_input}.\n\n##Respond with only SQL code. Each SQL code ends with semicolon and do not give any explanation -- just the code.",
"temperature": 0.1,
"top_p": 0.3
}
@ -120,6 +120,7 @@
"obj": {
"component_name": "Retrieval",
"params": {
"empty_response": "Nothing found in DB-Description!",
"kb_ids": [
"b510f8f45f6011ef904f0242ac160006"
],
@ -139,6 +140,7 @@
"obj": {
"component_name": "Retrieval",
"params": {
"empty_response": "Nothing found in DDL!",
"kb_ids": [
"9870268e5f6011efb8570242ac160006"
],
@ -158,6 +160,7 @@
"obj": {
"component_name": "Retrieval",
"params": {
"empty_response": "Nothing found in Q->SQL!",
"kb_ids": [
"dd401bcc5b9e11efae770242ac160006"
],
@ -408,6 +411,7 @@
{
"data": {
"form": {
"empty_response": "Nothing found in Q->SQL!",
"kb_ids": [
"dd401bcc5b9e11efae770242ac160006"
],
@ -493,6 +497,7 @@
{
"data": {
"form": {
"empty_response": "Nothing found in DB-Description!",
"kb_ids": [
"b510f8f45f6011ef904f0242ac160006"
],
@ -523,6 +528,7 @@
{
"data": {
"form": {
"empty_response": "Nothing found in DDL!",
"kb_ids": [
"9870268e5f6011efb8570242ac160006"
],


@ -1,7 +1,7 @@
{
"id": 2,
"title": "HR call-out assistant(Chinese)",
"description": "A HR call-out assistant. It will introduce the given job, answer the candidates' question about this job. And the most important thing is that it will try to obtain the contact information of the candidates. What you need to do is to link a knowledgebase which contains job description in 'Retrieval' component.",
"title": "HR recruitment pitch assistant (Chinese)",
"description": "A recruitment pitch assistant capable of pitching a candidate, presenting a job opportunity, addressing queries, and requesting the candidate's contact details. Let's begin by linking a knowledge base containing the job description in 'Retrieval'!",
"canvas_type": "chatbot",
"dsl": {
"answer": [],


@ -1,7 +1,7 @@
{
"id": 3,
"title": "Customer service",
"description": "A call-in customer service chat bot. It will provide useful information about the products, answer customers' questions and soothe the customers' bad emotions.",
"description": "A customer service chatbot that explains product specifications, addresses customer queries, and alleviates negative emotions.",
"canvas_type": "chatbot",
"dsl": {
"answer": [],

File diff suppressed because one or more lines are too long


@ -1,7 +1,7 @@
{
"id": 4,
"title": "Interpreter",
"description": "An interpreter. Type the content you want to translate and the object language like: Hi there => Spanish. Hava a try!",
"description": "A simple interpreter that translates user input into a target language. Try 'Hi there => Spanish' to see the translation!",
"canvas_type": "chatbot",
"dsl": {
"answer": [],

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -1,7 +1,7 @@
{
"id": 5,
"title": "Text To SQL",
"description": "An agent tool provides the ability to convert text questions into SQL statements, you will need to provide 3 kinds of knowledge bases. 1. DDL data for the database. 2. Examples of text questions converted into SQL statements. 3. A description of the database contents, including but not limited to: tables, records, etc... ",
"description": "An agent that converts user queries into SQL statements. You must prepare three knowledge bases: 1: DDL for your database; 2: Examples of user queries converted to SQL statements; 3: A comprehensive description of your database, including but not limited to tables and records.",
"canvas_type": "chatbot",
"dsl": {
"answer": [],
@ -69,6 +69,7 @@
"obj": {
"component_name": "Retrieval",
"params": {
"empty_response": "Nothing found in DB-Description!",
"kb_ids": [
"0ab5de985ba911efad9942010a8a0006"
],
@ -88,6 +89,7 @@
"obj": {
"component_name": "Retrieval",
"params": {
"empty_response": "Nothing found in DDL!",
"kb_ids": [
"b1a6a45e5ba811ef80dc42010a8a0006"
],
@ -107,6 +109,7 @@
"obj": {
"component_name": "Retrieval",
"params": {
"empty_response": "Nothing found in Q-SQL!",
"kb_ids": [
"31257b925b9f11ef9f0142010a8a0004"
],
@ -286,6 +289,7 @@
{
"data": {
"form": {
"empty_response": "Nothing found in Q-SQL!",
"kb_ids": [
"31257b925b9f11ef9f0142010a8a0004"
],
@ -371,6 +375,7 @@
{
"data": {
"form": {
"empty_response": "Nothing found in DB-Description!",
"kb_ids": [
"0ab5de985ba911efad9942010a8a0006"
],
@ -401,6 +406,7 @@
{
"data": {
"form": {
"empty_response": "Nothing found in DDL!",
"kb_ids": [
"b1a6a45e5ba811ef80dc42010a8a0006"
],


@ -1,7 +1,7 @@
{
"id": 0,
"title": "WebSearch Assistant",
"description": "A chat assistant that combines information both from knowledge base and web search engines. It integrates information from the knowledge base and relevant search engines to answer a given question. What you need to do is setting up knowleage base in 'Retrieval'.",
"description": "A chat assistant template that integrates information extracted from a knowledge base and web searches to respond to queries. Let's begin by setting up your knowledge base in 'Retrieval'!",
"canvas_type": "chatbot",
"dsl": {
"answer": [],


@ -26,20 +26,48 @@
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?"
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
"to": "message:0"
},
"others": {
"description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?"
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "message:1"
}
}
}
},
"downstream": [],
"downstream": ["message:0","message:1"],
"upstream": ["answer:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
},
"message:1": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}
}
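The new "to" fields must stay consistent with the downstream/upstream wiring added alongside them; a small sanity check for canvas JSON in the shape shown above (the file path is a placeholder):

import json

def check_routing(components):
    for name, comp in components.items():
        categories = comp["obj"].get("params", {}).get("category_description", {})
        for cat, desc in categories.items():
            target = desc.get("to")
            assert target in components, f"{name}/{cat}: unknown target {target}"
            assert target in comp["downstream"], f"{name}/{cat}: {target} missing from downstream"

with open("template.json") as f:  # placeholder path
    check_routing(json.load(f)["components"])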


@ -0,0 +1,113 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["categorize:0"],
"upstream": ["begin"]
},
"categorize:0": {
"obj": {
"component_name": "Categorize",
"params": {
"llm_id": "deepseek-chat",
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
"to": "concentrator:0"
},
"others": {
"description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "concentrator:1"
}
}
}
},
"downstream": ["concentrator:0","concentrator:1"],
"upstream": ["answer:0"]
},
"concentrator:0": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:0"],
"upstream": ["categorize:0"]
},
"concentrator:1": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:1_0","message:1_1","message:1_2"],
"upstream": ["categorize:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 0_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:0"]
},
"message:1_0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_1": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_1!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_2": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_2!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}


@ -65,7 +65,7 @@ commands.register_commands(app)
def search_pages_path(pages_dir):
app_path_list = [path for path in pages_dir.glob('*_app.py') if not path.name.startswith('.')]
api_path_list = [path for path in pages_dir.glob('*_api.py') if not path.name.startswith('.')]
api_path_list = [path for path in pages_dir.glob('*sdk/*.py') if not path.name.startswith('.')]
app_path_list.extend(api_path_list)
return app_path_list
@ -73,7 +73,7 @@ def search_pages_path(pages_dir):
def register_page(page_path):
path = f'{page_path}'
page_name = page_path.stem.rstrip('_api') if "_api" in path else page_path.stem.rstrip('_app')
page_name = page_path.stem.rstrip('_app')
module_name = '.'.join(page_path.parts[page_path.parts.index('api'):-1] + (page_name,))
spec = spec_from_file_location(module_name, page_path)
@ -83,7 +83,7 @@ def register_page(page_path):
sys.modules[module_name] = page
spec.loader.exec_module(page)
page_name = getattr(page, 'page_name', page_name)
url_prefix = f'/api/{API_VERSION}/{page_name}' if "_api" in path else f'/{API_VERSION}/{page_name}'
url_prefix = f'/api/{API_VERSION}/{page_name}' if "/sdk/" in path else f'/{API_VERSION}/{page_name}'
app.register_blueprint(page.manager, url_prefix=url_prefix)
return url_prefix
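The net effect of the changes above: modules under an sdk directory are mounted at /api/<version>/<page>, everything else at /<version>/<page>, and page names now only ever strip an _app suffix. One caveat worth noting: str.rstrip strips a trailing character set rather than a literal suffix, so a stem such as "data_app" would become "dat"; removesuffix("_app") would be the precise tool. A sketch of the routing rule (the concrete API_VERSION value is assumed):

API_VERSION = "v1"  # assumed value

def url_prefix_for(page_path, page_name):
    return (f"/api/{API_VERSION}/{page_name}" if "/sdk/" in page_path
            else f"/{API_VERSION}/{page_name}")

assert url_prefix_for("api/apps/sdk/dataset.py", "dataset") == "/api/v1/dataset"
assert url_prefix_for("api/apps/chunk_app.py", "chunk") == "/v1/chunk"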
@ -91,7 +91,8 @@ def register_page(page_path):
pages_dir = [
Path(__file__).parent,
Path(__file__).parent.parent / 'api' / 'apps', # FIXME: ragflow/api/api/apps, can be remove?
Path(__file__).parent.parent / 'api' / 'apps',
Path(__file__).parent.parent / 'api' / 'apps' / 'sdk',
]
client_urls_prefix = [


@ -39,7 +39,7 @@ from itsdangerous import URLSafeTimedSerializer
from api.utils.file_utils import filename_type, thumbnail
from rag.nlp import keyword_extraction
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
from agent.canvas import Canvas
@ -151,14 +151,17 @@ def set_conversation():
req = request.json
try:
if objs[0].source == "agent":
e, c = UserCanvasService.get_by_id(objs[0].dialog_id)
e, cvs = UserCanvasService.get_by_id(objs[0].dialog_id)
if not e:
return server_error_response("canvas not found.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
canvas = Canvas(cvs.dsl, objs[0].tenant_id)
conv = {
"id": get_uuid(),
"dialog_id": c.id,
"dialog_id": cvs.id,
"user_id": request.args.get("user_id", ""),
"message": [{"role": "assistant", "content": "Hi there!"}],
"message": [{"role": "assistant", "content": canvas.get_prologue()}],
"source": "agent"
}
API4ConversationService.save(**conv)
@ -199,15 +202,18 @@ def completion():
continue
if m["role"] == "assistant" and not msg:
continue
msg.append({"role": m["role"], "content": m["content"]})
msg.append(m)
if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
def fillin_conv(ans):
nonlocal conv
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"]}
conv.message[-1] = {"role": "assistant", "content": ans["answer"], "id": message_id}
ans["id"] = message_id
def rename_field(ans):
reference = ans['reference']
@ -233,7 +239,7 @@ def completion():
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": ""})
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
final_ans = {"reference": [], "content": ""}
@ -260,7 +266,7 @@ def completion():
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans},
ensure_ascii=False) + "\n\n"
canvas.messages.append({"role": "assistant", "content": final_ans["content"]})
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
@ -279,7 +285,7 @@ def completion():
return resp
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"]})
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
@ -300,7 +306,7 @@ def completion():
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": ""})
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def stream():
@ -342,12 +348,22 @@ def completion():
@manager.route('/conversation/<conversation_id>', methods=['GET'])
# @login_required
def get(conversation_id):
token = request.headers.get('Authorization').split()[1]
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!', retcode=RetCode.AUTHENTICATION_ERROR)
try:
e, conv = API4ConversationService.get_by_id(conversation_id)
if not e:
return get_data_error_result(retmsg="Conversation not found!")
conv = conv.to_dict()
if token != APIToken.query(dialog_id=conv['dialog_id'])[0].token:
return get_json_result(data=False, retmsg='Token is not valid for this conversation_id!',
retcode=RetCode.AUTHENTICATION_ERROR)
for referenct_i in conv['reference']:
if referenct_i is None or len(referenct_i) == 0:
continue
@ -411,10 +427,10 @@ def upload():
retmsg="This type of file has not been supported yet!")
location = filename
while MINIO.obj_exist(kb_id, location):
while STORAGE_IMPL.obj_exist(kb_id, location):
location += "_"
blob = request.files['file'].read()
MINIO.put(kb_id, location, blob)
STORAGE_IMPL.put(kb_id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": kb.id,
@ -438,6 +454,8 @@ def upload():
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
doc_result = DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
@ -462,7 +480,7 @@ def upload():
e, doc = DocumentService.get_by_id(doc["id"])
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name)
except Exception as e:
return server_error_response(e)
@ -624,7 +642,7 @@ def document_rm():
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
@ -634,7 +652,7 @@ def document_rm():
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc_id)
MINIO.rm(b, n)
STORAGE_IMPL.rm(b, n)
except Exception as e:
errors += str(e)
@ -663,8 +681,79 @@ def completion_faq():
msg = []
msg.append({"role": "user", "content": req["word"]})
if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
def fillin_conv(ans):
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"], "id": message_id}
ans["id"] = message_id
try:
if conv.source == "agent":
conv.message.append(msg[-1])
e, cvs = UserCanvasService.get_by_id(conv.dialog_id)
if not e:
return server_error_response("canvas not found.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
final_ans = {"reference": [], "doc_aggs": []}
canvas = Canvas(cvs.dsl, objs[0].tenant_id)
canvas.messages.append(msg[-1])
canvas.add_user_input(msg[-1]["content"])
answer = canvas.run(stream=False)
assert answer is not None, "Nothing. Is it over?"
data_type_picture = {
"type": 3,
"url": "base64 content"
}
data = [
{
"type": 1,
"content": ""
}
]
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
ans = {"answer": final_ans["content"], "reference": final_ans.get("reference", [])}
data[0]["content"] += re.sub(r'##\d\$\$', '', ans["answer"])
fillin_conv(ans)
API4ConversationService.append_message(conv.id, conv.to_dict())
chunk_idxs = [int(match[2]) for match in re.findall(r'##\d\$\$', ans["answer"])]
for chunk_idx in chunk_idxs[:1]:
if ans["reference"]["chunks"][chunk_idx]["img_id"]:
try:
bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
response = STORAGE_IMPL.get(bkt, nm)
data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
data.append(data_type_picture)
break
except Exception as e:
return server_error_response(e)
response = {"code": 200, "msg": "success", "data": data}
return response
# ******************For dialog******************
conv.message.append(msg[-1])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
@ -673,17 +762,9 @@ def completion_faq():
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": ""})
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
nonlocal conv
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"]}
data_type_picture = {
"type": 3,
"url": "base64 content"
@ -707,7 +788,7 @@ def completion_faq():
if ans["reference"]["chunks"][chunk_idx]["img_id"]:
try:
bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
response = MINIO.get(bkt, nm)
response = STORAGE_IMPL.get(bkt, nm)
data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
data.append(data_type_picture)
break
@ -767,4 +848,4 @@ def retrieval():
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found! Check the chunk status please!',
retcode=RetCode.DATA_ERROR)
return server_error_response(e)
return server_error_response(e)
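Every MINIO reference in this file is swapped for the storage factory; a sketch of the shared upload pattern, assuming STORAGE_IMPL exposes the same object-store interface (obj_exist / put / get / rm) used above:

from rag.utils.storage_factory import STORAGE_IMPL

def store_unique(bucket, name, blob):
    # Upload under a collision-free key, as the upload endpoints do.
    location = name
    while STORAGE_IMPL.obj_exist(bucket, location):
        location += "_"
    STORAGE_IMPL.put(bucket, location, blob)
    return location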


@ -18,8 +18,11 @@ from functools import partial
from flask import request, Response
from flask_login import login_required, current_user
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
from api.db.services.dialog_service import full_question
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
from agent.canvas import Canvas
from peewee import MySQLDatabase, PostgresqlDatabase
@ -43,6 +46,10 @@ def canvas_list():
@login_required
def rm():
for i in request.json["canvas_ids"]:
if not UserCanvasService.query(user_id=current_user.id,id=i):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
UserCanvasService.delete_by_id(i)
return get_json_result(data=True)
@ -61,10 +68,13 @@ def save():
return server_error_response(ValueError("Duplicated title."))
req["id"] = get_uuid()
if not UserCanvasService.save(**req):
return server_error_response("Fail to save canvas.")
return get_data_error_result(retmsg="Fail to save canvas.")
else:
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
UserCanvasService.update_by_id(req["id"], req)
return get_json_result(data=req)
@ -73,7 +83,7 @@ def save():
def get(canvas_id):
e, c = UserCanvasService.get_by_id(canvas_id)
if not e:
return server_error_response("canvas not found.")
return get_data_error_result(retmsg="canvas not found.")
return get_json_result(data=c.to_dict())
@ -85,16 +95,24 @@ def run():
stream = req.get("stream", True)
e, cvs = UserCanvasService.get_by_id(req["id"])
if not e:
return server_error_response("canvas not found.")
return get_data_error_result(retmsg="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
final_ans = {"reference": [], "content": ""}
message_id = req.get("message_id", get_uuid())
try:
canvas = Canvas(cvs.dsl, current_user.id)
if "message" in req:
canvas.messages.append({"role": "user", "content": req["message"]})
canvas.messages.append({"role": "user", "content": req["message"], "id": message_id})
if len([m for m in canvas.messages if m["role"] == "user"]) > 1:
ten = TenantService.get_by_user_id(current_user.id)[0]
req["message"] = full_question(ten["tenant_id"], ten["llm_id"], canvas.messages)
canvas.add_user_input(req["message"])
answer = canvas.run(stream=stream)
print(canvas)
@ -115,7 +133,7 @@ def run():
ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
canvas.messages.append({"role": "assistant", "content": final_ans["content"]})
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
@ -134,7 +152,7 @@ def run():
return resp
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"]})
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
@ -150,7 +168,11 @@ def reset():
try:
e, user_canvas = UserCanvasService.get_by_id(req["id"])
if not e:
return server_error_response("canvas not found.")
return get_data_error_result(retmsg="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
canvas.reset()


@ -27,7 +27,7 @@ from rag.utils.es_conn import ELASTICSEARCH
from rag.utils import rmSpace
from api.db import LLMType, ParserType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService
from api.db.services.llm_service import LLMBundle
from api.db.services.user_service import UserTenantService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db.services.document_service import DocumentService
@ -58,7 +58,7 @@ def list_chunk():
}
if "available_int" in req:
query["available_int"] = int(req["available_int"])
sres = retrievaler.search(query, search.index_name(tenant_id))
sres = retrievaler.search(query, search.index_name(tenant_id), highlight=True)
res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
for id in sres.ids:
d = {
@ -141,8 +141,7 @@ def set():
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_id)
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
@ -235,8 +234,7 @@ def create():
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING.value, embd_id)
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v = 0.1 * v[0] + 0.9 * v[1]
@ -259,31 +257,42 @@ def retrieval_test():
size = int(req.get("size", 30))
question = req["question"]
kb_id = req["kb_id"]
if isinstance(kb_id, str): kb_id = [kb_id]
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.2))
similarity_threshold = float(req.get("similarity_threshold", 0.0))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
top = int(req.get("top_k", 1024))
try:
e, kb = KnowledgebaseService.get_by_id(kb_id)
tenants = UserTenantService.query(user_id=current_user.id)
for kid in kb_id:
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kid):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id[0])
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
embd_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
rerank_mdl = LLMBundle(kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
if req.get("keyword", False):
chat_mdl = TenantLLMService.model_instance(kb.tenant_id, LLMType.CHAT)
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
retr = retrievaler if kb.parser_id != ParserType.KG else kg_retrievaler
ranks = retr.retrieval(question, embd_mdl, kb.tenant_id, [kb_id], page, size,
ranks = retr.retrieval(question, embd_mdl, kb.tenant_id, kb_id, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl)
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"))
for c in ranks["chunks"]:
if "vector" in c:
del c["vector"]


@ -13,14 +13,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
import traceback
from copy import deepcopy
from api.db.services.user_service import UserTenantService
from flask import request, Response
from flask_login import login_required
from api.db.services.dialog_service import DialogService, ConversationService, chat
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from flask_login import login_required, current_user
from api.db import LLMType
from api.db.services.dialog_service import DialogService, ConversationService, chat, ask
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle, TenantService, TenantLLMService
from api.settings import RetCode, retrievaler
from api.utils import get_uuid
from api.utils.api_utils import get_json_result
import json
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from graphrag.mind_map_extractor import MindMapExtractor
@manager.route('/set', methods=['POST'])
@ -28,7 +37,9 @@ import json
def set_conversation():
req = request.json
conv_id = req.get("conversation_id")
if conv_id:
is_new = req.get("is_new")
del req["is_new"]
if not is_new:
del req["conversation_id"]
try:
if not ConversationService.update_by_id(conv_id, req):
@ -47,7 +58,7 @@ def set_conversation():
if not e:
return get_data_error_result(retmsg="Dialog not found")
conv = {
"id": get_uuid(),
"id": conv_id,
"dialog_id": req["dialog_id"],
"name": req.get("name", "New conversation"),
"message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}]
@ -70,6 +81,14 @@ def get():
e, conv = ConversationService.get_by_id(conv_id)
if not e:
return get_data_error_result(retmsg="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of conversation authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
conv = conv.to_dict()
return get_json_result(data=conv)
except Exception as e:
@ -82,6 +101,17 @@ def rm():
conv_ids = request.json["conversation_ids"]
try:
for cid in conv_ids:
exist, conv = ConversationService.get_by_id(cid)
if not exist:
return get_data_error_result(retmsg="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of conversation authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
ConversationService.delete_by_id(cid)
return get_json_result(data=True)
except Exception as e:
@ -93,6 +123,10 @@ def rm():
def list_convsersation():
dialog_id = request.args["dialog_id"]
try:
if not DialogService.query(tenant_id=current_user.id, id=dialog_id):
return get_json_result(
data=False, retmsg=f'Only owner of dialog authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
convs = ConversationService.query(
dialog_id=dialog_id,
order_by=ConversationService.model.create_time,
@ -105,26 +139,25 @@ def list_convsersation():
@manager.route('/completion', methods=['POST'])
@login_required
#@validate_request("conversation_id", "messages")
@validate_request("conversation_id", "messages")
def completion():
req = request.json
#req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
# req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
# {"role": "user", "content": "上海有吗?"}
#]}
# ]}
msg = []
for m in req["messages"]:
if m["role"] == "system":
continue
if m["role"] == "assistant" and not msg:
continue
msg.append({"role": m["role"], "content": m["content"]})
if "doc_ids" in m:
msg[-1]["doc_ids"] = m["doc_ids"]
msg.append(m)
message_id = msg[-1].get("id")
try:
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
conv.message.append(deepcopy(msg[-1]))
conv.message = deepcopy(req["messages"])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
return get_data_error_result(retmsg="Dialog not found!")
@ -133,28 +166,31 @@ def completion():
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": ""})
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
nonlocal conv
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else: conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"]}
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"],
"id": message_id, "prompt": ans.get("prompt", "")}
ans["id"] = message_id
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, True, **req):
fillin_conv(ans)
yield "data:"+json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
"data": {"answer": "**ERROR**: "+str(e), "reference": []}},
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:"+json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
@ -175,3 +211,169 @@ def completion():
except Exception as e:
return server_error_response(e)
@manager.route('/tts', methods=['POST'])
@login_required
def tts():
req = request.json
text = req["text"]
tenants = TenantService.get_by_user_id(current_user.id)
if not tenants:
return get_data_error_result(retmsg="Tenant not found!")
tts_id = tenants[0]["tts_id"]
if not tts_id:
return get_data_error_result(retmsg="No default TTS model is set")
tts_mdl = LLMBundle(tenants[0]["tenant_id"], LLMType.TTS, tts_id)
def stream_audio():
try:
for txt in re.split(r"[,。/《》?;:!\n\r:;]+", text):
for chunk in tts_mdl.tts(txt):
yield chunk
except Exception as e:
yield ("data:" + json.dumps({"retcode": 500, "retmsg": str(e),
"data": {"answer": "**ERROR**: " + str(e)}},
ensure_ascii=False)).encode('utf-8')
resp = Response(stream_audio(), mimetype="audio/mpeg")
resp.headers.add_header("Cache-Control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
return resp
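A client-side sketch for the new /tts route; the host, port and auth header are placeholders, and the response is consumed as a raw MPEG stream:

import requests

resp = requests.post("http://localhost:9380/v1/conversation/tts",  # placeholder host/port
                     json={"text": "Hello from RAGFlow."},
                     headers={"Authorization": "Bearer <session-token>"},  # placeholder auth
                     stream=True)
with open("answer.mp3", "wb") as f:
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)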
@manager.route('/delete_msg', methods=['POST'])
@login_required
@validate_request("conversation_id", "message_id")
def delete_msg():
req = request.json
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
conv = conv.to_dict()
for i, msg in enumerate(conv["message"]):
if req["message_id"] != msg.get("id", ""):
continue
assert conv["message"][i + 1]["id"] == req["message_id"]
conv["message"].pop(i)
conv["message"].pop(i)
conv["reference"].pop(max(0, i // 2 - 1))
break
ConversationService.update_by_id(conv["id"], conv)
return get_json_result(data=conv)
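delete_msg relies on the convention, introduced in the completion changes above, that a user message and the assistant reply following it share one message id; both turns and the reference entry aligned to that Q/A pair are removed. The index arithmetic as a sketch:

def drop_turn(messages, references, i):
    messages.pop(i)  # the matched user message
    messages.pop(i)  # the assistant reply that shifted into slot i
    # References are stored one per Q/A pair, offset by the prologue.
    references.pop(max(0, i // 2 - 1))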
@manager.route('/thumbup', methods=['POST'])
@login_required
@validate_request("conversation_id", "message_id")
def thumbup():
req = request.json
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
up_down = req.get("set")
feedback = req.get("feedback", "")
conv = conv.to_dict()
for i, msg in enumerate(conv["message"]):
if req["message_id"] == msg.get("id", "") and msg.get("role", "") == "assistant":
if up_down:
msg["thumbup"] = True
if "feedback" in msg: del msg["feedback"]
else:
msg["thumbup"] = False
if feedback: msg["feedback"] = feedback
break
ConversationService.update_by_id(conv["id"], conv)
return get_json_result(data=conv)
@manager.route('/ask', methods=['POST'])
@login_required
@validate_request("question", "kb_ids")
def ask_about():
req = request.json
uid = current_user.id
def stream():
nonlocal req, uid
try:
for ans in ask(req["question"], req["kb_ids"], uid):
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
@manager.route('/mindmap', methods=['POST'])
@login_required
@validate_request("question", "kb_ids")
def mindmap():
req = request.json
kb_ids = req["kb_ids"]
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
embd_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT)
ranks = retrievaler.retrieval(req["question"], embd_mdl, kb.tenant_id, kb_ids, 1, 12,
0.3, 0.3, aggs=False)
mindmap = MindMapExtractor(chat_mdl)
mind_map = mindmap([c["content_with_weight"] for c in ranks["chunks"]]).output
if "error" in mind_map:
return server_error_response(Exception(mind_map["error"]))
return get_json_result(data=mind_map)
@manager.route('/related_questions', methods=['POST'])
@login_required
@validate_request("question")
def related_questions():
req = request.json
question = req["question"]
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT)
prompt = """
Objective: To generate search terms related to the user's search keywords, helping users find more valuable information.
Instructions:
- Based on the keywords provided by the user, generate 5-10 related search terms.
- Each search term should be directly or indirectly related to the keyword, guiding the user to find more valuable information.
- Use common, general terms as much as possible, avoiding obscure words or technical jargon.
- Keep the term length between 2-4 words, concise and clear.
- DO NOT translate, use the language of the original keywords.
### Example:
Keywords: Chinese football
Related search terms:
1. Current status of Chinese football
2. Reform of Chinese football
3. Youth training of Chinese football
4. Chinese football in the Asian Cup
5. Chinese football in the World Cup
Reason:
- When searching, users often only use one or two keywords, making it difficult to fully express their information needs.
- Generating related search terms can help users dig deeper into relevant information and improve search efficiency.
- At the same time, related terms can also help search engines better understand user needs and return more accurate search results.
"""
ans = chat_mdl.chat(prompt, [{"role": "user", "content": f"""
Keywords: {question}
Related search terms:
"""}], {"temperature": 0.9})
return get_json_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])
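The return line above keeps only answer lines of the form "N. term" and strips the numbering. Standalone, with a caveat: the pattern matches a single leading digit only, so a tenth item ("10. ...") would be dropped even though the prompt asks for 5-10 terms:

import re

def parse_terms(ans):
    return [re.sub(r"^[0-9]\. ", "", line)
            for line in ans.split("\n")
            if re.match(r"^[0-9]\. ", line)]

assert parse_terms("1. alpha\nnoise\n2. beta") == ["alpha", "beta"]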


@ -42,7 +42,7 @@ from api.utils.file_utils import filename_type, thumbnail
from rag.app import book, laws, manual, naive, one, paper, presentation, qa, resume, table, picture, audio, email
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
MAXIMUM_OF_UPLOADING_FILES = 256
@ -352,7 +352,7 @@ def upload_documents(dataset_id):
# upload to the minio
location = filename
while MINIO.obj_exist(dataset_id, location):
while STORAGE_IMPL.obj_exist(dataset_id, location):
location += "_"
blob = file.read()
@ -361,7 +361,7 @@ def upload_documents(dataset_id):
if blob == b'':
warnings.warn(f"[WARNING]: The content of the file {filename} is empty.")
MINIO.put(dataset_id, location, blob)
STORAGE_IMPL.put(dataset_id, location, blob)
doc = {
"id": get_uuid(),
@ -381,6 +381,8 @@ def upload_documents(dataset_id):
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], dataset.tenant_id)
@ -420,7 +422,7 @@ def delete_document(document_id, dataset_id): # string
f" reason!", code=RetCode.AUTHENTICATION_ERROR)
# get the doc's id and location
real_dataset_id, location = File2DocumentService.get_minio_address(doc_id=document_id)
real_dataset_id, location = File2DocumentService.get_storage_address(doc_id=document_id)
if real_dataset_id != dataset_id:
return construct_json_result(message=f"The document {document_id} is not in the dataset: {dataset_id}, "
@ -441,7 +443,7 @@ def delete_document(document_id, dataset_id): # string
File2DocumentService.delete_by_document_id(document_id)
# delete it from minio
MINIO.rm(dataset_id, location)
STORAGE_IMPL.rm(dataset_id, location)
except Exception as e:
errors += str(e)
if errors:
@ -595,8 +597,8 @@ def download_document(dataset_id, document_id):
code=RetCode.ARGUMENT_ERROR)
# The process of downloading
doc_id, doc_location = File2DocumentService.get_minio_address(doc_id=document_id) # minio address
file_stream = MINIO.get(doc_id, doc_location)
doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
@ -736,8 +738,8 @@ def parsing_document_internal(id):
doc_attributes = doc_attributes.to_dict()
doc_id = doc_attributes["id"]
bucket, doc_name = File2DocumentService.get_minio_address(doc_id=doc_id)
binary = MINIO.get(bucket, doc_name)
bucket, doc_name = File2DocumentService.get_storage_address(doc_id=doc_id)
binary = STORAGE_IMPL.get(bucket, doc_name)
parser_name = doc_attributes["parser_id"]
if binary:
res = doc_parse(binary, doc_name, parser_name, tenant_id, doc_id)


@ -19,7 +19,8 @@ from flask_login import login_required, current_user
from api.db.services.dialog_service import DialogService
from api.db import StatusEnum
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.db.services.user_service import TenantService, UserTenantService
from api.settings import RetCode
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.utils.api_utils import get_json_result
@ -164,9 +165,19 @@ def list_dialogs():
@validate_request("dialog_ids")
def rm():
req = request.json
dialog_list=[]
tenants = UserTenantService.query(user_id=current_user.id)
try:
DialogService.update_many_by_id(
[{"id": id, "status": StatusEnum.INVALID.value} for id in req["dialog_ids"]])
for id in req["dialog_ids"]:
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of dialog authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
dialog_list.append({"id": id,"status":StatusEnum.INVALID.value})
DialogService.update_many_by_id(dialog_list)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)


@ -35,7 +35,7 @@ from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.llm_service import LLMBundle
from api.db.services.task_service import TaskService, queue_tasks
from api.db.services.user_service import TenantService
from api.db.services.user_service import TenantService, UserTenantService
from graphrag.mind_map_extractor import MindMapExtractor
from rag.app import naive
from rag.nlp import search
@ -48,7 +48,7 @@ from api.db import FileType, TaskStatus, ParserType, FileSource, LLMType
from api.db.services.document_service import DocumentService, doc_upload_and_parse
from api.settings import RetCode, stat_logger
from api.utils.api_utils import get_json_result
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
from api.utils.file_utils import filename_type, thumbnail, get_project_base_directory
from api.utils.web_utils import html2pdf, is_valid_url
@ -118,9 +118,9 @@ def web_crawl():
raise RuntimeError("This type of file has not been supported yet!")
location = filename
while MINIO.obj_exist(kb_id, location):
while STORAGE_IMPL.obj_exist(kb_id, location):
location += "_"
MINIO.put(kb_id, location, blob)
STORAGE_IMPL.put(kb_id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": kb.id,
@ -139,6 +139,8 @@ def web_crawl():
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
except Exception as e:
@ -189,6 +191,15 @@ def list_docs():
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kb_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
keywords = request.args.get("keywords", "")
page_number = int(request.args.get("page", 1))
@ -288,7 +299,7 @@ def rm():
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
@ -298,7 +309,7 @@ def rm():
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc_id)
MINIO.rm(b, n)
STORAGE_IMPL.rm(b, n)
except Exception as e:
errors += str(e)
@ -333,7 +344,7 @@ def run():
e, doc = DocumentService.get_by_id(id)
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name)
return get_json_result(data=True)
@ -384,8 +395,8 @@ def get(doc_id):
if not e:
return get_data_error_result(retmsg="Document not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
response = flask.make_response(MINIO.get(b, n))
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
ext = re.search(r"\.([^.]+)$", doc.name)
if ext:
@ -449,7 +460,7 @@ def change_parser():
def get_image(image_id):
try:
bkt, nm = image_id.split("-")
response = flask.make_response(MINIO.get(bkt, nm))
response = flask.make_response(STORAGE_IMPL.get(bkt, nm))
response.headers.set('Content-Type', 'image/JPEG')
return response
except Exception as e:

View File

@ -77,7 +77,7 @@ def convert():
doc = DocumentService.insert({
"id": get_uuid(),
"kb_id": kb.id,
"parser_id": kb.parser_id,
"parser_id": FileService.get_parser(file.type, file.name, kb.parser_id),
"parser_config": kb.parser_config,
"created_by": current_user.id,
"type": file.type,

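Rather than stamping every converted file with the knowledge base's parser_id, convert now delegates to FileService.get_parser. A plausible sketch of such a helper, mirroring the extension checks in the web_crawl hunk above (the exact branching is an assumption, not the actual implementation):

import re

def get_parser(doc_type: str, filename: str, default_parser_id: str) -> str:
    # Hypothetical reconstruction: pick a parser from the file's type or
    # extension, otherwise keep the knowledge base's default parser.
    if re.search(r"\.(eml)$", filename):
        return "email"
    if re.search(r"\.(ppt|pptx|pages)$", filename):
        return "presentation"
    if doc_type == "visual":
        return "picture"
    return default_parser_id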
View File

@ -34,7 +34,7 @@ from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
@manager.route('/upload', methods=['POST'])
@ -98,7 +98,7 @@ def upload():
# file type
filetype = filename_type(file_obj_names[file_len - 1])
location = file_obj_names[file_len - 1]
while MINIO.obj_exist(last_folder.id, location):
while STORAGE_IMPL.obj_exist(last_folder.id, location):
location += "_"
blob = file_obj.read()
filename = duplicate_name(
@ -116,7 +116,7 @@ def upload():
"size": len(blob),
}
file = FileService.insert(file)
MINIO.put(last_folder.id, location, blob)
STORAGE_IMPL.put(last_folder.id, location, blob)
file_res.append(file.to_json())
return get_json_result(data=file_res)
except Exception as e:
@ -260,7 +260,7 @@ def rm():
e, file = FileService.get_by_id(inner_file_id)
if not e:
return get_data_error_result(retmsg="File not found!")
MINIO.rm(file.parent_id, file.location)
STORAGE_IMPL.rm(file.parent_id, file.location)
FileService.delete_folder_by_pf_id(current_user.id, file_id)
else:
if not FileService.delete(file):
@ -296,7 +296,8 @@ def rename():
e, file = FileService.get_by_id(req["file_id"])
if not e:
return get_data_error_result(retmsg="File not found!")
if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
if file.type != FileType.FOLDER.value \
and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
file.name.lower()).suffix:
return get_json_result(
data=False,
@ -331,8 +332,8 @@ def get(file_id):
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
b, n = File2DocumentService.get_minio_address(file_id=file_id)
response = flask.make_response(MINIO.get(b, n))
b, n = File2DocumentService.get_storage_address(file_id=file_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
ext = re.search(r"\.([^.]+)$", file.name)
if ext:
if file.type == FileType.VISUAL.value:

View File

@ -100,6 +100,15 @@ def update():
def detail():
kb_id = request.args["kb_id"]
try:
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kb_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
kb = KnowledgebaseService.get_detail(kb_id)
if not kb:
return get_data_error_result(

View File

@ -13,23 +13,38 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from flask import request
from flask_login import login_required, current_user
from api.db.services.llm_service import LLMFactoriesService, TenantLLMService, LLMService
from api.settings import LIGHTEN
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM
from api.utils.api_utils import get_json_result
from rag.llm import EmbeddingModel, ChatModel, RerankModel,CvModel
from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel
import requests
import ast
@manager.route('/factories', methods=['GET'])
@login_required
def factories():
try:
fac = LLMFactoriesService.get_all()
return get_json_result(data=[f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI"]])
fac = [f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI"]]
llms = LLMService.get_all()
mdl_types = {}
for m in llms:
if m.status != StatusEnum.VALID.value:
continue
if m.fid not in mdl_types:
mdl_types[m.fid] = set([])
mdl_types[m.fid].add(m.model_type)
for f in fac:
f["model_types"] = list(mdl_types.get(f["name"], [LLMType.CHAT, LLMType.EMBEDDING, LLMType.RERANK,
LLMType.IMAGE2TEXT, LLMType.SPEECH2TEXT, LLMType.TTS]))
return get_json_result(data=fac)
except Exception as e:
return server_error_response(e)
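After this change each entry returned by /factories carries a model_types list aggregated from the valid LLM rows, with the full type list as a fallback for factories that have no registered models. One entry then looks roughly like this (values are examples):

factory_entry = {
    "name": "SomeFactory",                           # from LLMFactoriesService.get_all()
    "model_types": ["chat", "embedding", "rerank"],  # from mdl_types.get(name, <all types>)
}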
@ -43,7 +58,7 @@ def set_api_key():
chat_passed, embd_passed, rerank_passed = False, False, False
factory = req["llm_factory"]
msg = ""
for llm in LLMService.query(fid=factory):
for llm in LLMService.query(fid=factory)[:3]:
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
mdl = EmbeddingModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
@ -81,24 +96,27 @@ def set_api_key():
if msg:
return get_data_error_result(retmsg=msg)
llm = {
llm_config = {
"api_key": req["api_key"],
"api_base": req.get("base_url", "")
}
for n in ["model_type", "llm_name"]:
if n in req:
llm[n] = req[n]
llm_config[n] = req[n]
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory], llm):
for llm in LLMService.query(fid=factory):
for llm in LLMService.query(fid=factory):
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id,
TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm.llm_name],
llm_config):
TenantLLMService.save(
tenant_id=current_user.id,
llm_factory=factory,
llm_name=llm.llm_name,
model_type=llm.model_type,
api_key=req["api_key"],
api_base=req.get("base_url", "")
api_key=llm_config["api_key"],
api_base=llm_config["api_base"]
)
return get_json_result(data=True)
@ -111,43 +129,63 @@ def add_llm():
req = request.json
factory = req["llm_factory"]
def apikey_json(keys):
nonlocal req
return json.dumps({k: req.get(k, "") for k in keys})
if factory == "VolcEngine":
# For VolcEngine, due to its special authentication method
# Assemble volc_ak, volc_sk, endpoint_id into api_key
temp = list(ast.literal_eval(req["llm_name"]).items())[0]
llm_name = temp[0]
endpoint_id = temp[1]
api_key = '{' + f'"volc_ak": "{req.get("volc_ak", "")}", ' \
f'"volc_sk": "{req.get("volc_sk", "")}", ' \
f'"ep_id": "{endpoint_id}", ' + '}'
# Assemble ark_api_key endpoint_id into api_key
llm_name = req["llm_name"]
api_key = apikey_json(["ark_api_key", "endpoint_id"])
elif factory == "Tencent Hunyuan":
api_key = '{' + f'"hunyuan_sid": "{req.get("hunyuan_sid", "")}", ' \
f'"hunyuan_sk": "{req.get("hunyuan_sk", "")}"' + '}'
req["api_key"] = api_key
req["api_key"] = apikey_json(["hunyuan_sid", "hunyuan_sk"])
return set_api_key()
elif factory == "Tencent Cloud":
req["api_key"] = apikey_json(["tencent_cloud_sid", "tencent_cloud_sk"])
elif factory == "Bedrock":
# For Bedrock, due to its special authentication method
# Assemble bedrock_ak, bedrock_sk, bedrock_region
llm_name = req["llm_name"]
api_key = '{' + f'"bedrock_ak": "{req.get("bedrock_ak", "")}", ' \
f'"bedrock_sk": "{req.get("bedrock_sk", "")}", ' \
f'"bedrock_region": "{req.get("bedrock_region", "")}", ' + '}'
api_key = apikey_json(["bedrock_ak", "bedrock_sk", "bedrock_region"])
elif factory == "LocalAI":
llm_name = req["llm_name"]+"___LocalAI"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "HuggingFace":
llm_name = req["llm_name"]+"___HuggingFace"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "OpenAI-API-Compatible":
llm_name = req["llm_name"]+"___OpenAI-API"
api_key = req.get("api_key","xxxxxxxxxxxxxxx")
elif factory =="XunFei Spark":
llm_name = req["llm_name"]
api_key = req.get("spark_api_password","xxxxxxxxxxxxxxx")
if req["model_type"] == "chat":
api_key = req.get("spark_api_password", "xxxxxxxxxxxxxxx")
elif req["model_type"] == "tts":
api_key = apikey_json(["spark_app_id", "spark_api_secret","spark_api_key"])
elif factory == "BaiduYiyan":
llm_name = req["llm_name"]
api_key = '{' + f'"yiyan_ak": "{req.get("yiyan_ak", "")}", ' \
f'"yiyan_sk": "{req.get("yiyan_sk", "")}"' + '}'
api_key = apikey_json(["yiyan_ak", "yiyan_sk"])
elif factory == "Fish Audio":
llm_name = req["llm_name"]
api_key = apikey_json(["fish_audio_ak", "fish_audio_refid"])
elif factory == "Google Cloud":
llm_name = req["llm_name"]
api_key = apikey_json(["google_project_id", "google_region", "google_service_account_key"])
else:
llm_name = req["llm_name"]
api_key = req.get("api_key","xxxxxxxxxxxxxxx")
api_key = req.get("api_key", "xxxxxxxxxxxxxxx")
llm = {
"tenant_id": current_user.id,
@ -218,6 +256,15 @@ def add_llm():
pass
except Exception as e:
msg += f"\nFail to access model({llm['llm_name']})." + str(e)
elif llm["model_type"] == LLMType.TTS:
mdl = TTSModel[factory](
key=llm["api_key"], model_name=llm["llm_name"], base_url=llm["api_base"]
)
try:
for resp in mdl.tts("Hello~ Ragflower!"):
pass
except RuntimeError as e:
msg += f"\nFail to access model({llm['llm_name']})." + str(e)
else:
# TODO: check other type of models
pass
@ -242,6 +289,16 @@ def delete_llm():
return get_json_result(data=True)
@manager.route('/delete_factory', methods=['POST'])
@login_required
@validate_request("llm_factory")
def delete_factory():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
return get_json_result(data=True)
@manager.route('/my_llms', methods=['GET'])
@login_required
def my_llms():
@ -266,15 +323,17 @@ def my_llms():
@manager.route('/list', methods=['GET'])
@login_required
def list_app():
self_deploied = ["Youdao","FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio"]
weighted = ["Youdao","FastEmbed", "BAAI"] if LIGHTEN else []
model_type = request.args.get("model_type")
try:
objs = TenantLLMService.query(tenant_id=current_user.id)
facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key])
llms = LLMService.get_all()
llms = [m.to_dict()
for m in llms if m.status == StatusEnum.VALID.value]
for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted]
for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in ["Youdao","FastEmbed", "BAAI"]
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deploied
llm_set = set([m["llm_name"] for m in llms])
for o in objs:

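The new apikey_json helper replaces the hand-assembled credential strings above, several of which produced malformed JSON (note the dangling trailing commas inside the old VolcEngine and Bedrock literals). Under the same request shape, the Bedrock case becomes:

import json

req = {"bedrock_ak": "AK123", "bedrock_sk": "SK456", "bedrock_region": "us-east-1"}

def apikey_json(keys):
    # Missing keys default to "", so the stored api_key always parses as JSON.
    return json.dumps({k: req.get(k, "") for k in keys})

print(apikey_json(["bedrock_ak", "bedrock_sk", "bedrock_region"]))
# {"bedrock_ak": "AK123", "bedrock_sk": "SK456", "bedrock_region": "us-east-1"}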
304
api/apps/sdk/assistant.py Normal file
View File

@ -0,0 +1,304 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum
from api.db.db_models import TenantLLM
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMService, TenantLLMService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result, token_required
from api.utils.api_utils import get_json_result
@manager.route('/save', methods=['POST'])
@token_required
def save(tenant_id):
req = request.json
# dataset
if req.get("knowledgebases") == []:
return get_data_error_result(retmsg="knowledgebases can not be empty list")
kb_list = []
if req.get("knowledgebases"):
for kb in req.get("knowledgebases"):
if not kb["id"]:
return get_data_error_result(retmsg="knowledgebase needs id")
if not KnowledgebaseService.query(id=kb["id"], tenant_id=tenant_id):
return get_data_error_result(retmsg="you do not own the knowledgebase")
# if not DocumentService.query(kb_id=kb["id"]):
# return get_data_error_result(retmsg="There is a invalid knowledgebase")
kb_list.append(kb["id"])
req["kb_ids"] = kb_list
# llm
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_data_error_result(retmsg="Tenant not found!")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
# create
if "id" not in req:
# dataset
if not kb_list:
return get_data_error_result(retmsg="knowledgebases are required!")
# init
req["id"] = get_uuid()
req["description"] = req.get("description", "A helpful Assistant")
req["icon"] = req.get("avatar", "")
req["top_n"] = req.get("top_n", 6)
req["top_k"] = req.get("top_k", 1024)
req["rerank_id"] = req.get("rerank_id", "")
if req.get("llm_id"):
if not TenantLLMService.query(llm_name=req["llm_id"]):
return get_data_error_result(retmsg="the model_name does not exist.")
else:
req["llm_id"] = tenant.llm_id
if not req.get("name"):
return get_data_error_result(retmsg="name is required.")
if DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="Duplicated assistant name while creating the assistant.")
# tenant_id
if req.get("tenant_id"):
return get_data_error_result(retmsg="tenant_id must not be provided.")
req["tenant_id"] = tenant_id
# prompt more parameter
default_prompt = {
"system": """你是一个智能助手,请总结知识库的内容来回答问题,请列举知识库中的数据详细回答。当所有知识库内容都与问题无关时,你的回答必须包括“知识库中未找到您要的答案!”这句话。回答需要考虑聊天历史。
以下是知识库:
{knowledge}
以上是知识库。""",
"prologue": "您好!我是您的助手小樱,长得可爱又善良,can I help you?",
"parameters": [
{"key": "knowledge", "optional": False}
],
"empty_response": "Sorry! 知识库中未找到相关内容!"
}
key_list_2 = ["system", "prologue", "parameters", "empty_response"]
if "prompt_config" not in req:
req['prompt_config'] = {}
for key in key_list_2:
temp = req['prompt_config'].get(key)
if not temp:
req['prompt_config'][key] = default_prompt[key]
for p in req['prompt_config']["parameters"]:
if p["optional"]:
continue
if req['prompt_config']["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(
retmsg="Parameter '{}' is not used".format(p["key"]))
# save
if not DialogService.save(**req):
return get_data_error_result(retmsg="Fail to new an assistant!")
# response
e, res = DialogService.get_by_id(req["id"])
if not e:
return get_data_error_result(retmsg="Fail to new an assistant!")
res = res.to_json()
renamed_dict = {}
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
del res["kb_ids"]
res["knowledgebases"] = req["knowledgebases"]
res["avatar"] = res.pop("icon")
return get_json_result(data=res)
else:
# authorization
if not DialogService.query(tenant_id=tenant_id, id=req["id"], status=StatusEnum.VALID.value):
return get_json_result(data=False, retmsg='You do not own the assistant', retcode=RetCode.OPERATING_ERROR)
# prompt
if not req["id"]:
return get_data_error_result(retmsg="id can not be empty")
e, res = DialogService.get_by_id(req["id"])
res = res.to_json()
if "llm_id" in req:
if not TenantLLMService.query(llm_name=req["llm_id"]):
return get_data_error_result(retmsg="the model_name does not exist.")
if "name" in req:
if not req.get("name"):
return get_data_error_result(retmsg="name can not be empty.")
if req["name"].lower() != res["name"].lower() \
and len(
DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)) > 0:
return get_data_error_result(retmsg="Duplicated assistant name while updating the assistant.")
if "prompt_config" in req:
res["prompt_config"].update(req["prompt_config"])
for p in res["prompt_config"]["parameters"]:
if p["optional"]:
continue
if res["prompt_config"]["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(retmsg="Parameter '{}' is not used".format(p["key"]))
if "llm_setting" in req:
res["llm_setting"].update(req["llm_setting"])
req["prompt_config"] = res["prompt_config"]
req["llm_setting"] = res["llm_setting"]
# avatar
if "avatar" in req:
req["icon"] = req.pop("avatar")
assistant_id = req.pop("id")
if "knowledgebases" in req:
req.pop("knowledgebases")
if not DialogService.update_by_id(assistant_id, req):
return get_data_error_result(retmsg="Assistant not found!")
return get_json_result(data=True)
@manager.route('/delete', methods=['DELETE'])
@token_required
def delete(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(retmsg="id is required")
id = req['id']
if not DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value):
return get_json_result(data=False, retmsg='you do not own the assistant.', retcode=RetCode.OPERATING_ERROR)
temp_dict = {"status": StatusEnum.INVALID.value}
DialogService.update_by_id(req["id"], temp_dict)
return get_json_result(data=True)
@manager.route('/get', methods=['GET'])
@token_required
def get(tenant_id):
req = request.args
if "id" in req:
id = req["id"]
ass = DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value)
if not ass:
return get_json_result(data=False, retmsg='You do not own the assistant.', retcode=RetCode.OPERATING_ERROR)
if "name" in req:
name = req["name"]
if ass[0].name != name:
return get_json_result(data=False, retmsg='name does not match id.', retcode=RetCode.OPERATING_ERROR)
res = ass[0].to_json()
else:
if "name" in req:
name = req["name"]
ass = DialogService.query(name=name, tenant_id=tenant_id, status=StatusEnum.VALID.value)
if not ass:
return get_json_result(data=False, retmsg='You do not own the assistant.',
retcode=RetCode.OPERATING_ERROR)
res = ass[0].to_json()
else:
return get_data_error_result(retmsg="At least one of `id` or `name` must be provided.")
renamed_dict = {}
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["knowledgebases"] = kb_list
res["avatar"] = res.pop("icon")
return get_json_result(data=res)
@manager.route('/list', methods=['GET'])
@token_required
def list_assistants(tenant_id):
assts = DialogService.query(
tenant_id=tenant_id,
status=StatusEnum.VALID.value,
reverse=True,
order_by=DialogService.model.create_time)
assts = [d.to_dict() for d in assts]
list_assts = []
renamed_dict = {}
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for res in assts:
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["knowledgebases"] = kb_list
res["avatar"] = res.pop("icon")
list_assts.append(res)
return get_json_result(data=list_assts)
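The same key_mapping/key_list renaming is repeated verbatim in save, get, and list_assistants; note also that list_assistants creates renamed_dict once outside its loop, so prompt fields from one assistant can bleed into the next. A sketch of a shared helper that avoids both issues (a hypothetical refactor, not part of this diff):

KEY_MAPPING = {
    "parameters": "variables",
    "prologue": "opener",
    "quote": "show_quote",
    "system": "prompt",
    "rerank_id": "rerank_model",
    "vector_similarity_weight": "keywords_similarity_weight",
}
KEY_LIST = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]

def rename_prompt_fields(res: dict) -> dict:
    # Build a fresh dict per record, so fields cannot leak between assistants.
    res["prompt"] = {KEY_MAPPING.get(k, k): v for k, v in res.pop("prompt_config").items()}
    res["prompt"].update({
        "similarity_threshold": res["similarity_threshold"],
        "keywords_similarity_weight": res["vector_similarity_weight"],
        "top_n": res["top_n"],
        "rerank_model": res["rerank_id"],
    })
    for key in KEY_LIST:
        del res[key]
    res["llm"] = res.pop("llm_setting")
    res["llm"]["model_name"] = res.pop("llm_id")
    return res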

224
api/apps/sdk/dataset.py Normal file
View File

@ -0,0 +1,224 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum, FileSource
from api.db.db_models import File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, token_required, get_data_error_result
@manager.route('/save', methods=['POST'])
@token_required
def save(tenant_id):
req = request.json
e, t = TenantService.get_by_id(tenant_id)
if "id" not in req:
if "tenant_id" in req or "embedding_model" in req:
return get_data_error_result(
retmsg="Tenant_id or embedding_model must not be provided")
if "name" not in req:
return get_data_error_result(
retmsg="Name can not be empty!")
req['id'] = get_uuid()
req["name"] = req["name"].strip()
if req["name"] == "":
return get_data_error_result(
retmsg="Name can not be an empty string!")
if KnowledgebaseService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(
retmsg="Duplicated knowledgebase name in creating dataset.")
req["tenant_id"] = req['created_by'] = tenant_id
req['embedding_model'] = t.embd_id
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "parse_method",
"embd_id": "embedding_model"
}
mapped_keys = {new_key: req[old_key] for new_key, old_key in key_mapping.items() if old_key in req}
req.update(mapped_keys)
if not KnowledgebaseService.save(**req):
return get_data_error_result(retmsg="Create dataset error. (Database error)")
renamed_data = {}
e, k = KnowledgebaseService.get_by_id(req["id"])
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_json_result(data=renamed_data)
else:
invalid_keys = {"embd_id", "chunk_num", "doc_num", "parser_id"}
if any(key in req for key in invalid_keys):
return get_data_error_result(retmsg="The input parameters are invalid.")
if "tenant_id" in req:
if req["tenant_id"] != tenant_id:
return get_data_error_result(
retmsg="Can't change tenant_id.")
if "embedding_model" in req:
if req["embedding_model"] != t.embd_id:
return get_data_error_result(
retmsg="Can't change embedding_model.")
req.pop("embedding_model")
if not KnowledgebaseService.query(
created_by=tenant_id, id=req["id"]):
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
if not req["id"]:
return get_data_error_result(
retmsg="id can not be empty.")
e, kb = KnowledgebaseService.get_by_id(req["id"])
if "chunk_count" in req:
if req["chunk_count"] != kb.chunk_num:
return get_data_error_result(
retmsg="Can't change chunk_count.")
req.pop("chunk_count")
if "document_count" in req:
if req['document_count'] != kb.doc_num:
return get_data_error_result(
retmsg="Can't change document_count.")
req.pop("document_count")
if "parse_method" in req:
if kb.chunk_num != 0 and req['parse_method'] != kb.parser_id:
return get_data_error_result(
retmsg="If chunk count is not 0, parse method is not changeable.")
req['parser_id'] = req.pop('parse_method')
if "name" in req:
req["name"] = req["name"].strip()
if req["name"].lower() != kb.name.lower() \
and len(KnowledgebaseService.query(name=req["name"], tenant_id=tenant_id,
status=StatusEnum.VALID.value)) > 0:
return get_data_error_result(
retmsg="Duplicated knowledgebase name in updating dataset.")
del req["id"]
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_data_error_result(retmsg="Update dataset error. (Database error)")
return get_json_result(data=True)
@manager.route('/delete', methods=['DELETE'])
@token_required
def delete(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(
retmsg="id is required")
kbs = KnowledgebaseService.query(
created_by=tenant_id, id=req["id"])
if not kbs:
return get_json_result(
data=False, retmsg='You do not own the dataset',
retcode=RetCode.OPERATING_ERROR)
for doc in DocumentService.query(kb_id=req["id"]):
if not DocumentService.remove_document(doc, kbs[0].tenant_id):
return get_data_error_result(
retmsg="Remove document error.(Database error)")
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc.id)
if not KnowledgebaseService.delete_by_id(req["id"]):
return get_data_error_result(
retmsg="Delete dataset error. (Database error)")
return get_json_result(data=True)
@manager.route('/list', methods=['GET'])
@token_required
def list_datasets(tenant_id):
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 1024))
orderby = request.args.get("orderby", "create_time")
desc = bool(request.args.get("desc", True))
tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
kbs = KnowledgebaseService.get_by_tenant_ids(
[m["tenant_id"] for m in tenants], tenant_id, page_number, items_per_page, orderby, desc)
renamed_list = []
for kb in kbs:
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "parse_method",
"embd_id": "embedding_model"
}
renamed_data = {}
for key, value in kb.items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
renamed_list.append(renamed_data)
return get_json_result(data=renamed_list)
@manager.route('/detail', methods=['GET'])
@token_required
def detail(tenant_id):
req = request.args
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "parse_method",
"embd_id": "embedding_model"
}
renamed_data = {}
if "id" in req:
id = req["id"]
kb = KnowledgebaseService.query(created_by=tenant_id, id=req["id"])
if not kb:
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
if "name" in req:
name = req["name"]
if kb[0].name != name:
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
e, k = KnowledgebaseService.get_by_id(id)
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_json_result(data=renamed_data)
else:
if "name" in req:
name = req["name"]
e, k = KnowledgebaseService.get_by_name(kb_name=name, tenant_id=tenant_id)
if not e:
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_json_result(data=renamed_data)
else:
return get_data_error_result(
retmsg="At least one of `id` or `name` must be provided.")

720
api/apps/sdk/doc.py Normal file
View File

@ -0,0 +1,720 @@
import datetime
import hashlib
import json
import pathlib
import re
import traceback
from functools import partial
from io import BytesIO

from elasticsearch_dsl import Q
from flask import request, send_file
from flask_login import login_required, current_user

from api.db import LLMType, ParserType, FileSource, TaskStatus, FileType
from api.db.db_models import Task, File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService
from api.db.services.task_service import TaskService, queue_tasks
from api.db.services.user_service import TenantService, UserTenantService
from api.settings import RetCode, retrievaler, kg_retrievaler
from api.utils.api_utils import (construct_error_response, construct_json_result,
                                 get_data_error_result, get_json_result,
                                 server_error_response, token_required, validate_request)
from rag.app import book, laws, manual, naive, one, paper, presentation, qa, resume, table, picture, audio, email
from rag.app.qa import rmPrefix, beAdoc
from rag.nlp import search, rag_tokenizer, keyword_extraction
from rag.utils import rmSpace
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.storage_factory import STORAGE_IMPL

MAXIMUM_OF_UPLOADING_FILES = 256
@manager.route('/dataset/<dataset_id>/documents/upload', methods=['POST'])
@token_required
def upload(dataset_id, tenant_id):
if 'file' not in request.files:
return get_json_result(
data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
e, kb = KnowledgebaseService.get_by_id(dataset_id)
if not e:
raise LookupError(f"Can't find the knowledgebase with ID {dataset_id}!")
err, _ = FileService.upload_document(kb, file_objs, tenant_id)
if err:
return get_json_result(
data=False, retmsg="\n".join(err), retcode=RetCode.SERVER_ERROR)
return get_json_result(data=True)
@manager.route('/infos', methods=['GET'])
@token_required
def docinfos(tenant_id):
req = request.args
if "id" not in req and "name" not in req:
return get_data_error_result(
retmsg="Id or name should be provided")
doc_id=None
if "id" in req:
doc_id = req["id"]
if "name" in req:
doc_name = req["name"]
doc_id = DocumentService.get_doc_id_by_doc_name(doc_name)
e, doc = DocumentService.get_by_id(doc_id)
#rename key's name
key_mapping = {
"chunk_num": "chunk_count",
"kb_id": "knowledgebase_id",
"token_num": "token_count",
"parser_id":"parser_method",
}
renamed_doc = {}
for key, value in doc.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_doc[new_key] = value
return get_json_result(data=renamed_doc)
@manager.route('/save', methods=['POST'])
@token_required
def save_doc(tenant_id):
req = request.json
#get doc by id or name
doc_id = None
if "id" in req:
doc_id = req["id"]
elif "name" in req:
doc_name = req["name"]
doc_id = DocumentService.get_doc_id_by_doc_name(doc_name)
if not doc_id:
return get_json_result(retcode=400, retmsg="Document ID or name is required")
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
#other value can't be changed
if "chunk_count" in req:
if req["chunk_count"] != doc.chunk_num:
return get_data_error_result(
retmsg="Can't change chunk_count.")
if "token_count" in req:
if req["token_count"] != doc.token_num:
return get_data_error_result(
retmsg="Can't change token_count.")
if "progress" in req:
if req['progress'] != doc.progress:
return get_data_error_result(
retmsg="Can't change progress.")
#change name or parse_method
if "name" in req and req["name"] != doc.name:
try:
if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
doc.name.lower()).suffix:
return get_json_result(
data=False,
retmsg="The extension of file can't be changed",
retcode=RetCode.ARGUMENT_ERROR)
for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
if d.name == req["name"]:
return get_data_error_result(
retmsg="Duplicated document name in the same knowledgebase.")
if not DocumentService.update_by_id(
doc_id, {"name": req["name"]}):
return get_data_error_result(
retmsg="Database error (Document rename)!")
informs = File2DocumentService.get_by_document_id(doc_id)
if informs:
e, file = FileService.get_by_id(informs[0].file_id)
FileService.update_by_id(file.id, {"name": req["name"]})
except Exception as e:
return server_error_response(e)
if "parser_method" in req:
try:
if doc.parser_id.lower() == req["parser_method"].lower():
if "parser_config" in req:
if req["parser_config"] == doc.parser_config:
return get_json_result(data=True)
else:
return get_json_result(data=True)
if doc.type == FileType.VISUAL or re.search(
r"\.(ppt|pptx|pages)$", doc.name):
return get_data_error_result(retmsg="Not supported yet!")
e = DocumentService.update_by_id(doc.id,
{"parser_id": req["parser_method"], "progress": 0, "progress_msg": "",
"run": TaskStatus.UNSTART.value})
if not e:
return get_data_error_result(retmsg="Document not found!")
if "parser_config" in req:
DocumentService.update_parser_config(doc.id, req["parser_config"])
if doc.token_num > 0:
e = DocumentService.increment_chunk_num(doc.id, doc.kb_id, doc.token_num * -1, doc.chunk_num * -1,
doc.process_duation * -1)
if not e:
return get_data_error_result(retmsg="Document not found!")
tenant_id = DocumentService.get_tenant_id(req["id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))
except Exception as e:
return server_error_response(e)
return get_json_result(data=True)
@manager.route('/change_parser', methods=['POST'])
@token_required
def change_parser(tenant_id):
req = request.json
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
if doc.parser_id.lower() == req["parser_id"].lower():
if "parser_config" in req:
if req["parser_config"] == doc.parser_config:
return get_json_result(data=True)
else:
return get_json_result(data=True)
if doc.type == FileType.VISUAL or re.search(
r"\.(ppt|pptx|pages)$", doc.name):
return get_data_error_result(retmsg="Not supported yet!")
e = DocumentService.update_by_id(doc.id,
{"parser_id": req["parser_id"], "progress": 0, "progress_msg": "",
"run": TaskStatus.UNSTART.value})
if not e:
return get_data_error_result(retmsg="Document not found!")
if "parser_config" in req:
DocumentService.update_parser_config(doc.id, req["parser_config"])
if doc.token_num > 0:
e = DocumentService.increment_chunk_num(doc.id, doc.kb_id, doc.token_num * -1, doc.chunk_num * -1,
doc.process_duation * -1)
if not e:
return get_data_error_result(retmsg="Document not found!")
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/rename', methods=['POST'])
@login_required
@validate_request("doc_id", "name")
def rename():
req = request.json
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
doc.name.lower()).suffix:
return get_json_result(
data=False,
retmsg="The extension of file can't be changed",
retcode=RetCode.ARGUMENT_ERROR)
for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
if d.name == req["name"]:
return get_data_error_result(
retmsg="Duplicated document name in the same knowledgebase.")
if not DocumentService.update_by_id(
req["doc_id"], {"name": req["name"]}):
return get_data_error_result(
retmsg="Database error (Document rename)!")
informs = File2DocumentService.get_by_document_id(req["doc_id"])
if informs:
e, file = FileService.get_by_id(informs[0].file_id)
FileService.update_by_id(file.id, {"name": req["name"]})
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route("/<document_id>", methods=["GET"])
@token_required
def download_document(document_id,tenant_id):
try:
# Check whether there is this document
exist, document = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document '{document_id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
# The process of downloading
doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
file = BytesIO(file_stream)
# Use send_file with a proper filename and MIME type
return send_file(
file,
as_attachment=True,
download_name=document.name,
mimetype='application/octet-stream' # Set a default MIME type
)
# Error
except Exception as e:
return construct_error_response(e)
@manager.route('/dataset/<dataset_id>/documents', methods=['GET'])
@token_required
def list_docs(dataset_id, tenant_id):
kb_id = request.args.get("knowledgebase_id")
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
tenants = UserTenantService.query(user_id=tenant_id)
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kb_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
keywords = request.args.get("keywords", "")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 15))
orderby = request.args.get("orderby", "create_time")
desc = request.args.get("desc", True)
try:
docs, tol = DocumentService.get_by_kb_id(
kb_id, page_number, items_per_page, orderby, desc, keywords)
# rename key's name
renamed_doc_list = []
for doc in docs:
key_mapping = {
"chunk_num": "chunk_count",
"kb_id": "knowledgebase_id",
"token_num": "token_count",
"parser_id":"parser_method"
}
renamed_doc = {}
for key, value in doc.items():
new_key = key_mapping.get(key, key)
renamed_doc[new_key] = value
renamed_doc_list.append(renamed_doc)
return get_json_result(data={"total": tol, "docs": renamed_doc_list})
except Exception as e:
return server_error_response(e)
@manager.route('/delete', methods=['DELETE'])
@token_required
def rm(tenant_id):
req = request.args
if "document_id" not in req:
return get_data_error_result(
retmsg="document_id is required")
doc_ids = req["document_id"]
if isinstance(doc_ids, str): doc_ids = [doc_ids]
root_folder = FileService.get_root_folder(tenant_id)
pf_id = root_folder["id"]
FileService.init_knowledgebase_docs(pf_id, tenant_id)
errors = ""
for doc_id in doc_ids:
try:
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
f2d = File2DocumentService.get_by_document_id(doc_id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc_id)
STORAGE_IMPL.rm(b, n)
except Exception as e:
errors += str(e)
if errors:
return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)
return get_json_result(data=True, retmsg="success")
@manager.route("/<document_id>/status", methods=["GET"])
@token_required
def show_parsing_status(tenant_id, document_id):
try:
# valid document
exist, _ = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This document: '{document_id}' is not a valid document.")
_, doc = DocumentService.get_by_id(document_id) # get doc object
doc_attributes = doc.to_dict()
return construct_json_result(
data={"progress": doc_attributes["progress"], "status": TaskStatus(doc_attributes["status"]).name},
code=RetCode.SUCCESS
)
except Exception as e:
return construct_error_response(e)
@manager.route('/run', methods=['POST'])
@token_required
def run(tenant_id):
req = request.json
try:
for id in req["document_ids"]:
info = {"run": str(req["run"]), "progress": 0}
if str(req["run"]) == TaskStatus.RUNNING.value:
info["progress_msg"] = ""
info["chunk_num"] = 0
info["token_num"] = 0
DocumentService.update_by_id(id, info)
# if str(req["run"]) == TaskStatus.CANCEL.value:
tenant_id = DocumentService.get_tenant_id(id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=id), idxnm=search.index_name(tenant_id))
if str(req["run"]) == TaskStatus.RUNNING.value:
TaskService.filter_delete([Task.doc_id == id])
e, doc = DocumentService.get_by_id(id)
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/chunk/list', methods=['POST'])
@token_required
@validate_request("document_id")
def list_chunk(tenant_id):
req = request.json
doc_id = req["document_id"]
page = int(req.get("page", 1))
size = int(req.get("size", 30))
question = req.get("keywords", "")
try:
tenant_id = DocumentService.get_tenant_id(req["document_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
query = {
"doc_ids": [doc_id], "page": page, "size": size, "question": question, "sort": True
}
if "available_int" in req:
query["available_int"] = int(req["available_int"])
sres = retrievaler.search(query, search.index_name(tenant_id), highlight=True)
res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
origin_chunks=[]
for id in sres.ids:
d = {
"chunk_id": id,
"content_with_weight": rmSpace(sres.highlight[id]) if question and id in sres.highlight else sres.field[
id].get(
"content_with_weight", ""),
"doc_id": sres.field[id]["doc_id"],
"docnm_kwd": sres.field[id]["docnm_kwd"],
"important_kwd": sres.field[id].get("important_kwd", []),
"img_id": sres.field[id].get("img_id", ""),
"available_int": sres.field[id].get("available_int", 1),
"positions": sres.field[id].get("position_int", "").split("\t")
}
if len(d["positions"]) % 5 == 0:
poss = []
for i in range(0, len(d["positions"]), 5):
poss.append([float(d["positions"][i]), float(d["positions"][i + 1]), float(d["positions"][i + 2]),
float(d["positions"][i + 3]), float(d["positions"][i + 4])])
d["positions"] = poss
origin_chunks.append(d)
##rename keys
for chunk in origin_chunks:
key_mapping = {
"chunk_id": "id",
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"img_id":"image_id",
}
renamed_chunk = {}
for key, value in chunk.items():
new_key = key_mapping.get(key, key)
renamed_chunk[new_key] = value
res["chunks"].append(renamed_chunk)
return get_json_result(data=res)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found!',
retcode=RetCode.DATA_ERROR)
return server_error_response(e)
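list_chunk stores each chunk's layout as a flat tab-separated string and regroups it into five-number tuples (by convention a page number plus four box coordinates; that layout is an assumption here). The regrouping in isolation:

raw = "1\t10\t200\t30\t60\t2\t15\t180\t40\t80"
flat = raw.split("\t")
assert len(flat) % 5 == 0          # same divisibility guard as in list_chunk
positions = [[float(x) for x in flat[i:i + 5]] for i in range(0, len(flat), 5)]
print(positions)  # [[1.0, 10.0, 200.0, 30.0, 60.0], [2.0, 15.0, 180.0, 40.0, 80.0]]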
@manager.route('/chunk/create', methods=['POST'])
@token_required
@validate_request("document_id", "content")
def create(tenant_id):
req = request.json
md5 = hashlib.md5()
md5.update((req["content"] + req["document_id"]).encode("utf-8"))
chunk_id = md5.hexdigest()
d = {"id": chunk_id, "content_ltks": rag_tokenizer.tokenize(req["content"]),
"content_with_weight": req["content"]}
d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
d["important_kwd"] = req.get("important_kwd", [])
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req.get("important_kwd", [])))
d["create_time"] = str(datetime.datetime.now()).replace("T", " ")[:19]
d["create_timestamp_flt"] = datetime.datetime.now().timestamp()
try:
e, doc = DocumentService.get_by_id(req["document_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
d["kb_id"] = [doc.kb_id]
d["docnm_kwd"] = doc.name
d["doc_id"] = doc.id
tenant_id = DocumentService.get_tenant_id(req["document_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["document_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
v, c = embd_mdl.encode([doc.name, req["content"]])
v = 0.1 * v[0] + 0.9 * v[1]
d["q_%d_vec" % len(v)] = v.tolist()
ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
DocumentService.increment_chunk_num(
doc.id, doc.kb_id, c, 1, 0)
d["chunk_id"] = chunk_id
#rename keys
key_mapping = {
"chunk_id": "id",
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"kb_id":"dataset_id",
"create_timestamp_flt":"create_timestamp",
"create_time": "create_time",
"document_keyword":"document",
}
renamed_chunk = {}
for key, value in d.items():
if key in key_mapping:
new_key = key_mapping.get(key, key)
renamed_chunk[new_key] = value
return get_json_result(data={"chunk": renamed_chunk})
# return get_json_result(data={"chunk_id": chunk_id})
except Exception as e:
return server_error_response(e)
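create derives the chunk id deterministically by hashing content plus document id, so identical content within a document maps to a stable Elasticsearch id; the embedding is then a 0.1/0.9 blend of the document-name vector and the content vector, weighting the content heavily while still letting the title influence retrieval. The id derivation in isolation:

import hashlib

def chunk_id_for(content: str, document_id: str) -> str:
    # Identical content within the same document always yields the same id.
    md5 = hashlib.md5()
    md5.update((content + document_id).encode("utf-8"))
    return md5.hexdigest()

print(chunk_id_for("hello", "doc-1"))  # a stable 32-character hex digest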
@manager.route('/chunk/rm', methods=['POST'])
@token_required
@validate_request("chunk_ids", "document_id")
def rm_chunk(tenant_id):
req = request.json
try:
if not ELASTICSEARCH.deleteByQuery(
Q("ids", values=req["chunk_ids"]), search.index_name(tenant_id)):
return get_data_error_result(retmsg="Index updating failure")
e, doc = DocumentService.get_by_id(req["document_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
deleted_chunk_ids = req["chunk_ids"]
chunk_number = len(deleted_chunk_ids)
DocumentService.decrement_chunk_num(doc.id, doc.kb_id, 1, chunk_number, 0)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/chunk/set', methods=['POST'])
@token_required
@validate_request("document_id", "chunk_id", "content",
"important_keywords")
def set(tenant_id):
req = request.json
d = {
"id": req["chunk_id"],
"content_with_weight": req["content"]}
d["content_ltks"] = rag_tokenizer.tokenize(req["content"])
d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
d["important_kwd"] = req["important_keywords"]
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_keywords"]))
if "available" in req:
d["available_int"] = req["available"]
try:
tenant_id = DocumentService.get_tenant_id(req["document_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["document_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
e, doc = DocumentService.get_by_id(req["document_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
if doc.parser_id == ParserType.QA:
arr = [
t for t in re.split(
r"[\n\t]",
req["content"]) if len(t) > 1]
if len(arr) != 2:
return get_data_error_result(
retmsg="Q&A must be separated by TAB/ENTER key.")
q, a = rmPrefix(arr[0]), rmPrefix(arr[1])
d = beAdoc(d, arr[0], arr[1], not any(
[rag_tokenizer.is_chinese(t) for t in q + a]))
v, c = embd_mdl.encode([doc.name, req["content"]])
v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
d["q_%d_vec" % len(v)] = v.tolist()
ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/retrieval_test', methods=['POST'])
@token_required
@validate_request("knowledgebase_id", "question")
def retrieval_test(tenant_id):
req = request.json
page = int(req.get("page", 1))
size = int(req.get("size", 30))
question = req["question"]
kb_id = req["knowledgebase_id"]
if isinstance(kb_id, str): kb_id = [kb_id]
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.2))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
top = int(req.get("top_k", 1024))
try:
tenants = UserTenantService.query(user_id=tenant_id)
for kid in kb_id:
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kid):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id[0])
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
embd_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
if req.get("keyword", False):
chat_mdl = TenantLLMService.model_instance(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
retr = retrievaler if kb.parser_id != ParserType.KG else kg_retrievaler
ranks = retr.retrieval(question, embd_mdl, kb.tenant_id, kb_id, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"))
for c in ranks["chunks"]:
if "vector" in c:
del c["vector"]
##rename keys
renamed_chunks=[]
for chunk in ranks["chunks"]:
key_mapping = {
"chunk_id": "id",
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"docnm_kwd":"document_keyword"
}
rename_chunk={}
for key, value in chunk.items():
new_key = key_mapping.get(key, key)
rename_chunk[new_key] = value
renamed_chunks.append(rename_chunk)
ranks["chunks"] = renamed_chunks
return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found! Check the chunk status please!',
retcode=RetCode.DATA_ERROR)
return server_error_response(e)
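A sketch of exercising retrieval_test from a client; the base URL, route prefix, and authorization header are assumptions about the deployment rather than anything this diff defines:

import requests

BASE_URL = "http://localhost:9380/api/v1"             # assumed server prefix
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # token_required is assumed to read this

payload = {
    "knowledgebase_id": "kb-id",     # a bare string is wrapped into a list server-side
    "question": "What is RAGFlow?",
    "page": 1,
    "size": 10,
    "similarity_threshold": 0.2,     # defaults shown in the handler above
    "vector_similarity_weight": 0.3,
}
resp = requests.post(BASE_URL + "/retrieval_test", json=payload, headers=HEADERS)
print(resp.json()["data"]["chunks"])  # chunks arrive with renamed keys such as "content"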

266
api/apps/sdk/session.py Normal file
View File

@ -0,0 +1,266 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from uuid import uuid4
from flask import request, Response
from api.db import StatusEnum
from api.db.services.dialog_service import DialogService, ConversationService, chat
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result
from api.utils.api_utils import get_json_result, token_required
@manager.route('/save', methods=['POST'])
@token_required
def set_conversation(tenant_id):
req = request.json
conv_id = req.get("id")
if "assistant_id" in req:
req["dialog_id"] = req.pop("assistant_id")
if "id" in req:
del req["id"]
conv = ConversationService.query(id=conv_id)
if not conv:
return get_data_error_result(retmsg="Session does not exist")
if not DialogService.query(id=conv[0].dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
if req.get("dialog_id"):
dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
if not dia:
return get_data_error_result(retmsg="You do not own the assistant")
if "dialog_id" in req and not req.get("dialog_id"):
return get_data_error_result(retmsg="assistant_id can not be empty.")
if "message" in req:
return get_data_error_result(retmsg="message can not be change")
if "reference" in req:
return get_data_error_result(retmsg="reference can not be change")
if "name" in req and not req.get("name"):
return get_data_error_result(retmsg="name can not be empty.")
if not ConversationService.update_by_id(conv_id, req):
return get_data_error_result(retmsg="Session update error")
return get_json_result(data=True)
if not req.get("dialog_id"):
return get_data_error_result(retmsg="assistant_id is required.")
dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
if not dia:
return get_data_error_result(retmsg="You do not own the assistant")
conv = {
"id": get_uuid(),
"dialog_id": req["dialog_id"],
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": "Hi! I am your assistantcan I help you?"}]
}
if not conv.get("name"):
return get_data_error_result(retmsg="name can not be empty.")
ConversationService.save(**conv)
e, conv = ConversationService.get_by_id(conv["id"])
if not e:
return get_data_error_result(retmsg="Fail to new session!")
conv = conv.to_dict()
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
del conv["reference"]
return get_json_result(data=conv)
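A client-side usage sketch for the /save route above. The base URL, mount prefix, and token are placeholders, not confirmed values:

import requests

BASE = "http://localhost:9380/api/v1/session"  # assumed mount point for this blueprint
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}  # API token created in the web UI

# Create a new session under an assistant (dialog).
resp = requests.post(f"{BASE}/save", headers=HEADERS,
                     json={"assistant_id": "<DIALOG_ID>", "name": "My session"})
print(resp.json())  # {"retcode": 0, "data": {"id": ..., "assistant_id": ..., "messages": [...]}}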
@manager.route('/completion', methods=['POST'])
@token_required
def completion(tenant_id):
req = request.json
# req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
# {"role": "user", "content": "上海有吗?"}
# ]}
if "session_id" not in req:
return get_data_error_result(retmsg="session_id is required")
conv = ConversationService.query(id=req["session_id"])
if not conv:
return get_data_error_result(retmsg="Session does not exist")
conv = conv[0]
if not DialogService.query(id=conv.dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
msg = []
question = {
"content": req.get("question"),
"role": "user",
"id": str(uuid4())
}
conv.message.append(question)
for m in conv.message:
if m["role"] == "system": continue
if m["role"] == "assistant" and not msg: continue
msg.append(m)
message_id = msg[-1].get("id")
e, dia = DialogService.get_by_id(conv.dialog_id)
del req["session_id"]
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"],
"id": message_id, "prompt": ans.get("prompt", "")}
ans["id"] = message_id
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, **req):
fillin_conv(ans)
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
else:
answer = None
for ans in chat(dia, msg, **req):
answer = ans
fillin_conv(ans)
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_json_result(data=answer)
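With "stream" left at its default of true, the route answers as server-sent events, one JSON object per "data:" line, ending with a {"data": true} sentinel. A minimal consumer sketch, again with placeholder URL and token:

import json
import requests

BASE = "http://localhost:9380/api/v1/session"  # assumed mount point
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

resp = requests.post(f"{BASE}/completion", headers=HEADERS, stream=True,
                     json={"session_id": "<SESSION_ID>", "question": "Hello"})
for line in resp.iter_lines():
    if not line or not line.startswith(b"data:"):
        continue
    payload = json.loads(line[len(b"data:"):])
    if payload["data"] is True:  # terminal sentinel event
        break
    print(payload["data"]["answer"])  # cumulative answer so far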
@manager.route('/get', methods=['GET'])
@token_required
def get(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(retmsg="id is required")
conv_id = req["id"]
conv = ConversationService.query(id=conv_id)
if not conv:
return get_data_error_result(retmsg="Session does not exist")
if not DialogService.query(id=conv[0].dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
if "assistant_id" in req:
if req["assistant_id"] != conv[0].dialog_id:
return get_data_error_result(retmsg="The session doesn't belong to the assistant")
conv = conv[0].to_dict()
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"knowledgebase_id": chunk["kb_id"],
"image_id": chunk["img_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
}
chunk_list.append(new_chunk)
chunk_num += 1
messages[message_num]["reference"] = chunk_list
message_num += 1
del conv["reference"]
return get_json_result(data=conv)
@manager.route('/list', methods=["GET"])
@token_required
def list(tenant_id):
assistant_id = request.args["assistant_id"]
if not DialogService.query(tenant_id=tenant_id, id=assistant_id, status=StatusEnum.VALID.value):
return get_json_result(
data=False, retmsg="You don't own the assistant.",
retcode=RetCode.OPERATING_ERROR)
convs = ConversationService.query(
dialog_id=assistant_id,
order_by=ConversationService.model.create_time,
reverse=True)
convs = [d.to_dict() for d in convs]
for conv in convs:
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"knowledgebase_id": chunk["kb_id"],
"image_id": chunk["img_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
}
chunk_list.append(new_chunk)
chunk_num += 1
messages[message_num]["reference"] = chunk_list
message_num += 1
del conv["reference"]
return get_json_result(data=convs)
@manager.route('/delete', methods=["DELETE"])
@token_required
def delete(tenant_id):
id = request.args.get("id")
if not id:
return get_data_error_result(retmsg="`id` is required in deleting operation")
conv = ConversationService.query(id=id)
if not conv:
return get_data_error_result(retmsg="Session doesn't exist")
conv = conv[0]
if not DialogService.query(id=conv.dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You don't own the session")
ConversationService.delete_by_id(id)
return get_json_result(data=True)

View File

@ -18,11 +18,12 @@ import json
from flask_login import login_required
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.settings import DATABASE_TYPE
from api.utils.api_utils import get_json_result
from api.versions import get_rag_version
from rag.settings import SVR_QUEUE_NAME
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL, STORAGE_IMPL_TYPE
from timeit import default_timer as timer
from rag.utils.redis_conn import REDIS_CONN
@ -47,17 +48,17 @@ def status():
st = timer()
try:
MINIO.health()
res["minio"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
STORAGE_IMPL.health()
res["storage"] = {"storage": STORAGE_IMPL_TYPE.lower(), "status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
except Exception as e:
res["minio"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["storage"] = {"storage": STORAGE_IMPL_TYPE.lower(), "status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
st = timer()
try:
KnowledgebaseService.get_by_id("x")
res["mysql"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
res["database"] = {"database": DATABASE_TYPE.lower(), "status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
except Exception as e:
res["mysql"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["database"] = {"database": DATABASE_TYPE.lower(), "status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
st = timer()
try:

85
api/apps/tenant_app.py Normal file
View File

@ -0,0 +1,85 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from flask_login import current_user, login_required
from api.db import UserTenantRole, StatusEnum
from api.db.db_models import UserTenant
from api.db.services.user_service import TenantService, UserTenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, validate_request, server_error_response
@manager.route("/list", methods=["GET"])
@login_required
def tenant_list():
try:
tenants = TenantService.get_by_user_id(current_user.id)
return get_json_result(data=tenants)
except Exception as e:
return server_error_response(e)
@manager.route("/<tenant_id>/user/list", methods=["GET"])
@login_required
def user_list(tenant_id):
try:
users = UserTenantService.get_by_tenant_id(tenant_id)
return get_json_result(data=users)
except Exception as e:
return server_error_response(e)
@manager.route('/<tenant_id>/user', methods=['POST'])
@login_required
@validate_request("user_id")
def create(tenant_id):
user_id = request.json.get("user_id")
if not user_id:
return get_json_result(
data=False, retmsg='Lack of "USER ID"', retcode=RetCode.ARGUMENT_ERROR)
try:
user_tenants = UserTenantService.query(user_id=user_id, tenant_id=tenant_id)
if user_tenants:
uuid = user_tenants[0].id
return get_json_result(data={"id": uuid})
uuid = get_uuid()
UserTenantService.save(
id = uuid,
user_id = user_id,
tenant_id = tenant_id,
role = UserTenantRole.NORMAL.value,
status = StatusEnum.VALID.value)
return get_json_result(data={"id": uuid})
except Exception as e:
return server_error_response(e)
@manager.route('/<tenant_id>/user/<user_id>', methods=['DELETE'])
@login_required
def rm(tenant_id, user_id):
try:
UserTenantService.filter_delete([UserTenant.tenant_id == tenant_id, UserTenant.user_id == user_id])
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
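A usage sketch for the three tenant-member routes above; they rely on flask_login session cookies, and the URL prefix is assumed:

import requests

s = requests.Session()  # assumes a prior login established the session cookie
TENANT = "http://localhost:9380/v1/tenant"  # assumed mount point

s.post(f"{TENANT}/<TENANT_ID>/user", json={"user_id": "<USER_ID>"})  # add a member
s.get(f"{TENANT}/<TENANT_ID>/user/list")                             # list members
s.delete(f"{TENANT}/<TENANT_ID>/user/<USER_ID>")                     # remove a member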

View File

@ -55,6 +55,7 @@ class LLMType(StrEnum):
SPEECH2TEXT = 'speech2text'
IMAGE2TEXT = 'image2text'
RERANK = 'rerank'
TTS = 'tts'
class ChatStyle(StrEnum):

View File

@ -18,18 +18,19 @@ import os
import sys
import typing
import operator
from enum import Enum
from functools import wraps
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from flask_login import UserMixin
from playhouse.migrate import MySQLMigrator, migrate
from playhouse.migrate import MySQLMigrator, PostgresqlMigrator, migrate
from peewee import (
BigIntegerField, BooleanField, CharField,
CompositeKey, IntegerField, TextField, FloatField, DateTimeField,
Field, Model, Metadata
)
from playhouse.pool import PooledMySQLDatabase
from playhouse.pool import PooledMySQLDatabase, PooledPostgresqlDatabase
from api.db import SerializedType, ParserType
from api.settings import DATABASE, stat_logger, SECRET_KEY
from api.settings import DATABASE, stat_logger, SECRET_KEY, DATABASE_TYPE
from api.utils.log_utils import getLogger
from api import utils
@ -58,8 +59,13 @@ AUTO_DATE_TIMESTAMP_FIELD_PREFIX = {
"write_access"}
class TextFieldType(Enum):
MYSQL = 'LONGTEXT'
POSTGRES = 'TEXT'
class LongTextField(TextField):
field_type = 'LONGTEXT'
field_type = TextFieldType[DATABASE_TYPE.upper()].value
class JSONField(LongTextField):
@ -266,18 +272,69 @@ class JsonSerializedField(SerializedField):
super(JsonSerializedField, self).__init__(serialized_type=SerializedType.JSON, object_hook=object_hook,
object_pairs_hook=object_pairs_hook, **kwargs)
class PooledDatabase(Enum):
MYSQL = PooledMySQLDatabase
POSTGRES = PooledPostgresqlDatabase
class DatabaseMigrator(Enum):
MYSQL = MySQLMigrator
POSTGRES = PostgresqlMigrator
@singleton
class BaseDataBase:
def __init__(self):
database_config = DATABASE.copy()
db_name = database_config.pop("name")
self.database_connection = PooledMySQLDatabase(
db_name, **database_config)
stat_logger.info('init mysql database on cluster mode successfully')
self.database_connection = PooledDatabase[DATABASE_TYPE.upper()].value(db_name, **database_config)
stat_logger.info('init database on cluster mode successfully')
class PostgresDatabaseLock:
def __init__(self, lock_name, timeout=10, db=None):
self.lock_name = lock_name
self.timeout = int(timeout)
self.db = db if db else DB
class DatabaseLock:
def lock(self):
cursor = self.db.execute_sql("SELECT pg_try_advisory_lock(%s)", self.timeout)
ret = cursor.fetchone()
if ret[0] == 0:
raise Exception(f'acquire postgres lock {self.lock_name} timeout')
elif ret[0] == 1:
return True
else:
raise Exception(f'failed to acquire lock {self.lock_name}')
def unlock(self):
cursor = self.db.execute_sql("SELECT pg_advisory_unlock(%s)", self.timeout)
ret = cursor.fetchone()
if ret[0] == 0:
raise Exception(
f'postgres lock {self.lock_name} was not established by this thread')
elif ret[0] == 1:
return True
else:
raise Exception(f'postgres lock {self.lock_name} does not exist')
def __enter__(self):
# Guard on the connection type, not the lock class, so the lock actually engages.
if isinstance(self.db, PooledPostgresqlDatabase):
self.lock()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if isinstance(self.db, PooledPostgresqlDatabase):
self.unlock()
def __call__(self, func):
@wraps(func)
def magic(*args, **kwargs):
with self:
return func(*args, **kwargs)
return magic
class MysqlDatabaseLock:
def __init__(self, lock_name, timeout=10, db=None):
self.lock_name = lock_name
self.timeout = int(timeout)
@ -325,8 +382,13 @@ class DatabaseLock:
return magic
class DatabaseLock(Enum):
MYSQL = MysqlDatabaseLock
POSTGRES = PostgresDatabaseLock
DB = BaseDataBase().database_connection
DB.lock = DatabaseLock
DB.lock = DatabaseLock[DATABASE_TYPE.upper()].value
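The Enum indirection selects a lock implementation by DB_TYPE, so call sites stay backend-agnostic. A usage sketch; the lock name and timeout are illustrative:

from api.db.db_models import DB

# DB.lock is MysqlDatabaseLock or PostgresDatabaseLock, picked at import time.
with DB.lock("init_database_tables", timeout=60):
    pass  # hold the cross-process advisory lock while migrating

# Both implementations also work as decorators via __call__:
@DB.lock("singleton_job", timeout=10)
def job():
    ...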
def close_connection():
@ -449,6 +511,11 @@ class Tenant(DataBaseModel):
null=False,
help_text="default rerank model ID",
index=True)
tts_id = CharField(
max_length=256,
null=True,
help_text="default tts model ID",
index=True)
parser_ids = CharField(
max_length=256,
null=False,
@ -783,6 +850,7 @@ class Task(DataBaseModel):
null=True,
help_text="process message",
default="")
retry_count = IntegerField(default=0)
class Dialog(DataBaseModel):
@ -824,6 +892,7 @@ class Dialog(DataBaseModel):
do_refer = CharField(
max_length=1,
null=False,
default="1",
help_text="it needs to insert reference index into answer or not")
rerank_id = CharField(
@ -911,7 +980,7 @@ class CanvasTemplate(DataBaseModel):
def migrate_db():
with DB.transaction():
migrator = MySQLMigrator(DB)
migrator = DatabaseMigrator[DATABASE_TYPE.upper()].value(DB)
try:
migrate(
migrator.add_column('file', 'source_type', CharField(max_length=128, null=False, default="",
@ -958,6 +1027,13 @@ def migrate_db():
)
except Exception as e:
pass
try:
migrate(
migrator.add_column("tenant","tts_id",
CharField(max_length=256,null=True,help_text="default tts model ID",index=True))
)
except Exception as e:
pass
try:
migrate(
migrator.add_column('api_4_conversation', 'source',
@ -970,3 +1046,10 @@ def migrate_db():
DB.execute_sql('ALTER TABLE llm ADD PRIMARY KEY (llm_name,fid);')
except Exception as e:
pass
try:
migrate(
migrator.add_column('task', 'retry_count', IntegerField(default=0))
)
except Exception as e:
pass

View File

@ -17,6 +17,8 @@ import operator
from functools import reduce
from typing import Dict, Type, Union
from playhouse.pool import PooledMySQLDatabase
from api.utils import current_timestamp, timestamp_to_date
from api.db.db_models import DB, DataBaseModel
@ -49,7 +51,10 @@ def bulk_insert_into_db(model, data_source, replace_on_conflict=False):
with DB.atomic():
query = model.insert_many(data_source[i:i + batch_size])
if replace_on_conflict:
query = query.on_conflict(preserve=preserve)
if isinstance(DB, PooledMySQLDatabase):
query = query.on_conflict(preserve=preserve)
else:
query = query.on_conflict(conflict_target="id", preserve=preserve)
query.execute()

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import base64
import json
import os
import time
@ -31,10 +32,15 @@ from api.settings import CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSE
from api.utils.file_utils import get_project_base_directory
def encode_to_base64(input_string):
base64_encoded = base64.b64encode(input_string.encode('utf-8'))
return base64_encoded.decode('utf-8')
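A quick check of the helper's output; the seeded superuser below stores its password in this base64 form rather than plain text:

import base64

assert base64.b64encode("admin".encode("utf-8")).decode("utf-8") == "YWRtaW4="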
def init_superuser():
user_info = {
"id": uuid.uuid1().hex,
"password": "admin",
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
@ -172,8 +178,8 @@ def init_web_data():
start_time = time.time()
init_llm_factory()
if not UserService.get_all().count():
init_superuser()
#if not UserService.get_all().count():
# init_superuser()
add_graph_templates()
print("init web data success:{}".format(time.time() - start_time))

View File

@ -14,7 +14,9 @@
# limitations under the License.
#
from datetime import datetime
import peewee
from api.db.db_models import DB, API4Conversation, APIToken, Dialog
from api.db.services.common_service import CommonService
from api.utils import current_timestamp, datetime_format
@ -41,7 +43,7 @@ class API4ConversationService(CommonService):
@DB.connection_context()
def append_message(cls, id, conversation):
cls.update_by_id(id, conversation)
return cls.model.update(round=cls.model.round + 1).where(cls.model.id==id).execute()
return cls.model.update(round=cls.model.round + 1).where(cls.model.id == id).execute()
@classmethod
@DB.connection_context()
@ -61,7 +63,7 @@ class API4ConversationService(CommonService):
cls.model.round).alias("round"),
peewee.fn.SUM(
cls.model.thumb_up).alias("thumb_up")
).join(Dialog, on=(cls.model.dialog_id == Dialog.id & Dialog.tenant_id == tenant_id)).where(
).join(Dialog, on=((cls.model.dialog_id == Dialog.id) & (Dialog.tenant_id == tenant_id))).where(
cls.model.create_date >= from_date,
cls.model.create_date <= to_date,
cls.model.source == source

View File

@ -13,11 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import binascii
import os
import json
import re
from copy import deepcopy
from timeit import default_timer as timer
from api.db import LLMType, ParserType
from api.db.db_models import Dialog, Conversation
from api.db.services.common_service import CommonService
@ -77,6 +78,7 @@ def message_fit_in(msg, max_length=4000):
def llm_id2llm_type(llm_id):
llm_id = llm_id.split("@")[0]
fnm = os.path.join(get_project_base_directory(), "conf")
llm_factories = json.load(open(os.path.join(fnm, "llm_factories.json"), "r"))
for llm_factory in llm_factories["factory_llm_infos"]:
@ -87,9 +89,16 @@ def llm_id2llm_type(llm_id):
def chat(dialog, messages, stream=True, **kwargs):
assert messages[-1]["role"] == "user", "The last content of this conversation is not from user."
llm = LLMService.query(llm_name=dialog.llm_id)
st = timer()
tmp = dialog.llm_id.split("@")
fid = None
llm_id = tmp[0]
if len(tmp)>1: fid = tmp[1]
llm = LLMService.query(llm_name=llm_id) if not fid else LLMService.query(llm_name=llm_id, fid=fid)
if not llm:
llm = TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=dialog.llm_id)
llm = TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=llm_id) if not fid else \
TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=llm_id, llm_factory=fid)
if not llm:
raise LookupError("LLM(%s) not found" % dialog.llm_id)
max_tokens = 8192
@ -120,6 +129,9 @@ def chat(dialog, messages, stream=True, **kwargs):
prompt_config = dialog.prompt_config
field_map = KnowledgebaseService.get_field_map(dialog.kb_ids)
tts_mdl = None
if prompt_config.get("tts"):
tts_mdl = LLMBundle(dialog.tenant_id, LLMType.TTS)
# try to use sql if field mapping is good to go
if field_map:
chat_logger.info("Use SQL to retrieval:{}".format(questions[-1]))
@ -137,6 +149,11 @@ def chat(dialog, messages, stream=True, **kwargs):
prompt_config["system"] = prompt_config["system"].replace(
"{%s}" % p["key"], " ")
if len(questions) > 1 and prompt_config.get("refine_multiturn"):
questions = [full_question(dialog.tenant_id, dialog.llm_id, messages)]
else:
questions = questions[-1:]
rerank_mdl = None
if dialog.rerank_id:
rerank_mdl = LLMBundle(dialog.tenant_id, LLMType.RERANK, dialog.rerank_id)
@ -154,24 +171,16 @@ def chat(dialog, messages, stream=True, **kwargs):
doc_ids=attachments,
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
#self-rag
if dialog.prompt_config.get("self_rag") and not relevant(dialog.tenant_id, dialog.llm_id, questions[-1], knowledges):
questions[-1] = rewrite(dialog.tenant_id, dialog.llm_id, questions[-1])
kbinfos = retr.retrieval(" ".join(questions), embd_mdl, dialog.tenant_id, dialog.kb_ids, 1, dialog.top_n,
dialog.similarity_threshold,
dialog.vector_similarity_weight,
doc_ids=attachments,
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
chat_logger.info(
"{}->{}".format(" ".join(questions), "\n->".join(knowledges)))
retrieval_tm = timer()
if not knowledges and prompt_config.get("empty_response"):
yield {"answer": prompt_config["empty_response"], "reference": kbinfos}
empty_res = prompt_config["empty_response"]
yield {"answer": empty_res, "reference": kbinfos, "audio_binary": tts(tts_mdl, empty_res)}
return {"answer": prompt_config["empty_response"], "reference": kbinfos}
kwargs["knowledge"] = "\n".join(knowledges)
kwargs["knowledge"] = "\n\n------\n\n".join(knowledges)
gen_conf = dialog.llm_setting
msg = [{"role": "system", "content": prompt_config["system"].format(**kwargs)}]
@ -179,6 +188,7 @@ def chat(dialog, messages, stream=True, **kwargs):
for m in messages if m["role"] != "system"])
used_token_count, msg = message_fit_in(msg, int(max_tokens * 0.97))
assert len(msg) >= 2, f"message_fit_in has bug: {msg}"
prompt = msg[0]["content"]
if "max_tokens" in gen_conf:
gen_conf["max_tokens"] = min(
@ -186,7 +196,7 @@ def chat(dialog, messages, stream=True, **kwargs):
max_tokens - used_token_count)
def decorate_answer(answer):
nonlocal prompt_config, knowledges, kwargs, kbinfos
nonlocal prompt_config, knowledges, kwargs, kbinfos, prompt, retrieval_tm
refs = []
if knowledges and (prompt_config.get("quote", True) and kwargs.get("quote", True)):
answer, idx = retr.insert_citations(answer,
@ -210,20 +220,31 @@ def chat(dialog, messages, stream=True, **kwargs):
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
return {"answer": answer, "reference": refs}
done_tm = timer()
prompt += "\n\n### Elapsed\n - Retrieval: %.1f ms\n - LLM: %.1f ms"%((retrieval_tm-st)*1000, (done_tm-st)*1000)
return {"answer": answer, "reference": refs, "prompt": prompt}
if stream:
last_ans = ""
answer = ""
for ans in chat_mdl.chat_streamly(msg[0]["content"], msg[1:], gen_conf):
for ans in chat_mdl.chat_streamly(prompt, msg[1:], gen_conf):
answer = ans
yield {"answer": answer, "reference": {}}
delta_ans = ans[len(last_ans):]
if num_tokens_from_string(delta_ans) < 16:
continue
last_ans = answer
yield {"answer": answer, "reference": {}, "audio_binary": tts(tts_mdl, delta_ans)}
delta_ans = answer[len(last_ans):]
if delta_ans:
yield {"answer": answer, "reference": {}, "audio_binary": tts(tts_mdl, delta_ans)}
yield decorate_answer(answer)
else:
answer = chat_mdl.chat(
msg[0]["content"], msg[1:], gen_conf)
answer = chat_mdl.chat(prompt, msg[1:], gen_conf)
chat_logger.info("User: {}|Assistant: {}".format(
msg[-1]["content"], answer))
yield decorate_answer(answer)
res = decorate_answer(answer)
res["audio_binary"] = tts(tts_mdl, answer)
yield res
def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
@ -334,7 +355,8 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
chat_logger.warning("SQL missing field: " + sql)
return {
"answer": "\n".join([clmns, line, rows]),
"reference": {"chunks": [], "doc_aggs": []}
"reference": {"chunks": [], "doc_aggs": []},
"prompt": sys_prompt
}
docid_idx = list(docid_idx)[0]
@ -348,7 +370,8 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
"answer": "\n".join([clmns, line, rows]),
"reference": {"chunks": [{"doc_id": r[docid_idx], "docnm_kwd": r[docnm_idx]} for r in tbl["rows"]],
"doc_aggs": [{"doc_id": did, "doc_name": d["doc_name"], "count": d["count"]} for did, d in
doc_aggs.items()]}
doc_aggs.items()]},
"prompt": sys_prompt
}
@ -390,3 +413,134 @@ def rewrite(tenant_id, llm_id, question):
"""
ans = chat_mdl.chat(prompt, [{"role": "user", "content": question}], {"temperature": 0.8})
return ans
def full_question(tenant_id, llm_id, messages):
if llm_id2llm_type(llm_id) == "image2text":
chat_mdl = LLMBundle(tenant_id, LLMType.IMAGE2TEXT, llm_id)
else:
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, llm_id)
conv = []
for m in messages:
if m["role"] not in ["user", "assistant"]: continue
conv.append("{}: {}".format(m["role"].upper(), m["content"]))
conv = "\n".join(conv)
prompt = f"""
Role: A helpful assistant
Task: Generate a full user question that would follow the conversation.
Requirements & Restrictions:
- Text generated MUST be in the same language as the original user's question.
- If the user's latest question is already complete, don't do anything, just return the original question.
- DON'T generate anything except a refined question.
######################
-Examples-
######################
# Example 1
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
###############
Output: What's the name of Donald Trump's mother?
------------
# Example 2
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
ASSISTANT: Mary Trump.
User: What's her full name?
###############
Output: What's the full name of Donald Trump's mother Mary Trump?
######################
# Real Data
## Conversation
{conv}
###############
"""
ans = chat_mdl.chat(prompt, [{"role": "user", "content": "Output: "}], {"temperature": 0.2})
return ans if ans.find("**ERROR**") < 0 else messages[-1]["content"]
def tts(tts_mdl, text):
if not tts_mdl or not text: return
binary = b""
for chunk in tts_mdl.tts(text):
binary += chunk
return binascii.hexlify(binary).decode("utf-8")
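The audio is hex-encoded so the binary TTS output can ride inside JSON SSE payloads. A client decodes it back with binascii; the hex literal below is illustrative, in practice the value arrives in ans["audio_binary"]:

import binascii

audio_hex = "52494646"  # placeholder; a real value comes from ans["audio_binary"]
audio_bytes = binascii.unhexlify(audio_hex)  # raw audio bytes, ready to play or save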
def ask(question, kb_ids, tenant_id):
kbs = KnowledgebaseService.get_by_ids(kb_ids)
embd_nms = list(set([kb.embd_id for kb in kbs]))
is_kg = all([kb.parser_id == ParserType.KG for kb in kbs])
retr = retrievaler if not is_kg else kg_retrievaler
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_nms[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT)
max_tokens = chat_mdl.max_length
kbinfos = retr.retrieval(question, embd_mdl, tenant_id, kb_ids, 1, 12, 0.1, 0.3, aggs=False)
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
used_token_count = 0
for i, c in enumerate(knowledges):
used_token_count += num_tokens_from_string(c)
if max_tokens * 0.97 < used_token_count:
knowledges = knowledges[:i]
break
prompt = """
Role: You're a smart assistant. Your name is Miss R.
Task: Summarize the information from knowledge bases and answer user's question.
Requirements and restrictions:
- DO NOT make things up, especially for numbers.
- If the information from the knowledge bases is irrelevant to the user's question, JUST SAY: Sorry, no relevant information provided.
- Answer with markdown format text.
- Answer in the language of the user's question.
### Information from knowledge bases
%s
The above is information from knowledge bases.
"""%"\n".join(knowledges)
msg = [{"role": "user", "content": question}]
def decorate_answer(answer):
nonlocal knowledges, kbinfos, prompt
answer, idx = retr.insert_citations(answer,
[ck["content_ltks"]
for ck in kbinfos["chunks"]],
[ck["vector"]
for ck in kbinfos["chunks"]],
embd_mdl,
tkweight=0.7,
vtweight=0.3)
idx = set([kbinfos["chunks"][int(i)]["doc_id"] for i in idx])
recall_docs = [
d for d in kbinfos["doc_aggs"] if d["doc_id"] in idx]
if not recall_docs: recall_docs = kbinfos["doc_aggs"]
kbinfos["doc_aggs"] = recall_docs
refs = deepcopy(kbinfos)
for c in refs["chunks"]:
if c.get("vector"):
del c["vector"]
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
return {"answer": answer, "reference": refs}
answer = ""
for ans in chat_mdl.chat_streamly(prompt, msg, {"temperature": 0.1}):
answer = ans
yield {"answer": answer, "reference": {}}
yield decorate_answer(answer)
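A consumption sketch for the ask() generator above (placeholder IDs): it yields cumulative partial answers, then a final citation-decorated result:

from api.db.services.dialog_service import ask

for delta in ask("What is RAGFlow?", ["<KB_ID>"], "<TENANT_ID>"):
    print(delta["answer"])  # grows with each event; the last one carries "reference"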

View File

@ -34,7 +34,7 @@ from api.utils.file_utils import get_project_base_directory
from graphrag.mind_map_extractor import MindMapExtractor
from rag.settings import SVR_QUEUE_NAME
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
from rag.nlp import search, rag_tokenizer
from api.db import FileType, TaskStatus, ParserType, LLMType
@ -473,7 +473,7 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
else:
d["image"].save(output_buffer, format='JPEG')
MINIO.put(kb.id, d["_id"], output_buffer.getvalue())
STORAGE_IMPL.put(kb.id, d["_id"], output_buffer.getvalue())
d["img_id"] = "{}-{}".format(kb.id, d["_id"])
del d["image"]
docs.append(d)

View File

@ -69,14 +69,14 @@ class File2DocumentService(CommonService):
@classmethod
@DB.connection_context()
def get_minio_address(cls, doc_id=None, file_id=None):
def get_storage_address(cls, doc_id=None, file_id=None):
if doc_id:
f2d = cls.get_by_document_id(doc_id)
else:
f2d = cls.get_by_file_id(file_id)
if f2d:
file = File.get_by_id(f2d[0].file_id)
if file.source_type == FileSource.LOCAL:
if not file.source_type or file.source_type == FileSource.LOCAL:
return file.parent_id, file.location
doc_id = f2d[0].document_id

View File

@ -27,7 +27,7 @@ from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.utils import get_uuid
from api.utils.file_utils import filename_type, thumbnail
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
class FileService(CommonService):
@ -350,14 +350,14 @@ class FileService(CommonService):
raise RuntimeError("This type of file has not been supported yet!")
location = filename
while MINIO.obj_exist(kb.id, location):
while STORAGE_IMPL.obj_exist(kb.id, location):
location += "_"
blob = file.read()
MINIO.put(kb.id, location, blob)
STORAGE_IMPL.put(kb.id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": kb.id,
"parser_id": kb.parser_id,
"parser_id": self.get_parser(filetype, filename, kb.parser_id),
"parser_config": kb.parser_config,
"created_by": user_id,
"type": filetype,
@ -366,14 +366,6 @@ class FileService(CommonService):
"size": len(blob),
"thumbnail": thumbnail(filename, blob)
}
if doc["type"] == FileType.VISUAL:
doc["parser_id"] = ParserType.PICTURE.value
if doc["type"] == FileType.AURAL:
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
@ -381,4 +373,16 @@ class FileService(CommonService):
except Exception as e:
err.append(file.filename + ": " + str(e))
return err, files
return err, files
@staticmethod
def get_parser(doc_type, filename, default):
if doc_type == FileType.VISUAL:
return ParserType.PICTURE.value
if doc_type == FileType.AURAL:
return ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
return ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
return ParserType.EMAIL.value
return default
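A behavior sketch of the extracted helper; enum members are taken from api.db as used above:

from api.db import FileType, ParserType
from api.db.services.file_service import FileService

assert FileService.get_parser(FileType.VISUAL, "photo.png", "naive") == ParserType.PICTURE.value
assert FileService.get_parser(FileType.PDF, "slides.pptx", "naive") == ParserType.PRESENTATION.value
assert FileService.get_parser(FileType.PDF, "report.pdf", "naive") == "naive"  # KB default passes through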

View File

@ -15,9 +15,9 @@
#
from api.db.services.user_service import TenantService
from api.settings import database_logger
from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel, Seq2txtModel
from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel, Seq2txtModel, TTSModel
from api.db import LLMType
from api.db.db_models import DB, UserTenant
from api.db.db_models import DB
from api.db.db_models import LLMFactories, LLM, TenantLLM
from api.db.services.common_service import CommonService
@ -36,7 +36,11 @@ class TenantLLMService(CommonService):
@classmethod
@DB.connection_context()
def get_api_key(cls, tenant_id, model_name):
objs = cls.query(tenant_id=tenant_id, llm_name=model_name)
arr = model_name.split("@")
if len(arr) < 2:
objs = cls.query(tenant_id=tenant_id, llm_name=model_name)
else:
objs = cls.query(tenant_id=tenant_id, llm_name=arr[0], llm_factory=arr[1])
if not objs:
return
return objs[0]
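The name@factory convention introduced here lets one model name exist under several providers. A sketch of the parsing rule:

def split_model_name(model_name: str):
    # "qwen-plus@Tongyi-Qianwen" -> ("qwen-plus", "Tongyi-Qianwen")
    # "qwen-plus"                -> ("qwen-plus", None)
    parts = model_name.split("@")
    return (parts[0], parts[1]) if len(parts) > 1 else (parts[0], None)

assert split_model_name("qwen-plus@Tongyi-Qianwen") == ("qwen-plus", "Tongyi-Qianwen")
assert split_model_name("qwen-plus") == ("qwen-plus", None)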
@ -75,18 +79,23 @@ class TenantLLMService(CommonService):
mdlnm = tenant.llm_id if not llm_name else llm_name
elif llm_type == LLMType.RERANK:
mdlnm = tenant.rerank_id if not llm_name else llm_name
elif llm_type == LLMType.TTS:
mdlnm = tenant.tts_id if not llm_name else llm_name
else:
assert False, "LLM type error"
model_config = cls.get_api_key(tenant_id, mdlnm)
tmp = mdlnm.split("@")
fid = None if len(tmp) < 2 else tmp[1]
mdlnm = tmp[0]
if model_config: model_config = model_config.to_dict()
if not model_config:
if llm_type in [LLMType.EMBEDDING, LLMType.RERANK]:
llm = LLMService.query(llm_name=llm_name if llm_name else mdlnm)
llm = LLMService.query(llm_name=mdlnm) if not fid else LLMService.query(llm_name=mdlnm, fid=fid)
if llm and llm[0].fid in ["Youdao", "FastEmbed", "BAAI"]:
model_config = {"llm_factory": llm[0].fid, "api_key":"", "llm_name": llm_name if llm_name else mdlnm, "api_base": ""}
model_config = {"llm_factory": llm[0].fid, "api_key":"", "llm_name": mdlnm, "api_base": ""}
if not model_config:
if llm_name == "flag-embedding":
if mdlnm == "flag-embedding":
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "",
"llm_name": llm_name, "api_base": ""}
else:
@ -127,6 +136,14 @@ class TenantLLMService(CommonService):
model_config["api_key"], model_config["llm_name"], lang,
base_url=model_config["api_base"]
)
if llm_type == LLMType.TTS:
if model_config["llm_factory"] not in TTSModel:
return
return TTSModel[model_config["llm_factory"]](
model_config["api_key"],
model_config["llm_name"],
base_url=model_config["api_base"],
)
@classmethod
@DB.connection_context()
@ -144,14 +161,16 @@ class TenantLLMService(CommonService):
elif llm_type == LLMType.CHAT.value:
mdlnm = tenant.llm_id if not llm_name else llm_name
elif llm_type == LLMType.RERANK:
mdlnm = tenant.llm_id if not llm_name else llm_name
mdlnm = tenant.rerank_id if not llm_name else llm_name
elif llm_type == LLMType.TTS:
mdlnm = tenant.tts_id if not llm_name else llm_name
else:
assert False, "LLM type error"
num = 0
try:
for u in cls.query(tenant_id = tenant_id, llm_name=mdlnm):
num += cls.model.update(used_tokens = u.used_tokens + used_tokens)\
for u in cls.query(tenant_id=tenant_id, llm_name=mdlnm):
num += cls.model.update(used_tokens=u.used_tokens + used_tokens)\
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == mdlnm)\
.execute()
except Exception as e:
@ -178,11 +197,11 @@ class LLMBundle(object):
tenant_id, llm_type, llm_name, lang=lang)
assert self.mdl, "Can't find mole for {}/{}/{}".format(
tenant_id, llm_type, llm_name)
self.max_length = 512
self.max_length = 8192
for lm in LLMService.query(llm_name=llm_name):
self.max_length = lm.max_tokens
break
def encode(self, texts: list, batch_size=32):
emd, used_tokens = self.mdl.encode(texts, batch_size)
if not TenantLLMService.increase_usage(
@ -223,6 +242,16 @@ class LLMBundle(object):
"Can't update token usage for {}/SEQUENCE2TXT".format(self.tenant_id))
return txt
def tts(self, text):
for chunk in self.mdl.tts(text):
if isinstance(chunk,int):
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, chunk, self.llm_name):
database_logger.error(
"Can't update token usage for {}/TTS".format(self.tenant_id))
return
yield chunk
def chat(self, system, history, gen_conf):
txt, used_tokens = self.mdl.chat(system, history, gen_conf)
if not TenantLLMService.increase_usage(

View File

@ -27,7 +27,7 @@ from api.db.services.document_service import DocumentService
from api.utils import current_timestamp, get_uuid
from deepdoc.parser.excel_parser import RAGFlowExcelParser
from rag.settings import SVR_QUEUE_NAME
from rag.utils.minio_conn import MINIO
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.redis_conn import REDIS_CONN
@ -42,6 +42,7 @@ class TaskService(CommonService):
cls.model.doc_id,
cls.model.from_page,
cls.model.to_page,
cls.model.retry_count,
Document.kb_id,
Document.parser_id,
Document.parser_config,
@ -64,9 +65,20 @@ class TaskService(CommonService):
docs = list(docs.dicts())
if not docs: return []
cls.model.update(progress_msg=cls.model.progress_msg + "\n" + "Task has been received.",
progress=random.random() / 10.).where(
msg = "\nTask has been received."
prog = random.random() / 10.
if docs[0]["retry_count"] >= 3:
msg = "\nERROR: Task is abandoned after 3 times attempts."
prog = -1
cls.model.update(progress_msg=cls.model.progress_msg + msg,
progress=prog,
retry_count=docs[0]["retry_count"]+1
).where(
cls.model.id == docs[0]["id"]).execute()
if docs[0]["retry_count"] >= 3: return []
return docs
@classmethod
@ -121,9 +133,8 @@ class TaskService(CommonService):
cls.model.id == id).execute()
def queue_tasks(doc, bucket, name):
def queue_tasks(doc: dict, bucket: str, name: str):
def new_task():
nonlocal doc
return {
"id": get_uuid(),
"doc_id": doc["id"]
@ -131,21 +142,15 @@ def queue_tasks(doc, bucket, name):
tsks = []
if doc["type"] == FileType.PDF.value:
file_bin = MINIO.get(bucket, name)
file_bin = STORAGE_IMPL.get(bucket, name)
do_layout = doc["parser_config"].get("layout_recognize", True)
pages = PdfParser.total_page_number(doc["name"], file_bin)
page_size = doc["parser_config"].get("task_page_size", 12)
if doc["parser_id"] == "paper":
page_size = doc["parser_config"].get("task_page_size", 22)
if doc["parser_id"] == "one":
page_size = 1000000000
if doc["parser_id"] == "knowledge_graph":
page_size = 1000000000
if not do_layout:
page_size = 1000000000
page_ranges = doc["parser_config"].get("pages")
if not page_ranges:
page_ranges = [(1, 100000)]
if doc["parser_id"] in ["one", "knowledge_graph"] or not do_layout:
page_size = 10 ** 9
page_ranges = doc["parser_config"].get("pages") or [(1, 10 ** 5)]
for s, e in page_ranges:
s -= 1
s = max(0, s)
@ -157,9 +162,8 @@ def queue_tasks(doc, bucket, name):
tsks.append(task)
elif doc["parser_id"] == "table":
file_bin = MINIO.get(bucket, name)
rn = RAGFlowExcelParser.row_number(
doc["name"], file_bin)
file_bin = STORAGE_IMPL.get(bucket, name)
rn = RAGFlowExcelParser.row_number(doc["name"], file_bin)
for i in range(0, rn, 3000):
task = new_task()
task["from_page"] = i

View File

@ -96,6 +96,7 @@ class TenantService(CommonService):
cls.model.rerank_id,
cls.model.asr_id,
cls.model.img2txt_id,
cls.model.tts_id,
cls.model.parser_ids,
UserTenant.role]
return list(cls.model.select(*fields)
@ -136,3 +137,24 @@ class UserTenantService(CommonService):
kwargs["id"] = get_uuid()
obj = cls.model(**kwargs).save(force_insert=True)
return obj
@classmethod
@DB.connection_context()
def get_by_tenant_id(cls, tenant_id):
fields = [
cls.model.user_id,
cls.model.tenant_id,
cls.model.role,
cls.model.status,
User.nickname,
User.email,
User.avatar,
User.is_authenticated,
User.is_active,
User.is_anonymous,
User.status,
User.is_superuser]
return list(cls.model.select(*fields)
.join(User, on=((cls.model.user_id == User.id) & (cls.model.status == StatusEnum.VALID.value)))
.where(cls.model.tenant_id == tenant_id)
.dicts())

View File

@ -46,13 +46,12 @@ def update_progress():
if __name__ == '__main__':
print("""
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
print(r"""
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
""", flush=True)
stat_logger.info(

View File

@ -42,6 +42,7 @@ RAG_FLOW_SERVICE_NAME = "ragflow"
SERVER_MODULE = "rag_flow_server.py"
TEMP_DIRECTORY = os.path.join(get_project_base_directory(), "temp")
RAG_FLOW_CONF_PATH = os.path.join(get_project_base_directory(), "conf")
LIGHTEN = os.environ.get('LIGHTEN')
SUBPROCESS_STD_LOG_NAME = "std.log"
@ -57,77 +58,76 @@ REQUEST_MAX_WAIT_SEC = 300
USE_REGISTRY = get_base_config("use_registry")
default_llm = {
"Tongyi-Qianwen": {
"chat_model": "qwen-plus",
"embedding_model": "text-embedding-v2",
"image2text_model": "qwen-vl-max",
"asr_model": "paraformer-realtime-8k-v1",
},
"OpenAI": {
"chat_model": "gpt-3.5-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"Azure-OpenAI": {
"chat_model": "azure-gpt-35-turbo",
"embedding_model": "azure-text-embedding-ada-002",
"image2text_model": "azure-gpt-4-vision-preview",
"asr_model": "azure-whisper-1",
},
"ZHIPU-AI": {
"chat_model": "glm-3-turbo",
"embedding_model": "embedding-2",
"image2text_model": "glm-4v",
"asr_model": "",
},
"Ollama": {
"chat_model": "qwen-14B-chat",
"embedding_model": "flag-embedding",
"image2text_model": "",
"asr_model": "",
},
"Moonshot": {
"chat_model": "moonshot-v1-8k",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"DeepSeek": {
"chat_model": "deepseek-chat",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"VolcEngine": {
"chat_model": "",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"BAAI": {
"chat_model": "",
"embedding_model": "BAAI/bge-large-zh-v1.5",
"image2text_model": "",
"asr_model": "",
"rerank_model": "BAAI/bge-reranker-v2-m3",
}
}
LLM = get_base_config("user_default_llm", {})
LLM_FACTORY = LLM.get("factory", "Tongyi-Qianwen")
LLM_BASE_URL = LLM.get("base_url")
if LLM_FACTORY not in default_llm:
print(
"\33[91m【ERROR】\33[0m:",
f"LLM factory {LLM_FACTORY} has not supported yet, switch to 'Tongyi-Qianwen/QWen' automatically, and please check the API_KEY in service_conf.yaml.")
LLM_FACTORY = "Tongyi-Qianwen"
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"]
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"]
RERANK_MDL = default_llm["BAAI"]["rerank_model"]
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"]
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"]
if not LIGHTEN:
default_llm = {
"Tongyi-Qianwen": {
"chat_model": "qwen-plus",
"embedding_model": "text-embedding-v2",
"image2text_model": "qwen-vl-max",
"asr_model": "paraformer-realtime-8k-v1",
},
"OpenAI": {
"chat_model": "gpt-3.5-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"Azure-OpenAI": {
"chat_model": "gpt-35-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"ZHIPU-AI": {
"chat_model": "glm-3-turbo",
"embedding_model": "embedding-2",
"image2text_model": "glm-4v",
"asr_model": "",
},
"Ollama": {
"chat_model": "qwen-14B-chat",
"embedding_model": "flag-embedding",
"image2text_model": "",
"asr_model": "",
},
"Moonshot": {
"chat_model": "moonshot-v1-8k",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"DeepSeek": {
"chat_model": "deepseek-chat",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"VolcEngine": {
"chat_model": "",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"BAAI": {
"chat_model": "",
"embedding_model": "BAAI/bge-large-zh-v1.5",
"image2text_model": "",
"asr_model": "",
"rerank_model": "BAAI/bge-reranker-v2-m3",
}
}
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"]
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"]
RERANK_MDL = default_llm["BAAI"]["rerank_model"] if not LIGHTEN else ""
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"]
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"]
else:
CHAT_MDL = EMBEDDING_MDL = RERANK_MDL = ASR_MDL = IMAGE2TEXT_MDL = ""
API_KEY = LLM.get("api_key", "")
PARSERS = LLM.get(
@ -164,7 +164,8 @@ RANDOM_INSTANCE_ID = get_base_config(
PROXY = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("proxy")
PROXY_PROTOCOL = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("protocol")
DATABASE = decrypt_database_config(name="mysql")
DATABASE_TYPE = os.getenv("DB_TYPE", 'mysql')
DATABASE = decrypt_database_config(name=DATABASE_TYPE)
# Switch
# upload

View File

@ -13,30 +13,32 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import functools
import json
import random
import time
from base64 import b64encode
from functools import wraps
from hmac import HMAC
from io import BytesIO
from urllib.parse import quote, urlencode
from uuid import uuid1
import requests
from flask import (
Response, jsonify, send_file, make_response,
request as flask_request,
)
from werkzeug.http import HTTP_STATUS_CODES
from api.utils import json_dumps
from api.settings import RetCode
from api.db.db_models import APIToken
from api.settings import (
REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC,
stat_logger, CLIENT_AUTHENTICATION, HTTP_APP_KEY, SECRET_KEY
)
import requests
import functools
from api.settings import RetCode
from api.utils import CustomJSONEncoder
from uuid import uuid1
from base64 import b64encode
from hmac import HMAC
from urllib.parse import quote, urlencode
from api.utils import json_dumps
requests.models.complexjson.dumps = functools.partial(
json.dumps, cls=CustomJSONEncoder)
@ -96,7 +98,6 @@ def get_exponential_backoff_interval(retries, full_jitter=False):
def get_json_result(retcode=RetCode.SUCCESS, retmsg='success',
data=None, job_id=None, meta=None):
import re
result_dict = {
"retcode": retcode,
"retmsg": retmsg,
@ -145,7 +146,8 @@ def server_error_response(e):
return get_json_result(
retcode=RetCode.EXCEPTION_ERROR, retmsg=repr(e.args[0]), data=e.args[1])
if repr(e).find("index_not_found_exception") >= 0:
return get_json_result(retcode=RetCode.EXCEPTION_ERROR, retmsg="No chunk found, please upload file and parse it.")
return get_json_result(retcode=RetCode.EXCEPTION_ERROR,
retmsg="No chunk found, please upload file and parse it.")
return get_json_result(retcode=RetCode.EXCEPTION_ERROR, retmsg=repr(e))
@ -190,7 +192,9 @@ def validate_request(*args, **kwargs):
return get_json_result(
retcode=RetCode.ARGUMENT_ERROR, retmsg=error_string)
return func(*_args, **_kwargs)
return decorated_function
return wrapper
@ -217,7 +221,7 @@ def get_json_result(retcode=RetCode.SUCCESS, retmsg='success', data=None):
def construct_response(retcode=RetCode.SUCCESS,
retmsg='success', data=None, auth=None):
retmsg='success', data=None, auth=None):
result_dict = {"retcode": retcode, "retmsg": retmsg, "data": data}
response_dict = {}
for key, value in result_dict.items():
@ -235,6 +239,7 @@ def construct_response(retcode=RetCode.SUCCESS,
response.headers["Access-Control-Expose-Headers"] = "Authorization"
return response
def construct_result(code=RetCode.DATA_ERROR, message='data is missing'):
import re
result_dict = {"code": code, "message": re.sub(r"rag", "seceum", message, flags=re.IGNORECASE)}
@ -263,7 +268,23 @@ def construct_error_response(e):
pass
if len(e.args) > 1:
return construct_json_result(code=RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=e.args[1])
if repr(e).find("index_not_found_exception") >=0:
return construct_json_result(code=RetCode.EXCEPTION_ERROR, message="No chunk found, please upload file and parse it.")
if repr(e).find("index_not_found_exception") >= 0:
return construct_json_result(code=RetCode.EXCEPTION_ERROR,
message="No chunk found, please upload file and parse it.")
return construct_json_result(code=RetCode.EXCEPTION_ERROR, message=repr(e))
def token_required(func):
@wraps(func)
def decorated_function(*args, **kwargs):
# Guard against a missing or malformed Authorization header.
auth = flask_request.headers.get('Authorization', '').split()
token = auth[1] if len(auth) == 2 else None
objs = APIToken.query(token=token) if token else None
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!', retcode=RetCode.AUTHENTICATION_ERROR
)
kwargs['tenant_id'] = objs[0].tenant_id
return func(*args, **kwargs)
return decorated_function
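Routes wrapped with token_required expect an "Authorization: Bearer <token>" header matching an APIToken row; the decorator then injects the owning tenant_id into the view. Client sketch with placeholder URL and token:

import requests

resp = requests.get(
    "http://localhost:9380/api/v1/session/list",  # any @token_required route; path assumed
    headers={"Authorization": "Bearer <API_TOKEN>"},
    params={"assistant_id": "<DIALOG_ID>"},
)
print(resp.json())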

View File

@ -13,10 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import dotenv
import typing
from api.utils.file_utils import get_project_base_directory
def get_versions() -> typing.Mapping[str, typing.Any]:
@ -25,4 +23,4 @@ def get_versions() -> typing.Mapping[str, typing.Any]:
def get_rag_version() -> typing.Optional[str]:
return get_versions().get("RAGFLOW_VERSION", "dev")
return get_versions().get("RAGFLOW_IMAGE", "infiniflow/ragflow:dev").split(":")[-1]

File diff suppressed because it is too large

View File

@ -1,49 +0,0 @@
ragflow:
host: 0.0.0.0
http_port: 9380
mysql:
name: 'rag_flow'
user: 'root'
password: 'infini_rag_flow'
host: 'mysql'
port: 3306
max_connections: 100
stale_timeout: 30
minio:
user: 'rag_flow'
password: 'infini_rag_flow'
host: 'minio:9000'
es:
hosts: 'http://es01:9200'
username: 'elastic'
password: 'infini_rag_flow'
redis:
db: 1
password: 'infini_rag_flow'
host: 'redis:6379'
user_default_llm:
factory: 'Tongyi-Qianwen'
api_key: 'sk-xxxxxxxxxxxxx'
base_url: ''
oauth:
github:
client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
url: https://github.com/login/oauth/access_token
feishu:
app_id: cli_xxxxxxxxxxxxxxxxxxx
app_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
app_access_token_url: https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
user_access_token_url: https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
grant_type: 'authorization_code'
authentication:
client:
switch: false
http_app_key:
http_secret_key:
site:
switch: false
permission:
switch: false
component: false
dataset: false

1
conf/service_conf.yaml Symbolic link
View File

@ -0,0 +1 @@
../docker/service_conf.yaml

View File

@ -16,7 +16,6 @@ import random
import xgboost as xgb
from io import BytesIO
import torch
import re
import pdfplumber
import logging
@ -25,6 +24,7 @@ import numpy as np
from timeit import default_timer as timer
from pypdf import PdfReader as pdf2_read
from api.settings import LIGHTEN
from api.utils.file_utils import get_project_base_directory
from deepdoc.vision import OCR, Recognizer, LayoutRecognizer, TableStructureRecognizer
from rag.nlp import rag_tokenizer
@ -44,8 +44,10 @@ class RAGFlowPdfParser:
self.tbl_det = TableStructureRecognizer()
self.updown_cnt_mdl = xgb.Booster()
if torch.cuda.is_available():
self.updown_cnt_mdl.set_param({"device": "cuda"})
if not LIGHTEN:
import torch
if torch.cuda.is_available():
self.updown_cnt_mdl.set_param({"device": "cuda"})
try:
model_dir = os.path.join(
get_project_base_directory(),
@ -299,7 +301,7 @@ class RAGFlowPdfParser:
self.lefted_chars.append(c)
continue
if c["text"] == " " and bxs[ii]["text"]:
if re.match(r"[0-9a-zA-Z,.?;:!%%]", bxs[ii]["text"][-1]):
if re.match(r"[0-9a-zA-Zа-яА-Я,.?;:!%%]", bxs[ii]["text"][-1]):
bxs[ii]["text"] += " "
else:
bxs[ii]["text"] += c["text"]
@ -486,7 +488,7 @@ class RAGFlowPdfParser:
i += 1
continue
if not down["text"].strip():
if not down["text"].strip() or not up["text"].strip():
i += 1
continue

View File

@ -10,37 +10,43 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from deepdoc.parser.utils import get_text
from rag.nlp import num_tokens_from_string
from rag.nlp import find_codec,num_tokens_from_string
import re
class RAGFlowTxtParser:
def __call__(self, fnm, binary=None, chunk_token_num=128, delimiter="\n!?;。;!?"):
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(fnm, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(fnm, binary)
return self.parser_txt(txt, chunk_token_num, delimiter)
@classmethod
def parser_txt(cls, txt, chunk_token_num=128, delimiter="\n!?;。;!?"):
if type(txt) != str:
if not isinstance(txt, str):
raise TypeError("txt type should be str!")
sections = []
for sec in re.split(r"[%s]+"%delimiter, txt):
if sections and sec in delimiter:
sections[-1][0] += sec
continue
if num_tokens_from_string(sec) > 10 * int(chunk_token_num):
sections.append([sec[: int(len(sec) / 2)], ""])
sections.append([sec[int(len(sec) / 2) :], ""])
cks = [""]
tk_nums = [0]
def add_chunk(t):
nonlocal cks, tk_nums, delimiter
tnum = num_tokens_from_string(t)
if tnum < 8:
pos = ""
if tk_nums[-1] > chunk_token_num:
cks.append(t)
tk_nums.append(tnum)
else:
sections.append([sec, ""])
return sections
cks[-1] += t
tk_nums[-1] += tnum
s, e = 0, 1
while e < len(txt):
if txt[e] in delimiter:
add_chunk(txt[s: e + 1])
s = e + 1
e = s + 1
else:
e += 1
if s < e:
add_chunk(txt[s: e + 1])
return [[c,""] for c in cks]
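A usage sketch of the rewritten chunker: it scans character by character, cuts at delimiter characters, and opens a new chunk once the current one exceeds chunk_token_num tokens (import path assumed):

from deepdoc.parser.txt_parser import RAGFlowTxtParser

text = "First sentence. Second sentence! Third one?\nA fourth line."
for chunk, _ in RAGFlowTxtParser.parser_txt(text, chunk_token_num=16, delimiter="\n!?."):
    print(repr(chunk))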

29
deepdoc/parser/utils.py Normal file
View File

@ -0,0 +1,29 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from rag.nlp import find_codec
def get_text(fnm: str, binary=None) -> str:
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(fnm, "r") as f:
while True:
line = f.readline()
if not line:
break
txt += line
return txt
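Both call paths yield the same result for the parsers above; with bytes, the codec is sniffed via find_codec first (sketch):

from deepdoc.parser.utils import get_text

# The filename is ignored when binary bytes are supplied.
assert get_text("unused.txt", binary="héllo".encode("utf-8")) == "héllo"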

View File

@ -33,10 +33,17 @@ REDIS_PASSWORD=infini_rag_flow
SVR_HTTP_PORT=9380
RAGFLOW_VERSION=dev
RAGFLOW_IMAGE=infiniflow/ragflow:dev-slim
# If inside mainland China, uncomment either of the following hub.docker.com mirrors:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:dev-slim
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:dev-slim
TIMEZONE='Asia/Shanghai'
# If inside mainland China, uncomment the following huggingface.co mirror:
# HF_ENDPOINT=https://hf-mirror.com
######## OS setup for ES ###########
# sysctl vm.max_map_count
# sudo sysctl -w vm.max_map_count=262144

View File

@ -1,30 +0,0 @@
include:
- path: ./docker-compose-base.yml
env_file: ./.env
services:
ragflow:
depends_on:
mysql:
condition: service_healthy
es01:
condition: service_healthy
image: swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:${RAGFLOW_VERSION}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./service_conf.yaml:/ragflow/conf/service_conf.yaml
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=https://hf-mirror.com
- MACOS=${MACOS}
networks:
- ragflow
restart: always

View File

@ -1,3 +1,7 @@
include:
- path: ./docker-compose.yml
env_file: ./.env
services:
kibana:
image: kibana:${STACK_VERSION}
@ -12,7 +16,7 @@ services:
es01:
condition: service_healthy
kibana-user-init:
condition: service_completed_successfully
networks:
- ragflow


@ -30,7 +30,8 @@ services:
restart: always
mysql:
-image: mysql:5.7.18
+# mysql:5.7 linux/arm64 image is unavailable.
+image: mysql:8.0.39
container_name: ragflow-mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}


@ -1,37 +0,0 @@
include:
- path: ./docker-compose-base.yml
env_file: ./.env
services:
ragflow:
depends_on:
mysql:
condition: service_healthy
es01:
condition: service_healthy
image: swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:${RAGFLOW_VERSION}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./service_conf.yaml:/ragflow/conf/service_conf.yaml
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=https://hf-mirror.com
- MACOS=${MACOS}
networks:
- ragflow
restart: always
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]


@ -9,7 +9,7 @@ services:
condition: service_healthy
es01:
condition: service_healthy
-image: infiniflow/ragflow:${RAGFLOW_VERSION}
+image: ${RAGFLOW_IMAGE}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380


@ -9,12 +9,13 @@ services:
condition: service_healthy
es01:
condition: service_healthy
-image: infiniflow/ragflow:${RAGFLOW_VERSION}
+image: ${RAGFLOW_IMAGE}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
- 5678:5678
volumes:
- ./service_conf.yaml:/ragflow/conf/service_conf.yaml
- ./ragflow-logs:/ragflow/logs
@ -23,7 +24,7 @@ services:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
environment:
- TZ=${TIMEZONE}
-- HF_ENDPOINT=https://huggingface.co
+- HF_ENDPOINT=${HF_ENDPOINT}
- MACOS=${MACOS}
networks:
- ragflow


@ -1,5 +1,8 @@
#!/bin/bash
# unset http proxy which may be set by docker daemon
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
/usr/sbin/nginx
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/
@ -11,13 +14,13 @@ fi
function task_exe(){
while [ 1 -eq 1 ];do
-$PY rag/svr/task_executor.py ;
+$PY rag/svr/task_executor.py $1;
done
}
for ((i=0;i<WS;i++))
do
-task_exe &
+task_exe $i &
done
while [ 1 -eq 1 ];do

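The two changes above give each task executor its worker index: the shell loop passes `$i`, and the executor receives it as `$1`. A hypothetical sketch of how a worker script can consume that argument (the real `task_executor.py` may handle it differently):

```python
import sys

def main(worker_id: int):
    # The index can be used to shard queued tasks or label log lines per worker.
    print(f"task executor {worker_id} started")

if __name__ == "__main__":
    # argv[1] is the index the shell loop passes in; default to 0 when absent.
    main(int(sys.argv[1]) if len(sys.argv) > 1 else 0)
```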

@ -1,30 +1,67 @@
 #!/bin/bash
-# Wait for Elasticsearch to start
-until curl -u "elastic:${ELASTIC_PASSWORD}" -s http://es01:9200 >/dev/null; do
-    echo "Waiting for Elasticsearch to start..."
-    sleep 5
-done
+# unset http proxy which may be set by docker daemon
+export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
+
+echo "Elasticsearch built-in user: elastic:${ELASTIC_PASSWORD}"
+
+# Wait for Elasticsearch to be healthy
+while true; do
+    response=$(curl -s -v -w "\n%{http_code}" -u "elastic:${ELASTIC_PASSWORD}" "http://es01:9200")
+    exit_code=$?
+    status=$(echo "$response" | tail -n1)
+    if [ $exit_code -eq 0 ] && [ "$status" = "200" ]; then
+        echo "Elasticsearch is healthy"
+        break
+    else
+        echo "Elasticsearch is unhealthy: $exit_code $status"
+        echo "$response"
+        sleep 5
+    fi
+done
+
+# Create new role with all privileges to all indices
+# https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html#privileges-list-indices
+echo "Going to create Elasticsearch role own_indices with all privileges to all indices"
+while true; do
+    response=$(curl -s -v -w "\n%{http_code}" -u "elastic:${ELASTIC_PASSWORD}" -X POST http://es01:9200/_security/role/own_indices -H 'Content-Type: application/json' -d '{"indices": [{"names": ["*"], "privileges": ["all"]}]}')
+    exit_code=$?
+    status=$(echo "$response" | tail -n1)
+    if [ $exit_code -eq 0 ] && [ "$status" = "200" ]; then
+        echo "Elasticsearch role own_indices created"
+        break
+    else
+        echo "Elasticsearch role own_indices failure: $exit_code $status"
+        echo "$response"
+        sleep 5
+    fi
+done
-echo "User: elastic:${ELASTIC_PASSWORD}"
+echo "Elasticsearch role own_indices:"
+curl -u "elastic:${ELASTIC_PASSWORD}" -X GET "http://es01:9200/_security/role/own_indices"
+echo ""
+
+PAYLOAD="{\"password\": \"${KIBANA_PASSWORD}\", \"roles\": [\"kibana_admin\", \"kibana_system\", \"own_indices\"], \"full_name\": \"${KIBANA_USER}\", \"email\": \"${KIBANA_USER}@example.com\"}"
+echo "Going to create Elasticsearch user ${KIBANA_USER}: ${PAYLOAD}"
-PAYLOAD="{
-    \"password\" : \"${KIBANA_PASSWORD}\",
-    \"roles\" : [ \"kibana_admin\",\"kibana_system\" ],
-    \"full_name\" : \"${KIBANA_USER}\",
-    \"email\" : \"${KIBANA_USER}@example.com\"
-}"
-echo "New user account: $PAYLOAD"
+# Create new user
+while true; do
+    response=$(curl -s -v -w "\n%{http_code}" -u "elastic:${ELASTIC_PASSWORD}" -X POST http://es01:9200/_security/user/${KIBANA_USER} -H "Content-Type: application/json" -d "${PAYLOAD}")
+    exit_code=$?
+    status=$(echo "$response" | tail -n1)
+    if [ $exit_code -eq 0 ] && [ "$status" = "200" ]; then
+        echo "Elasticsearch user ${KIBANA_USER} created"
+        break
+    else
+        echo "Elasticsearch user ${KIBANA_USER} failure: $exit_code $status"
+        echo "$response"
+        sleep 5
+    fi
+done
-# Create new user account
-curl -X POST "http://es01:9200/_security/user/${KIBANA_USER}" \
-    -u "elastic:${ELASTIC_PASSWORD}" \
-    -H "Content-Type: application/json" \
-    -d "$PAYLOAD"
-echo "New user account created"
+echo "Elasticsearch user ${KIBANA_USER}:"
+curl -u "elastic:${ELASTIC_PASSWORD}" -X GET "http://es01:9200/_security/user/${KIBANA_USER}"
+echo ""
+exit 0
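The same wait-then-create flow is easy to reproduce outside the container when debugging. A minimal Python sketch, assuming the `requests` package, the same `es01:9200` endpoint, and the same environment variables (adjust host and credentials to your setup):

```python
import os
import time

import requests

ES = "http://es01:9200"
AUTH = ("elastic", os.environ["ELASTIC_PASSWORD"])

# Poll until Elasticsearch answers with HTTP 200, mirroring the shell loop.
while True:
    try:
        if requests.get(ES, auth=AUTH, timeout=5).status_code == 200:
            print("Elasticsearch is healthy")
            break
    except requests.RequestException as exc:
        print(f"Elasticsearch is unhealthy: {exc}")
    time.sleep(5)

# Create the own_indices role, then the Kibana user, as the script does.
role = {"indices": [{"names": ["*"], "privileges": ["all"]}]}
requests.post(f"{ES}/_security/role/own_indices", auth=AUTH, json=role).raise_for_status()

user = {
    "password": os.environ["KIBANA_PASSWORD"],
    "roles": ["kibana_admin", "kibana_system", "own_indices"],
}
requests.post(
    f"{ES}/_security/user/{os.environ['KIBANA_USER']}", auth=AUTH, json=user
).raise_for_status()
```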


@ -0,0 +1,28 @@
#!/bin/bash
# unset http proxy which may be set by docker daemon
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/
PY=python3
if [[ -z "$WS" || $WS -lt 1 ]]; then
WS=1
fi
function task_exe(){
while [ 1 -eq 1 ];do
$PY rag/svr/task_executor.py $1;
done
}
for ((i=0;i<WS;i++))
do
task_exe $i &
done
while [ 1 -eq 1 ];do
$PY api/ragflow_server.py
done
wait;


@ -11,10 +11,11 @@ server {
gzip_disable "MSIE [1-6]\.";
location /v1 {
proxy_pass http://ragflow:9380;
include proxy.conf;
}
location / {
index index.html;
try_files $uri $uri/ /index.html;


@ -21,23 +21,54 @@ redis:
db: 1
password: 'infini_rag_flow'
host: 'redis:6379'
-user_default_llm:
-  factory: 'Tongyi-Qianwen'
-  api_key: 'sk-xxxxxxxxxxxxx'
-  base_url: ''
-oauth:
-  github:
-    client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
-    secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
-    url: https://github.com/login/oauth/access_token
-authentication:
-  client:
-    switch: false
-    http_app_key:
-    http_secret_key:
-  site:
-    switch: false
-permission:
-  switch: false
-  component: false
-  dataset: false
# postgres:
# name: 'rag_flow'
# user: 'rag_flow'
# password: 'infini_rag_flow'
# host: 'postgres'
# port: 5432
# max_connections: 100
# stale_timeout: 30
# s3:
# endpoint: 'endpoint'
# access_key: 'access_key'
# secret_key: 'secret_key'
# region: 'region'
# azure:
# auth_type: 'sas'
# container_url: 'container_url'
# sas_token: 'sas_token'
# azure:
# auth_type: 'spn'
# account_url: 'account_url'
# client_id: 'client_id'
# secret: 'secret'
# tenant_id: 'tenant_id'
# container_name: 'container_name'
# user_default_llm:
# factory: 'Tongyi-Qianwen'
# api_key: 'sk-xxxxxxxxxxxxx'
# base_url: ''
# oauth:
# github:
# client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
# secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
# url: https://github.com/login/oauth/access_token
# feishu:
# app_id: cli_xxxxxxxxxxxxxxxxxxx
# app_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
# app_access_token_url: https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
# user_access_token_url: https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
# grant_type: 'authorization_code'
# authentication:
# client:
# switch: false
# http_app_key:
# http_secret_key:
# site:
# switch: false
# permission:
# switch: false
# component: false
# dataset: false
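With this change, the LLM, OAuth, and authentication sections ship commented out, so a freshly parsed config no longer contains them. A quick way to check which sections are active, sketched under the assumption that PyYAML is installed and the file sits at its default location:

```python
import yaml

with open("docker/service_conf.yaml") as f:
    conf = yaml.safe_load(f)

print(sorted(conf))                  # active top-level sections, e.g. mysql, redis, ...
print(conf.get("user_default_llm"))  # None: the section is now commented out
```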


@ -1,8 +1,8 @@
{
"label": "User Guides",
"label": "Guides",
"position": 2,
"link": {
"type": "generated-index",
"description": "RAGFlow User Guides"
"description": "Guides for RAGFlow users and developers."
}
}


@ -0,0 +1,8 @@
{
"label": "Agents",
"position": 3,
"link": {
"type": "generated-index",
"description": "RAGFlow v0.8.0 introduces an agent mechanism, featuring a no-code workflow editor on the front end and a comprehensive graph-based task orchestration framework on the backend."
}
}


@ -0,0 +1,78 @@
---
sidebar_position: 1
slug: /agent_introduction
---
# Introduction to agents
Agents and RAG are complementary techniques, each enhancing the other's capabilities in business applications. RAGFlow v0.8.0 introduces an agent mechanism, featuring a no-code workflow editor on the front end and a comprehensive graph-based task orchestration framework on the back end. This mechanism is built on top of RAGFlow's existing RAG solutions and aims to orchestrate search technologies such as query intent classification, conversation leading, and query rewriting to:
- Provide higher-quality retrievals, and
- Accommodate more complex scenarios.
## Create an agent
:::tip NOTE
Before proceeding, ensure that:
1. You have properly set the LLM to use. See the guides on [Configure your API key](../llm_api_key_setup.md) or [Deploy a local LLM](../deploy_local_llm.mdx) for more information.
2. You have a knowledge base configured and the corresponding files properly parsed. See the guide on [Configure a knowledge base](../configure_knowledge_base.md) for more information.
:::
Click the **Agent** tab at the top center of the page to show the **Agent** page. As shown in the screenshot below, the cards on this page represent the created agents, which you can continue to edit.
![agents](https://github.com/user-attachments/assets/5e10758b-ec43-49ae-bf91-ff7d04c56e9d)
We also provide templates catered to different business scenarios. You can either generate your agent from one of our agent templates or create one from scratch:
1. Click **+ Create agent** to show the **agent template** page:
![agent_templates](https://github.com/user-attachments/assets/73bd476c-4bab-4c8c-82f8-6b00fb2cd044)
2. To create an agent from scratch, click the **Blank** card. Alternatively, to create an agent from one of our templates, hover over the desired card, such as **General-purpose chatbot**, click **Use this template**, name your agent in the pop-up dialogue, and click **OK** to confirm.
*You are now taken to the **no-code workflow editor** page. The left panel lists the components (operators): Above the dividing line are the RAG-specific components; below the line are tools. We are still working to expand the component list.*
![workflow_editor](https://github.com/user-attachments/assets/9fc6891c-7784-43b8-ab4a-3b08a9e551c4)
3. Generally speaking, you can now do the following:
- Drag and drop a desired component to your workflow,
- Select the knowledge base to use,
- Update settings of specific components,
- Update LLM settings,
- Set the input and output for a specific component, and more.
4. Click **Save** to apply changes to your agent and **Run** to test it.
## Components
Please review the following description of the RAG-specific components before you proceed:
| Component | Description |
| -------------- | ------------------------------------------------------------ |
| **Retrieval** | A component that retrieves information from specified knowledge bases and returns 'Empty response' if no information is found. Ensure the correct knowledge bases are selected. |
| **Generate** | A component that prompts the LLM to generate responses. You must ensure the prompt is set correctly. |
| **Interact** | A component that serves as the interface between the human and the bot, receiving user inputs and displaying the agent's responses. |
| **Categorize** | A component that uses the LLM to classify user inputs into predefined categories. Ensure you specify the name, description, and examples for each category, along with the corresponding next component. |
| **Message** | A component that sends out a static message. If multiple messages are supplied, it randomly selects one to send. Ensure its downstream is **Interact**, the interface component. |
| **Relevant** | A component that uses the LLM to assess whether the upstream output is relevant to the user's latest query. Ensure you specify the next component for each judge result. |
| **Rewrite** | A component that refines a user query if it fails to retrieve relevant information from the knowledge base. It repeats this process until the predefined looping upper limit is reached. Ensure its upstream is **Relevant** and downstream is **Retrieval**. |
| **Keyword** | A component that retrieves the top N search results from wikipedia.org. Ensure the TopN value is set properly before use. |
:::caution NOTE
- Ensure **Rewrite**'s upstream component is **Relevant** and downstream component is **Retrieval**.
- Ensure the downstream component of **Message** is **Interact**.
- The downstream component of **Begin** is always **Interact**.
:::
## Basic operations
| Operation | Description |
| ------------------------- | ------------------------------------------------------------ |
| Add a component | Drag and drop the desired component from the left panel onto the canvas. |
| Delete a component | On the canvas, hover over the three dots (...) of the component to display the delete option, then select it to remove the component. |
| Copy a component | On the canvas, hover over the three dots (...) of the component to display the copy option, then select it to make a copy of the component. |
| Update component settings | On the canvas, click the desired component to display the component settings. |


@ -0,0 +1,101 @@
---
sidebar_position: 2
slug: /general_purpose_chatbot
---
# Create a general-purpose chatbot
Chatbots are among the most common AI scenarios. However, effectively understanding user queries and responding appropriately remains a challenge. RAGFlow's general-purpose chatbot agent is our attempt to tackle this longstanding issue.
This chatbot closely resembles the chatbot introduced in [Start an AI chat](../start_chat.md), but with a key difference: it introduces a reflective mechanism that allows it to improve retrieval from the target knowledge bases by rewriting the user's query.
This document provides a guide to creating such a chatbot using our chatbot template.
## Prerequisites
1. Ensure you have properly set the LLM to use. See the guides on [Configure your API key](../llm_api_key_setup.md) or [Deploy a local LLM](../deploy_local_llm.mdx) for more information.
2. Ensure you have a knowledge base configured and the corresponding files properly parsed. See the guide on [Configure a knowledge base](../configure_knowledge_base.md) for more information.
3. Make sure you have read the [Introduction to Agentic RAG](./agentic_rag_introduction.md).
## Create a chatbot agent from template
To create a general-purpose chatbot agent using our template:
1. Click the **Agent** tab at the top center of the page to show the **Agent** page.
2. Click **+ Create agent** on the top right of the page to show the **agent template** page.
3. On the **agent template** page, hover over the card on **General-purpose chatbot** and click **Use this template**.
*You are now directed to the **no-code workflow editor** page.*
![workflow_editor](https://github.com/user-attachments/assets/52e7dc62-4bf5-4fbb-ab73-4a6e252065f0)
:::tip NOTE
RAGFlow's no-code editor spares you the trouble of coding, making agent development effortless.
:::
## Understand each component in the template
Here's a breakdown of each component, along with its role and requirements in the chatbot template:
- **Begin**
- Function: Sets the opening greeting for the user.
- Purpose: Establishes a welcoming atmosphere and prepares the user for interaction.
- **Interact**
- Function: Serves as the interface between the human and the bot.
- Role: Acts as the downstream component of **Begin**.
- **Retrieval**
- Function: Retrieves information from specified knowledge base(s).
- Requirement: Must have `knowledgebases` set up to function.
- **Relevant**
- Function: Assesses the relevance of the retrieved information from the **Retrieval** component to the user query.
- Process:
- If relevant, it directs the data to the **Generate** component for final response generation.
- Otherwise, it triggers the **Rewrite** component to refine the user query and redo the retrieval process.
- **Generate**
- Function: Prompts the LLM to generate responses based on the retrieved information.
- Note: The prompt settings allow you to control the way in which the LLM generates responses. Be sure to review the prompts and make necessary changes.
- **Rewrite**:
- Function: Refines a user query when no relevant information from the knowledge base is retrieved.
- Usage: Often used in conjunction with **Relevant** and **Retrieval** to create a reflective/feedback loop.
## Configure your chatbot agent
1. Click **Begin** to set an opening greeting:
![opener](https://github.com/user-attachments/assets/4416bc16-2a84-4f24-a19b-6dc8b1de0908)
2. Click **Retrieval** to select the right knowledge base(s) and make any necessary adjustments:
![setting_knowledge_bases](https://github.com/user-attachments/assets/5f694820-5651-45bc-afd6-cf580ca0228d)
3. Click **Generate** to configure the LLM's summarization behavior:
3.1. Confirm the model.
3.2. Review the prompt settings. If there are variables, ensure they match the correct component IDs:
![prompt_settings](https://github.com/user-attachments/assets/19e94ea7-7f62-4b73-b526-32fcfa62f1e9)
4. Click **Relevant** to review or change its settings:
*You may retain the current settings, but feel free to experiment with changes to understand how the agent operates.*
![relevant_settings](https://github.com/user-attachments/assets/9ff7fdd8-7a69-4ee2-bfba-c7fb8029150f)
5. Click **Rewrite** to select a different model for query rewriting or update the maximum loop times for query rewriting:
![choose_model](https://github.com/user-attachments/assets/2bac1d6c-c4f1-42ac-997b-102858c3f550)
![loop_time](https://github.com/user-attachments/assets/09a4ce34-7aac-496f-aa59-d8aa33bf0b1f)
:::danger NOTE
Increasing the maximum loop times may significantly extend the time required to receive the final response.
:::
6. Update your workflow where you see necessary.
7. Click **Save** to apply your changes.
*Your agent appears as one of the agent cards on the **Agent** page.*
## Test your chatbot agent
1. Find your chatbot agent on the **Agent** page:
![find_chatbot](https://github.com/user-attachments/assets/6e6382c6-9a86-4190-9fdd-e363b7f64ba9)
2. Experiment with your questions to verify if this chatbot functions as intended:
![test_chatbot](https://github.com/user-attachments/assets/c074d3bd-4c39-4b05-a68b-1fd361f256b3)


@ -128,7 +128,7 @@ RAGFlow uses multiple recall of both full-text search and vector search in its c
## Search for knowledge base
-As of RAGFlow v0.10.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
+As of RAGFlow v0.12.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
![search knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/836ae94c-2438-42be-879e-c7ad2a59693e)


@ -1,5 +1,5 @@
---
-sidebar_position: 5
+sidebar_position: 6
slug: /deploy_local_llm
---
@ -7,7 +7,7 @@ slug: /deploy_local_llm
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-RAGFlow supports deploying models locally using Ollama or Xinference. If you have locally deployed models to leverage or wish to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference into RAGFlow and use either of them as a local "server" for interacting with your local models.
+RAGFlow supports deploying models locally using Ollama, Xinference, IPEX-LLM, or jina. If you have locally deployed models to leverage or wish to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference into RAGFlow and use either of them as a local "server" for interacting with your local models.
RAGFlow seamlessly integrates with Ollama and Xinference, without the need for further environment configurations. You can use them to deploy two types of local models in RAGFlow: chat models and embedding models.
@ -15,40 +15,6 @@ RAGFlow seamlessly integrates with Ollama and Xinference, without the need for f
This user guide does not intend to cover much of the installation or configuration details of Ollama or Xinference; its focus is on configurations inside RAGFlow. For the most current information, you may need to check out the official site of Ollama or Xinference.
:::
-# Deploy a local model using jina
-[Jina](https://github.com/jina-ai/jina) lets you build AI services and pipelines that communicate via gRPC, HTTP and WebSockets, then scale them up and deploy to production.
-To deploy a local model, e.g., **gpt2**, using Jina:
-### 1. Check firewall settings
-Ensure that your host machine's firewall allows inbound connections on port 12345.
-```bash
-sudo ufw allow 12345/tcp
-```
-### 2. Install the jina package
-```bash
-pip install jina
-```
-### 3. Deploy a local model
-Step 1: Navigate to the rag/svr directory.
-```bash
-cd rag/svr
-```
-Step 2: Use Python to run the jina_server.py script, passing in the model name or the local path of the model (the script only supports loading models downloaded from Hugging Face).
-```bash
-python jina_server.py --model_name gpt2
-```
## Deploy a local model using Ollama
[Ollama](https://github.com/ollama/ollama) enables you to run open-source large language models that you deployed locally. It bundles model weights, configurations, and data into a single package, defined by a Modelfile, and optimizes setup and configurations, including GPU usage.
@ -347,3 +313,36 @@ To enable IPEX-LLM accelerated Ollama in RAGFlow, you must also complete the con
2. [Complete basic Ollama settings](#5-complete-basic-ollama-settings)
3. [Update System Model Settings](#6-update-system-model-settings)
4. [Update Chat Configuration](#7-update-chat-configuration)
## Deploy a local model using jina
To deploy a local model, e.g., **gpt2**, using jina:
### 1. Check firewall settings
Ensure that your host machine's firewall allows inbound connections on port 12345.
```bash
sudo ufw allow 12345/tcp
```
### 2. Install jina package
```bash
pip install jina
```
### 3. Deploy a local model
Step 1: Navigate to the **rag/svr** directory.
```bash
cd rag/svr
```
Step 2: Run **jina_server.py**, specifying either the model's name or its local directory:
```bash
python jina_server.py --model_name gpt2
```
> The script only supports models downloaded from Hugging Face.


@ -0,0 +1,8 @@
{
"label": "Develop",
"position": 10,
"link": {
"type": "generated-index",
"description": "Guides for Hardcore Developers"
}
}


@ -0,0 +1,87 @@
---
sidebar_position: 1
slug: /build_docker_image
---
# Build a RAGFlow Docker Image
A guide explaining how to build a RAGFlow Docker image from its source code. By following this guide, you'll be able to create a local Docker image that can be used for development, debugging, or testing purposes.
## Target Audience
- Developers who have added new features or modified the existing code and require a Docker image to view and debug their changes.
- Testers looking to explore the latest features of RAGFlow in a Docker image.
## Prerequisites
- CPU &ge; 4 cores
- RAM &ge; 16 GB
- Disk &ge; 50 GB
- Docker &ge; 24.0.0 & Docker Compose &ge; v2.26.1
:::tip NOTE
If you have not installed Docker on your local machine (Windows, Mac, or Linux), see the [Install Docker Engine](https://docs.docker.com/engine/install/) guide.
:::
## Build a RAGFlow Docker Image
To build a RAGFlow Docker image from source code:
### Git Clone the Repository
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow
```
### Build the Docker Image
Navigate to the `ragflow` directory where the Dockerfile and other necessary files are located. You can now build the Docker image using the provided Dockerfile. The command below specifies which Dockerfile to use and tags the image with a name for reference purposes.
#### Build and push multi-arch image `infiniflow/ragflow:dev-slim`
On a `linux/amd64` host:
```bash
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim-amd64 .
docker push infiniflow/ragflow:dev-slim-amd64
```
On a `linux/arm64` host:
```bash
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim-arm64 .
docker push infiniflow/ragflow:dev-slim-arm64
```
On a Linux host:
```bash
docker manifest create infiniflow/ragflow:dev-slim --amend infiniflow/ragflow:dev-slim-amd64 --amend infiniflow/ragflow:dev-slim-arm64
docker manifest push infiniflow/ragflow:dev-slim
```
This image is approximately 1 GB in size and relies on external LLM services, as it does not include deepdoc, embedding, or chat models.
#### Build and push multi-arch image `infiniflow/ragflow:dev`
On a `linux/amd64` host:
```bash
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev-amd64 .
docker push infiniflow/ragflow:dev-amd64
```
On a `linux/arm64` host:
```bash
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev-arm64 .
docker push infiniflow/ragflow:dev-arm64
```
On any Linux host:
```bash
docker manifest create infiniflow/ragflow:dev --amend infiniflow/ragflow:dev-amd64 --amend infiniflow/ragflow:dev-arm64
docker manifest push infiniflow/ragflow:dev
```
This image is approximately 9 GB in size and can run inference via either a local CPU/GPU or an external LLM, as it includes the deepdoc, embedding, and chat models.


@ -0,0 +1,141 @@
---
sidebar_position: 2
slug: /launch_ragflow_from_source
---
# Launch the RAGFlow Service from Source
A guide explaining how to set up a RAGFlow service from its source code. By following this guide, you'll be able to debug using the source code.
## Target Audience
Developers who have added new features or modified existing code and wish to debug using the source code, *provided that* their machine has the target deployment environment set up.
## Prerequisites
- CPU &ge; 4 cores
- RAM &ge; 16 GB
- Disk &ge; 50 GB
- Docker &ge; 24.0.0 & Docker Compose &ge; v2.26.1
:::tip NOTE
If you have not installed Docker on your local machine (Windows, Mac, or Linux), see the [Install Docker Engine](https://docs.docker.com/engine/install/) guide.
:::
## Launch the Service from Source
To launch the RAGFlow service from source code:
### Clone the RAGFlow Repository
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
```
### Install Python dependencies
1. Install Poetry:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
2. Configure Poetry:
```bash
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
```
3. Install Python dependencies:
```bash
~/.local/bin/poetry install --sync --no-root
```
*A virtual environment named `.venv` is created, and all Python dependencies are installed into the new environment.*
### Launch Third-party Services
The following command launches the 'base' services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
### Update `host` and `port` Settings for Third-party Services
1. Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
2. In **docker/service_conf.yaml**, update the MySQL port to `5455` and the Elasticsearch port to `1200`, as specified in **docker/.env**. A quick connectivity check is sketched below.
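Before launching the backend, you can confirm the remapped ports answer. A minimal Python sketch, assuming the `/etc/hosts` entries above and the default port mappings from **docker/.env**:

```python
import socket

# Both hosts resolve to 127.0.0.1 via the /etc/hosts entry added in step 1.
for host, port in [("es01", 1200), ("mysql", 5455)]:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} is reachable")
    except OSError as exc:
        print(f"{host}:{port} is NOT reachable: {exc}")
```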
### Launch the RAGFlow Backend Service
1. Comment out the `nginx` line in **docker/entrypoint.sh**.
```
# /usr/sbin/nginx
```
2. Activate the Python virtual environment:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
```
3. **Optional:** If you cannot access HuggingFace, set the HF_ENDPOINT environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
4. Run the **entrypoint.sh** script to launch the backend service:
```bash
bash docker/entrypoint.sh
```
### Launch the RAGFlow frontend service
1. Navigate to the `web` directory and install the frontend dependencies:
```bash
cd web
npm install --force
```
2. Update `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
```bash
vim .umirc.ts
```
3. Start up the RAGFlow frontend service:
```bash
npm run dev
```
*The following message appears, showing the IP address and port number of your frontend service:*
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
### Access the RAGFlow service
In your web browser, enter `http://127.0.0.1:<PORT>/`, ensuring the port number matches that shown in the screenshot above.
### Stop the RAGFlow service when the development is done
1. Stop the RAGFlow frontend service:
```bash
pkill npm
```
2. Stop the RAGFlow backend service:
```bash
pkill -f "docker/entrypoint.sh"
```

Some files were not shown because too many files have changed in this diff.