Compare commits

101 Commits

Author SHA1 Message Date
92a4a095c9 fix: Fixed an issue where quotes in messages could not be displayed #2677 (#2683)
### What problem does this PR solve?

fix: Fixed an issue where quotes in messages could not be displayed
#2677

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-30 12:40:12 +08:00
2368d738ab fix: Search page search results are cleared after output #2677 (#2678)
### What problem does this PR solve?

fix: Search page search results are cleared after output #2677

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 11:00:03 +08:00
833e3a08cd update poetry lock (#2676)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 10:59:47 +08:00
7a73fec2e5 upgrade opencv-python-headless (#2674)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 09:28:38 +08:00
2f8e0e66ef change opencv version (#2673)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 09:13:11 +08:00
5b4b252895 Fixed huggingface url (#2667)
### What problem does this PR solve?
Fixed huggingface url. Close #2665

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 20:38:11 +08:00
9081150c2c Translated Korean README (#2666)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 20:03:25 +08:00
cb295ec106 Translated Japanese README (#2664)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 19:27:48 +08:00
4f5210352c added back oc9 (#2663)
### What problem does this PR solve?

added back oc9

### Type of change

- [x] Refactoring
2024-09-29 18:32:48 +08:00
f98ec9034f Fix docker file bugs (#2662)
### What problem does this PR solve?

Fix docker file bugs

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-29 18:24:24 +08:00
4b8ecba32b Updated CN readme (#2661)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 17:27:15 +08:00
892166ec24 document preparation for release (#2660)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-29 16:29:02 +08:00
a411330b09 Add build image and launch from source in README (#2658)
### What problem does this PR solve?

Move the build image and launch from source back to README.

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-09-29 16:28:07 +08:00
5a8ae4a289 fix: Filter the timePeriod options based on the userType parameter #1739 (#2657)
### What problem does this PR solve?

fix: Filter the timePeriod options based on the userType parameter #1739

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-29 15:40:20 +08:00
3f16377412 change url of local llm deploy guide (#2659)
### What problem does this PR solve?


### Type of change

- [x] Other (please describe): I made a mistake with a URL and now I need to change it
2024-09-29 15:39:05 +08:00
d3b37b0b70 fix: Fixed the issue where the error message was not displayed when uploading a file that was too large #1782 (#2654)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-29 15:22:05 +08:00
01db00b587 Updated component description (#2651)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-29 14:53:52 +08:00
25f07e8e29 fix template error (#2653)
### What problem does this PR solve?

#2478

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 14:47:06 +08:00
daa65199e8 trivial (#2650)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 13:20:02 +08:00
fc867cb959 rename get_txt to get_text (#2649)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 12:47:09 +08:00
fb694143ee refine general purpose chat bot (#2648)
### What problem does this PR solve?

#2478

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 12:20:44 +08:00
a8280d9fd2 Add doc for dev image (#2641)
Add doc for dev image

### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-09-29 10:51:46 +08:00
aea553c3a8 Add get_txt function (#2639)
### What problem does this PR solve?

Add get_txt function to reduce duplicate code

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-29 10:29:56 +08:00
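
A shared reader like the one this PR describes typically accepts either a path or an in-memory blob. The following is a minimal sketch under that assumption; the name `get_text`, the signature, and the fallback encodings are illustrative, not RAGFlow's actual API.

```python
# Hypothetical sketch of a shared text-reading helper; not RAGFlow's actual API.
def get_text(fnm: str, binary: bytes | None = None) -> str:
    """Read text either from an in-memory blob or from a file on disk."""
    if binary is not None:
        # Try a few common encodings rather than assuming UTF-8;
        # latin1 never fails, so the loop always yields something.
        for encoding in ("utf-8", "gb2312", "latin1"):
            try:
                return binary.decode(encoding)
            except UnicodeDecodeError:
                continue
    with open(fnm, "r", encoding="utf-8", errors="ignore") as f:
        return f.read()
```
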
57237634f1 Refactoring large integers to improve readability (#2636)
### What problem does this PR solve?

Refactoring large integers

### Type of change

- [x] Refactoring
2024-09-29 10:17:42 +08:00
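
Python's underscore separators (PEP 515) are the usual tool for this kind of readability refactor; both spellings below denote the same value. The constant itself is illustrative.

```python
# Underscore separators make large literals auditable at a glance.
CHUNK_LIMIT = 128000000             # hard to count the zeros
CHUNK_LIMIT_READABLE = 128_000_000  # same value, readable

assert CHUNK_LIMIT == CHUNK_LIMIT_READABLE
print(f"{CHUNK_LIMIT_READABLE:,}")  # 128,000,000
```
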
604061c4a5 Fix mutable default argument (#2635)
### What problem does this PR solve?

The default value of Python function parameters cannot be mutable.
Modifying this parameter inside the function will permanently change the
default value

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 10:16:00 +08:00
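
The pitfall this commit describes, in a few lines of Python: the default object is created once, at function definition time, and shared across every call.

```python
def append_bad(item, bucket=[]):      # one shared list for every call
    bucket.append(item)
    return bucket

print(append_bad(1))  # [1]
print(append_bad(2))  # [1, 2]  <- the "default" has accumulated state

def append_good(item, bucket=None):   # the standard fix
    if bucket is None:
        bucket = []                   # fresh list per call
    bucket.append(item)
    return bucket

print(append_good(1))  # [1]
print(append_good(2))  # [2]
```
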
c103dd2746 change chunk.status to chunk.available (#2646)
### What problem does this PR solve?

#1102

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-29 10:13:07 +08:00
e82e8fde13 Fix logger error (#2643)
### What problem does this PR solve?

Fix logger error: AttributeError: 'Logger' object has no attribute
'basicConfig'

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 09:49:59 +08:00
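
The traceback quoted above comes from calling a module-level function on a `Logger` instance; `basicConfig` belongs to the `logging` module itself:

```python
import logging

logger = logging.getLogger("task_executor")
# logger.basicConfig(...)  # AttributeError: 'Logger' object has no attribute 'basicConfig'

# Correct: configure handlers via the module-level function,
# then log through the named logger.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s %(message)s")
logger.info("configured")
```
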
a44ed9626a handle nits in task_executor (#2637)
### What problem does this PR solve?

- fix typo
- fix string format
- format import

### Type of change

- [x] Refactoring
2024-09-29 09:49:45 +08:00
ff9c11c970 fix: Fixed the issue where the conversation list was not displayed on the conversation page #2625 (#2638)
### What problem does this PR solve?

fix: Fixed the issue where the conversation list was not displayed on
the conversation page #2625

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-29 09:43:23 +08:00
674d342761 refine get_input (#2630)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 20:20:36 +08:00
a246e5644b feat: Add top_n to DeepLForm #1739 (#2629)
### What problem does this PR solve?

feat: Add top_n to DeepLForm #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-27 19:22:33 +08:00
96f56a3c43 add huggingface model (#2624)
### What problem does this PR solve?

#2469

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-27 19:15:38 +08:00
1b2f66fc11 Added doc on dev-slim (#2627)
Added doc on dev-slim

### Type of change

- [x] Documentation Update
- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-27 19:15:27 +08:00
ca2de896c7 fix: Fixed an issue where the first message would be displayed when sending the second message #2625 (#2626)
### What problem does this PR solve?

fix: Fixed an issue where the first message would be displayed when
sending the second message #2625

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-27 18:20:19 +08:00
34abcf7704 style: fix typo and format code (#2618)
### What problem does this PR solve?

- Fix typo
- Remove unused import
- Format code

### Type of change

- [x] Other (please describe): typo and format
2024-09-27 13:17:25 +08:00
4c0b79c4f6 remove repeat func (#2619)
### What problem does this PR solve?

- remove repeat func

### Type of change

- [x] Other (please describe): remove repeat func
2024-09-27 13:15:26 +08:00
e11a74eed5 Update Yichat base_url (#2620)
### What problem does this PR solve?

Update Yichat base_url

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-27 12:55:58 +08:00
297b2d0ac9 force eml file to be parsed by EMAIL (#2615)
### What problem does this PR solve?
#2613
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 10:29:30 +08:00
b16f16e19e Bug fix - email processing could be run now from API (#2613)
### What problem does this PR solve?

If an .eml file is uploaded, the General method is always chosen for
email processing, even if parsing_method is defined in the request. This
change solves that issue.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Adam Kobus <adam.kobus@gitlab.eleader.biz>
2024-09-27 10:24:46 +08:00
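
Taken together with #2615 above, the intended selection order appears to be: an explicit `parsing_method` in the request wins, otherwise `.eml` files go to the EMAIL parser, otherwise General. A hedged sketch of that logic follows; the function name and string values are illustrative, not RAGFlow's code.

```python
def pick_parser(filename: str, parsing_method: str | None = None) -> str:
    if parsing_method:                    # honor the caller's explicit choice
        return parsing_method
    if filename.lower().endswith(".eml"):
        return "email"                    # force .eml to the EMAIL parser
    return "general"                      # default method otherwise

assert pick_parser("report.pdf") == "general"
assert pick_parser("inbox.eml") == "email"
assert pick_parser("inbox.eml", parsing_method="general") == "general"
```
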
35598c04ce fix generate bug (#2614)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-27 10:22:13 +08:00
09d1f7f333 Support agent for aibot (#2609)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 18:06:56 +08:00
240450ea52 Remove WenCai imageurl and update investment_advisor prompt (#2606)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-09-26 17:27:53 +08:00
1de3032650 fix AzureOpenAI issue (#2608)
### What problem does this PR solve?

#1599

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-26 17:25:16 +08:00
41548bf019 Added two developer guide and removed from README ' builder docker image' and 'launch service from source' (#2590)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-09-26 16:15:57 +08:00
b68d349bd6 Fix: rerank_model and pdf_parser bugs | Update: session API (#2601)
### What problem does this PR solve?

Fix: rerank_model and pdf_parser bugs | Update: session API
#2575
#2559
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-26 16:05:25 +08:00
f6bfe4d970 feat: Add component Concentrator #1739 (#2604)
### What problem does this PR solve?

feat: Add component Concentrator #1739
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 14:47:28 +08:00
cb2ae708f3 Fix soft link. Close #2587 (#2602)
Fix soft link

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-26 14:33:38 +08:00
d7f26786d4 Update dsl_examples and Fix component concentrator (#2597)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-26 11:58:50 +08:00
b05fab14f7 Add component Concentrator (#2586)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-25 18:44:31 +08:00
e6da0c7c7b deprecate init a super user (#2589)
### What problem does this PR solve?
#2295

### Type of change

- [x] Refactoring
2024-09-25 18:30:27 +08:00
ef89e3ebea remove onnx copy command from dockerfile (#2584)
### What problem does this PR solve?

#2564

### Type of change

- [x] Refactoring
2024-09-25 17:14:59 +08:00
8ede1c7bf5 trivial (#2582)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-25 16:26:44 +08:00
6363d58e98 Add template investment_advisor (#2580)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-25 16:22:06 +08:00
c262011393 revert error in Dockerfile (#2581)
### What problem does this PR solve?
#2295

### Type of change


- [x] Refactoring
2024-09-25 16:10:29 +08:00
dda1367ab2 make it lighten (#2577)
### What problem does this PR solve?

#2295

### Type of change

- [x] Refactoring
2024-09-25 13:38:40 +08:00
e4c9cf2264 feat: If the model is not set, a pop-up window will remind the user #2295 (#2574)
### What problem does this PR solve?

feat: If the model is not set, a pop-up window will remind the user
#2295

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-25 11:16:00 +08:00
e3b3ec3f79 multi-arch-build (#2571)
### What problem does this PR solve?

Build multi-arch docker image `infiniflow/ragflow:poetry` on
`linux/amd64` and `linux/arm64`.

### Type of change

- [x] Refactoring
2024-09-25 10:37:20 +08:00
08d5637770 Fix tokenizer bug (#2573)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-25 10:30:49 +08:00
7bb28ca2bd add lighten control (#2567)
### What problem does this PR solve?

#2295

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 19:22:01 +08:00
9251fb39af feat: Delete Model Provider #2376 (#2565)
### What problem does this PR solve?

feat: Delete Model Provider #2376

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-09-24 19:10:06 +08:00
91dbce30bd feat: Add component Jin10 #1739 (#2563)
### What problem does this PR solve?

feat: Add component Jin10  #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 18:54:09 +08:00
949a999478 feat: Add component YahooFinance #1739 (#2561)
### What problem does this PR solve?

feat: Add component YahooFinance #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-24 16:46:41 +08:00
d40041cc82 refine multi-turn chat in agent (#2560)
### What problem does this PR solve?

#2484

### Type of change

- [x] Performance Improvement
- [ ] Other (please describe):
2024-09-24 16:20:19 +08:00
832c90ac3e fix: Web code build fails on ARM machines #2554 (#2557)
### What problem does this PR solve?

fix: Web code build fails on ARM machines #2554

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-24 15:27:26 +08:00
7b3099b1a1 add an API of delete llm supplier (#2556)
### What problem does this PR solve?

#1853

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-24 15:24:15 +08:00
4681638974 Streaming output is supported, dialogue share is not (#2555)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-09-24 15:14:44 +08:00
ecf441c830 refine using rerank model (#2553)
### What problem does this PR solve?

#2552

### Type of change

- [x] Performance Improvement
2024-09-24 12:38:18 +08:00
d9c2a128a5 SparkTTS (#2535)
### What problem does this PR solve?

SparkTTS

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-09-24 12:15:12 +08:00
38e3475714 refine markdown prompt (#2551)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-24 12:04:16 +08:00
90644246d6 Updated README on debugging web and python (#2544)
### What problem does this PR solve?

Updated README on debugging web and python

### Type of change

- [x] Documentation Update
2024-09-24 11:46:03 +08:00
100c60017f fix component rewrite bug (#2549)
### What problem does this PR solve?

#2545

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-24 11:31:42 +08:00
51dd6d1f90 fix: Initial language is English, but the UI is in Chinese #2514 (#2541)
### What problem does this PR solve?

fix: Initial language is English, but the UI is in Chinese #2514

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 16:28:27 +08:00
521ea6afcb feat: Refine retrieval of multi-turn conversation #2362 (#2539)
### What problem does this PR solve?

feat: Refine retrieval of multi-turn conversation #2362

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-23 15:26:11 +08:00
dd019e7ba1 feat: Configurable for excel, html table or row based text #2516 (#2538)
### What problem does this PR solve?

feat: Configurable for excel, html table or row based text #2516

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 14:58:51 +08:00
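
The two output styles being made configurable can be pictured with a small pandas sketch; RAGFlow's actual parser interface differs, so treat the shapes, not the names, as the point.

```python
import pandas as pd

df = pd.DataFrame({"name": ["alice", "bob"], "age": [30, 25]})

# Style 1: a single HTML-table chunk, preserving layout for the LLM.
html_chunk = df.to_html(index=False)

# Style 2: one text line per row, yielding smaller retrieval-friendly chunks.
row_chunks = [
    "; ".join(f"{col}: {val}" for col, val in row.items())
    for _, row in df.iterrows()
]
print(row_chunks)  # ['name: alice; age: 30', 'name: bob; age: 25']
```
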
db1be22a2f fix: Merge models of the same category #2479 (#2536)
### What problem does this PR solve?

fix: Merge models of the same category #2479

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-23 14:07:00 +08:00
139268de6f Reverted replacing npm with yarn (#2531)
Reverted replacing npm with yarn

### Type of change

- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 11:08:31 +08:00
f6ceb43e36 fix: Add model by ollama in model provider page, user can't choose the model in chat window. #2479 (#2529)
### What problem does this PR solve?

fix: Add model by ollama in model provider page, user can't choose the
model in chat window. #2479

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-23 10:53:18 +08:00
d8a43416f5 Rework Dockerfile.scratch (#2525)
### What problem does this PR solve?

Rework Dockerfile.scratch
- Multiple stage Dockerfile
- Removed conda
- Replaced pip with poetry
- Added missing dependencies and fixed package version conflicts
- Added deepdoc models

### Type of change

- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-23 10:00:44 +08:00
4a6a2a0f1b refine xinference (#2521)
### What problem does this PR solve?

#1588

### Type of change

- [x] Refactoring
2024-09-20 18:37:01 +08:00
9bbef8216d refine retrieval of multi-turn conversation (#2520)
### What problem does this PR solve?

#2362 #2484

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Performance Improvement
2024-09-20 17:25:55 +08:00
78856703c4 make excel parsing configurable (#2517)
### What problem does this PR solve?

#2516

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-09-20 15:33:38 +08:00
099c37ba95 rm key set in xinference (#2511)
### What problem does this PR solve?

#2492

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-20 10:55:52 +08:00
a44f1f735d fix self deployed llm lost (#2510)
### What problem does this PR solve?

#2509 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-20 10:41:25 +08:00
ae6f68e625 Update README_zh.md (#2491)
The core image swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:dev is 19.1 GB in size,
not 9 GB.

### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-20 10:22:47 +08:00
5dd19c6a57 remove setting-system/index.tsx error import (#2507)
### What problem does this PR solve?

Regarding the code merge #ca0c22f3184b9229e7e86de699842bb3b0e502c2, the
ragflow/web code will not run. This commit solves this problem.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-20 10:21:48 +08:00
5968f148bc refactor add LLM (#2508)
### What problem does this PR solve?

#2487

### Type of change

- [x] Refactoring
2024-09-20 10:20:35 +08:00
4f962d6bff BugFix: Fixed api_key generation error for VolcEngine (#2502)
BugFix: Fixed api_key generation error for VolcEngine with Python's
f-string syntax

### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 海贼宅 <stu_xyx@163.com>
2024-09-20 10:03:43 +08:00
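
The failure mode behind this fix is general: interpolating raw values into a JSON template breaks as soon as a value needs escaping, while `json.dumps` always emits valid JSON. The key name below is illustrative, not the actual VolcEngine payload.

```python
import json

api_key = 'abc"123'  # a value with an embedded quote

fragile = f'{{"api_key": "{api_key}"}}'    # template string, no escaping
try:
    json.loads(fragile)
except json.JSONDecodeError:
    print("f-string template produced invalid JSON")

robust = json.dumps({"api_key": api_key})  # build the dict, serialize safely
print(json.loads(robust)["api_key"])       # abc"123
```
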
ddb8be9219 Web: Display the icon of the currently used storage. (#2504)
https://github.com/infiniflow/ragflow/issues/2503


### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Before:

<img width="611" alt="image"
src="https://github.com/user-attachments/assets/02a3a1ee-7bfb-4fe0-9b15-11ced69cc8a3">

After:

<img width="796" alt="image"
src="https://github.com/user-attachments/assets/371136af-8d16-47aa-909b-26609d3ad60e">

<img width="557" alt="image"
src="https://github.com/user-attachments/assets/9268362f-2b6a-4ea1-9fe7-659f7292e5e1">
2024-09-20 09:49:16 +08:00
422c229e52 Storage: Rename all the variables about get file to storage from minio. (#2497)
https://github.com/infiniflow/ragflow/issues/2496

### What problem does this PR solve?

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-19 19:19:27 +08:00
b5d1d2fec4 refine TTS (#2500)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-19 19:15:16 +08:00
d545633a6c OpenAITTS (#2493)
### What problem does this PR solve?

OpenAITTS

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-19 16:55:18 +08:00
af0b4b0828 fix(Add model api): Add VolcEngine to create api_key format error (#2490)
### What problem does this PR solve?


Add VolcEngine to create api_key format error
When constructing the json string, there was an extra "," at the end,
which caused a formatting error. This commit fixed the problem.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-19 15:10:49 +08:00
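
The "extra ','" error is easy to reproduce: Python literals tolerate a trailing comma, JSON does not, so a hand-written template slips through review but fails at parse time. Key names below are illustrative.

```python
import json

ok_in_python = {"ak": "key", "sk": "secret",}   # trailing comma is legal Python
bad_json = '{"ak": "key", "sk": "secret",}'     # the same comma breaks JSON

try:
    json.loads(bad_json)
except json.JSONDecodeError as e:
    print("rejected:", e)

print(json.loads('{"ak": "key", "sk": "secret"}'))  # valid once removed
```
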
6c6380d27a update document sdk (#2485)
### Type of change
#2485
- [x] Performance Improvement
2024-09-19 12:52:35 +08:00
2324b88579 fix parser for pptx of which files are from filemanager (#2482)
### What problem does this PR solve?

#2467

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 19:13:37 +08:00
2b0dc01a88 rename some attributes in document sdk (#2481)
### What problem does this PR solve?

#1102

### Type of change

- [x] Performance Improvement

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-18 18:46:37 +08:00
01acc3fd5a fix duplicated llm name between different suppliers (#2477)
### What problem does this PR solve?

#2465

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 16:09:22 +08:00
2484e26cb5 fix superuser password not base64 encoded (#2475)
### What problem does this PR solve?

Fixes the _superuser_ `admin@ragflow.io` not being accessible due to how
entered passwords are used. Unless this is expected behavior?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 14:30:45 +08:00
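
A hedged reading of this fix: if the login path expects passwords to arrive base64-encoded before comparison, seeding the superuser with the raw string leaves `admin@ragflow.io` permanently locked out. The helper name below is an assumption for illustration.

```python
import base64

def encode_for_login(raw_password: str) -> str:
    # Assumption: the login endpoint receives the password base64-encoded.
    return base64.b64encode(raw_password.encode("utf-8")).decode("utf-8")

raw = "admin"
seeded_wrong = raw                 # what the old seeding code stored
expected = encode_for_login(raw)   # what the login path compares against

print(seeded_wrong == expected)    # False -> login always fails
print(expected)                    # YWRtaW4=
```
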
7195742ca5 rename create_timestamp_flt to create_timestamp_float (#2473)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-09-18 12:50:05 +08:00
62cb5f1bac update document sdk (#2445)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-18 11:08:19 +08:00
e7dd487779 fix ppt file from filemanager error (#2470)
### What problem does this PR solve?

#2467

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-18 09:22:14 +08:00
e41268efc6 Add Multi-Language Descriptions for 'Switch' Component and Update Message Assistant Placeholder (#2450)
### What problem does this PR solve?

_This PR addresses the need to describe the "Switch" component across
different languages and corrects a misleading description for a
placeholder message not exclusively tied to a specific assistant type.
By providing clearer and more accurate descriptions, this PR aims to
improve user understanding and usability of the Switch component and the
"Message Resume Assistant..." placeholder in a multilingual context._

### Explanation of Changes

1. **Added Descriptions for "Switch" Component**: 
- Descriptions were added for the "Switch" component in three different
locales:
- **English (EN)**: Provides a concise description of what the "Switch"
component does, focusing on its ability to evaluate conditions and
direct the flow of execution.
- **Simplified Chinese (ZH)**: Translated the English description into
Simplified Chinese to cater to users who prefer this locale.
- **Traditional Chinese (ZH-Traditional)**: Added a Traditional Chinese
version of the description to support users in regions that use
Traditional Chinese.
   
2. **Corrected "Message Resume Assistant..." to "Message the
Assistant..."**:
- Updated the description from "Message Resume Assistant..." to "Message
the Assistant..." in the English locale. This correction makes the
description more generic and accurate, reflecting the placeholder's
broader functionality, which is not limited to Resume Assistants. It now
clearly communicates that the placeholder can be used with various types
of assistants, not just those related to resumes.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-15 16:16:10 +08:00
161 changed files with 14413 additions and 2046 deletions


@ -1,16 +1,10 @@
---
sidebar_position: 0
slug: /contribution_guidelines
---
# Contribution guidelines
Thanks for wanting to contribute to RAGFlow. This document offers guidelines and major considerations for submitting your contributions.
This document offers guidelines and major considerations for submitting your contributions to RAGFlow.
- To report a bug, file a [GitHub issue](https://github.com/infiniflow/ragflow/issues/new/choose) with us.
- For further questions, you can explore existing discussions or initiate a new one in [Discussions](https://github.com/orgs/infiniflow/discussions).
## What you can contribute
The list below mentions some contributions you can make, but it is not a complete list.
@ -42,6 +36,7 @@ The list below mentions some contributions you can make, but it is not a complet
- Consider splitting a large PR into multiple smaller, standalone PRs to keep a traceable development history.
- Ensure that your PR addresses just one issue, or keep any unrelated changes small.
- Add test cases when contributing new features. They demonstrate that your code functions correctly and protect against potential issues from future changes.
### Describing your PR
- Ensure that your PR title is concise and clear, providing all the required information.
@ -49,4 +44,5 @@ The list below mentions some contributions you can make, but it is not a complet
- Include sufficient design details for *breaking changes* or *API changes* in your description.
### Reviewing & merging a PR
- Ensure that your PR passes all Continuous Integration (CI) tests before merging it.
Ensure that your PR passes all Continuous Integration (CI) tests before merging it.


@ -1,23 +1,108 @@
FROM infiniflow/ragflow-base:v2.0
USER root
# base stage
FROM ubuntu:24.04 AS base
USER root
ENV LIGHTEN=0
WORKDIR /ragflow
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# if you are located in China, you can use the tsinghua mirror to speed up apt
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list.d/ubuntu.sources
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://install.python-poetry.org | python3 -
RUN curl -o libssl1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5_amd64.deb && dpkg -i libssl1.deb && rm -f libssl1.deb
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
# builder stage
FROM base AS builder
USER root
WORKDIR /ragflow
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y nodejs npm cargo && \
rm -rf /var/lib/apt/lists/*
COPY web web
RUN cd web && npm i --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" -eq 0 ]; then \
/root/.local/bin/poetry install --sync --no-cache --no-root --with=full; \
else \
/root/.local/bin/poetry install --sync --no-cache --no-root; \
fi
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
COPY web web
COPY api api
COPY conf conf
COPY deepdoc deepdoc
COPY rag rag
COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml poetry.toml poetry.lock ./
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar -cf - \
/huggingface.co/BAAI/bge-large-zh-v1.5 \
/huggingface.co/BAAI/bge-reranker-v2-m3 \
/huggingface.co/maidalun1020/bce-embedding-base_v1 \
/huggingface.co/maidalun1020/bce-reranker-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:/root/.local/bin:${PATH}"
# Download nltk data
RUN python3 -m nltk.downloader wordnet punkt punkt_tab
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
ADD docker/.env ./
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]


@ -1,43 +0,0 @@
FROM python:3.11
USER root
WORKDIR /ragflow
COPY requirements_arm.txt /ragflow/requirements.txt
RUN pip install nltk --default-timeout=10000
RUN pip install -i https://mirrors.aliyun.com/pypi/simple/ --default-timeout=1000 -r requirements.txt &&\
python -c "import nltk;nltk.download('punkt');nltk.download('wordnet')"
RUN apt-get update && \
apt-get install -y curl gnupg && \
rm -rf /var/lib/apt/lists/*
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash - && \
apt-get install -y --fix-missing nodejs nginx ffmpeg libsm6 libxext6 libgl1
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN pip install graspologic
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
ADD docker/.env ./
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]


@ -1,27 +0,0 @@
FROM infiniflow/ragflow-base:v2.0
USER root
WORKDIR /ragflow
## for cuda > 12.0
RUN pip uninstall -y onnxruntime-gpu
RUN pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
ADD ./web ./web
RUN cd ./web && npm i --force && npm run build
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./agent ./agent
ADD ./graphrag ./graphrag
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]


@ -1,56 +0,0 @@
FROM ubuntu:22.04
USER root
WORKDIR /ragflow
RUN apt-get update && apt-get install -y wget curl build-essential libopenmpi-dev
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
bash ~/miniconda.sh -b -p /root/miniconda3 && \
rm ~/miniconda.sh && ln -s /root/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /root/miniconda3/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
ENV PATH /root/miniconda3/bin:$PATH
RUN conda create -y --name py11 python=3.11
ENV CONDA_DEFAULT_ENV py11
ENV CONDA_PREFIX /root/miniconda3/envs/py11
ENV PATH $CONDA_PREFIX/bin:$PATH
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN apt-get install -y nginx
ADD ./web ./web
ADD ./api ./api
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
ADD ./requirements.txt ./requirements.txt
ADD ./agent ./agent
ADD ./graphrag ./graphrag
RUN apt install openmpi-bin openmpi-common libopenmpi-dev
ENV LD_LIBRARY_PATH /usr/lib/x86_64-linux-gnu/openmpi/lib:$LD_LIBRARY_PATH
RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
RUN cd ./web && npm i --force && npm run build
RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ -r ./requirements.txt
RUN apt-get update && \
apt-get install -y libglib2.0-0 libgl1-mesa-glx && \
rm -rf /var/lib/apt/lists/*
RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ ollama
RUN conda run -n py11 python -m nltk.downloader punkt
RUN conda run -n py11 python -m nltk.downloader wordnet
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

Dockerfile.slim (new file)

@ -0,0 +1,101 @@
# base stage
FROM ubuntu:24.04 AS base
USER root
ENV LIGHTEN=1
WORKDIR /ragflow
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# if you are located in China, you can use the tsinghua mirror to speed up apt
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list.d/ubuntu.sources
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://install.python-poetry.org | python3 -
RUN curl -o libssl1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5_amd64.deb && dpkg -i libssl1.deb && rm -f libssl1.deb
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
# builder stage
FROM base AS builder
USER root
WORKDIR /ragflow
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y nodejs npm cargo && \
rm -rf /var/lib/apt/lists/*
COPY web web
RUN cd web && npm i --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" -eq 0 ]; then \
/root/.local/bin/poetry install --sync --no-cache --no-root --with=full; \
else \
/root/.local/bin/poetry install --sync --no-cache --no-root; \
fi
# production stage
FROM base AS production
USER root
WORKDIR /ragflow
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
COPY web web
COPY api api
COPY conf conf
COPY deepdoc deepdoc
COPY rag rag
COPY agent agent
COPY graphrag graphrag
COPY pyproject.toml poetry.toml poetry.lock ./
# Copy models downloaded via download_deps.py
RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
tar --exclude='.*' -cf - \
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:/root/.local/bin:${PATH}"
# Download nltk data
RUN python3 -m nltk.downloader wordnet punkt punkt_tab
ENV PYTHONPATH=/ragflow/
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

README.md

@ -18,7 +18,7 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.11.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.11.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -42,8 +42,9 @@
- 🔎 [System Architecture](#-system-architecture)
- 🎬 [Get Started](#-get-started)
- 🔧 [Configurations](#-configurations)
- 🛠️ [Build from source](#-build-from-source)
- 🛠️ [Launch service from source](#-launch-service-from-source)
- 🪛 [Build the docker image without embedding models](#-build-the-docker-image-without-embedding-models)
- 🪚 [Build the docker image including embedding models](#-build-the-docker-image-including-embedding-models)
- 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
- 📚 [Documentation](#-documentation)
- 📜 [Roadmap](#-roadmap)
- 🏄 [Community](#-community)
@ -66,6 +67,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-round conversations.
- 2024-09-13 Adds search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultant agent template.
- 2024-08-22 Support text to SQL statements through RAG.
@ -150,16 +152,13 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
```
3. Build the pre-built Docker images and start up the server:
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.11.0`, before running the following commands.
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_IMAGE` in **docker/.env** to the intended version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, before running the following commands.
```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```
> The core image is about 9 GB in size and may take a while to load.
4. Check the server status after having the server up and running:
@ -171,12 +170,12 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
_The following output confirms a successful launch of the system:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -191,7 +190,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
_The show is now on!_
_The show is on!_
## 🔧 Configurations
@ -207,118 +206,89 @@ You must ensure that changes to the [.env](./docker/.env) file are in line with
To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80` to `<YOUR_SERVING_PORT>:80`.
> Updates to all system configurations require a system reboot to take effect:
>
Updates to the above configurations require a reboot of all containers to take effect:
> ```bash
> $ docker-compose up -d
> $ docker-compose -f docker/docker-compose.yml up -d
> ```
## 🛠️ Build from source
## 🪛 Build the Docker image without embedding models
To build the Docker images from source:
This image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🛠️ Launch service from source
## 🪚 Build the Docker image including embedding models
To launch the service from source:
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
1. Clone the repository:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
## 🔨 Launch service from source for development
1. Install Poetry, or skip this step if it is already installed:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
curl -sSL https://install.python-poetry.org | python3 -
```
2. Create a virtual environment, ensuring that Anaconda or Miniconda is installed:
2. Clone the source code and install Python dependencies:
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
# If your CUDA version is higher than 12.0, run the following additional commands:
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
docker compose -f docker/docker-compose-base.yml up -d
```
3. Copy the entry script and configure environment variables:
```bash
# Get the Python path:
$ which python
# Get the ragflow project path:
$ pwd
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update mysql port to `5455` and es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
# Adjust configurations according to your actual situation (the following two export commands are newly added):
# - Assign the result of `which python` to `PY`.
# - Assign the result of `pwd` to `PYTHONPATH`.
# - Comment out `LD_LIBRARY_PATH`, if it is configured.
# - Optional: Add Hugging Face mirror.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
export HF_ENDPOINT=https://hf-mirror.com
```
4. Launch the third-party services (MinIO, Elasticsearch, Redis, and MySQL):
5. Launch backend service:
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
5. Check the configuration files, ensuring that:
- The settings in **docker/.env** match those in **conf/service_conf.yaml**.
- The IP addresses and ports for related services in **service_conf.yaml** match the local machine IP and ports exposed by the container.
6. Launch the RAGFlow backend service:
6. Install frontend dependencies:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
cd web
npm install --force
```
7. Configure frontend to update `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch frontend service:
```bash
npm run dev
```
7. Launch the frontend service:
_The following output confirms a successful launch of the system:_
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
```
8. Deploy the frontend service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
```
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
@ -339,4 +309,4 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.


@ -18,8 +18,8 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.11.0-brightgreen"
alt="docker pull infiniflow/ragflow:v0.11.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen"
alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -48,6 +48,7 @@
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-round dialogue.
- 2024-09-13 Adds search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultation template to the agent.
- 2024-08-22 Supports text to SQL statements through RAG.
@ -139,7 +140,7 @@
$ docker compose up -d
```
> Running the above commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.11.0`, before running the above commands.
> Running the above commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_IMAGE` in **docker/.env** to the intended version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, before running the above commands.
> The core image is about 9 GB in size and may take a while to load.
@ -152,12 +153,11 @@
_The following output confirms a successful launch of the system:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -194,78 +194,83 @@
> $ docker-compose up -d
> ```
## 🛠️ Build from source
## 🪛 Build the Docker image from source (without embedding models)
To build the Docker image from source:
This Docker image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:v0.11.0 .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🛠️ Launch service from source
## 🪚 Build the Docker image from source (including embedding models)
To launch the service from source, follow the steps below:
1. Clone the repository:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
```
2. Create a virtual environment (make sure Anaconda or Miniconda is installed):
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
```
If your CUDA version is 12.0 or higher, run the following additional commands:
```bash
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
3. Copy the entry script and configure environment variables:
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
```
Get the Python path and the ragflow project path with the following commands:
```bash
$ which python
$ pwd
```
Set the output of `which python` as the value of `PY`, and the output of `pwd` as the value of `PYTHONPATH`.
If `LD_LIBRARY_PATH` is already set, you can comment it out.
This Docker image is about 9 GB in size. As it includes the embedding models, it relies on external LLM services only.
```bash
# Adjust the configuration to your actual situation; the two export lines below are newly added:
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
# Optional: add a Hugging Face mirror:
export HF_ENDPOINT=https://hf-mirror.com
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
4. Launch the base services:
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
```
## 🔨 Launch service from source for development
5. Check the configuration files:
Ensure that the settings in **docker/.env** match those in **conf/service_conf.yaml**. The IP addresses and ports of the related services in **service_conf.yaml** must be changed to your local machine's IP address and the ports exposed by the containers.
1. Install Poetry, or skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
6. Launch the service:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
```
2. Clone the source code and install the Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Launch the backend service:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. Install the frontend dependencies:
```bash
cd web
npm install --force
```
7. Configure the frontend by updating `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch the frontend service:
```bash
npm run dev
```
_The following screen shows that the system has launched successfully:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
@ -286,4 +291,4 @@ $ bash ./entrypoint.sh
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.


@ -18,7 +18,7 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.11.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.11.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -49,6 +49,8 @@
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-turn conversation.
- 2024-09-13 Adds a knowledge base Q&A search mode.
- 2024-09-09 Adds a medical consultation template to the Agent.
@ -138,7 +140,7 @@
3. Start the server with the pre-built Docker images:
> Running the following command automatically downloads the *dev* version of the RAGFlow Docker image. To download and run a specific release, update `RAGFLOW_VERSION` in **docker/.env** to the desired version, for example `RAGFLOW_VERSION=v0.11.0`, then run the command below.
> Running the following command automatically downloads the *dev* version of the RAGFlow Docker image. To download and run a specific release, update `RAGFLOW_IMAGE` in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, then run the command below.
```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
@ -157,12 +159,11 @@
_The following output confirms that the system started successfully:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -198,110 +199,83 @@
> $ docker-compose up -d
> ```
## 🛠️ Build from source
## 🪛 Build a Docker image from source (without embedding models)
To build a Docker image from source:
This Docker image is about 1 GB in size and depends on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🪚 Build a Docker image from source (with embedding models)
## 🛠️ Launch the service from source
This Docker image is about 9 GB in size; since it already includes the embedding models, it only depends on an external LLM service.
To launch the service from source:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
1. Clone the repository:
## 🔨 Launch the service from source
1. Install Poetry. Skip this step if it is already installed:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
curl -sSL https://install.python-poetry.org | python3 -
```
2. Create a virtual environment, making sure that Anaconda or Miniconda is installed:
2. Clone the source code and install the Python dependencies:
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
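Optionally, confirm that Poetry created the in-project virtual environment (a standard Poetry command; the printed path should end in `.venv`):
```bash
~/.local/bin/poetry env info --path
```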
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) with Docker Compose:
```bash
# If your CUDA version is higher than 12.0, run the following additional commands:
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
docker compose -f docker/docker-compose-base.yml up -d
```
3. Copy the entry script and set the environment variables:
```bash
# Get the Python path:
$ which python
# Get the RAGFlow project path:
$ pwd
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
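To verify that the names now resolve locally, a standard check is:
```bash
getent hosts es01 mysql minio redis   # each entry should map to 127.0.0.1
```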
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
```
```bash
# Adjust the settings to your actual environment (the two export commands below are newly added):
# - Assign the output of `which python` to `PY`.
# - Assign the output of `pwd` to `PYTHONPATH`.
# - Comment out `LD_LIBRARY_PATH` if it is already configured.
# - Optional: add a Hugging Face mirror.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
export HF_ENDPOINT=https://hf-mirror.com
```
4. Launch the other services (MinIO, Elasticsearch, Redis, and MySQL):
5. Launch the backend service:
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
5. Check the configuration files and confirm that:
- The settings in **docker/.env** match those in **conf/service_conf.yaml**.
- The IP addresses and ports of the related services in **service_conf.yaml** match your local machine's IP address and the ports exposed by the containers.
6. Launch the RAGFlow backend service:
6. Install the frontend dependencies:
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
cd web
npm install --force
```
7. Update `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch the frontend service:
```bash
npm run dev
```
7. Launch the frontend service:
_The following interface indicates that the system started successfully:_
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
```
8. Deploy the frontend service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
```
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
@ -322,4 +296,4 @@ $ docker compose up -d
## 🙌 Contributing
RAGFlow thrives on open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to get involved, please review the [guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow thrives on open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to get involved, please review the [guidelines](./CONTRIBUTING.md) first.

View File

@ -18,7 +18,7 @@
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.11.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.11.0"></a>
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
@ -47,9 +47,10 @@
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-turn conversation.
- 2024-09-13 Adds a knowledge base Q&A search mode.
- 2024-09-09 Adds a medical consultation template to the Agent.
- 2024-08-22 Supports text-to-SQL conversion with RAG.
- 2024-08-02 Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind maps.
- 2024-07-23 Supports audio file parsing.
- 2024-07-08 Supports Agentic RAG: workflows based on [Graph](./agent/README.md).
@ -134,12 +135,12 @@
```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose -f docker-compose-CN.yml up -d
$ docker compose -f docker-compose.yml up -d
```
> Note that running the command above automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific release, find the `RAGFLOW_VERSION` variable in docker/.env, change it to the desired version (for example `RAGFLOW_VERSION=v0.11.0`), and then rerun the command.
> Note that running the command above automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific release, find the `RAGFLOW_IMAGE` variable in docker/.env, change it to the desired version (for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`), and then rerun the command.
> The core image is about 9 GB and may take a while to pull. Please be patient.
> The core image download is about 9 GB and may take a while to pull. Please be patient.
4. After the server starts successfully, confirm its status again:
@ -150,12 +151,11 @@
_The following output indicates that the server started successfully:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -178,122 +178,100 @@
- [.env](./docker/.env): stores basic system environment variables such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- [service_conf.yaml](./docker/service_conf.yaml): configures the backend services.
- [docker-compose-CN.yml](./docker/docker-compose-CN.yml): the system relies on this file to start up.
- [docker-compose.yml](./docker/docker-compose.yml): the system relies on this file to start up.
Make sure that the variable settings in the [.env](./docker/.env) file are consistent with the configuration in the [service_conf.yaml](./docker/service_conf.yaml) file!
If you cannot access the image site (hub.docker.com) or the model site (huggingface.co), update `RAGFLOW_IMAGE` and `HF_ENDPOINT` as instructed by the comments in [.env](./docker/.env).
> The [./docker/README](./docker/README.md) file provides detailed information on environment settings and service configuration. **Be sure** to keep the environment variable values listed in [./docker/README](./docker/README.md) consistent with the system configuration in [service_conf.yaml](./docker/service_conf.yaml).
To change the default HTTP serving port (80), update `80:80` to `<YOUR_SERVING_PORT>:80` in [docker-compose-CN.yml](./docker/docker-compose-CN.yml).
To change the default HTTP serving port (80), update `80:80` to `<YOUR_SERVING_PORT>:80` in [docker-compose.yml](./docker/docker-compose.yml).
> All system configuration changes take effect only after a system restart:
>
> ```bash
> $ docker compose -f docker-compose-CN.yml up -d
> $ docker compose -f docker-compose.yml up -d
> ```
## 🛠️ Build and install a Docker image from source
## 🪛 Build a Docker image from source (without embedding models)
To build a Docker image from source:
This Docker image is about 1 GB in size and depends on external LLM and embedding services.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:v0.11.0 .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🛠️ Launch the service from source
## 🪚 Build a Docker image from source (with embedding models)
To launch the service from source, follow the steps below:
1. Clone the repository:
This Docker image is about 9 GB in size. Since it already includes the embedding models, it only depends on an external LLM service.
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
2. Create a virtual environment (make sure Anaconda or Miniconda is installed):
## 🔨 Launch the service from source
```bash
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
```
If your CUDA version is higher than 12.0, run the following additional commands:
```bash
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
1. Install Poetry. Skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
3. Copy the entry script and set the environment variables:
2. Download the source code and install the Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
```bash
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
```
Get the Python path and the ragflow project path with the following commands:
```bash
$ which python
$ pwd
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) with Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
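Before moving on, it can help to confirm that the dependent containers are up (a routine check, not part of the official steps):
```bash
docker compose -f docker/docker-compose-base.yml ps
```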
Set the output of `which python` above as the value of `PY` and the output of `pwd` as the value of `PYTHONPATH`.
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as configured in **docker/.env**; a quick spot-check is sketched below.
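Illustrative only — confirm that both ports appear in the updated file:
```bash
grep -nE "5455|1200" docker/service_conf.yaml
```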
If `LD_LIBRARY_PATH` is already configured in your environment, you can comment it out.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to a mirror site:
```bash
# Adjust these settings to your actual environment; the two export lines are newly added
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
# Optional: add a Hugging Face mirror
export HF_ENDPOINT=https://hf-mirror.com
```
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
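Optionally, confirm that the mirror is reachable before downloading models (a plain HTTP probe, not required by the steps above):
```bash
curl -sI "$HF_ENDPOINT" | head -n 1   # expect an HTTP 2xx/3xx status line
```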
4. Start the base services:
5. Launch the backend service:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
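Once the backend is up, it should answer on port 9380, matching the startup output shown earlier; a minimal smoke test:
```bash
curl -sI http://127.0.0.1:9380 | head -n 1
```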
```bash
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
```
6. Install the frontend dependencies:
```bash
cd web
npm install --force
```
7. Configure the frontend by updating `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`.
8. Launch the frontend service:
```bash
npm run dev
```
5. Check the configuration files
Make sure that the settings in **docker/.env** match those in **conf/service_conf.yaml**. The IP addresses and ports of the related services in **service_conf.yaml** must be changed to your local machine's IP address and the ports mapped out of the containers.
_The following interface indicates that the system started successfully:_
6. Start the service:
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
```bash
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
```
7. Launch the WebUI service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
```
8. Deploy the WebUI service:
```bash
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
```
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
@ -313,7 +291,7 @@ $ systemctl start nginx
## 🙌 Contributing
RAGFlow can only thrive through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to get involved, please review our [contribution guidelines](./docs/references/CONTRIBUTING.md) first.
RAGFlow can only thrive through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to get involved, please review our [contribution guidelines](./CONTRIBUTING.md) first.
## 🤝 Business Contact

View File

@ -9,6 +9,7 @@ from .relevant import Relevant, RelevantParam
from .message import Message, MessageParam
from .rewrite import RewriteQuestion, RewriteQuestionParam
from .keyword import KeywordExtract, KeywordExtractParam
from .concentrator import Concentrator, ConcentratorParam
from .baidu import Baidu, BaiduParam
from .duckduckgo import DuckDuckGo, DuckDuckGoParam
from .wikipedia import Wikipedia, WikipediaParam

View File

@ -444,7 +444,7 @@ class ComponentBase(ABC):
if DEBUG: print(self.component_name, reversed_cpnts[::-1])
for u in reversed_cpnts[::-1]:
if self.get_component_name(u) in ["switch"]: continue
if self.get_component_name(u) in ["switch", "concentrator"]: continue
if self.component_name.lower() == "generate" and self.get_component_name(u) == "retrieval":
o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
if o is not None:
@ -472,7 +472,7 @@ class ComponentBase(ABC):
if "content" in df:
df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
return df
return pd.DataFrame()
return pd.DataFrame(self._canvas.get_history(3)[-1:])
def get_stream_input(self):
reversed_cpnts = []

View File

@ -0,0 +1,36 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class ConcentratorParam(ComponentParamBase):
"""
Define the Concentrator component parameters.
"""
def __init__(self):
super().__init__()
def check(self):
return True
class Concentrator(ComponentBase, ABC):
component_name = "Concentrator"
def _run(self, history, **kwargs):
return Concentrator.be_output("")

View File

@ -122,13 +122,13 @@ class Generate(ComponentBase):
if "empty_response" in retrieval_res.columns and not "".join(retrieval_res["content"]):
res = {"content": "\n- ".join(retrieval_res["empty_response"]) if "\n- ".join(
retrieval_res["empty_response"]) else "Nothing found in knowledgebase!", "reference": []}
return Generate.be_output(res)
return pd.DataFrame([res])
ans = chat_mdl.chat(prompt, self._canvas.get_history(self._param.message_history_window_size),
self._param.gen_conf())
if self._param.cite and "content_ltks" in retrieval_res.columns and "vector" in retrieval_res.columns:
df = self.set_cite(retrieval_res, ans)
return pd.DataFrame(df)
res = self.set_cite(retrieval_res, ans)
return pd.DataFrame([res])
return Generate.be_output(ans)

View File

@ -65,6 +65,8 @@ class RewriteQuestion(Generate, ABC):
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": q}],
self._param.gen_conf())
self._canvas.history.pop()
self._canvas.history.append(("user", ans))
print(ans, ":::::::::::::::::::::::::::::::::")
return RewriteQuestion.be_output(ans)

View File

@ -49,34 +49,15 @@ class Switch(ComponentBase, ABC):
def _run(self, history, **kwargs):
for cond in self._param.conditions:
if len(cond["items"]) == 1:
out = self._canvas.get_component(cond["items"][0]["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if self.process_operator(cpn_input, cond["items"][0]["operator"], cond["items"][0]["value"]):
return Switch.be_output(cond["to"])
continue
if cond["logical_operator"] == "and":
res = True
for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if not self.process_operator(cpn_input, item["operator"], item["value"]):
res = False
break
if res:
return Switch.be_output(cond["to"])
continue
res = False
res = []
for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
if self.process_operator(cpn_input, item["operator"], item["value"]):
res = True
break
if res:
res.append(self.process_operator(cpn_input, item["operator"], item["value"]))
if cond["logical_operator"] != "and" and any(res):
return Switch.be_output(cond["to"])
if all(res):
return Switch.be_output(cond["to"])
return Switch.be_output(self._param.end_cpn_id)

View File

@ -64,6 +64,12 @@ class WenCai(ComponentBase, ABC):
continue
wencai_res.append({"content": pd.DataFrame.from_dict(item[1], orient='index').to_markdown()})
continue
if isinstance(item[1], pd.DataFrame):
if "image_url" in item[1].columns:
continue
wencai_res.append({"content": item[1].to_markdown()})
continue
wencai_res.append({"content": item[0] + "\n" + str(item[1])})
except Exception as e:
return WenCai.be_output("**ERROR**: " + str(e))

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -26,20 +26,48 @@
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?"
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
"to": "message:0"
},
"others": {
"description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?"
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "message:1"
}
}
}
},
"downstream": [],
"downstream": ["message:0","message:1"],
"upstream": ["answer:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
},
"message:1": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}

View File

@ -0,0 +1,113 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["categorize:0"],
"upstream": ["begin"]
},
"categorize:0": {
"obj": {
"component_name": "Categorize",
"params": {
"llm_id": "deepseek-chat",
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
"to": "concentrator:0"
},
"others": {
"description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "concentrator:1"
}
}
}
},
"downstream": ["concentrator:0","concentrator:1"],
"upstream": ["answer:0"]
},
"concentrator:0": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:0"],
"upstream": ["categorize:0"]
},
"concentrator:1": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:1_0","message:1_1","message:1_2"],
"upstream": ["categorize:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 0_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:0"]
},
"message:1_0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_1": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_1!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_2": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_2!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}

View File

@ -454,6 +454,8 @@ def upload():
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
doc_result = DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
@ -478,7 +480,7 @@ def upload():
e, doc = DocumentService.get_by_id(doc["id"])
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name)
except Exception as e:
return server_error_response(e)
@ -640,7 +642,7 @@ def document_rm():
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
@ -679,8 +681,79 @@ def completion_faq():
msg = []
msg.append({"role": "user", "content": req["word"]})
if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
def fillin_conv(ans):
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"], "id": message_id}
ans["id"] = message_id
try:
if conv.source == "agent":
conv.message.append(msg[-1])
e, cvs = UserCanvasService.get_by_id(conv.dialog_id)
if not e:
return server_error_response("canvas not found.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
final_ans = {"reference": [], "doc_aggs": []}
canvas = Canvas(cvs.dsl, objs[0].tenant_id)
canvas.messages.append(msg[-1])
canvas.add_user_input(msg[-1]["content"])
answer = canvas.run(stream=False)
assert answer is not None, "Nothing. Is it over?"
data_type_picture = {
"type": 3,
"url": "base64 content"
}
data = [
{
"type": 1,
"content": ""
}
]
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
ans = {"answer": final_ans["content"], "reference": final_ans.get("reference", [])}
data[0]["content"] += re.sub(r'##\d\$\$', '', ans["answer"])
fillin_conv(ans)
API4ConversationService.append_message(conv.id, conv.to_dict())
chunk_idxs = [int(match[2]) for match in re.findall(r'##\d\$\$', ans["answer"])]
for chunk_idx in chunk_idxs[:1]:
if ans["reference"]["chunks"][chunk_idx]["img_id"]:
try:
bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
response = STORAGE_IMPL.get(bkt, nm)
data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
data.append(data_type_picture)
break
except Exception as e:
return server_error_response(e)
response = {"code": 200, "msg": "success", "data": data}
return response
# ******************For dialog******************
conv.message.append(msg[-1])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
@ -689,17 +762,9 @@ def completion_faq():
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": ""})
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
nonlocal conv
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"]}
data_type_picture = {
"type": 3,
"url": "base64 content"

View File

@ -18,6 +18,8 @@ from functools import partial
from flask import request, Response
from flask_login import login_required, current_user
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
from api.db.services.dialog_service import full_question
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
@ -108,6 +110,9 @@ def run():
canvas = Canvas(cvs.dsl, current_user.id)
if "message" in req:
canvas.messages.append({"role": "user", "content": req["message"], "id": message_id})
if len([m for m in canvas.messages if m["role"] == "user"]) > 1:
ten = TenantService.get_by_user_id(current_user.id)[0]
req["message"] = full_question(ten["tenant_id"], ten["llm_id"], canvas.messages)
canvas.add_user_input(req["message"])
answer = canvas.run(stream=stream)
print(canvas)

View File

@ -27,7 +27,7 @@ from rag.utils.es_conn import ELASTICSEARCH
from rag.utils import rmSpace
from api.db import LLMType, ParserType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService
from api.db.services.llm_service import LLMBundle
from api.db.services.user_service import UserTenantService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db.services.document_service import DocumentService
@ -141,8 +141,7 @@ def set():
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_id)
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
@ -235,8 +234,7 @@ def create():
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING.value, embd_id)
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v = 0.1 * v[0] + 0.9 * v[1]
@ -281,16 +279,14 @@ def retrieval_test():
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
embd_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
rerank_mdl = LLMBundle(kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
if req.get("keyword", False):
chat_mdl = TenantLLMService.model_instance(kb.tenant_id, LLMType.CHAT)
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
retr = retrievaler if kb.parser_id != ParserType.KG else kg_retrievaler

View File

@ -37,7 +37,9 @@ from graphrag.mind_map_extractor import MindMapExtractor
def set_conversation():
req = request.json
conv_id = req.get("conversation_id")
if conv_id:
is_new = req.get("is_new")
del req["is_new"]
if not is_new:
del req["conversation_id"]
try:
if not ConversationService.update_by_id(conv_id, req):
@ -56,7 +58,7 @@ def set_conversation():
if not e:
return get_data_error_result(retmsg="Dialog not found")
conv = {
"id": get_uuid(),
"id": conv_id,
"dialog_id": req["dialog_id"],
"name": req.get("name", "New conversation"),
"message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}]
@ -228,8 +230,9 @@ def tts():
def stream_audio():
try:
for chunk in tts_mdl.tts(text):
yield chunk
for txt in re.split(r"[,。/《》?;:!\n\r:;]+", text):
for chunk in tts_mdl.tts(txt):
yield chunk
except Exception as e:
yield ("data:" + json.dumps({"retcode": 500, "retmsg": str(e),
"data": {"answer": "**ERROR**: " + str(e)}},

View File

@ -381,6 +381,8 @@ def upload_documents(dataset_id):
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], dataset.tenant_id)
@ -420,7 +422,7 @@ def delete_document(document_id, dataset_id): # string
f" reason!", code=RetCode.AUTHENTICATION_ERROR)
# get the doc's id and location
real_dataset_id, location = File2DocumentService.get_minio_address(doc_id=document_id)
real_dataset_id, location = File2DocumentService.get_storage_address(doc_id=document_id)
if real_dataset_id != dataset_id:
return construct_json_result(message=f"The document {document_id} is not in the dataset: {dataset_id}, "
@ -595,7 +597,7 @@ def download_document(dataset_id, document_id):
code=RetCode.ARGUMENT_ERROR)
# The process of downloading
doc_id, doc_location = File2DocumentService.get_minio_address(doc_id=document_id) # minio address
doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
@ -736,7 +738,7 @@ def parsing_document_internal(id):
doc_attributes = doc_attributes.to_dict()
doc_id = doc_attributes["id"]
bucket, doc_name = File2DocumentService.get_minio_address(doc_id=doc_id)
bucket, doc_name = File2DocumentService.get_storage_address(doc_id=doc_id)
binary = STORAGE_IMPL.get(bucket, doc_name)
parser_name = doc_attributes["parser_id"]
if binary:

View File

@ -139,6 +139,8 @@ def web_crawl():
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
except Exception as e:
@ -297,7 +299,7 @@ def rm():
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
@ -342,7 +344,7 @@ def run():
e, doc = DocumentService.get_by_id(id)
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name)
return get_json_result(data=True)
@ -393,7 +395,7 @@ def get(doc_id):
if not e:
return get_data_error_result(retmsg="Document not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
ext = re.search(r"\.([^.]+)$", doc.name)

View File

@ -77,7 +77,7 @@ def convert():
doc = DocumentService.insert({
"id": get_uuid(),
"kb_id": kb.id,
"parser_id": kb.parser_id,
"parser_id": FileService.get_parser(file.type, file.name, kb.parser_id),
"parser_config": kb.parser_config,
"created_by": current_user.id,
"type": file.type,

View File

@ -332,7 +332,7 @@ def get(file_id):
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
b, n = File2DocumentService.get_minio_address(file_id=file_id)
b, n = File2DocumentService.get_storage_address(file_id=file_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
ext = re.search(r"\.([^.]+)$", file.name)
if ext:

View File

@ -13,9 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from flask import request
from flask_login import login_required, current_user
from api.db.services.llm_service import LLMFactoriesService, TenantLLMService, LLMService
from api.settings import LIGHTEN
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM
@ -93,24 +96,27 @@ def set_api_key():
if msg:
return get_data_error_result(retmsg=msg)
llm = {
llm_config = {
"api_key": req["api_key"],
"api_base": req.get("base_url", "")
}
for n in ["model_type", "llm_name"]:
if n in req:
llm[n] = req[n]
llm_config[n] = req[n]
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory], llm):
for llm in LLMService.query(fid=factory):
for llm in LLMService.query(fid=factory):
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id,
TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm.llm_name],
llm_config):
TenantLLMService.save(
tenant_id=current_user.id,
llm_factory=factory,
llm_name=llm.llm_name,
model_type=llm.model_type,
api_key=req["api_key"],
api_base=req.get("base_url", "")
api_key=llm_config["api_key"],
api_base=llm_config["api_base"]
)
return get_json_result(data=True)
@ -123,56 +129,63 @@ def add_llm():
req = request.json
factory = req["llm_factory"]
def apikey_json(keys):
nonlocal req
return json.dumps({k: req.get(k, "") for k in keys})
if factory == "VolcEngine":
# For VolcEngine, due to its special authentication method
# Assemble ark_api_key endpoint_id into api_key
llm_name = req["llm_name"]
api_key = '{' + f'"ark_api_key": "{req.get("ark_api_key", "")}", ' \
f'"ep_id": "{req.get("endpoint_id", "")}", ' + '}'
api_key = apikey_json(["ark_api_key", "endpoint_id"])
elif factory == "Tencent Hunyuan":
api_key = '{' + f'"hunyuan_sid": "{req.get("hunyuan_sid", "")}", ' \
f'"hunyuan_sk": "{req.get("hunyuan_sk", "")}"' + '}'
req["api_key"] = api_key
req["api_key"] = apikey_json(["hunyuan_sid", "hunyuan_sk"])
return set_api_key()
elif factory == "Tencent Cloud":
api_key = '{' + f'"tencent_cloud_sid": "{req.get("tencent_cloud_sid", "")}", ' \
f'"tencent_cloud_sk": "{req.get("tencent_cloud_sk", "")}"' + '}'
req["api_key"] = api_key
req["api_key"] = apikey_json(["tencent_cloud_sid", "tencent_cloud_sk"])
elif factory == "Bedrock":
# For Bedrock, due to its special authentication method
# Assemble bedrock_ak, bedrock_sk, bedrock_region
llm_name = req["llm_name"]
api_key = '{' + f'"bedrock_ak": "{req.get("bedrock_ak", "")}", ' \
f'"bedrock_sk": "{req.get("bedrock_sk", "")}", ' \
f'"bedrock_region": "{req.get("bedrock_region", "")}", ' + '}'
api_key = apikey_json(["bedrock_ak", "bedrock_sk", "bedrock_region"])
elif factory == "LocalAI":
llm_name = req["llm_name"]+"___LocalAI"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "HuggingFace":
llm_name = req["llm_name"]+"___HuggingFace"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "OpenAI-API-Compatible":
llm_name = req["llm_name"]+"___OpenAI-API"
api_key = req.get("api_key","xxxxxxxxxxxxxxx")
elif factory =="XunFei Spark":
llm_name = req["llm_name"]
api_key = req.get("spark_api_password","xxxxxxxxxxxxxxx")
if req["model_type"] == "chat":
api_key = req.get("spark_api_password", "xxxxxxxxxxxxxxx")
elif req["model_type"] == "tts":
api_key = apikey_json(["spark_app_id", "spark_api_secret","spark_api_key"])
elif factory == "BaiduYiyan":
llm_name = req["llm_name"]
api_key = '{' + f'"yiyan_ak": "{req.get("yiyan_ak", "")}", ' \
f'"yiyan_sk": "{req.get("yiyan_sk", "")}"' + '}'
api_key = apikey_json(["yiyan_ak", "yiyan_sk"])
elif factory == "Fish Audio":
llm_name = req["llm_name"]
api_key = '{' + f'"fish_audio_ak": "{req.get("fish_audio_ak", "")}", ' \
f'"fish_audio_refid": "{req.get("fish_audio_refid", "59cb5986671546eaa6ca8ae6f29f6d22")}"' + '}'
api_key = apikey_json(["fish_audio_ak", "fish_audio_refid"])
elif factory == "Google Cloud":
llm_name = req["llm_name"]
api_key = (
"{" + f'"google_project_id": "{req.get("google_project_id", "")}", '
f'"google_region": "{req.get("google_region", "")}", '
f'"google_service_account_key": "{req.get("google_service_account_key", "")}"'
+ "}"
)
api_key = apikey_json(["google_project_id", "google_region", "google_service_account_key"])
else:
llm_name = req["llm_name"]
api_key = req.get("api_key","xxxxxxxxxxxxxxx")
api_key = req.get("api_key", "xxxxxxxxxxxxxxx")
llm = {
"tenant_id": current_user.id,
@ -276,6 +289,16 @@ def delete_llm():
return get_json_result(data=True)
@manager.route('/delete_factory', methods=['POST'])
@login_required
@validate_request("llm_factory")
def delete_factory():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
return get_json_result(data=True)
@manager.route('/my_llms', methods=['GET'])
@login_required
def my_llms():
@ -300,15 +323,17 @@ def my_llms():
@manager.route('/list', methods=['GET'])
@login_required
def list_app():
self_deploied = ["Youdao","FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio"]
weighted = ["Youdao","FastEmbed", "BAAI"] if LIGHTEN else []
model_type = request.args.get("model_type")
try:
objs = TenantLLMService.query(tenant_id=current_user.id)
facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key])
llms = LLMService.get_all()
llms = [m.to_dict()
for m in llms if m.status == StatusEnum.VALID.value]
for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted]
for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in ["Youdao","FastEmbed", "BAAI"]
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deploied
llm_set = set([m["llm_name"] for m in llms])
for o in objs:

View File

@ -84,15 +84,29 @@ def upload(dataset_id, tenant_id):
@token_required
def docinfos(tenant_id):
req = request.args
if "id" not in req and "name" not in req:
return get_data_error_result(
retmsg="Id or name should be provided")
doc_id=None
if "id" in req:
doc_id = req["id"]
e, doc = DocumentService.get_by_id(doc_id)
return get_json_result(data=doc.to_json())
if "name" in req:
doc_name = req["name"]
doc_id = DocumentService.get_doc_id_by_doc_name(doc_name)
e, doc = DocumentService.get_by_id(doc_id)
return get_json_result(data=doc.to_json())
e, doc = DocumentService.get_by_id(doc_id)
#rename key's name
key_mapping = {
"chunk_num": "chunk_count",
"kb_id": "knowledgebase_id",
"token_num": "token_count",
"parser_id":"parser_method",
}
renamed_doc = {}
for key, value in doc.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_doc[new_key] = value
return get_json_result(data=renamed_doc)
@manager.route('/save', methods=['POST'])
@ -112,10 +126,14 @@ def save_doc(tenant_id):
if not e:
return get_data_error_result(retmsg="Document not found!")
#other value can't be changed
if "chunk_num" in req:
if req["chunk_num"] != doc.chunk_num:
if "chunk_count" in req:
if req["chunk_count"] != doc.chunk_num:
return get_data_error_result(
retmsg="Can't change chunk_count.")
if "token_count" in req:
if req["token_count"] != doc.token_num:
return get_data_error_result(
retmsg="Can't change token_count.")
if "progress" in req:
if req['progress'] != doc.progress:
return get_data_error_result(
@ -145,9 +163,9 @@ def save_doc(tenant_id):
FileService.update_by_id(file.id, {"name": req["name"]})
except Exception as e:
return server_error_response(e)
if "parser_id" in req:
if "parser_method" in req:
try:
if doc.parser_id.lower() == req["parser_id"].lower():
if doc.parser_id.lower() == req["parser_method"].lower():
if "parser_config" in req:
if req["parser_config"] == doc.parser_config:
return get_json_result(data=True)
@ -159,7 +177,7 @@ def save_doc(tenant_id):
return get_data_error_result(retmsg="Not supported yet!")
e = DocumentService.update_by_id(doc.id,
{"parser_id": req["parser_id"], "progress": 0, "progress_msg": "",
{"parser_id": req["parser_method"], "progress": 0, "progress_msg": "",
"run": TaskStatus.UNSTART.value})
if not e:
return get_data_error_result(retmsg="Document not found!")
@ -170,7 +188,7 @@ def save_doc(tenant_id):
doc.process_duation * -1)
if not e:
return get_data_error_result(retmsg="Document not found!")
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
tenant_id = DocumentService.get_tenant_id(req["id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
ELASTICSEARCH.deleteByQuery(
@ -259,7 +277,7 @@ def rename():
@manager.route("/<document_id>", methods=["GET"])
@token_required
def download_document(dataset_id, document_id):
def download_document(document_id,tenant_id):
try:
# Check whether there is this document
exist, document = DocumentService.get_by_id(document_id)
@ -268,7 +286,7 @@ def download_document(dataset_id, document_id):
code=RetCode.ARGUMENT_ERROR)
# The process of downloading
doc_id, doc_location = File2DocumentService.get_minio_address(doc_id=document_id) # minio address
doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
@ -291,7 +309,7 @@ def download_document(dataset_id, document_id):
@manager.route('/dataset/<dataset_id>/documents', methods=['GET'])
@token_required
def list_docs(dataset_id, tenant_id):
kb_id = request.args.get("kb_id")
kb_id = request.args.get("knowledgebase_id")
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
@ -313,7 +331,22 @@ def list_docs(dataset_id, tenant_id):
try:
docs, tol = DocumentService.get_by_kb_id(
kb_id, page_number, items_per_page, orderby, desc, keywords)
return get_json_result(data={"total": tol, "docs": docs})
# rename key's name
renamed_doc_list = []
for doc in docs:
key_mapping = {
"chunk_num": "chunk_count",
"kb_id": "knowledgebase_id",
"token_num": "token_count",
"parser_id":"parser_method"
}
renamed_doc = {}
for key, value in doc.items():
new_key = key_mapping.get(key, key)
renamed_doc[new_key] = value
renamed_doc_list.append(renamed_doc)
return get_json_result(data={"total": tol, "docs": renamed_doc_list})
except Exception as e:
return server_error_response(e)
@ -322,10 +355,10 @@ def list_docs(dataset_id, tenant_id):
@token_required
def rm(tenant_id):
req = request.args
if "doc_id" not in req:
if "document_id" not in req:
return get_data_error_result(
retmsg="doc_id is required")
doc_ids = req["doc_id"]
doc_ids = req["document_id"]
if isinstance(doc_ids, str): doc_ids = [doc_ids]
root_folder = FileService.get_root_folder(tenant_id)
pf_id = root_folder["id"]
@ -340,7 +373,7 @@ def rm(tenant_id):
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
@ -386,7 +419,7 @@ def show_parsing_status(tenant_id, document_id):
def run(tenant_id):
req = request.json
try:
for id in req["doc_ids"]:
for id in req["document_ids"]:
info = {"run": str(req["run"]), "progress": 0}
if str(req["run"]) == TaskStatus.RUNNING.value:
info["progress_msg"] = ""
@ -405,7 +438,7 @@ def run(tenant_id):
e, doc = DocumentService.get_by_id(id)
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name)
return get_json_result(data=True)
@ -415,15 +448,15 @@ def run(tenant_id):
@manager.route('/chunk/list', methods=['POST'])
@token_required
@validate_request("doc_id")
@validate_request("document_id")
def list_chunk(tenant_id):
req = request.json
doc_id = req["doc_id"]
doc_id = req["document_id"]
page = int(req.get("page", 1))
size = int(req.get("size", 30))
question = req.get("keywords", "")
try:
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
tenant_id = DocumentService.get_tenant_id(req["document_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
e, doc = DocumentService.get_by_id(doc_id)
@ -436,6 +469,8 @@ def list_chunk(tenant_id):
query["available_int"] = int(req["available_int"])
sres = retrievaler.search(query, search.index_name(tenant_id), highlight=True)
res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
origin_chunks=[]
for id in sres.ids:
d = {
"chunk_id": id,
@ -455,7 +490,22 @@ def list_chunk(tenant_id):
poss.append([float(d["positions"][i]), float(d["positions"][i + 1]), float(d["positions"][i + 2]),
float(d["positions"][i + 3]), float(d["positions"][i + 4])])
d["positions"] = poss
res["chunks"].append(d)
origin_chunks.append(d)
##rename keys
for chunk in origin_chunks:
key_mapping = {
"chunk_id": "id",
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"img_id":"image_id",
}
renamed_chunk = {}
for key, value in chunk.items():
new_key = key_mapping.get(key, key)
renamed_chunk[new_key] = value
res["chunks"].append(renamed_chunk)
return get_json_result(data=res)
except Exception as e:
if str(e).find("not_found") > 0:
@ -466,14 +516,15 @@ def list_chunk(tenant_id):
@manager.route('/chunk/create', methods=['POST'])
@token_required
@validate_request("doc_id", "content_with_weight")
@validate_request("document_id", "content")
def create(tenant_id):
req = request.json
md5 = hashlib.md5()
md5.update((req["content_with_weight"] + req["doc_id"]).encode("utf-8"))
chunck_id = md5.hexdigest()
d = {"id": chunck_id, "content_ltks": rag_tokenizer.tokenize(req["content_with_weight"]),
"content_with_weight": req["content_with_weight"]}
md5.update((req["content"] + req["document_id"]).encode("utf-8"))
chunk_id = md5.hexdigest()
d = {"id": chunk_id, "content_ltks": rag_tokenizer.tokenize(req["content"]),
"content_with_weight": req["content"]}
d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
d["important_kwd"] = req.get("important_kwd", [])
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req.get("important_kwd", [])))
@ -481,44 +532,61 @@ def create(tenant_id):
d["create_timestamp_flt"] = datetime.datetime.now().timestamp()
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
e, doc = DocumentService.get_by_id(req["document_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
d["kb_id"] = [doc.kb_id]
d["docnm_kwd"] = doc.name
d["doc_id"] = doc.id
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
tenant_id = DocumentService.get_tenant_id(req["document_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_id = DocumentService.get_embd_id(req["document_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v, c = embd_mdl.encode([doc.name, req["content"]])
v = 0.1 * v[0] + 0.9 * v[1]
d["q_%d_vec" % len(v)] = v.tolist()
ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
DocumentService.increment_chunk_num(
doc.id, doc.kb_id, c, 1, 0)
return get_json_result(data={"chunk": d})
# return get_json_result(data={"chunk_id": chunck_id})
d["chunk_id"] = chunk_id
#rename keys
key_mapping = {
"chunk_id": "id",
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"kb_id":"dataset_id",
"create_timestamp_flt":"create_timestamp",
"create_time": "create_time",
"document_keyword":"document",
}
renamed_chunk = {}
for key, value in d.items():
if key in key_mapping:
new_key = key_mapping.get(key, key)
renamed_chunk[new_key] = value
return get_json_result(data={"chunk": renamed_chunk})
# return get_json_result(data={"chunk_id": chunk_id})
except Exception as e:
return server_error_response(e)
@manager.route('/chunk/rm', methods=['POST'])
@token_required
@validate_request("chunk_ids", "doc_id")
def rm_chunk():
@validate_request("chunk_ids", "document_id")
def rm_chunk(tenant_id):
req = request.json
try:
if not ELASTICSEARCH.deleteByQuery(
Q("ids", values=req["chunk_ids"]), search.index_name(current_user.id)):
Q("ids", values=req["chunk_ids"]), search.index_name(tenant_id)):
return get_data_error_result(retmsg="Index updating failure")
e, doc = DocumentService.get_by_id(req["doc_id"])
e, doc = DocumentService.get_by_id(req["document_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
deleted_chunk_ids = req["chunk_ids"]
@ -527,3 +595,126 @@ def rm_chunk():
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/chunk/set', methods=['POST'])
@token_required
@validate_request("document_id", "chunk_id", "content",
"important_keywords")
def set(tenant_id):
req = request.json
d = {
"id": req["chunk_id"],
"content_with_weight": req["content"]}
d["content_ltks"] = rag_tokenizer.tokenize(req["content"])
d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
d["important_kwd"] = req["important_keywords"]
d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_keywords"]))
if "available" in req:
d["available_int"] = req["available"]
try:
tenant_id = DocumentService.get_tenant_id(req["document_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["document_id"])
embd_mdl = TenantLLMService.model_instance(
tenant_id, LLMType.EMBEDDING.value, embd_id)
e, doc = DocumentService.get_by_id(req["document_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
if doc.parser_id == ParserType.QA:
arr = [
t for t in re.split(
r"[\n\t]",
req["content"]) if len(t) > 1]
if len(arr) != 2:
return get_data_error_result(
retmsg="Q&A must be separated by TAB/ENTER key.")
q, a = rmPrefix(arr[0]), rmPrefix(arr[1])
d = beAdoc(d, arr[0], arr[1], not any(
[rag_tokenizer.is_chinese(t) for t in q + a]))
v, c = embd_mdl.encode([doc.name, req["content"]])
v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
d["q_%d_vec" % len(v)] = v.tolist()
ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route('/retrieval_test', methods=['POST'])
@token_required
@validate_request("knowledgebase_id", "question")
def retrieval_test(tenant_id):
req = request.json
page = int(req.get("page", 1))
size = int(req.get("size", 30))
question = req["question"]
kb_id = req["knowledgebase_id"]
if isinstance(kb_id, str): kb_id = [kb_id]
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.2))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
top = int(req.get("top_k", 1024))
try:
tenants = UserTenantService.query(user_id=tenant_id)
for kid in kb_id:
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kid):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id[0])
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
embd_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
if req.get("keyword", False):
chat_mdl = TenantLLMService.model_instance(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
retr = retrievaler if kb.parser_id != ParserType.KG else kg_retrievaler
ranks = retr.retrieval(question, embd_mdl, kb.tenant_id, kb_id, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"))
for c in ranks["chunks"]:
if "vector" in c:
del c["vector"]
##rename keys
renamed_chunks=[]
for chunk in ranks["chunks"]:
key_mapping = {
"chunk_id": "id",
"content_with_weight": "content",
"doc_id": "document_id",
"important_kwd": "important_keywords",
"docnm_kwd":"document_keyword"
}
rename_chunk={}
for key, value in chunk.items():
new_key = key_mapping.get(key, key)
rename_chunk[new_key] = value
renamed_chunks.append(rename_chunk)
ranks["chunks"] = renamed_chunks
return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found! Check the chunk status please!',
retcode=RetCode.DATA_ERROR)
return server_error_response(e)

View File

@ -87,9 +87,9 @@ def completion(tenant_id):
# req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
# {"role": "user", "content": "上海有吗?"}
# ]}
if "id" not in req:
return get_data_error_result(retmsg="id is required")
conv = ConversationService.query(id=req["id"])
if "session_id" not in req:
return get_data_error_result(retmsg="session_id is required")
conv = ConversationService.query(id=req["session_id"])
if not conv:
return get_data_error_result(retmsg="Session does not exist")
conv = conv[0]
@ -108,7 +108,7 @@ def completion(tenant_id):
msg.append(m)
message_id = msg[-1].get("id")
e, dia = DialogService.get_by_id(conv.dialog_id)
del req["id"]
del req["session_id"]
if not conv.reference:
conv.reference = []
@ -168,6 +168,9 @@ def get(tenant_id):
return get_data_error_result(retmsg="Session does not exist")
if not DialogService.query(id=conv[0].dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
if "assistant_id" in req:
if req["assistant_id"] != conv[0].dialog_id:
return get_data_error_result(retmsg="The session doesn't belong to the assistant")
conv = conv[0].to_dict()
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
@ -207,7 +210,7 @@ def list(tenant_id):
assistant_id = request.args["assistant_id"]
if not DialogService.query(tenant_id=tenant_id, id=assistant_id, status=StatusEnum.VALID.value):
return get_json_result(
data=False, retmsg=f'Only owner of the assistant is authorized for this operation.',
data=False, retmsg=f"You don't own the assistant.",
retcode=RetCode.OPERATING_ERROR)
convs = ConversationService.query(
dialog_id=assistant_id,

View File

@ -18,11 +18,12 @@ import json
from flask_login import login_required
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.settings import DATABASE_TYPE
from api.utils.api_utils import get_json_result
from api.versions import get_rag_version
from rag.settings import SVR_QUEUE_NAME
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.storage_factory import STORAGE_IMPL, STORAGE_IMPL_TYPE
from timeit import default_timer as timer
from rag.utils.redis_conn import REDIS_CONN
@ -48,16 +49,16 @@ def status():
st = timer()
try:
STORAGE_IMPL.health()
res["minio"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
res["storage"] = {"storage": STORAGE_IMPL_TYPE.lower(), "status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
except Exception as e:
res["minio"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["storage"] = {"storage": STORAGE_IMPL_TYPE.lower(), "status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
st = timer()
try:
KnowledgebaseService.get_by_id("x")
res["mysql"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
res["database"] = {"database": DATABASE_TYPE.lower(), "status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
except Exception as e:
res["mysql"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["database"] = {"database": DATABASE_TYPE.lower(), "status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
st = timer()
try:
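The hunk above is truncated by the diff view. For illustration only, after the renames the health payload is keyed by `storage` and `database` rather than `minio` and `mysql`; a sketch of the resulting shape, with made-up values:

```python
# Illustrative shape of the status payload after the renames above
# (backend names and timings are examples, not real output).
res = {
    "storage": {"storage": "minio", "status": "green", "elapsed": "12.3"},
    "database": {"database": "mysql", "status": "green", "elapsed": "4.5"},
}
```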

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import base64
import json
import os
import time
@ -31,10 +32,15 @@ from api.settings import CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSE
from api.utils.file_utils import get_project_base_directory
def encode_to_base64(input_string):
base64_encoded = base64.b64encode(input_string.encode('utf-8'))
return base64_encoded.decode('utf-8')
def init_superuser():
user_info = {
"id": uuid.uuid1().hex,
"password": "admin",
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
@ -172,8 +178,8 @@ def init_web_data():
start_time = time.time()
init_llm_factory()
if not UserService.get_all().count():
init_superuser()
#if not UserService.get_all().count():
# init_superuser()
add_graph_templates()
print("init web data success:{}".format(time.time() - start_time))

View File

@ -78,6 +78,7 @@ def message_fit_in(msg, max_length=4000):
def llm_id2llm_type(llm_id):
llm_id = llm_id.split("@")[0]
fnm = os.path.join(get_project_base_directory(), "conf")
llm_factories = json.load(open(os.path.join(fnm, "llm_factories.json"), "r"))
for llm_factory in llm_factories["factory_llm_infos"]:
@ -89,9 +90,15 @@ def llm_id2llm_type(llm_id):
def chat(dialog, messages, stream=True, **kwargs):
assert messages[-1]["role"] == "user", "The last content of this conversation is not from user."
st = timer()
llm = LLMService.query(llm_name=dialog.llm_id)
tmp = dialog.llm_id.split("@")
fid = None
llm_id = tmp[0]
if len(tmp)>1: fid = tmp[1]
llm = LLMService.query(llm_name=llm_id) if not fid else LLMService.query(llm_name=llm_id, fid=fid)
if not llm:
llm = TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=dialog.llm_id)
llm = TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=llm_id) if not fid else \
TenantLLMService.query(tenant_id=dialog.tenant_id, llm_name=llm_id, llm_factory=fid)
if not llm:
raise LookupError("LLM(%s) not found" % dialog.llm_id)
max_tokens = 8192
@ -142,6 +149,11 @@ def chat(dialog, messages, stream=True, **kwargs):
prompt_config["system"] = prompt_config["system"].replace(
"{%s}" % p["key"], " ")
if len(questions) > 1 and prompt_config.get("refine_multiturn"):
questions = [full_question(dialog.tenant_id, dialog.llm_id, messages)]
else:
questions = questions[-1:]
rerank_mdl = None
if dialog.rerank_id:
rerank_mdl = LLMBundle(dialog.tenant_id, LLMType.RERANK, dialog.rerank_id)
@ -168,7 +180,7 @@ def chat(dialog, messages, stream=True, **kwargs):
yield {"answer": empty_res, "reference": kbinfos, "audio_binary": tts(tts_mdl, empty_res)}
return {"answer": prompt_config["empty_response"], "reference": kbinfos}
kwargs["knowledge"] = "\n------\n".join(knowledges)
kwargs["knowledge"] = "\n\n------\n\n".join(knowledges)
gen_conf = dialog.llm_setting
msg = [{"role": "system", "content": prompt_config["system"].format(**kwargs)}]
@ -209,7 +221,7 @@ def chat(dialog, messages, stream=True, **kwargs):
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
done_tm = timer()
prompt += "\n### Elapsed\n - Retrieval: %.1f ms\n - LLM: %.1f ms"%((retrieval_tm-st)*1000, (done_tm-st)*1000)
prompt += "\n\n### Elapsed\n - Retrieval: %.1f ms\n - LLM: %.1f ms"%((retrieval_tm-st)*1000, (done_tm-st)*1000)
return {"answer": answer, "reference": refs, "prompt": prompt}
if stream:
@ -403,6 +415,58 @@ def rewrite(tenant_id, llm_id, question):
return ans
def full_question(tenant_id, llm_id, messages):
if llm_id2llm_type(llm_id) == "image2text":
chat_mdl = LLMBundle(tenant_id, LLMType.IMAGE2TEXT, llm_id)
else:
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, llm_id)
conv = []
for m in messages:
if m["role"] not in ["user", "assistant"]: continue
conv.append("{}: {}".format(m["role"].upper(), m["content"]))
conv = "\n".join(conv)
prompt = f"""
Role: A helpful assistant
Task: Generate a full user question that would follow the conversation.
Requirements & Restrictions:
- Text generated MUST be in the same language as the original user's question.
- If the user's latest question is already complete, don't do anything, just return the original question.
- DON'T generate anything except a refined question.
######################
-Examples-
######################
# Example 1
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
###############
Output: What's the name of Donald Trump's mother?
------------
# Example 2
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
ASSISTANT: Mary Trump.
USER: What's her full name?
###############
Output: What's the full name of Donald Trump's mother Mary Trump?
######################
# Real Data
## Conversation
{conv}
###############
"""
ans = chat_mdl.chat(prompt, [{"role": "user", "content": "Output: "}], {"temperature": 0.2})
return ans if ans.find("**ERROR**") < 0 else messages[-1]["content"]
def tts(tts_mdl, text):
if not tts_mdl or not text: return
bin = b""

View File

@ -69,7 +69,7 @@ class File2DocumentService(CommonService):
@classmethod
@DB.connection_context()
def get_minio_address(cls, doc_id=None, file_id=None):
def get_storage_address(cls, doc_id=None, file_id=None):
if doc_id:
f2d = cls.get_by_document_id(doc_id)
else:

View File

@ -357,7 +357,7 @@ class FileService(CommonService):
doc = {
"id": get_uuid(),
"kb_id": kb.id,
"parser_id": kb.parser_id,
"parser_id": self.get_parser(filetype, filename, kb.parser_id),
"parser_config": kb.parser_config,
"created_by": user_id,
"type": filetype,
@ -366,14 +366,6 @@ class FileService(CommonService):
"size": len(blob),
"thumbnail": thumbnail(filename, blob)
}
if doc["type"] == FileType.VISUAL:
doc["parser_id"] = ParserType.PICTURE.value
if doc["type"] == FileType.AURAL:
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
@ -382,3 +374,15 @@ class FileService(CommonService):
err.append(file.filename + ": " + str(e))
return err, files
@staticmethod
def get_parser(doc_type, filename, default):
if doc_type == FileType.VISUAL:
return ParserType.PICTURE.value
if doc_type == FileType.AURAL:
return ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
return ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
return ParserType.EMAIL.value
return default
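A standalone sketch of the dispatch above, with string stand-ins for the `FileType`/`ParserType` enum values (the real code returns `ParserType.*.value`):

```python
import re

def get_parser(doc_type, filename, default):
    # Mirrors the helper above: media types and a few extensions get
    # dedicated parsers; everything else keeps the KB's default.
    if doc_type == "visual":
        return "picture"
    if doc_type == "aural":
        return "audio"
    if re.search(r"\.(ppt|pptx|pages)$", filename):
        return "presentation"
    if re.search(r"\.(eml)$", filename):
        return "email"
    return default

assert get_parser("visual", "chart.png", "naive") == "picture"
assert get_parser("other", "slides.pptx", "naive") == "presentation"
assert get_parser("other", "report.pdf", "naive") == "naive"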

View File

@ -17,7 +17,7 @@ from api.db.services.user_service import TenantService
from api.settings import database_logger
from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel, Seq2txtModel, TTSModel
from api.db import LLMType
from api.db.db_models import DB, UserTenant
from api.db.db_models import DB
from api.db.db_models import LLMFactories, LLM, TenantLLM
from api.db.services.common_service import CommonService
@ -36,7 +36,11 @@ class TenantLLMService(CommonService):
@classmethod
@DB.connection_context()
def get_api_key(cls, tenant_id, model_name):
objs = cls.query(tenant_id=tenant_id, llm_name=model_name)
arr = model_name.split("@")
if len(arr) < 2:
objs = cls.query(tenant_id=tenant_id, llm_name=model_name)
else:
objs = cls.query(tenant_id=tenant_id, llm_name=arr[0], llm_factory=arr[1])
if not objs:
return
return objs[0]
@ -81,14 +85,17 @@ class TenantLLMService(CommonService):
assert False, "LLM type error"
model_config = cls.get_api_key(tenant_id, mdlnm)
tmp = mdlnm.split("@")
fid = None if len(tmp) < 2 else tmp[1]
mdlnm = tmp[0]
if model_config: model_config = model_config.to_dict()
if not model_config:
if llm_type in [LLMType.EMBEDDING, LLMType.RERANK]:
llm = LLMService.query(llm_name=llm_name if llm_name else mdlnm)
llm = LLMService.query(llm_name=mdlnm) if not fid else LLMService.query(llm_name=mdlnm, fid=fid)
if llm and llm[0].fid in ["Youdao", "FastEmbed", "BAAI"]:
model_config = {"llm_factory": llm[0].fid, "api_key":"", "llm_name": llm_name if llm_name else mdlnm, "api_base": ""}
model_config = {"llm_factory": llm[0].fid, "api_key":"", "llm_name": mdlnm, "api_base": ""}
if not model_config:
if llm_name == "flag-embedding":
if mdlnm == "flag-embedding":
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "",
"llm_name": llm_name, "api_base": ""}
else:
@ -162,8 +169,8 @@ class TenantLLMService(CommonService):
num = 0
try:
for u in cls.query(tenant_id = tenant_id, llm_name=mdlnm):
num += cls.model.update(used_tokens = u.used_tokens + used_tokens)\
for u in cls.query(tenant_id=tenant_id, llm_name=mdlnm):
num += cls.model.update(used_tokens=u.used_tokens + used_tokens)\
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == mdlnm)\
.execute()
except Exception as e:
@ -245,7 +252,6 @@ class LLMBundle(object):
return
yield chunk
def chat(self, system, history, gen_conf):
txt, used_tokens = self.mdl.chat(system, history, gen_conf)
if not TenantLLMService.increase_usage(

View File

@ -133,9 +133,8 @@ class TaskService(CommonService):
cls.model.id == id).execute()
def queue_tasks(doc, bucket, name):
def queue_tasks(doc: dict, bucket: str, name: str):
def new_task():
nonlocal doc
return {
"id": get_uuid(),
"doc_id": doc["id"]
@ -149,15 +148,9 @@ def queue_tasks(doc, bucket, name):
page_size = doc["parser_config"].get("task_page_size", 12)
if doc["parser_id"] == "paper":
page_size = doc["parser_config"].get("task_page_size", 22)
if doc["parser_id"] == "one":
page_size = 1000000000
if doc["parser_id"] == "knowledge_graph":
page_size = 1000000000
if not do_layout:
page_size = 1000000000
page_ranges = doc["parser_config"].get("pages")
if not page_ranges:
page_ranges = [(1, 100000)]
if doc["parser_id"] in ["one", "knowledge_graph"] or not do_layout:
page_size = 10 ** 9
page_ranges = doc["parser_config"].get("pages") or [(1, 10 ** 5)]
for s, e in page_ranges:
s -= 1
s = max(0, s)
@ -170,8 +163,7 @@ def queue_tasks(doc, bucket, name):
elif doc["parser_id"] == "table":
file_bin = STORAGE_IMPL.get(bucket, name)
rn = RAGFlowExcelParser.row_number(
doc["name"], file_bin)
rn = RAGFlowExcelParser.row_number(doc["name"], file_bin)
for i in range(0, rn, 3000):
task = new_task()
task["from_page"] = i

View File

@ -46,13 +46,12 @@ def update_progress():
if __name__ == '__main__':
print("""
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
print(r"""
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
""", flush=True)
stat_logger.info(

View File

@ -42,6 +42,7 @@ RAG_FLOW_SERVICE_NAME = "ragflow"
SERVER_MODULE = "rag_flow_server.py"
TEMP_DIRECTORY = os.path.join(get_project_base_directory(), "temp")
RAG_FLOW_CONF_PATH = os.path.join(get_project_base_directory(), "conf")
LIGHTEN = os.environ.get('LIGHTEN')
SUBPROCESS_STD_LOG_NAME = "std.log"
@ -57,77 +58,76 @@ REQUEST_MAX_WAIT_SEC = 300
USE_REGISTRY = get_base_config("use_registry")
default_llm = {
"Tongyi-Qianwen": {
"chat_model": "qwen-plus",
"embedding_model": "text-embedding-v2",
"image2text_model": "qwen-vl-max",
"asr_model": "paraformer-realtime-8k-v1",
},
"OpenAI": {
"chat_model": "gpt-3.5-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"Azure-OpenAI": {
"chat_model": "azure-gpt-35-turbo",
"embedding_model": "azure-text-embedding-ada-002",
"image2text_model": "azure-gpt-4-vision-preview",
"asr_model": "azure-whisper-1",
},
"ZHIPU-AI": {
"chat_model": "glm-3-turbo",
"embedding_model": "embedding-2",
"image2text_model": "glm-4v",
"asr_model": "",
},
"Ollama": {
"chat_model": "qwen-14B-chat",
"embedding_model": "flag-embedding",
"image2text_model": "",
"asr_model": "",
},
"Moonshot": {
"chat_model": "moonshot-v1-8k",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"DeepSeek": {
"chat_model": "deepseek-chat",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"VolcEngine": {
"chat_model": "",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"BAAI": {
"chat_model": "",
"embedding_model": "BAAI/bge-large-zh-v1.5",
"image2text_model": "",
"asr_model": "",
"rerank_model": "BAAI/bge-reranker-v2-m3",
}
}
LLM = get_base_config("user_default_llm", {})
LLM_FACTORY = LLM.get("factory", "Tongyi-Qianwen")
LLM_BASE_URL = LLM.get("base_url")
if LLM_FACTORY not in default_llm:
print(
"\33[91m【ERROR】\33[0m:",
f"LLM factory {LLM_FACTORY} has not supported yet, switch to 'Tongyi-Qianwen/QWen' automatically, and please check the API_KEY in service_conf.yaml.")
LLM_FACTORY = "Tongyi-Qianwen"
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"]
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"]
RERANK_MDL = default_llm["BAAI"]["rerank_model"]
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"]
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"]
if not LIGHTEN:
default_llm = {
"Tongyi-Qianwen": {
"chat_model": "qwen-plus",
"embedding_model": "text-embedding-v2",
"image2text_model": "qwen-vl-max",
"asr_model": "paraformer-realtime-8k-v1",
},
"OpenAI": {
"chat_model": "gpt-3.5-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"Azure-OpenAI": {
"chat_model": "gpt-35-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"ZHIPU-AI": {
"chat_model": "glm-3-turbo",
"embedding_model": "embedding-2",
"image2text_model": "glm-4v",
"asr_model": "",
},
"Ollama": {
"chat_model": "qwen-14B-chat",
"embedding_model": "flag-embedding",
"image2text_model": "",
"asr_model": "",
},
"Moonshot": {
"chat_model": "moonshot-v1-8k",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"DeepSeek": {
"chat_model": "deepseek-chat",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"VolcEngine": {
"chat_model": "",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"BAAI": {
"chat_model": "",
"embedding_model": "BAAI/bge-large-zh-v1.5",
"image2text_model": "",
"asr_model": "",
"rerank_model": "BAAI/bge-reranker-v2-m3",
}
}
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"]
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"]
RERANK_MDL = default_llm["BAAI"]["rerank_model"] if not LIGHTEN else ""
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"]
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"]
else:
CHAT_MDL = EMBEDDING_MDL = RERANK_MDL = ASR_MDL = IMAGE2TEXT_MDL = ""
API_KEY = LLM.get("api_key", "")
PARSERS = LLM.get(

View File

@ -13,10 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import dotenv
import typing
from api.utils.file_utils import get_project_base_directory
def get_versions() -> typing.Mapping[str, typing.Any]:
@ -25,4 +23,4 @@ def get_versions() -> typing.Mapping[str, typing.Any]:
def get_rag_version() -> typing.Optional[str]:
return get_versions().get("RAGFLOW_VERSION", "dev")
return get_versions().get("RAGFLOW_IMAGE", "infiniflow/ragflow:dev").split(":")[-1]

View File

@ -77,6 +77,12 @@
"tags": "LLM,CHAT,IMAGE2TEXT",
"max_tokens": 765,
"model_type": "image2text"
},
{
"llm_name": "tts-1",
"tags": "TTS",
"max_tokens": 2048,
"model_type": "tts"
}
]
},
@ -2338,6 +2344,13 @@
"tags": "LLM",
"status": "1",
"llm": []
}
},
{
"name": "HuggingFace",
"logo": "",
"tags": "TEXT EMBEDDING",
"status": "1",
"llm": []
}
]
}

View File

@ -1,73 +0,0 @@
ragflow:
host: 0.0.0.0
http_port: 9380
mysql:
name: 'rag_flow'
user: 'root'
password: 'infini_rag_flow'
host: 'mysql'
port: 3306
max_connections: 100
stale_timeout: 30
postgres:
name: 'rag_flow'
user: 'rag_flow'
password: 'infini_rag_flow'
host: 'postgres'
port: 5432
max_connections: 100
stale_timeout: 30
minio:
user: 'rag_flow'
password: 'infini_rag_flow'
host: 'minio:9000'
azure:
auth_type: 'sas'
container_url: 'container_url'
sas_token: 'sas_token'
#azure:
# auth_type: 'spn'
# account_url: 'account_url'
# client_id: 'client_id'
# secret: 'secret'
# tenant_id: 'tenant_id'
# container_name: 'container_name'
s3:
endpoint: 'endpoint'
access_key: 'access_key'
secret_key: 'secret_key'
region: 'region'
es:
hosts: 'http://es01:9200'
username: 'elastic'
password: 'infini_rag_flow'
redis:
db: 1
password: 'infini_rag_flow'
host: 'redis:6379'
user_default_llm:
factory: 'Tongyi-Qianwen'
api_key: 'sk-xxxxxxxxxxxxx'
base_url: ''
oauth:
github:
client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
url: https://github.com/login/oauth/access_token
feishu:
app_id: cli_xxxxxxxxxxxxxxxxxxx
app_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
app_access_token_url: https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
user_access_token_url: https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
grant_type: 'authorization_code'
authentication:
client:
switch: false
http_app_key:
http_secret_key:
site:
switch: false
permission:
switch: false
component: false
dataset: false

conf/service_conf.yaml (symbolic link, 1 line)
View File

@ -0,0 +1 @@
../docker/service_conf.yaml

View File

@ -16,7 +16,6 @@ import random
import xgboost as xgb
from io import BytesIO
import torch
import re
import pdfplumber
import logging
@ -25,6 +24,7 @@ import numpy as np
from timeit import default_timer as timer
from pypdf import PdfReader as pdf2_read
from api.settings import LIGHTEN
from api.utils.file_utils import get_project_base_directory
from deepdoc.vision import OCR, Recognizer, LayoutRecognizer, TableStructureRecognizer
from rag.nlp import rag_tokenizer
@ -44,8 +44,10 @@ class RAGFlowPdfParser:
self.tbl_det = TableStructureRecognizer()
self.updown_cnt_mdl = xgb.Booster()
if torch.cuda.is_available():
self.updown_cnt_mdl.set_param({"device": "cuda"})
if not LIGHTEN:
import torch
if torch.cuda.is_available():
self.updown_cnt_mdl.set_param({"device": "cuda"})
try:
model_dir = os.path.join(
get_project_base_directory(),
@ -486,7 +488,7 @@ class RAGFlowPdfParser:
i += 1
continue
if not down["text"].strip():
if not down["text"].strip() or not up["text"].strip():
i += 1
continue

View File

@ -10,28 +10,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from deepdoc.parser.utils import get_text
from rag.nlp import num_tokens_from_string
from rag.nlp import find_codec,num_tokens_from_string
import re
class RAGFlowTxtParser:
def __call__(self, fnm, binary=None, chunk_token_num=128, delimiter="\n!?;。;!?"):
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(fnm, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(fnm, binary)
return self.parser_txt(txt, chunk_token_num, delimiter)
@classmethod
def parser_txt(cls, txt, chunk_token_num=128, delimiter="\n!?;。;!?"):
if type(txt) != str:
if not isinstance(txt, str):
raise TypeError("txt type should be str!")
cks = [""]
tk_nums = [0]

deepdoc/parser/utils.py (new file, 29 lines)
View File

@ -0,0 +1,29 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from rag.nlp import find_codec
def get_text(fnm: str, binary=None) -> str:
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(fnm, "r") as f:
while True:
line = f.readline()
if not line:
break
txt += line
return txt
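Hypothetical usage of `get_text`, assuming a `sample.txt` exists on disk: callers pass either the raw bytes of an upload (decoded via `find_codec`) or just a path:

```python
# Both calls return the same text: bytes are decoded with the detected
# codec, while a bare path is read line by line from disk.
raw = open("sample.txt", "rb").read()
assert get_text("sample.txt", raw) == get_text("sample.txt")
```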

View File

@ -33,10 +33,17 @@ REDIS_PASSWORD=infini_rag_flow
SVR_HTTP_PORT=9380
RAGFLOW_VERSION=dev
RAGFLOW_IMAGE=infiniflow/ragflow:dev-slim
# If inside mainland China, uncomment either of the following hub.docker.com mirrors:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:dev-slim
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:dev-slim
TIMEZONE='Asia/Shanghai'
# If inside mainland China, uncomment the following huggingface.co mirror:
# HF_ENDPOINT=https://hf-mirror.com
######## OS setup for ES ###########
# sysctl vm.max_map_count
# sudo sysctl -w vm.max_map_count=262144

View File

@ -1,30 +0,0 @@
include:
- path: ./docker-compose-base.yml
env_file: ./.env
services:
ragflow:
depends_on:
mysql:
condition: service_healthy
es01:
condition: service_healthy
image: swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:${RAGFLOW_VERSION}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./service_conf.yaml:/ragflow/conf/service_conf.yaml
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=https://hf-mirror.com
- MACOS=${MACOS}
networks:
- ragflow
restart: always

View File

@ -30,7 +30,8 @@ services:
restart: always
mysql:
image: mysql:5.7.18
# mysql:5.7 linux/arm64 image is unavailable.
image: mysql:8.0.39
container_name: ragflow-mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}

View File

@ -1,37 +0,0 @@
include:
- path: ./docker-compose-base.yml
env_file: ./.env
services:
ragflow:
depends_on:
mysql:
condition: service_healthy
es01:
condition: service_healthy
image: swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:${RAGFLOW_VERSION}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./service_conf.yaml:/ragflow/conf/service_conf.yaml
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=https://hf-mirror.com
- MACOS=${MACOS}
networks:
- ragflow
restart: always
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]

View File

@ -9,7 +9,7 @@ services:
condition: service_healthy
es01:
condition: service_healthy
image: infiniflow/ragflow:${RAGFLOW_VERSION}
image: ${RAGFLOW_IMAGE}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380

View File

@ -9,12 +9,13 @@ services:
condition: service_healthy
es01:
condition: service_healthy
image: infiniflow/ragflow:${RAGFLOW_VERSION}
image: ${RAGFLOW_IMAGE}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
- 5678:5678
volumes:
- ./service_conf.yaml:/ragflow/conf/service_conf.yaml
- ./ragflow-logs:/ragflow/logs
@ -23,7 +24,7 @@ services:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=https://huggingface.co
- HF_ENDPOINT=${HF_ENDPOINT}
- MACOS=${MACOS}
networks:
- ragflow

View File

@ -0,0 +1,28 @@
#!/bin/bash
# unset http proxy which may be set by docker daemon
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/
PY=python3
if [[ -z "$WS" || $WS -lt 1 ]]; then
WS=1
fi
function task_exe(){
while [ 1 -eq 1 ];do
$PY rag/svr/task_executor.py $1;
done
}
for ((i=0;i<WS;i++))
do
task_exe $i &
done
while [ 1 -eq 1 ];do
$PY api/ragflow_server.py
done
wait;

View File

@ -21,23 +21,54 @@ redis:
db: 1
password: 'infini_rag_flow'
host: 'redis:6379'
user_default_llm:
factory: 'Tongyi-Qianwen'
api_key: 'sk-xxxxxxxxxxxxx'
base_url: ''
oauth:
github:
client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
url: https://github.com/login/oauth/access_token
authentication:
client:
switch: false
http_app_key:
http_secret_key:
site:
switch: false
permission:
switch: false
component: false
dataset: false
# postgres:
# name: 'rag_flow'
# user: 'rag_flow'
# password: 'infini_rag_flow'
# host: 'postgres'
# port: 5432
# max_connections: 100
# stale_timeout: 30
# s3:
# endpoint: 'endpoint'
# access_key: 'access_key'
# secret_key: 'secret_key'
# region: 'region'
# azure:
# auth_type: 'sas'
# container_url: 'container_url'
# sas_token: 'sas_token'
# azure:
# auth_type: 'spn'
# account_url: 'account_url'
# client_id: 'client_id'
# secret: 'secret'
# tenant_id: 'tenant_id'
# container_name: 'container_name'
# user_default_llm:
# factory: 'Tongyi-Qianwen'
# api_key: 'sk-xxxxxxxxxxxxx'
# base_url: ''
# oauth:
# github:
# client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
# secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
# url: https://github.com/login/oauth/access_token
# feishu:
# app_id: cli_xxxxxxxxxxxxxxxxxxx
# app_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
# app_access_token_url: https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
# user_access_token_url: https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
# grant_type: 'authorization_code'
# authentication:
# client:
# switch: false
# http_app_key:
# http_secret_key:
# site:
# switch: false
# permission:
# switch: false
# component: false
# dataset: false

View File

@ -1,8 +1,8 @@
{
"label": "User Guides",
"label": "Guides",
"position": 2,
"link": {
"type": "generated-index",
"description": "RAGFlow User Guides"
"description": "Guides for RAGFlow users and developers."
}
}

View File

@ -3,6 +3,6 @@
"position": 3,
"link": {
"type": "generated-index",
"description": "RAGFlow v0.8.0 introduces an agent mechanism, featuring a no-code workflow editor on the front end and a comprehensive graph-based task orchestration framework on the back end."
"description": "RAGFlow v0.8.0 introduces an agent mechanism, featuring a no-code workflow editor on the front end and a comprehensive graph-based task orchestration framework on the backend."
}
}

View File

@ -53,9 +53,9 @@ Please review the following description of the RAG-specific components before you
| -------------- | ------------------------------------------------------------ |
| **Retrieval** | A component that retrieves information from specified knowledge bases and returns 'Empty response' if no information is found. Ensure the correct knowledge bases are selected. |
| **Generate** | A component that prompts the LLM to generate responses. You must ensure the prompt is set correctly. |
| **Answer** | A component that serves as the interface between human and the bot, receiving user inputs and displaying the agent's responses. |
| **Interact** | A component that serves as the interface between human and the bot, receiving user inputs and displaying the agent's responses. |
| **Categorize** | A component that uses the LLM to classify user inputs into predefined categories. Ensure you specify the name, description, and examples for each category, along with the corresponding next component. |
| **Message** | A component that sends out a static message. If multiple messages are supplied, it randomly selects one to send. Ensure its downstream is **Answer**, the interface component. |
| **Message** | A component that sends out a static message. If multiple messages are supplied, it randomly selects one to send. Ensure its downstream is **Interact**, the interface component. |
| **Relevant** | A component that uses the LLM to assess whether the upstream output is relevant to the user's latest query. Ensure you specify the next component for each judge result. |
| **Rewrite** | A component that refines a user query if it fails to retrieve relevant information from the knowledge base. It repeats this process until the predefined looping upper limit is reached. Ensure its upstream is **Relevant** and downstream is **Retrieval**. |
| **Keyword** | A component that retrieves top N search results from wikipedia.org. Ensure the TopN value is set properly before use. |
@ -63,8 +63,8 @@ Please review the flowing description of the RAG-specific components before you
:::caution NOTE
- Ensure **Rewrite**'s upstream component is **Relevant** and downstream component is **Retrieval**.
- Ensure the downstream component of **Message** is **Answer**.
- The downstream component of **Begin** is always **Answer**.
- Ensure the downstream component of **Message** is **Interact**.
- The downstream component of **Begin** is always **Interact**.
:::

View File

@ -26,7 +26,7 @@ To create a general-purpose chatbot agent using our template:
3. On the **agent template** page, hover over the **General-purpose chatbot** card and click **Use this template**.
*You are now directed to the **no-code workflow editor** page.*
![workflow_editor](https://github.com/user-attachments/assets/9fc6891c-7784-43b8-ab4a-3b08a9e551c4)
![workflow_editor](https://github.com/user-attachments/assets/52e7dc62-4bf5-4fbb-ab73-4a6e252065f0)
:::tip NOTE
RAGFlow's no-code editor spares you the trouble of coding, making agent development effortless.
@ -40,10 +40,9 @@ Here's a breakdown of each component and its role and requirements in the chat
- Function: Sets the opening greeting for the user.
- Purpose: Establishes a welcoming atmosphere and prepares the user for interaction.
- **Answer**
- **Interact**
- Function: Serves as the interface between human and the bot.
- Role: Acts as the downstream component of **Begin**.
- Note: Though named "Answer", it does not engage with the LLM.
- **Retrieval**
- Function: Retrieves information from specified knowledge base(s).
@ -78,7 +77,7 @@ Here's a breakdown of each component and its role and requirements in the chat
4. Click **Relevant** to review or change its settings:
*You may retain the current settings, but feel free to experiment with changes to understand how the agent operates.*
![relevant_settings](https://github.com/user-attachments/assets/f582cc1c-0dd5-499c-813a-294dbfb941dd)
![relevant_settings](https://github.com/user-attachments/assets/9ff7fdd8-7a69-4ee2-bfba-c7fb8029150f)
5. Click **Rewrite** to select a different model for query rewriting or update the maximum loop times for query rewriting:
![choose_model](https://github.com/user-attachments/assets/2bac1d6c-c4f1-42ac-997b-102858c3f550)

View File

@ -128,7 +128,7 @@ RAGFlow uses multiple recall of both full-text search and vector search in its c
## Search for knowledge base
As of RAGFlow v0.11.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
As of RAGFlow v0.12.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
![search knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/836ae94c-2438-42be-879e-c7ad2a59693e)

View File

@ -0,0 +1,8 @@
{
"label": "Develop",
"position": 10,
"link": {
"type": "generated-index",
"description": "Guides for Hardcore Developers"
}
}

View File

@ -0,0 +1,87 @@
---
sidebar_position: 1
slug: /build_docker_image
---
# Build a RAGFlow Docker Image
A guide explaining how to build a RAGFlow Docker image from its source code. By following this guide, you'll be able to create a local Docker image that can be used for development, debugging, or testing purposes.
## Target Audience
- Developers who have added new features or modified the existing code and require a Docker image to view and debug their changes.
- Testers looking to explore the latest features of RAGFlow in a Docker image.
## Prerequisites
- CPU &ge; 4 cores
- RAM &ge; 16 GB
- Disk &ge; 50 GB
- Docker &ge; 24.0.0 & Docker Compose &ge; v2.26.1
:::tip NOTE
If you have not installed Docker on your local machine (Windows, Mac, or Linux), see the [Install Docker Engine](https://docs.docker.com/engine/install/) guide.
:::
## Build a RAGFlow Docker Image
To build a RAGFlow Docker image from source code:
### Git Clone the Repository
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow
```
### Build the Docker Image
Navigate to the `ragflow` directory, where the Dockerfile and other necessary files are located, then build the Docker image with the provided Dockerfile. The commands below specify which Dockerfile to use and tag the image with a name for later reference.
#### Build and push multi-arch image `infiniflow/ragflow:dev-slim`
On a `linux/amd64` host:
```bash
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim-amd64 .
docker push infiniflow/ragflow:dev-slim-amd64
```
On a `linux/arm64` host:
```bash
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim-arm64 .
docker push infiniflow/ragflow:dev-slim-arm64
```
On a Linux host:
```bash
docker manifest create infiniflow/ragflow:dev-slim --amend infiniflow/ragflow:dev-slim-amd64 --amend infiniflow/ragflow:dev-slim-arm64
docker manifest push infiniflow/ragflow:dev-slim
```
This image is approximately 1 GB in size and relies on external LLM services, as it does not include deepdoc, embedding, or chat models.
#### Build and push multi-arch image `infiniflow/ragflow:dev`
On a `linux/amd64` host:
```bash
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev-amd64 .
docker push infiniflow/ragflow:dev-amd64
```
On a `linux/arm64` host:
```bash
pip3 install huggingface-hub
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev-arm64 .
docker push infiniflow/ragflow:dev-arm64
```
On any Linux host:
```bash
docker manifest create infiniflow/ragflow:dev --amend infiniflow/ragflow:dev-amd64 --amend infiniflow/ragflow:dev-arm64
docker manifest push infiniflow/ragflow:dev
```
This image is approximately 9 GB in size and can run inference via a local CPU/GPU or reference an external LLM, as it includes deepdoc, embedding, and chat models.

View File

@ -0,0 +1,141 @@
---
sidebar_position: 2
slug: /launch_ragflow_from_source
---
# Launch the RAGFlow Service from Source
A guide explaining how to set up a RAGFlow service from its source code. By following this guide, you'll be able to debug using the source code.
## Target Audience
Developers who have added new features or modified existing code and wish to debug using the source code, *provided that* their machine has the target deployment environment set up.
## Prerequisites
- CPU &ge; 4 cores
- RAM &ge; 16 GB
- Disk &ge; 50 GB
- Docker &ge; 24.0.0 & Docker Compose &ge; v2.26.1
:::tip NOTE
If you have not installed Docker on your local machine (Windows, Mac, or Linux), see the [Install Docker Engine](https://docs.docker.com/engine/install/) guide.
:::
## Launch the Service from Source
To launch the RAGFlow service from source code:
### Clone the RAGFlow Repository
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
```
### Install Python dependencies
1. Install Poetry:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
2. Configure Poetry:
```bash
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
```
3. Install Python dependencies:
```bash
~/.local/bin/poetry install --sync --no-root
```
*A virtual environment named `.venv` is created, and all Python dependencies are installed into the new environment.*
### Launch Third-party Services
The following command launches the 'base' services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
### Update `host` and `port` Settings for Third-party Services
1. Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
```
2. In **docker/service_conf.yaml**, update the `mysql` port to `5455` and the `es` port to `1200`, as specified in **docker/.env**.
### Launch the RAGFlow Backend Service
1. Comment out the `nginx` line in **docker/entrypoint.sh**.
```
# /usr/sbin/nginx
```
2. Activate the Python virtual environment:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
```
3. **Optional:** If you cannot access HuggingFace, set the HF_ENDPOINT environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
4. Run the **entrypoint.sh** script to launch the backend service:
```
bash docker/entrypoint.sh
```
### Launch the RAGFlow frontend service
1. Navigate to the `web` directory and install the frontend dependencies:
```bash
cd web
npm install --force
```
2. Update `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
```bash
vim .umirc.ts
```
3. Start up the RAGFlow frontend service:
```bash
npm run dev
```
*The following message appears, showing the IP address and port number of your frontend service:*
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
### Access the RAGFlow service
In your web browser, enter `http://127.0.0.1:<PORT>/`, ensuring the port number matches that shown in the screenshot above.
### Stop the RAGFlow service when the development is done
1. Stop the RAGFlow frontend service:
```bash
pkill npm
```
2. Stop the RAGFlow backend service:
```bash
pkill -f "docker/entrypoint.sh"
```

View File

@ -49,7 +49,7 @@ You can link your file to one knowledge base or multiple knowledge bases at one
## Search files or folders
As of RAGFlow v0.11.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).
As of RAGFlow v0.12.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).
![search file](https://github.com/infiniflow/ragflow/assets/93570324/77ffc2e5-bd80-4ed1-841f-068e664efffe)
@ -81,4 +81,4 @@ RAGFlow's file management allows you to download an uploaded file:
![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)
> As of RAGFlow v0.11.0, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.12.0, bulk download is not supported, nor can you download an entire folder.

View File

@ -34,7 +34,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
`vm.max_map_count`. This value sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limit.
RAGFlow v0.11.0 uses Elasticsearch for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
RAGFlow v0.12.0 uses Elasticsearch for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
<Tabs
defaultValue="linux"
@ -177,7 +177,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
3. Build the pre-built Docker images and start up the server:
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.11.0`, before running the following commands.
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_IMAGE` in **docker/.env** to the intended version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, before running the following commands.
```bash
$ cd ragflow/docker
@ -196,12 +196,11 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
_The following output confirms a successful launch of the system:_
```bash
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380

View File

@ -1,8 +1,8 @@
{
"label": "References",
"position": 3,
"position": 4,
"link": {
"type": "generated-index",
"description": "RAGFlow References"
"description": "Miscellaneous References"
}
}

View File

@ -53,11 +53,11 @@ The corresponding APIs are now available. See the [RAGFlow API Reference](./api.
### 3. Do you support stream output?
No, this feature is still in development. Contributions are welcome.
This feature is supported.
### 4. Is it possible to share dialogue through URL?
Yes, this feature is now available.
No, this feature is not supported.
### 5. Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?
@ -168,12 +168,11 @@ You will not log in to RAGFlow unless the server is fully initialized. Run `dock
*The server is successfully initialized, if your system displays the following:*
```
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -408,49 +407,25 @@ You can upgrade RAGFlow to either the dev version or the latest version:
To upgrade RAGFlow to the dev version:
1. Pull the latest source code
Update the RAGFlow image and restart RAGFlow:
1. Update **ragflow/docker/.env** as follows:
```bash
cd ragflow
git pull
RAGFLOW_IMAGE=infiniflow/ragflow:dev
```
2. If you used `docker compose up -d` to start up RAGFlow server:
2. Update the RAGFlow image and restart RAGFlow:
```bash
docker pull infiniflow/ragflow:dev
```
```bash
docker compose up ragflow -d
```
3. If you used `docker compose -f docker-compose-CN.yml up -d` to start up RAGFlow server:
```bash
docker pull swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:dev
```
```bash
docker compose -f docker-compose-CN.yml up -d
docker compose -f docker/docker-compose.yml pull
docker compose -f docker/docker-compose.yml up -d
```
To upgrade RAGFlow to the latest version:
1. Update **ragflow/docker/.env** as follows:
```bash
RAGFLOW_VERSION=latest
RAGFLOW_IMAGE=infiniflow/ragflow:latest
```
2. Pull the latest source code:
2. Update the RAGFlow image and restart RAGFlow:
```bash
cd ragflow
git pull
```
3. If you used `docker compose up -d` to start up RAGFlow server:
```bash
docker pull infiniflow/ragflow:latest
```
```bash
docker compose up ragflow -d
```
4. If you used `docker compose -f docker-compose-CN.yml up -d` to start up RAGFlow server:
```bash
docker pull swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:latest
```
```bash
docker compose -f docker-compose-CN.yml up -d
docker compose -f docker/docker-compose.yml pull
docker compose -f docker/docker-compose.yml up -d
```

download_deps.py (new file, 24 lines)
View File

@ -0,0 +1,24 @@
#!/usr/bin/env python3
from huggingface_hub import snapshot_download
import os
repos = [
"InfiniFlow/text_concat_xgb_v1.0",
"InfiniFlow/deepdoc",
"BAAI/bge-large-zh-v1.5",
"BAAI/bge-reranker-v2-m3",
"maidalun1020/bce-embedding-base_v1",
"maidalun1020/bce-reranker-base_v1",
]
def download_model(repo_id):
local_dir = os.path.join("huggingface.co", repo_id)
os.makedirs(local_dir, exist_ok=True)
snapshot_download(repo_id=repo_id, local_dir=local_dir)
if __name__ == "__main__":
for repo_id in repos:
download_model(repo_id)

View File

@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
from concurrent.futures import ThreadPoolExecutor
import json
from functools import reduce
@ -24,7 +23,7 @@ from api.db.services.llm_service import LLMBundle
from api.db.services.user_service import TenantService
from graphrag.community_reports_extractor import CommunityReportsExtractor
from graphrag.entity_resolution import EntityResolution
from graphrag.graph_extractor import GraphExtractor
from graphrag.graph_extractor import GraphExtractor, DEFAULT_ENTITY_TYPES
from graphrag.mind_map_extractor import MindMapExtractor
from rag.nlp import rag_tokenizer
from rag.utils import num_tokens_from_string
@ -52,7 +51,7 @@ def graph_merge(g1, g2):
return g
def build_knowlege_graph_chunks(tenant_id: str, chunks: List[str], callback, entity_types=["organization", "person", "location", "event", "time"]):
def build_knowledge_graph_chunks(tenant_id: str, chunks: List[str], callback, entity_types=DEFAULT_ENTITY_TYPES):
_, tenant = TenantService.get_by_id(tenant_id)
llm_bdl = LLMBundle(tenant_id, LLMType.CHAT, tenant.llm_id)
ext = GraphExtractor(llm_bdl)

View File

@ -134,14 +134,6 @@ def run(graph: nx.Graph, args: dict[str, Any]) -> dict[int, dict[str, dict]]:
return results_by_level
def add_community_info2graph(graph: nx.Graph, commu_info: dict[str, dict[str, dict]]):
for lev, cluster_info in commu_info.items():
for cid, nodes in cluster_info.items():
for n in nodes["nodes"]:
if "community" not in graph.nodes[n]: graph.nodes[n]["community"] = {}
graph.nodes[n]["community"].update({lev: cid})
def add_community_info2graph(graph: nx.Graph, nodes: List[str], community_title):
for n in nodes:
if "communities" not in graph.nodes[n]:

View File

@ -14,23 +14,25 @@ ErrorHandlerFn = Callable[[BaseException | None, str | None, dict | None], None]
def perform_variable_replacements(
input: str, history: list[dict]=[], variables: dict | None ={}
input: str, history: list[dict] | None = None, variables: dict | None = None
) -> str:
"""Perform variable replacements on the input string and in a chat log."""
if history is None:
history = []
if variables is None:
variables = {}
result = input
def replace_all(input: str) -> str:
result = input
if variables:
for entry in variables:
result = result.replace(f"{{{entry}}}", variables[entry])
for k, v in variables.items():
result = result.replace(f"{{{k}}}", v)
return result
result = replace_all(result)
for i in range(len(history)):
entry = history[i]
for i, entry in enumerate(history):
if entry.get("role") == "system":
history[i]["content"] = replace_all(entry.get("content") or "")
entry["content"] = replace_all(entry.get("content") or "")
return result
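The signature change above fixes a classic Python pitfall: a mutable default such as `history=[]` is created once at definition time and shared across calls. A minimal demonstration, independent of RAGFlow:

```python
def bad(history=[]):       # one list object shared by every call
    history.append("x")
    return len(history)

assert bad() == 1
assert bad() == 2          # state leaked from the first call
```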

poetry.lock (generated, 8793 lines)

File diff suppressed because it is too large.

poetry.toml (new file, 4 lines)
View File

@ -0,0 +1,4 @@
[virtualenvs]
in-project = true
create = true
prefer-active-python = true

pyproject.toml (new file, 130 lines)
View File

@ -0,0 +1,130 @@
[tool.poetry]
name = "ragflow"
version = "0.11.0"
description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data."
authors = ["Your Name <you@example.com>"]
license = "https://github.com/infiniflow/ragflow/blob/main/LICENSE"
readme = "README.md"
package-mode = false
[tool.poetry.dependencies]
python = ">=3.12,<3.13"
datrie = "0.8.2"
akshare = "^1.14.81"
azure-storage-blob = "12.22.0"
azure-identity = "1.17.1"
azure-storage-file-datalake = "12.16.0"
anthropic = "=0.34.1"
arxiv = "2.1.3"
aspose-slides = { version = "^24.9.0", markers = "platform_machine == 'x86_64'" }
bio = "1.7.1"
boto3 = "1.34.140"
botocore = "1.34.140"
cachetools = "5.3.3"
chardet = "5.2.0"
cn2an = "0.5.22"
cohere = "5.6.2"
dashscope = "1.14.1"
deepl = "1.18.0"
demjson3 = "3.0.6"
discord-py = "2.3.2"
duckduckgo-search = "6.1.9"
editdistance = "0.8.1"
elastic-transport = "8.12.0"
elasticsearch = "8.12.1"
elasticsearch-dsl = "8.12.0"
fasttext = "0.9.3"
filelock = "3.15.4"
flask = "3.0.3"
flask-cors = "5.0.0"
flask-login = "0.6.3"
flask-session = "0.8.0"
google-search-results = "2.4.2"
groq = "0.9.0"
hanziconv = "0.3.2"
html-text = "0.6.2"
httpx = "0.27.0"
huggingface-hub = "^0.25.0"
infinity-emb = "0.0.51"
itsdangerous = "2.1.2"
markdown = "3.6"
markdown-to-json = "2.1.1"
minio = "7.2.4"
mistralai = "0.4.2"
nltk = "3.9.1"
numpy = "1.26.4"
ollama = "0.2.1"
onnxruntime = "1.19.2"
openai = "1.12.0"
opencv-python = "4.10.0.84"
opencv-python-headless = "4.10.0.84"
openpyxl = "3.1.2"
ormsgpack = "1.5.0"
pandas = "2.2.2"
pdfplumber = "0.10.4"
peewee = "3.17.1"
pillow = "10.3.0"
protobuf = "5.27.2"
psycopg2-binary = "2.9.9"
pyclipper = "1.3.0.post5"
pycryptodomex = "3.20.0"
pypdf = "^5.0.0"
pytest = "8.2.2"
python-dotenv = "1.0.1"
python-dateutil = "2.8.2"
python-pptx = "^1.0.2"
pywencai = "0.12.2"
qianfan = "0.4.6"
ranx = "0.3.20"
readability-lxml = "0.8.1"
redis = "5.0.3"
requests = "2.32.2"
replicate = "0.31.0"
roman-numbers = "1.0.2"
ruamel-base = "1.0.0"
scholarly = "1.7.11"
scikit-learn = "1.5.0"
selenium = "4.22.0"
setuptools = "70.0.0"
shapely = "2.0.5"
six = "1.16.0"
strenum = "0.4.15"
tabulate = "0.9.0"
tencentcloud-sdk-python = "3.0.1215"
tika = "2.6.0"
tiktoken = "0.6.0"
umap_learn = "0.5.6"
vertexai = "1.64.0"
volcengine = "1.0.146"
voyageai = "0.2.3"
webdriver-manager = "4.0.1"
werkzeug = "3.0.3"
wikipedia = "1.4.0"
word2number = "1.1"
xgboost = "1.5.0"
xpinyin = "0.7.6"
yfinance = "0.1.96"
zhipuai = "2.0.1"
ruamel-yaml = "^0.18.6"
google-generativeai = "^0.8.1"
python-docx = "^1.1.2"
pypdf2 = "^3.0.1"
graspologic = "^3.4.1"
pymysql = "^1.1.1"
mini-racer = "^0.12.4"
pyicu = "^2.13.1"
[tool.poetry.group.full]
optional = true
[tool.poetry.group.full.dependencies]
bcembedding = "0.1.3"
fastembed = "^0.3.6"
flagembedding = "1.2.10"
torch = "2.3.0"
transformers = "4.38.1"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@ -10,11 +10,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import copy
from tika import parser
import re
from io import BytesIO
from deepdoc.parser.utils import get_text
from rag.nlp import bullets_category, is_english, tokenize, remove_contents_table, \
hierarchical_merge, make_colon_as_title, naive_merge, random_choices, tokenize_table, add_positions, \
tokenize_chunks, find_codec
@ -88,17 +88,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
elif re.search(r"\.txt$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(filename, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(filename, binary)
sections = txt.split("\n")
sections = [(l, "") for l in sections if l]
remove_contents_table(sections, eng=is_english(

View File

@ -1,6 +1,6 @@
import re
from graphrag.index import build_knowlege_graph_chunks
from graphrag.index import build_knowledge_graph_chunks
from rag.app import naive
from rag.nlp import rag_tokenizer, tokenize_chunks
@ -15,9 +15,9 @@ def chunk(filename, binary, tenant_id, from_page=0, to_page=100000,
parser_config["layout_recognize"] = False
sections = naive.chunk(filename, binary, from_page=from_page, to_page=to_page, section_only=True,
parser_config=parser_config, callback=callback)
chunks = build_knowlege_graph_chunks(tenant_id, sections, callback,
parser_config.get("entity_types", ["organization", "person", "location", "event", "time"])
)
chunks = build_knowledge_graph_chunks(tenant_id, sections, callback,
parser_config.get("entity_types", ["organization", "person", "location", "event", "time"])
)
for c in chunks: c["docnm_kwd"] = filename
doc = {

View File

@ -17,6 +17,7 @@ from io import BytesIO
from docx import Document
from api.db import ParserType
from deepdoc.parser.utils import get_text
from rag.nlp import bullets_category, is_english, tokenize, remove_contents_table, hierarchical_merge, \
make_colon_as_title, add_positions, tokenize_chunks, find_codec, docx_question_level
from rag.nlp import rag_tokenizer
@ -165,17 +166,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
elif re.search(r"\.txt$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(filename, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(filename, binary)
sections = txt.split("\n")
sections = [l for l in sections if l]
callback(0.8, "Finish parsing.")

View File

@ -76,7 +76,7 @@ class Docx(DocxParser):
if last_image:
image_list.insert(0, last_image)
last_image = None
lines.append((self.__clean(p.text), image_list, p.style.name))
lines.append((self.__clean(p.text), image_list, p.style.name if p.style else ""))
else:
if current_image := self.get_picture(self.doc, p):
if lines:
@ -169,7 +169,6 @@ class Markdown(MarkdownParser):
return sections, tbls
def chunk(filename, binary=None, from_page=0, to_page=100000,
lang="Chinese", callback=None, **kwargs):
"""
@ -190,7 +189,6 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
doc["title_sm_tks"] = rag_tokenizer.fine_grained_tokenize(doc["title_tks"])
res = []
pdf_parser = None
sections = []
if re.search(r"\.docx$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
sections, tbls = Docx()(filename, binary)
@ -221,11 +219,14 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
elif re.search(r"\.xlsx?$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
excel_parser = ExcelParser()
sections = [(l, "") for l in excel_parser.html(binary) if l]
if parser_config.get("html4excel"):
sections = [(_, "") for _ in excel_parser.html(binary, 12) if _]
else:
sections = [(_, "") for _ in excel_parser(binary) if _]
elif re.search(r"\.(txt|py|js|java|c|cpp|h|php|go|ts|sh|cs|kt|sql)$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
sections = TxtParser()(filename,binary,
sections = TxtParser()(filename, binary,
parser_config.get("chunk_token_num", 128),
parser_config.get("delimiter", "\n!?;。;!?"))
callback(0.8, "Finish parsing.")
@ -239,13 +240,13 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
elif re.search(r"\.(htm|html)$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
sections = HtmlParser()(filename, binary)
sections = [(l, "") for l in sections if l]
sections = [(_, "") for _ in sections if _]
callback(0.8, "Finish parsing.")
elif re.search(r"\.json$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
sections = JsonParser(int(parser_config.get("chunk_token_num", 128)))(binary)
sections = [(l, "") for l in sections if l]
sections = [(_, "") for _ in sections if _]
callback(0.8, "Finish parsing.")
elif re.search(r"\.doc$", filename, re.IGNORECASE):
@ -253,7 +254,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
binary = BytesIO(binary)
doc_parsed = parser.from_buffer(binary)
sections = doc_parsed['content'].split('\n')
sections = [(l, "") for l in sections if l]
sections = [(_, "") for _ in sections if _]
callback(0.8, "Finish parsing.")
else:

View File

@ -13,8 +13,10 @@
from tika import parser
from io import BytesIO
import re
from deepdoc.parser.utils import get_text
from rag.app import laws
from rag.nlp import rag_tokenizer, tokenize, find_codec
from rag.nlp import rag_tokenizer, tokenize
from deepdoc.parser import PdfParser, ExcelParser, PlainParser, HtmlParser
@ -82,17 +84,7 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
elif re.search(r"\.(txt|md|markdown)$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(filename, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(filename, binary)
sections = txt.split("\n")
sections = [s for s in sections if s]
callback(0.8, "Finish parsing.")

View File

@ -16,13 +16,17 @@ from io import BytesIO
from timeit import default_timer as timer
from nltk import word_tokenize
from openpyxl import load_workbook
from rag.nlp import is_english, random_choices, find_codec, qbullets_category, add_positions, has_qbullet, docx_question_level
from deepdoc.parser.utils import get_text
from rag.nlp import is_english, random_choices, qbullets_category, add_positions, has_qbullet, docx_question_level
from rag.nlp import rag_tokenizer, tokenize_table, concat_img
from rag.settings import cron_logger
from deepdoc.parser import PdfParser, ExcelParser, DocxParser
from docx import Document
from PIL import Image
from markdown import markdown
class Excel(ExcelParser):
def __call__(self, fnm, binary=None, callback=None):
if not binary:
@ -305,17 +309,7 @@ def chunk(filename, binary=None, lang="Chinese", callback=None, **kwargs):
return res
elif re.search(r"\.(txt|csv)$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(filename, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(filename, binary)
lines = txt.split("\n")
comma, tab = 0, 0
for l in lines:
@ -358,17 +352,7 @@ def chunk(filename, binary=None, lang="Chinese", callback=None, **kwargs):
return res
elif re.search(r"\.(md|markdown)$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(filename, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(filename, binary)
lines = txt.split("\n")
last_question, last_answer = "", ""
question_stack, level_stack = [], []

View File

@ -20,7 +20,8 @@ from openpyxl import load_workbook
from dateutil.parser import parse as datetime_parse
from api.db.services.knowledgebase_service import KnowledgebaseService
from rag.nlp import rag_tokenizer, is_english, tokenize, find_codec
from deepdoc.parser.utils import get_text
from rag.nlp import rag_tokenizer, tokenize
from deepdoc.parser import ExcelParser
@ -146,17 +147,7 @@ def chunk(filename, binary=None, from_page=0, to_page=10000000000,
callback=callback)
elif re.search(r"\.(txt|csv)$", filename, re.IGNORECASE):
callback(0.1, "Start to parse.")
txt = ""
if binary:
encoding = find_codec(binary)
txt = binary.decode(encoding, errors="ignore")
else:
with open(filename, "r") as f:
while True:
l = f.readline()
if not l:
break
txt += l
txt = get_text(filename, binary)
lines = txt.split("\n")
fails = []
headers = lines[0].split(kwargs.get("delimiter", "\t"))

View File

@ -46,7 +46,8 @@ EmbeddingModel = {
"SILICONFLOW": SILICONFLOWEmbed,
"Replicate": ReplicateEmbed,
"BaiduYiyan": BaiduYiyanEmbed,
"Voyage AI": VoyageEmbed
"Voyage AI": VoyageEmbed,
"HuggingFace":HuggingFaceEmbed,
}
@ -138,5 +139,7 @@ Seq2txtModel = {
TTSModel = {
"Fish Audio": FishAudioTTS,
"Tongyi-Qianwen": QwenTTS
"Tongyi-Qianwen": QwenTTS,
"OpenAI":OpenAITTS,
"XunFei Spark":SparkTTS
}

View File

@ -20,7 +20,6 @@ from abc import ABC
from openai import OpenAI
import openai
from ollama import Client
from volcengine.maas.v2 import MaasService
from rag.nlp import is_english
from rag.utils import num_tokens_from_string
from groq import Groq
@ -29,6 +28,7 @@ import json
import requests
import asyncio
class Base(ABC):
def __init__(self, key, model_name, base_url):
self.client = OpenAI(api_key=key, base_url=base_url)
@ -103,7 +103,6 @@ class XinferenceChat(Base):
raise ValueError("Local llm url cannot be None")
if base_url.split("/")[-1] != "v1":
base_url = os.path.join(base_url, "v1")
key = "xxx"
super().__init__(key, model_name, base_url)
@ -458,7 +457,7 @@ class VolcEngineChat(Base):
"""
base_url = base_url if base_url else 'https://ark.cn-beijing.volces.com/api/v3'
ark_api_key = json.loads(key).get('ark_api_key', '')
model_name = json.loads(key).get('ep_id', '')
model_name = json.loads(key).get('ep_id', '') + json.loads(key).get('endpoint_id', '')
super().__init__(ark_api_key, model_name, base_url)
@ -690,6 +689,7 @@ class BedrockChat(Base):
yield num_tokens_from_string(ans)
class GeminiChat(Base):
def __init__(self, key, model_name,base_url=None):
@ -981,9 +981,9 @@ class SILICONFLOWChat(Base):
class YiChat(Base):
def __init__(self, key, model_name, base_url="https://api.01.ai/v1"):
def __init__(self, key, model_name, base_url="https://api.lingyiwanwu.com/v1"):
if not base_url:
base_url = "https://api.01.ai/v1"
base_url = "https://api.lingyiwanwu.com/v1"
super().__init__(key, model_name, base_url)
@ -1414,3 +1414,4 @@ class GoogleChat(Base):
yield ans + "\n**ERROR**: " + str(e)
yield response._chunks[-1].usage_metadata.total_token_count

View File

@ -449,6 +449,8 @@ class LocalAICV(GptV4):
class XinferenceCV(Base):
def __init__(self, key, model_name="", lang="Chinese", base_url=""):
if base_url.split("/")[-1] != "v1":
base_url = os.path.join(base_url, "v1")
self.client = OpenAI(api_key="xxx", base_url=base_url)
self.model_name = model_name
self.lang = lang
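Several Xinference wrappers (chat, CV, embedding, speech-to-text, rerank) gain the same base-URL guard in this diff. Factored out into a hypothetical normalize_v1 helper, the normalization is simply:

import os

def normalize_v1(base_url: str) -> str:
    # Append a trailing "v1" path segment unless it is already the last one.
    if base_url.split("/")[-1] != "v1":
        base_url = os.path.join(base_url, "v1")
    return base_url

# normalize_v1("http://localhost:9997")    -> "http://localhost:9997/v1"
# normalize_v1("http://localhost:9997/v1") -> unchanged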

View File

@ -25,10 +25,10 @@ from abc import ABC
from ollama import Client
import dashscope
from openai import OpenAI
from FlagEmbedding import FlagModel
import torch
import numpy as np
import asyncio
from api.settings import LIGHTEN
from api.utils.file_utils import get_home_cache_dir
from rag.utils import num_tokens_from_string, truncate
import google.generativeai as genai
@ -60,8 +60,10 @@ class DefaultEmbedding(Base):
^_-
"""
if not DefaultEmbedding._model:
if not LIGHTEN and not DefaultEmbedding._model:
with DefaultEmbedding._model_lock:
from FlagEmbedding import FlagModel
import torch
if not DefaultEmbedding._model:
try:
DefaultEmbedding._model = FlagModel(os.path.join(get_home_cache_dir(), re.sub(r"^[a-zA-Z]+/", "", model_name)),
@ -243,8 +245,8 @@ class FastEmbed(Base):
threads: Optional[int] = None,
**kwargs,
):
from fastembed import TextEmbedding
if not FastEmbed._model:
if not LIGHTEN and not FastEmbed._model:
from fastembed import TextEmbedding
self._model = TextEmbedding(model_name, cache_dir, threads, **kwargs)
def encode(self, texts: list, batch_size=32):
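DefaultEmbedding, FastEmbed, and YoudaoEmbed all converge on the same pattern in these hunks: heavy ML imports are deferred into the constructor and gated behind a LIGHTEN flag, with the model cached as a class-level singleton. A minimal sketch of the pattern, assuming the module-level LIGHTEN flag imported above:

import threading

_model = None
_model_lock = threading.Lock()

def get_model(model_name: str):
    global _model
    from api.settings import LIGHTEN
    if not LIGHTEN and _model is None:
        with _model_lock:
            if _model is None:  # double-checked locking
                # Import only when a model is actually requested, so a
                # lightweight deployment never pays for torch/FlagEmbedding.
                from FlagEmbedding import FlagModel
                _model = FlagModel(model_name)
    return _model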
@ -268,6 +270,8 @@ class FastEmbed(Base):
class XinferenceEmbed(Base):
def __init__(self, key, model_name="", base_url=""):
if base_url.split("/")[-1] != "v1":
base_url = os.path.join(base_url, "v1")
self.client = OpenAI(api_key="xxx", base_url=base_url)
self.model_name = model_name
@ -287,8 +291,8 @@ class YoudaoEmbed(Base):
_client = None
def __init__(self, key=None, model_name="maidalun1020/bce-embedding-base_v1", **kwargs):
from BCEmbedding import EmbeddingModel as qanthing
if not YoudaoEmbed._client:
if not LIGHTEN and not YoudaoEmbed._client:
from BCEmbedding import EmbeddingModel as qanthing
try:
print("LOADING BCE...")
YoudaoEmbed._client = qanthing(model_name_or_path=os.path.join(
@ -674,3 +678,40 @@ class VoyageEmbed(Base):
texts=text, model=self.model_name, input_type="query"
)
return np.array(res.embeddings), res.total_tokens
class HuggingFaceEmbed(Base):
def __init__(self, key, model_name, base_url=None):
if not model_name:
raise ValueError("Model name cannot be None")
self.key = key
self.model_name = model_name
self.base_url = base_url or "http://127.0.0.1:8080"
def encode(self, texts: list, batch_size=32):
embeddings = []
for text in texts:
response = requests.post(
f"{self.base_url}/embed",
json={"inputs": text},
headers={'Content-Type': 'application/json'}
)
if response.status_code == 200:
embedding = response.json()
embeddings.append(embedding[0])
else:
raise Exception(f"Error: {response.status_code} - {response.text}")
return np.array(embeddings), sum([num_tokens_from_string(text) for text in texts])
def encode_queries(self, text):
response = requests.post(
f"{self.base_url}/embed",
json={"inputs": text},
headers={'Content-Type': 'application/json'}
)
if response.status_code == 200:
embedding = response.json()
return np.array(embedding[0]), num_tokens_from_string(text)
else:
raise Exception(f"Error: {response.status_code} - {response.text}")

View File

@ -14,21 +14,23 @@
# limitations under the License.
#
import re
import threading
import threading
import requests
import torch
from FlagEmbedding import FlagReranker
from huggingface_hub import snapshot_download
import os
from abc import ABC
import numpy as np
from api.settings import LIGHTEN
from api.utils.file_utils import get_home_cache_dir
from rag.utils import num_tokens_from_string, truncate
import json
def sigmoid(x):
return 1 / (1 + np.exp(-x))
class Base(ABC):
def __init__(self, key, model_name):
pass
@ -53,20 +55,25 @@ class DefaultRerank(Base):
^_-
"""
if not DefaultRerank._model:
if not LIGHTEN and not DefaultRerank._model:
import torch
from FlagEmbedding import FlagReranker
with DefaultRerank._model_lock:
if not DefaultRerank._model:
try:
DefaultRerank._model = FlagReranker(os.path.join(get_home_cache_dir(), re.sub(r"^[a-zA-Z]+/", "", model_name)), use_fp16=torch.cuda.is_available())
DefaultRerank._model = FlagReranker(
os.path.join(get_home_cache_dir(), re.sub(r"^[a-zA-Z]+/", "", model_name)),
use_fp16=torch.cuda.is_available())
except Exception as e:
model_dir = snapshot_download(repo_id= model_name,
local_dir=os.path.join(get_home_cache_dir(), re.sub(r"^[a-zA-Z]+/", "", model_name)),
model_dir = snapshot_download(repo_id=model_name,
local_dir=os.path.join(get_home_cache_dir(),
re.sub(r"^[a-zA-Z]+/", "", model_name)),
local_dir_use_symlinks=False)
DefaultRerank._model = FlagReranker(model_dir, use_fp16=torch.cuda.is_available())
self._model = DefaultRerank._model
def similarity(self, query: str, texts: list):
pairs = [(query,truncate(t, 2048)) for t in texts]
pairs = [(query, truncate(t, 2048)) for t in texts]
token_count = 0
for _, t in pairs:
token_count += num_tokens_from_string(t)
@ -75,8 +82,10 @@ class DefaultRerank(Base):
for i in range(0, len(pairs), batch_size):
scores = self._model.compute_score(pairs[i:i + batch_size], max_length=2048)
scores = sigmoid(np.array(scores)).tolist()
if isinstance(scores, float): res.append(scores)
else: res.extend(scores)
if isinstance(scores, float):
res.append(scores)
else:
res.extend(scores)
return np.array(res), token_count
@ -99,7 +108,10 @@ class JinaRerank(Base):
"top_n": len(texts)
}
res = requests.post(self.base_url, headers=self.headers, json=data).json()
return np.array([d["relevance_score"] for d in res["results"]]), res["usage"]["total_tokens"]
rank = np.zeros(len(texts), dtype=float)
for d in res["results"]:
rank[d["index"]] = d["relevance_score"]
return rank, res["usage"]["total_tokens"]
class YoudaoRerank(DefaultRerank):
@ -107,8 +119,8 @@ class YoudaoRerank(DefaultRerank):
_model_lock = threading.Lock()
def __init__(self, key=None, model_name="maidalun1020/bce-reranker-base_v1", **kwargs):
from BCEmbedding import RerankerModel
if not YoudaoRerank._model:
if not LIGHTEN and not YoudaoRerank._model:
from BCEmbedding import RerankerModel
with YoudaoRerank._model_lock:
if not YoudaoRerank._model:
try:
@ -133,13 +145,17 @@ class YoudaoRerank(DefaultRerank):
for i in range(0, len(pairs), batch_size):
scores = self._model.compute_score(pairs[i:i + batch_size], max_length=self._model.max_length)
scores = sigmoid(np.array(scores)).tolist()
if isinstance(scores, float): res.append(scores)
else: res.extend(scores)
if isinstance(scores, float):
res.append(scores)
else:
res.extend(scores)
return np.array(res), token_count
class XInferenceRerank(Base):
def __init__(self, key="xxxxxxx", model_name="", base_url=""):
if base_url.split("/")[-1] != "v1":
base_url = os.path.join(base_url, "v1")
self.model_name = model_name
self.base_url = base_url
self.headers = {
@ -158,7 +174,10 @@ class XInferenceRerank(Base):
"documents": texts
}
res = requests.post(self.base_url, headers=self.headers, json=data).json()
return np.array([d["relevance_score"] for d in res["results"]]), res["meta"]["tokens"]["input_tokens"]+res["meta"]["tokens"]["output_tokens"]
rank = np.zeros(len(texts), dtype=float)
for d in res["results"]:
rank[d["index"]] = d["relevance_score"]
return rank, res["meta"]["tokens"]["input_tokens"] + res["meta"]["tokens"]["output_tokens"]
class LocalAIRerank(Base):
@ -171,7 +190,7 @@ class LocalAIRerank(Base):
class NvidiaRerank(Base):
def __init__(
self, key, model_name, base_url="https://ai.api.nvidia.com/v1/retrieval/nvidia/"
self, key, model_name, base_url="https://ai.api.nvidia.com/v1/retrieval/nvidia/"
):
if not base_url:
base_url = "https://ai.api.nvidia.com/v1/retrieval/nvidia/"
@ -204,9 +223,10 @@ class NvidiaRerank(Base):
"top_n": len(texts),
}
res = requests.post(self.base_url, headers=self.headers, json=data).json()
rank = np.array([d["logit"] for d in res["rankings"]])
indexs = [d["index"] for d in res["rankings"]]
return rank[indexs], token_count
rank = np.zeros(len(texts), dtype=float)
for d in res["rankings"]:
rank[d["index"]] = d["logit"]
return rank, token_count
class LmStudioRerank(Base):
@ -243,9 +263,10 @@ class CoHereRerank(Base):
top_n=len(texts),
return_documents=False,
)
rank = np.array([d.relevance_score for d in res.results])
indexs = [d.index for d in res.results]
return rank[indexs], token_count
rank = np.zeros(len(texts), dtype=float)
for d in res.results:
rank[d.index] = d.relevance_score
return rank, token_count
class TogetherAIRerank(Base):
@ -258,7 +279,7 @@ class TogetherAIRerank(Base):
class SILICONFLOWRerank(Base):
def __init__(
self, key, model_name, base_url="https://api.siliconflow.cn/v1/rerank"
self, key, model_name, base_url="https://api.siliconflow.cn/v1/rerank"
):
if not base_url:
base_url = "https://api.siliconflow.cn/v1/rerank"
@ -283,10 +304,11 @@ class SILICONFLOWRerank(Base):
response = requests.post(
self.base_url, json=payload, headers=self.headers
).json()
rank = np.array([d["relevance_score"] for d in response["results"]])
indexs = [d["index"] for d in response["results"]]
rank = np.zeros(len(texts), dtype=float)
for d in response["results"]:
rank[d["index"]] = d["relevance_score"]
return (
rank[indexs],
rank,
response["meta"]["tokens"]["input_tokens"] + response["meta"]["tokens"]["output_tokens"],
)
@ -308,9 +330,10 @@ class BaiduYiyanRerank(Base):
documents=texts,
top_n=len(texts),
).body
rank = np.array([d["relevance_score"] for d in res["results"]])
indexs = [d["index"] for d in res["results"]]
return rank[indexs], res["usage"]["total_tokens"]
rank = np.zeros(len(texts), dtype=float)
for d in res["results"]:
rank[d["index"]] = d["relevance_score"]
return rank, res["usage"]["total_tokens"]
class VoyageRerank(Base):
@ -324,6 +347,7 @@ class VoyageRerank(Base):
res = self.client.rerank(
query=query, documents=texts, model=self.model_name, top_k=len(texts)
)
rank = np.array([r.relevance_score for r in res.results])
indexs = [r.index for r in res.results]
return rank[indexs], res.total_tokens
rank = np.zeros(len(texts), dtype=float)
for r in res.results:
rank[r.index] = r.relevance_score
return rank, res.total_tokens
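Every rerank provider above receives the same correctness fix: the old code collected relevance scores in response order (rerank APIs usually return results sorted by score, not by input position) and then indexed back, while the new code scatters each score to its original input position. The shared pattern, isolated:

import numpy as np

def scores_in_input_order(results, n_texts):
    # results: items carrying an input "index" and a "relevance_score",
    # typically sorted by score rather than by input position.
    rank = np.zeros(n_texts, dtype=float)
    for d in results:
        rank[d["index"]] = d["relevance_score"]
    return rank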

View File

@ -93,6 +93,8 @@ class AzureSeq2txt(Base):
class XinferenceSeq2txt(Base):
def __init__(self, key, model_name="", base_url=""):
if base_url.split("/")[-1] != "v1":
base_url = os.path.join(base_url, "v1")
self.client = OpenAI(api_key="xxx", base_url=base_url)
self.model_name = model_name

View File

@ -14,15 +14,32 @@
# limitations under the License.
#
from typing import Annotated, Literal
import _thread as thread
import base64
import datetime
import hashlib
import hmac
import json
import queue
import re
import ssl
import time
from abc import ABC
from datetime import datetime
from time import mktime
from typing import Annotated, Literal
from urllib.parse import urlencode
from wsgiref.handlers import format_date_time
import httpx
import ormsgpack
import requests
import websocket
from pydantic import BaseModel, conint
from rag.utils import num_tokens_from_string
import json
import re
import time
class ServeReferenceAudio(BaseModel):
audio: bytes
text: str
@ -78,13 +95,13 @@ class FishAudioTTS(Base):
with httpx.Client() as client:
try:
with client.stream(
method="POST",
url=self.base_url,
content=ormsgpack.packb(
request, option=ormsgpack.OPT_SERIALIZE_PYDANTIC
),
headers=self.headers,
timeout=None,
method="POST",
url=self.base_url,
content=ormsgpack.packb(
request, option=ormsgpack.OPT_SERIALIZE_PYDANTIC
),
headers=self.headers,
timeout=None,
) as response:
if response.status_code == HTTPStatus.OK:
for chunk in response.iter_bytes():
@ -144,9 +161,9 @@ class QwenTTS(Base):
text = self.normalize_text(text)
callback = Callback()
SpeechSynthesizer.call(model=self.model_name,
text=text,
callback=callback,
format="mp3")
text=text,
callback=callback,
format="mp3")
try:
for data in callback._run():
yield data
@ -154,3 +171,129 @@ class QwenTTS(Base):
except Exception as e:
raise RuntimeError(f"**ERROR**: {e}")
class OpenAITTS(Base):
def __init__(self, key, model_name="tts-1", base_url="https://api.openai.com/v1"):
if not base_url: base_url = "https://api.openai.com/v1"
self.api_key = key
self.model_name = model_name
self.base_url = base_url
self.headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json"
}
def tts(self, text, voice="alloy"):
text = self.normalize_text(text)
payload = {
"model": self.model_name,
"voice": voice,
"input": text
}
response = requests.post(f"{self.base_url}/audio/speech", headers=self.headers, json=payload, stream=True)
if response.status_code != 200:
raise Exception(f"**Error**: {response.status_code}, {response.text}")
for chunk in response.iter_content():
if chunk:
yield chunk
class SparkTTS:
STATUS_FIRST_FRAME = 0
STATUS_CONTINUE_FRAME = 1
STATUS_LAST_FRAME = 2
def __init__(self, key, model_name, base_url=""):
key = json.loads(key)
self.APPID = key.get("spark_app_id", "xxxxxxx")
self.APISecret = key.get("spark_api_secret", "xxxxxxx")
self.APIKey = key.get("spark_api_key", "xxxxxx")
self.model_name = model_name
self.CommonArgs = {"app_id": self.APPID}
self.audio_queue = queue.Queue()
# queue used to buffer the audio data
# build the signed websocket URL
def create_url(self):
url = 'wss://tts-api.xfyun.cn/v2/tts'
now = datetime.now()
date = format_date_time(mktime(now.timetuple()))
signature_origin = "host: " + "ws-api.xfyun.cn" + "\n"
signature_origin += "date: " + date + "\n"
signature_origin += "GET " + "/v2/tts " + "HTTP/1.1"
signature_sha = hmac.new(self.APISecret.encode('utf-8'), signature_origin.encode('utf-8'),
digestmod=hashlib.sha256).digest()
signature_sha = base64.b64encode(signature_sha).decode(encoding='utf-8')
authorization_origin = "api_key=\"%s\", algorithm=\"%s\", headers=\"%s\", signature=\"%s\"" % (
self.APIKey, "hmac-sha256", "host date request-line", signature_sha)
authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8')
v = {
"authorization": authorization,
"date": date,
"host": "ws-api.xfyun.cn"
}
url = url + '?' + urlencode(v)
return url
def tts(self, text):
BusinessArgs = {"aue": "lame", "sfl": 1, "auf": "audio/L16;rate=16000", "vcn": self.model_name, "tte": "utf8"}
Data = {"status": 2, "text": base64.b64encode(text.encode('utf-8')).decode('utf-8')}
CommonArgs = {"app_id": self.APPID}
audio_queue = self.audio_queue
model_name = self.model_name
class Callback:
def __init__(self):
self.audio_queue = audio_queue
def on_message(self, ws, message):
message = json.loads(message)
code = message["code"]
sid = message["sid"]
audio = message["data"]["audio"]
audio = base64.b64decode(audio)
status = message["data"]["status"]
if status == 2:
ws.close()
if code != 0:
errMsg = message["message"]
raise Exception(f"sid:{sid} call error:{errMsg} code:{code}")
else:
self.audio_queue.put(audio)
def on_error(self, ws, error):
raise Exception(error)
def on_close(self, ws, close_status_code, close_msg):
self.audio_queue.put(None)  # put None as the end-of-stream marker
def on_open(self, ws):
def run(*args):
d = {"common": CommonArgs,
"business": BusinessArgs,
"data": Data}
ws.send(json.dumps(d))
thread.start_new_thread(run, ())
wsUrl = self.create_url()
websocket.enableTrace(False)
a = Callback()
ws = websocket.WebSocketApp(wsUrl, on_open=a.on_open, on_error=a.on_error, on_close=a.on_close,
on_message=a.on_message)
status_code = 0
ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
while True:
audio_chunk = self.audio_queue.get()
if audio_chunk is None:
if status_code == 0:
raise Exception(
f"Fail to access model({model_name}) using the provided credentials. **ERROR**: Invalid APPID, API Secret, or API Key.")
else:
break
status_code = 1
yield audio_chunk
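A usage sketch for the new SparkTTS class; the credential fields match the ones parsed in __init__ above, but the values and the voice name are placeholders:

import json

tts = SparkTTS(
    key=json.dumps({
        "spark_app_id": "your-app-id",
        "spark_api_secret": "your-api-secret",
        "spark_api_key": "your-api-key",
    }),
    model_name="xiaoyan",  # a XunFei voice name, sent as "vcn" in the request
)
with open("out.mp3", "wb") as f:  # "aue": "lame" means MP3 frames
    for chunk in tts.tts("Hello from RAGFlow"):
        f.write(chunk)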

View File

@ -64,7 +64,7 @@ class RagTokenizer:
self.stemmer = PorterStemmer()
self.lemmatizer = WordNetLemmatizer()
self.SPLIT_CHAR = r"([ ,\.<>/?;'\[\]\\`!@#$%^&*\(\)\{\}\|_+=《》,。?、;‘’:“”【】~!¥%……()——-]+|[a-z\.-]+|[0-9,\.-]+)"
self.SPLIT_CHAR = r"([ ,\.<>/?;:'\[\]\\`!@#$%^&*\(\)\{\}\|_+=《》,。?、;‘’:“”【】~!¥%……()——-]+|[a-z\.-]+|[0-9,\.-]+)"
try:
self.trie_ = datrie.Trie.load(self.DIR_ + ".txt.trie")
return

View File

@ -348,7 +348,7 @@ class Dealer:
ins_tw.append(tks)
tksim = self.qryr.token_similarity(keywords, ins_tw)
vtsim,_ = rerank_mdl.similarity(" ".join(keywords), [rmSpace(" ".join(tks)) for tks in ins_tw])
vtsim,_ = rerank_mdl.similarity(query, [rmSpace(" ".join(tks)) for tks in ins_tw])
return tkweight*np.array(tksim) + vtweight*vtsim, tksim, vtsim

View File

@ -14,6 +14,7 @@
# limitations under the License.
#
import os
import logging
from api.utils import get_base_config, decrypt_database_config
from api.utils.file_utils import get_project_base_directory
from api.utils.log_utils import LoggerFactory, getLogger
@ -48,10 +49,15 @@ minio_logger = getLogger("minio")
s3_logger = getLogger("s3")
azure_logger = getLogger("azure")
cron_logger = getLogger("cron_logger")
cron_logger.setLevel(20)
chunk_logger = getLogger("chunk_logger")
database_logger = getLogger("database")
formatter = logging.Formatter("%(asctime)-15s %(levelname)-8s (%(process)d) %(message)s")
for logger in [es_logger, minio_logger, s3_logger, azure_logger, cron_logger, chunk_logger, database_logger]:
logger.setLevel(logging.INFO)
for handler in logger.handlers:
handler.setFormatter(fmt=formatter)
SVR_QUEUE_NAME = "rag_flow_svr_queue"
SVR_QUEUE_RETENTION = 60*60
SVR_QUEUE_MAX_LEN = 1024

View File

@ -25,34 +25,31 @@ import time
import traceback
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from api.db.services.file2document_service import File2DocumentService
from api.settings import retrievaler
from rag.raptor import RecursiveAbstractiveProcessing4TreeOrganizedRetrieval as Raptor
from rag.utils.storage_factory import STORAGE_IMPL
from api.db.db_models import close_connection
from rag.settings import database_logger, SVR_QUEUE_NAME
from rag.settings import cron_logger, DOC_MAXIMUM_SIZE
from multiprocessing import Pool
import numpy as np
from elasticsearch_dsl import Q, Search
from multiprocessing.context import TimeoutError
from api.db.services.task_service import TaskService
from rag.utils.es_conn import ELASTICSEARCH
from timeit import default_timer as timer
from rag.utils import rmSpace, findMaxTm, num_tokens_from_string
from rag.nlp import search, rag_tokenizer
from io import BytesIO
import pandas as pd
from multiprocessing.context import TimeoutError
from timeit import default_timer as timer
from rag.app import laws, paper, presentation, manual, qa, table, book, resume, picture, naive, one, audio, knowledge_graph, email
import numpy as np
import pandas as pd
from elasticsearch_dsl import Q
from api.db import LLMType, ParserType
from api.db.services.document_service import DocumentService
from api.db.services.llm_service import LLMBundle
from api.db.services.task_service import TaskService
from api.db.services.file2document_service import File2DocumentService
from api.settings import retrievaler
from api.utils.file_utils import get_project_base_directory
from rag.utils.redis_conn import REDIS_CONN
from api.db.db_models import close_connection
from rag.app import laws, paper, presentation, manual, qa, table, book, resume, picture, naive, one, audio, knowledge_graph, email
from rag.nlp import search, rag_tokenizer
from rag.raptor import RecursiveAbstractiveProcessing4TreeOrganizedRetrieval as Raptor
from rag.settings import database_logger, SVR_QUEUE_NAME
from rag.settings import cron_logger, DOC_MAXIMUM_SIZE
from rag.utils import rmSpace, num_tokens_from_string
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.redis_conn import REDIS_CONN, Payload
from rag.utils.storage_factory import STORAGE_IMPL
BATCH_SIZE = 64
@ -74,11 +71,11 @@ FACTORY = {
ParserType.KG.value: knowledge_graph
}
CONSUMEER_NAME = "task_consumer_" + ("0" if len(sys.argv) < 2 else sys.argv[1])
PAYLOAD = None
CONSUMER_NAME = "task_consumer_" + ("0" if len(sys.argv) < 2 else sys.argv[1])
PAYLOAD: Payload | None = None
def set_progress(task_id, from_page=0, to_page=-1,
prog=None, msg="Processing..."):
def set_progress(task_id, from_page=0, to_page=-1, prog=None, msg="Processing..."):
global PAYLOAD
if prog is not None and prog < 0:
msg = "[ERROR]" + msg
@ -107,11 +104,11 @@ def set_progress(task_id, from_page=0, to_page=-1,
def collect():
global CONSUMEER_NAME, PAYLOAD
global CONSUMER_NAME, PAYLOAD
try:
PAYLOAD = REDIS_CONN.get_unacked_for(CONSUMEER_NAME, SVR_QUEUE_NAME, "rag_flow_svr_task_broker")
PAYLOAD = REDIS_CONN.get_unacked_for(CONSUMER_NAME, SVR_QUEUE_NAME, "rag_flow_svr_task_broker")
if not PAYLOAD:
PAYLOAD = REDIS_CONN.queue_consumer(SVR_QUEUE_NAME, "rag_flow_svr_task_broker", CONSUMEER_NAME)
PAYLOAD = REDIS_CONN.queue_consumer(SVR_QUEUE_NAME, "rag_flow_svr_task_broker", CONSUMER_NAME)
if not PAYLOAD:
time.sleep(1)
return pd.DataFrame()
@ -137,7 +134,7 @@ def collect():
return tasks
def get_minio_binary(bucket, name):
def get_storage_binary(bucket, name):
return STORAGE_IMPL.get(bucket, name)
@ -155,12 +152,12 @@ def build(row):
chunker = FACTORY[row["parser_id"].lower()]
try:
st = timer()
bucket, name = File2DocumentService.get_minio_address(doc_id=row["doc_id"])
binary = get_minio_binary(bucket, name)
bucket, name = File2DocumentService.get_storage_address(doc_id=row["doc_id"])
binary = get_storage_binary(bucket, name)
cron_logger.info(
"From minio({}) {}/{}".format(timer() - st, row["location"], row["name"]))
except TimeoutError as e:
callback(-1, f"Internal server error: Fetch file from minio timeout. Could you try it again.")
except TimeoutError:
callback(-1, "Internal server error: Fetch file from minio timeout. Could you try it again.")
cron_logger.error(
"Minio {}/{}: Fetch file from minio timeout.".format(row["location"], row["name"]))
return
@ -168,8 +165,7 @@ def build(row):
if re.search("(No such file|not found)", str(e)):
callback(-1, "Can not find file <%s> from minio. Could you try it again?" % row["name"])
else:
callback(-1, f"Get file from minio: %s" %
str(e).replace("'", ""))
callback(-1, "Get file from minio: %s" % str(e).replace("'", ""))
traceback.print_exc()
return
@ -180,7 +176,7 @@ def build(row):
cron_logger.info(
"Chunking({}) {}/{}".format(timer() - st, row["location"], row["name"]))
except Exception as e:
callback(-1, f"Internal server error while chunking: %s" %
callback(-1, "Internal server error while chunking: %s" %
str(e).replace("'", ""))
cron_logger.error(
"Chunking {}/{}: {}".format(row["location"], row["name"], str(e)))
@ -236,7 +232,9 @@ def init_kb(row):
open(os.path.join(get_project_base_directory(), "conf", "mapping.json"), "r")))
def embedding(docs, mdl, parser_config={}, callback=None):
def embedding(docs, mdl, parser_config=None, callback=None):
if parser_config is None:
parser_config = {}
batch_size = 32
tts, cnts = [rmSpace(d["title_tks"]) for d in docs if d.get("title_tks")], [
re.sub(r"</?(table|td|caption|tr|th)( [^<>]{0,12})?>", " ", d["content_with_weight"]) for d in docs]
@ -277,7 +275,7 @@ def embedding(docs, mdl, parser_config={}, callback=None):
def run_raptor(row, chat_mdl, embd_mdl, callback=None):
vts, _ = embd_mdl.encode(["ok"])
vctr_nm = "q_%d_vec"%len(vts[0])
vctr_nm = "q_%d_vec" % len(vts[0])
chunks = []
for d in retrievaler.chunk_list(row["doc_id"], row["tenant_id"], fields=["content_with_weight", vctr_nm]):
chunks.append((d["content_with_weight"], np.array(d[vctr_nm])))
@ -374,7 +372,7 @@ def main():
cron_logger.info("Indexing elapsed({}): {:.2f}".format(r["name"], timer() - st))
if es_r:
callback(-1, f"Insert chunk error, detail info please check ragflow-logs/api/cron_logger.log. Please also check ES status!")
callback(-1, "Insert chunk error, detail info please check ragflow-logs/api/cron_logger.log. Please also check ES status!")
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=r["doc_id"]), idxnm=search.index_name(r["tenant_id"]))
cron_logger.error(str(es_r))
@ -392,15 +390,15 @@ def main():
def report_status():
global CONSUMEER_NAME
global CONSUMER_NAME
while True:
try:
obj = REDIS_CONN.get("TASKEXE")
if not obj: obj = {}
else: obj = json.loads(obj)
if CONSUMEER_NAME not in obj: obj[CONSUMEER_NAME] = []
obj[CONSUMEER_NAME].append(timer())
obj[CONSUMEER_NAME] = obj[CONSUMEER_NAME][-60:]
if CONSUMER_NAME not in obj: obj[CONSUMER_NAME] = []
obj[CONSUMER_NAME].append(timer())
obj[CONSUMER_NAME] = obj[CONSUMER_NAME][-60:]
REDIS_CONN.set_obj("TASKEXE", obj, 60*2)
except Exception as e:
print("[Exception]:", str(e))

View File

@ -78,11 +78,9 @@ encoder = tiktoken.encoding_for_model("gpt-3.5-turbo")
def num_tokens_from_string(string: str) -> int:
"""Returns the number of tokens in a text string."""
try:
num_tokens = len(encoder.encode(string))
return num_tokens
except Exception as e:
pass
return 0
return len(encoder.encode(string))
except Exception:
return 0
def truncate(string: str, max_len: int) -> str:
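With the rewrite, num_tokens_from_string returns the tiktoken count directly and falls back to 0 only when encoding fails:

print(num_tokens_from_string("hello world"))  # 2 with the gpt-3.5-turbo encoder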

View File

@ -27,4 +27,5 @@ class StorageFactory:
return cls.storage_mapping[storage]()
STORAGE_IMPL = StorageFactory.create(Storage[os.getenv('STORAGE_IMPL', 'MINIO')])
STORAGE_IMPL_TYPE = os.getenv('STORAGE_IMPL', 'MINIO')
STORAGE_IMPL = StorageFactory.create(Storage[STORAGE_IMPL_TYPE])

View File

@ -1,104 +0,0 @@
akshare==1.14.72
azure-storage-blob==12.22.0
azure-identity==1.17.1
azure-storage-file-datalake==12.16.0
anthropic===0.34.1
arxiv==2.1.3
Aspose.Slides==24.2.0
BCEmbedding==0.1.3
Bio==1.7.1
boto3==1.34.140
botocore==1.34.140
cachetools==5.3.3
chardet==5.2.0
cn2an==0.5.22
cohere==5.6.2
dashscope==1.14.1
datrie==0.8.2
deepl==1.18.0
demjson3==3.0.6
discord.py==2.3.2
duckduckgo_search==6.1.9
editdistance==0.8.1
elastic_transport==8.12.0
elasticsearch==8.12.1
elasticsearch_dsl==8.12.0
fastembed==0.2.6
fasttext==0.9.3
filelock==3.15.4
FlagEmbedding==1.2.10
Flask==3.0.3
Flask_Cors==5.0.0
Flask_Login==0.6.3
flask_session==0.8.0
google_search_results==2.4.2
groq==0.9.0
hanziconv==0.3.2
html_text==0.6.2
httpx==0.27.0
huggingface_hub==0.20.3
infinity_emb==0.0.51
itsdangerous==2.1.2
Markdown==3.6
markdown_to_json==2.1.1
minio==7.2.4
mistralai==0.4.2
nltk==3.9
numpy==1.26.4
ollama==0.2.1
onnxruntime==1.17.3
onnxruntime_gpu==1.17.1
openai==1.12.0
opencv_python==4.9.0.80
opencv_python_headless==4.9.0.80
openpyxl==3.1.2
ormsgpack==1.5.0
pandas==2.2.2
pdfplumber==0.10.4
peewee==3.17.1
Pillow==10.3.0
pipreqs==0.5.0
protobuf==5.27.2
psycopg2-binary==2.9.9
pyclipper==1.3.0.post5
pycryptodomex==3.20.0
pypdf==4.3.0
PyPDF2==3.0.1
pytest==8.2.2
python-dotenv==1.0.1
python_dateutil==2.8.2
python_pptx==0.6.23
pywencai==0.12.2
qianfan==0.4.6
ranx==0.3.20
readability_lxml==0.8.1
redis==5.0.3
Requests==2.32.2
replicate==0.31.0
roman_numbers==1.0.2
ruamel.base==1.0.0
scholarly==1.7.11
scikit_learn==1.5.0
selenium==4.22.0
setuptools==70.0.0
Shapely==2.0.5
six==1.16.0
StrEnum==0.4.15
tabulate==0.9.0
tencentcloud-sdk-python==3.0.1215
tika==2.6.0
tiktoken==0.6.0
torch==2.3.0
transformers==4.38.1
umap==0.1.1
vertexai==1.64.0
volcengine==1.0.146
voyageai==0.2.3
webdriver_manager==4.0.1
Werkzeug==3.0.3
wikipedia==1.4.0
word2number==1.1
xgboost==2.1.0
xpinyin==0.7.6
yfinance==0.1.96
zhipuai==2.0.1

View File

@ -1,175 +0,0 @@
accelerate==0.27.2
aiohttp==3.10.2
aiosignal==1.3.1
annotated-types==0.6.0
anthropic===0.34.1
anyio==4.3.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
#Aspose.Slides==24.2.0
attrs==23.2.0
blinker==1.7.0
cachelib==0.12.0
cachetools==5.3.3
certifi==2024.7.4
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
cohere==5.6.2
coloredlogs==15.0.1
cryptography==43.0.1
dashscope==1.14.1
datasets==2.17.1
datrie==0.8.2
demjson3==3.0.6
dill==0.3.8
distro==1.9.0
elastic-transport==8.12.0
elasticsearch==8.12.1
elasticsearch-dsl==8.12.0
et-xmlfile==1.1.0
filelock==3.13.1
fastembed==0.2.6
FlagEmbedding==1.2.5
Flask==3.0.2
Flask-Cors==5.0.0
Flask-Login==0.6.3
Flask-Session==0.6.0
flatbuffers==23.5.26
frozenlist==1.4.1
fsspec==2023.10.0
h11==0.14.0
hanziconv==0.3.2
httpcore==1.0.4
httpx==0.27.0
huggingface-hub==0.20.3
humanfriendly==10.0
idna==3.7
itsdangerous==2.1.2
Jinja2==3.1.4
joblib==1.3.2
lxml==5.1.0
MarkupSafe==2.1.5
minio==7.2.4
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.16
networkx==3.2.1
nltk==3.9
numpy==1.26.4
# nvidia-cublas-cu12==12.1.3.1
# nvidia-cuda-cupti-cu12==12.1.105
# nvidia-cuda-nvrtc-cu12==12.1.105
# nvidia-cuda-runtime-cu12==12.1.105
# nvidia-cudnn-cu12==8.9.2.26
# nvidia-cufft-cu12==11.0.2.54
# nvidia-curand-cu12==10.3.2.106
# nvidia-cusolver-cu12==11.4.5.107
# nvidia-cusparse-cu12==12.1.0.106
# nvidia-nccl-cu12==2.19.3
# nvidia-nvjitlink-cu12==12.3.101
# nvidia-nvtx-cu12==12.1.105
ollama==0.1.9
# onnxruntime-gpu==1.17.1
openai==1.12.0
opencv-python==4.9.0.80
openpyxl==3.1.2
ormsgpack==1.5.0
packaging==23.2
pandas==2.2.1
pdfminer.six==20221105
pdfplumber==0.10.4
peewee==3.17.1
pillow==10.3.0
protobuf==4.25.3
psutil==5.9.8
psycopg2-binary==2.9.9
pyarrow==15.0.0
pyarrow-hotfix==0.6
pyclipper==1.3.0.post5
pycparser==2.21
pycryptodome
pycryptodome-test-vectors
pycryptodomex
pydantic==2.6.2
pydantic_core==2.16.3
PyJWT==2.8.0
PyMySQL==1.1.1
PyPDF2==3.0.1
pypdfium2==4.27.0
python-dateutil==2.8.2
python-docx==1.1.0
python-dotenv==1.0.1
python-pptx==0.6.23
PyYAML==6.0.1
qianfan==0.4.6
redis==5.0.3
regex==2023.12.25
replicate==0.31.0
requests==2.32.2
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
safetensors==0.4.2
scikit-learn==1.5.0
scipy==1.12.0
sentence-transformers==2.4.0
shapely==2.0.3
six==1.16.0
sniffio==1.3.1
StrEnum==0.4.15
sympy==1.12
tencentcloud-sdk-python==3.0.1215
threadpoolctl==3.3.0
tika==2.6.0
tiktoken==0.6.0
tokenizers==0.15.2
torch==2.2.1
tqdm==4.66.3
transformers==4.38.1
# triton==2.2.0
typing_extensions==4.10.0
tzdata==2024.1
urllib3==2.2.2
Werkzeug==3.0.3
xgboost==2.0.3
XlsxWriter==3.2.0
xpinyin==0.7.6
xxhash==3.4.1
yarl==1.9.4
zhipuai==2.0.1
BCEmbedding
loguru==0.7.2
umap-learn
fasttext==0.9.2
volcengine==1.0.141
voyageai==0.2.3
opencv-python-headless==4.9.0.80
readability-lxml==0.8.1
html_text==0.6.2
selenium==4.21.0
webdriver-manager==4.0.1
cn2an==0.5.22
roman-numbers==1.0.2
word2number==1.1
markdown==3.6
mistralai==0.4.2
boto3==1.34.140
duckduckgo_search==6.1.9
google-generativeai==0.7.2
groq==0.9.0
wikipedia==1.4.0
Bio==1.7.1
arxiv==2.1.3
pypdf==4.3.0
google_search_results==2.4.2
editdistance==0.8.1
markdown_to_json==2.1.1
scholarly==1.7.11
deepl==1.18.0
psycopg2-binary==2.9.9
tabulate==0.9.0
vertexai==1.64.0
yfinance==0.1.96
pywencai==0.12.2
akshare==1.14.72
ranx==0.3.20

View File

@ -76,7 +76,7 @@ class Assistant(Base):
raise Exception(res["retmsg"])
def get_session(self, id) -> Session:
res = self.get("/session/get", {"id": id})
res = self.get("/session/get", {"id": id,"assistant_id":self.id})
res = res.json()
if res.get("retmsg") == "success":
return Session(self.rag, res["data"])

View File

@ -3,32 +3,46 @@ from .base import Base
class Chunk(Base):
def __init__(self, rag, res_dict):
# initialize the instance attributes
self.id = ""
self.content_with_weight = ""
self.content_ltks = []
self.content_sm_ltks = []
self.important_kwd = []
self.important_tks = []
self.content = ""
self.important_keywords = []
self.create_time = ""
self.create_timestamp_flt = 0.0
self.kb_id = None
self.docnm_kwd = ""
self.doc_id = ""
self.q_vec = []
self.status = "1"
for k, v in res_dict.items():
if hasattr(self, k):
setattr(self, k, v)
self.create_timestamp = 0.0
self.knowledgebase_id = None
self.document_name = ""
self.document_id = ""
self.available = 1
for k in list(res_dict.keys()):
if k not in self.__dict__:
res_dict.pop(k)
super().__init__(rag, res_dict)
def delete(self) -> bool:
"""
Delete the chunk in the document.
"""
res = self.rm('/doc/chunk/rm',
{"doc_id": [self.id],""})
res = self.post('/doc/chunk/rm',
{"document_id": self.document_id, 'chunk_ids': [self.id]})
res = res.json()
if res.get("retmsg") == "success":
return True
raise Exception(res["retmsg"])
def save(self) -> bool:
"""
Save the document details to the server.
"""
res = self.post('/doc/chunk/set',
{"chunk_id": self.id,
"knowledgebase_id": self.knowledgebase_id,
"name": self.document_name,
"content": self.content,
"important_keywords": self.important_keywords,
"document_id": self.document_id,
"available": self.available,
})
res = res.json()
if res.get("retmsg") == "success":
return True
raise Exception(res["retmsg"])
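The SDK's Chunk object now exposes the public field names (content, important_keywords, document_id, knowledgebase_id, available) and real delete/save round-trips. A hypothetical usage, assuming an authenticated rag client as elsewhere in the SDK:

chunk = Chunk(rag, {
    "id": "abc123",
    "content": "RAGFlow is a RAG engine.",
    "important_keywords": ["ragflow"],
    "document_id": "doc42",
    "knowledgebase_id": "kb7",
})
chunk.content = "RAGFlow is an open-source RAG engine."
chunk.save()    # POST /doc/chunk/set with the renamed fields
chunk.delete()  # POST /doc/chunk/rm with document_id and chunk_ids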

View File

@ -65,7 +65,7 @@ class DataSet(Base):
"""
# Construct the request payload for listing documents
payload = {
"kb_id": self.id,
"knowledgebase_id": self.id,
"keywords": keywords,
"offset": offset,
"limit": limit

Some files were not shown because too many files have changed in this diff.