Compare commits

...

165 Commits

Author SHA1 Message Date
a36a0fe71c Docs: Update version references to v0.22.0 in READMEs and docs (#11211)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.21.1 to v0.22.0
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-11-12 14:54:28 +08:00
a81f6d1b24 Fix: Bug Fixes - Added disabled logic RAPTOR scope #10703 (#11207)
### What problem does this PR solve?

Fix: Bug Fixes #10703

- Fixed the menu order in the user center
- Added a disabled RAPTOR scope
- Fixed some style issues

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 14:36:30 +08:00
8406a5ea47 Fix typos (#11208)
### What problem does this PR solve?

As title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-12 14:20:04 +08:00
20b6dafbd8 Update docs (#11204)
### What problem does this PR solve?

as title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-12 14:01:47 +08:00
33cc9cafa9 chore(readme): remove slim image from docs (#11199)
### What problem does this PR solve?

RAGFlow will no longer offer Docker images that contain embedding
models.

### Type of change

- [x] Documentation Update
2025-11-12 13:57:35 +08:00
6567ecf15a Bump infinity to 0.6.5 (#11203)
### What problem does this PR solve?

Bump infinity to 0.6.5

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 13:33:33 +08:00
3a7322f5b2 Docs: Added v0.22.0 release notes. (#11202)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-11-12 13:10:07 +08:00
829e5f287b Fixes: Fixed some bugs #10703 (#11200)
### What problem does this PR solve?

Fixes: Fixed some bugs #10703

- Removed login page animation
- Modified some styles in the user profile center

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 12:53:41 +08:00
1e8efa2631 chore(template): update agent template's title (#11201)
### What problem does this PR solve?

Update title

### Type of change

- [x] Other (please describe):
2025-11-12 12:53:28 +08:00
e7f7c09b0b Fix: Fixed an issue that caused the page to crash when a knowledge base variable was selected. #10427 (#11197)
### What problem does this PR solve?

Fix: Fixed an issue that caused the page to crash when a knowledge base
variable was selected. #10427

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 12:30:08 +08:00
8ae562504b Fix: GraphRAG and RAPTOR tasks do not affect document status (#11194)
### What problem does this PR solve?

GraphRAG and RAPTOR tasks do not affect document status.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 12:03:41 +08:00
bacc9d3ab9 Revert PR#11151 (#11196)
### What problem does this PR solve?

Revert PR#11151

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 11:58:02 +08:00
d226764ed0 Fix: connector auto-parse issue. (#11189)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 11:50:39 +08:00
39120d49cf Docs: Removed descriptions of the slim edition. (#11192)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-11-12 11:34:45 +08:00
27211a9b34 Update Chinese README.md on slim version (#11190)
### What problem does this PR solve?

As title.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-12 11:06:08 +08:00
e9de25c973 Docs: update latest updates. (#11188)
### Type of change

- [x] Documentation Update
2025-11-12 10:38:33 +08:00
09e971dcc8 chore(templates): add user interaction agent (#11185)
### What problem does this PR solve?
Add user interaction agent template

### Type of change

- [x] Other (please describe): new agent template
2025-11-12 09:38:39 +08:00
883df22aa2 Update LLM factories ranks in llm_factories.json (#11184)
### What problem does this PR solve?

[Update LLM factory ranks in llm_factories.json]

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 09:38:06 +08:00
2bd7abadd3 Fix: Confluence cannot retrieve updated files (#11182)
### What problem does this PR solve?

Confluence cannot retrieve updated files.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 09:37:32 +08:00
435479adb3 Fixes: Fixed some bugs #10703 (#11180)
### What problem does this PR solve?

Fixes: Fixed some bugs #10703

- Removed S3 upload from the file upload component
- Updated the dropdown menu style on the model provider page
- Updated some model provider icons
- Fixed other style issues

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 09:36:48 +08:00
2c727a4a9c Docs: parser behavior change (#11176)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-11-11 21:10:06 +08:00
a15f522dc9 Update Admin UI user guide docs (#11183)
### What problem does this PR solve?

- Update Admin UI user guide docs

### Type of change

- [x] Documentation Update
2025-11-11 20:29:20 +08:00
de53498b39 Fix: Update env to support PPTX and update README for version changes (#11167)
### What problem does this PR solve?

Fix: Update env to support PPTX
Fix: update README for version changes #11138

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-11-11 19:56:54 +08:00
72740eb5b9 Fix: data_operations input return (#11177)
### What problem does this PR solve?

change:
data_operations input return

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 19:54:17 +08:00
c30ffb5716 Fix: ollama model list issue. (#11175)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 19:46:41 +08:00
6dcff7db97 Feat: The input parameters of data manipulation operators can only be of type object. #10427 (#11179)
### What problem does this PR solve?

Feat: The input parameters of data manipulation operators can only be of
type object. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 19:43:49 +08:00
9213568692 Feat: add mechanism to check cancellation in Agent (#10766)
### What problem does this PR solve?

Add mechanism to check cancellation in Agent.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 17:36:48 +08:00
d81e4095de Feat: Google drive supports web-based credentials (#11173)
### What problem does this PR solve?

 Google drive supports web-based credentials.

<img width="1204" height="612" alt="image"
src="https://github.com/user-attachments/assets/70291c63-a2dd-4a80-ae20-807fe034cdbc"
/>


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 17:21:08 +08:00
8ddeaca3d6 Feat: Place the new mcp button at the end of the line. #10427 (#11170)
### What problem does this PR solve?

Feat: Place the new mcp button at the end of the line. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 17:11:32 +08:00
f441f8ffc2 Fix: waitForResponse component. (#11172)
### What problem does this PR solve?

#10056

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 16:58:47 +08:00
522c7b7ac6 Fix: model provider issues and improved some features #10703 (#11168)
### What problem does this PR solve?

Fixes: Fixed model provider issues and improved some features
- Removed the old login page
- Updated model provider icons
- Added RAPTOR modification range parameter

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 16:26:26 +08:00
377c0fb4fa Feat: Call the interface to stop the output of the large model #10997 (#11164)
### What problem does this PR solve?

Feat: Call the interface to stop the output of the large model #10997

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 15:21:08 +08:00
7dd9758056 Add task executor bar chart, add system version string (#11155)
### What problem does this PR solve?

- Add task executor bar chart
- Add system version string

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 15:20:37 +08:00
26cf5131c9 Fix: filter builtin llm factories. (#11163)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 14:52:59 +08:00
93207f83ba Changed infinity log level to info (#11165)
### What problem does this PR solve?

Changed infinity log level to info

### Type of change

- [x] Refactoring
2025-11-11 14:43:25 +08:00
f77604db26 Docs: add admin UI user guide (#11156)
### What problem does this PR solve?

Add admin UI user guide

### Type of change

- [x] Documentation Update
2025-11-11 14:20:35 +08:00
dd5b8e2e1a Fix: add auto_parse to kb detail. (#11153)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 12:22:43 +08:00
83ff8e8009 Fix: update agent variable name rule (#11124)
### What problem does this PR solve?

change:

1. update agent variable name rule.
2. reset() in Canvas doesn't reset the env var.
3. correct log input binding in message component
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 11:18:30 +08:00
7db6cb8ca3 Fixes: Bugs fixed #10703 (#11154)
### What problem does this PR solve?

Fixes: Bugs fixed
- Removed invalid code,
- Modified the user center style,
- Added an automatic data source parsing switch.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-11 11:18:07 +08:00
ba6470a7a5 Chore(config): Added rank values for the LLM vendors and remove deprecated LLM (#11133)
### What problem does this PR solve?

Added vendor ranking so that frequently used model providers appear
higher on the page for easier access.
Remove deprecated LLM configurations from llm_factories.json to
streamline model management

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 19:17:35 +08:00
df16a80f25 Feat: add initial Google Drive connector support (#11147)
### What problem does this PR solve?

This feature is primarily ported from the
[Onyx](https://github.com/onyx-dot-app/onyx) project with necessary
modifications. Thanks for such a brilliant project.

Minor: consistently use `google_drive` rather than `google_driver`.

<img width="566" height="731" alt="image"
src="https://github.com/user-attachments/assets/6f64e70e-881e-42c7-b45f-809d3e0024a4"
/>

<img width="904" height="830" alt="image"
src="https://github.com/user-attachments/assets/dfa7d1ef-819a-4a82-8c52-0999f48ed4a6"
/>

<img width="911" height="869" alt="image"
src="https://github.com/user-attachments/assets/39e792fb-9fbe-4f3d-9b3c-b2265186bc22"
/>

<img width="947" height="323" alt="image"
src="https://github.com/user-attachments/assets/27d70e96-d9c0-42d9-8c89-276919b6d61d"
/>


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 19:15:02 +08:00
29ea059f90 Feat: Adjust the style of mcp and checkbox. #10427 (#11150)
### What problem does this PR solve?

Feat: Adjust the style of mcp and checkbox. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 19:02:41 +08:00
a191933f81 Fix(config): Add raptor_kwd field to infinity mapping (#11146)
### What problem does this PR solve?

Fix the Infinity "INSERT: Column raptor_kwd not found in table" error.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 19:02:25 +08:00
6e1ebb2855 Fix: Optimize Prompts and Regex for use_sql() (#11148)
### What problem does this PR solve?

Fix: Optimize Prompts and Regex for use_sql() #11127 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 19:02:07 +08:00
68b952abb1 Don't select vector on infinity (#11151)
### What problem does this PR solve?

Don't select vector on infinity

### Type of change

- [x] Performance Improvement
2025-11-10 18:01:40 +08:00
0879b6af2c Feat: Globally defined conversation variables can be selected in the operator's query variables. #10427 (#11135)
### What problem does this PR solve?

Feat: Globally defined conversation variables can be selected in the
operator's query variables. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 15:09:33 +08:00
2b9145948f Fix:not enough values to unpack (expected 3, got 2) in general chunk (#11139)
### What problem does this PR solve?
issue:
#11136
change:
not enough values to unpack (expected 3, got 2) in general chunk
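
For context, this is the classic Python failure mode when a 3-way unpack meets a 2-tuple. A minimal illustration of the error and a tolerant unpacking pattern (names are hypothetical, not RAGFlow's actual chunking code):

```python
# Hypothetical rows; the second one is missing its third field.
rows = [("text", "position", "image"), ("text", "position")]

for row in rows:
    # text, position, image = row  # raises: not enough values to unpack (expected 3, got 2)
    text, position, image = (tuple(row) + (None,) * 3)[:3]  # pad missing fields with None
    print(text, position, image)
```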

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 15:08:24 +08:00
726473fd39 Fix: Bugs fixed #10703 (#11132)
### What problem does this PR solve?

Fix: Bugs fixed #10703

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 14:12:45 +08:00
d207291217 Fix: add download stats to kb logs. (#11112)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 13:28:07 +08:00
bf382e5c4d Fix: remove unsupported models in siliconflow api (#11126)
### What problem does this PR solve?

Fix: remove unsupported models in siliconflow api

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 13:27:42 +08:00
4338e706c6 Fix: missing file formats in hierarchical_manager (#11129)
### What problem does this PR solve?

Fix: missing file formats in hierarchical_manager  #11084 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 13:27:22 +08:00
86af330f06 Feat: The keys for data manipulation operators can only be numbers, letters, and underscores. #10427 (#11130)
### What problem does this PR solve?

Feat: The keys for data manipulation operators can only be numbers,
letters, and underscores. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 13:27:09 +08:00
d016a06fd5 Feat/monitor task (#11116)
### What problem does this PR solve?

Show task executor.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 12:51:39 +08:00
7423a5806e Feature: Added global variable functionality #10703 (#11117)
### What problem does this PR solve?

Feature: Added global variable functionality

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 10:16:12 +08:00
b6cd282ccd fix: layout structure to use main tag (#11119)
### What problem does this PR solve?

For proper semantics, the Layout should use an HTML `<main>` element to
wrap the Header and Outlet, which produce `<section>` HTML elements.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 10:15:57 +08:00
82ca2e0378 Refactor: QWenCV release temp path (#11122)
### What problem does this PR solve?

QWenCV release temp path

### Type of change
- [x] Refactoring
2025-11-10 10:15:37 +08:00
1cd54832b5 Adjust styles to match the design system (#11118)
### What problem does this PR solve?

- Modify and adjust styles (CSS vars, components) to match the design
system
- Adjust file and directory structure of admin UI

### Type of change

- [x] Refactoring
2025-11-10 10:05:19 +08:00
660386d3b5 Fix: cannot parse images (#11044)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/11043

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 09:31:19 +08:00
4cdaa77545 Docs: refine MinerU part in FAQ (#11111)
### What problem does this PR solve?

Refine MinerU part in FAQ.

### Type of change

- [x] Documentation Update
2025-11-07 19:58:07 +08:00
9fcc4946e2 Feat: add kimi-k2-thinking and moonshot-v1-vision-preview (#11110)
### What problem does this PR solve?

Add kimi-k2-thinking and moonshot-v1-vision-preview.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 19:52:57 +08:00
98e9d68c75 Feat: Add Variable aggregator (#11114)
### What problem does this PR solve?
Feat: Add Variable aggregator

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 19:52:26 +08:00
8f34824aa4 Feat: Display the selected variables in the variable aggregation node. #10427 (#11113)
### What problem does this PR solve?
Feat: Display the selected variables in the variable aggregation node.
#10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 19:52:04 +08:00
9a6808230a Fix workflows 2025-11-07 17:14:04 +08:00
c7bd0a755c Fix: python api streaming structure (#11105)
### What problem does this PR solve?

Fix: python api streaming structure

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 16:50:58 +08:00
dd1c8c5779 Feat: add auto parse to connector. (#11099)
### What problem does this PR solve?

#10953

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 16:49:29 +08:00
526ba3388f Feat: The output is derived based on the configuration of the variable aggregation operator. #10427 (#11109)
### What problem does this PR solve?

Feat: The output is derived based on the configuration of the variable
aggregation operator. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 16:35:32 +08:00
cb95072ecf Fix workflows 2025-11-07 15:57:33 +08:00
f6aeebc608 Fix: cannot write mode RGBA as JPEG (#11102)
### What problem does this PR solve?
Fix #11091 
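For context, Pillow raises `OSError: cannot write mode RGBA as JPEG` because JPEG has no alpha channel. The usual remedy, shown as a sketch (not necessarily this PR's exact change, and the file path is a placeholder), is to convert to RGB before saving:

```python
from PIL import Image

img = Image.open("page.png")      # may be RGBA or palette-based
if img.mode in ("RGBA", "LA", "P"):
    img = img.convert("RGB")      # drop the alpha channel; JPEG cannot store it
img.save("page.jpg", "JPEG")
```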
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 15:45:10 +08:00
307f53dae8 Minor tweaks (#11106)
### What problem does this PR solve?

Refactor

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-07 15:44:57 +08:00
fa98cc2bb9 Fix: add huggingface model download functionality (#11101)
### What problem does this PR solve?

Reverts #11048

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 15:12:47 +08:00
c58d95ed69 Bump infinity to 0.6.4 (#11104)
### What problem does this PR solve?

Bump infinity to 0.6.4

Fixed https://github.com/infiniflow/infinity/issues/3048

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 14:44:34 +08:00
edbc396bc6 Fix: Added some prompts and polling functionality to retrieve data source logs. #10703 (#11103)
### What problem does this PR solve?

Fix: Added some prompts and polling functionality to retrieve data
source logs.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 14:28:45 +08:00
b137de1def Fix: Plain parser is skipped (#11094)
### What problem does this PR solve?

The plain parser was skipped.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 13:39:29 +08:00
2cb1046cbf fix: The doc file cannot be parsed (#11092) (#11093)
### What problem does this PR solve?

The doc file cannot be parsed (#11092)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: virgilwong <hyhvirgil@gmail.com>
2025-11-07 11:46:10 +08:00
a880beb1f6 Feat: Add a form for variable aggregation operators #10427 (#11095)
### What problem does this PR solve?

Feat: Add a form for variable aggregation operators #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 11:44:22 +08:00
34283d4db4 Feat: add data source to pipeline logs. (#11075)
### What problem does this PR solve?

#10953

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-07 11:43:59 +08:00
5629fbd2ca Fix: OpenSearch retrieval no return & Add documentation of /retrieval (#11083)
### What problem does this PR solve?

Fix: OpenSearch retrieval no return #11006
Add documentation #11072
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-11-07 09:28:42 +08:00
b7aa6d6c4f Fix: add avatar for UI (#11080)
### What problem does this PR solve?

Add avatar for admin UI.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 09:27:31 +08:00
0b7b88592f Fix: Improve some functional issues with the data source. #10703 (#11081)
### What problem does this PR solve?

Fix: Improve some functional issues with the data source. #10703

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-06 20:07:38 +08:00
42edecc98f Add 'SHOW VERSION' to document (#11082)
### What problem does this PR solve?

As title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-06 19:34:47 +08:00
af98763e27 Admin: add 'show version' (#11079)
### What problem does this PR solve?

```
admin> show version;
show_version
+-----------------------+
| version               |
+-----------------------+
| v0.21.0-241-gc6cf58d5 |
+-----------------------+
admin> \q
Goodbye!

```

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-06 19:24:46 +08:00
5a8fbc5a81 Fix: Can't add more models (#11076)
### What problem does this PR solve?

Currently we cannot add any models, since `factory` is a string while the
return type of `get_allowed_llm_factories()` is `List[object]`.
https://github.com/infiniflow/ragflow/pull/11003
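
A minimal sketch of the mismatch: checking a plain string for membership in a list of factory objects never matches, so every add is rejected. The `name` attribute and stub below are assumptions for illustration, not RAGFlow's actual code:

```python
from dataclasses import dataclass

@dataclass
class LLMFactory:            # stand-in for RAGFlow's factory objects
    name: str

def get_allowed_llm_factories() -> list[LLMFactory]:
    return [LLMFactory("OpenAI"), LLMFactory("Ollama")]

def is_factory_allowed(factory: str) -> bool:
    # Buggy check: `factory in get_allowed_llm_factories()` compares a str
    # against objects and is always False, so no model can be added.
    return factory in {f.name for f in get_allowed_llm_factories()}

print(is_factory_allowed("OpenAI"))  # True once names are compared
```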

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-06 18:54:13 +08:00
0cd8024c34 Feat: RAPTOR handle cancel gracefully (#11074)
### What problem does this PR solve?

RAPTOR handles cancellation gracefully.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-06 17:18:03 +08:00
3bd1fefe1f Feat: debug sync data. (#11073)
### What problem does this PR solve?

#10953 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-06 16:48:04 +08:00
e18c408759 Feat: Add variable aggregator node #10427 (#11070)
### What problem does this PR solve?

Feat: Add variable aggregator node #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-06 16:18:00 +08:00
23b81eae77 Feat: GraphRAG handle cancel gracefully (#11061)
### What problem does this PR solve?

GraphRAG handles cancellation gracefully. #10997.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-06 16:12:20 +08:00
66c01c7274 Minor tweaks (#11060)
### What problem does this PR solve?

Minor tweaks

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-06 15:28:48 +08:00
4b8ce08050 Fix: fix pdf_parser ignored in rag/app/naive.py (#11065)
### What problem does this PR solve?

Fix: fix pdf_parser ignored in rag/app/naive.py #11000

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-06 15:20:35 +08:00
ca30ef83bf Feat: Add variable assignment node #10427 (#11058)
### What problem does this PR solve?

Feat: Add variable assignment node #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-06 14:42:47 +08:00
d469ae6d50 Feat: The agent operator and message operator can only select string variables as prompt words. #10427 (#11054)
### What problem does this PR solve?

Feat: The agent operator and message operator can only select string
variables as prompt words. #10427
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-06 13:58:20 +08:00
f581a1c4e5 Feature: Added data source functionality #10703 (#11046)
### What problem does this PR solve?

Feature: Added data source functionality

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-06 11:53:46 +08:00
15c75bbf15 Refa: Remove HuggingFace repo downloads (#11048)
### What problem does this PR solve?

- Removed download_model function and HuggingFace repo download loop

### Type of change

- [x] Refactoring
2025-11-06 11:53:33 +08:00
adbb8319e0 Fix: add fields for logs. (#11039)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-06 09:49:57 +08:00
f98b24c9bf Move api.settings to common.settings (#11036)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-06 09:36:38 +08:00
87c9a054d3 Feat: The value of data operations operators can be either input or referenced from variables. #10427 (#11037)
### What problem does this PR solve?

Feat: The value of data operations operators can be either input or
referenced from variables. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 20:04:23 +08:00
cd6ed4b380 Feat: add webhook component. (#11033)
### What problem does this PR solve?

#10427

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 19:59:23 +08:00
f29a3dd651 fix: data operations update (#11013)
### What problem does this PR solve?

change:data operations update

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 19:59:10 +08:00
e658beee38 Fix: Fixed the issue of errors when using agents created from templates. #10427 (#11035)
### What problem does this PR solve?

Fix: Fixed the issue of errors when using agents created from templates.
#10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 19:15:43 +08:00
17ea5c1dee Fix: MCP cannot handle empty Auth field properly (#11034)
### What problem does this PR solve?

Fix: MCP cannot handle an empty Auth field properly, which results in

```bash
2025-11-05 11:10:41,919 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:10:41,920 INFO     51209 client_session initialized successfully
2025-11-05 11:10:41,994 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:10:41] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:10:41,999 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:10:42,000 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:10:42,001 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:10:42] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:11:30,441 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:11:30,442 INFO     51209 client_session initialized successfully
2025-11-05 11:11:30,520 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:30] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:11:30,525 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:11:30,526 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:11:30,527 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:30] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:11:31,476 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:11:31,476 INFO     51209 client_session initialized successfully
2025-11-05 11:11:31,549 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:31] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:11:31,552 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:11:31,553 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:11:31,554 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:31] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:11:51,930 ERROR    51209 unhandled errors in a TaskGroup (1 sub-exception)
  + Exception Group Traceback (most recent call last):
  |   File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 86, in _mcp_server_loop
  |     async with streamablehttp_client(url, headers) as (read_stream, write_stream, _):
  |   File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 217, in __aexit__
  |     await self.gen.athrow(typ, value, traceback)
  |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/mcp/client/streamable_http.py", line 478, in streamablehttp_client
  |     async with anyio.create_task_group() as tg:
  |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 781, in __aexit__
  |     raise BaseExceptionGroup(
  | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/mcp/client/streamable_http.py", line 409, in handle_request_async
    |     await self._handle_post_request(ctx)
    |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/mcp/client/streamable_http.py", line 278, in _handle_post_request
    |     response.raise_for_status()
    |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/httpx/_models.py", line 829, in raise_for_status
    |     raise HTTPStatusError(message, request=request, response=self)
    | httpx.HTTPStatusError: Server error '502 Bad Gateway' for url 'http://192.168.1.38:9382/mcp'
    | For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/502
    +------------------------------------
2025-11-05 11:11:51,942 ERROR    51209 Error fetching tools from MCP server: streamable-http: http://192.168.1.38:9382/mcp
Traceback (most recent call last):
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 168, in get_tools
    return future.result(timeout=timeout)
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._get_tools_from_mcp_server) at 0x7d58f02e2c20>", line 40, in _get_tools_from_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 160, in _get_tools_from_mcp_server
    result: ListToolsResult = await self._call_mcp_server("list_tools", timeout=timeout)
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._call_mcp_server) at 0x7d58f02e2b00>", line 63, in _call_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 139, in _call_mcp_server
    raise result
ValueError: Connection failed (possibly due to auth error). Please check authentication settings first
2025-11-05 11:11:51,943 ERROR    51209 Test MCP error: Connection failed (possibly due to auth error). Please check authentication settings first
Traceback (most recent call last):
  File "/home/xxxxxxxxx/workspace/ragflow/api/apps/mcp_server_app.py", line 429, in test_mcp
    tools = tool_call_session.get_tools(timeout)
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession.get_tools) at 0x7d58f02e2cb0>", line 40, in get_tools
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 168, in get_tools
    return future.result(timeout=timeout)
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._get_tools_from_mcp_server) at 0x7d58f02e2c20>", line 40, in _get_tools_from_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 160, in _get_tools_from_mcp_server
    result: ListToolsResult = await self._call_mcp_server("list_tools", timeout=timeout)
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._call_mcp_server) at 0x7d58f02e2b00>", line 63, in _call_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 139, in _call_mcp_server
    raise result
ValueError: Connection failed (possibly due to auth error). Please check authentication settings first
2025-11-05 11:11:51,944 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:11:51,945 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:11:51,946 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:51] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:12:20,484 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:12:20,485 INFO     51209 client_session initialized successfully
2025-11-05 11:12:20,570 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:12:20] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:12:20,573 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:12:20,574 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:12:20,575 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:12:20] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:15:02,119 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:15:02] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:16:24,967 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:16:24] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:30:24,284 ERROR    51209 Task was destroyed but it is pending!
task: <Task pending name='Task-58' coro=<MCPToolCallSession._mcp_server_loop() running at <@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._mcp_server_loop) at 0x7d58f02e29e0>:11> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[_chain_future.<locals>._call_set_state() at /home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/futures.py:392]>
2025-11-05 11:30:24,285 ERROR    51209 Task was destroyed but it is pending!
task: <Task pending name='Task-67' coro=<Queue.get() running at /home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/queues.py:159> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[_release_waiter(<Future pendi...ask_wakeup()]>)() at /home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/tasks.py:387]>
Exception ignored in: <coroutine object Queue.get at 0x7d585480ace0>
Traceback (most recent call last):
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/queues.py", line 161, in get
    getter.cancel()  # Just in case getter is not done yet.
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 753, in call_soon
    self._check_closed()
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 515, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

```
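
A plausible shape of the fix, as a sketch only: under the assumption that an empty Auth field was being forwarded as a malformed `Authorization` header (which the gateway then rejected with 502), the header should simply be omitted when the token is empty. This is not the actual patch:

```python
# Hypothetical helper, for illustration: drop empty auth values instead of
# sending a malformed Authorization header to the MCP server.
def build_headers(auth_token: str | None) -> dict[str, str]:
    headers = {"Accept": "application/json"}
    if auth_token and auth_token.strip():  # skip None, "", and whitespace-only tokens
        headers["Authorization"] = f"Bearer {auth_token.strip()}"
    return headers

print(build_headers(""))        # {'Accept': 'application/json'}
print(build_headers("abc123"))  # includes the Authorization header
```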

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 19:15:27 +08:00
4e76220e25 Feat: Submit clean data operations form data to the backend. #10427 (#11030)
### What problem does this PR solve?

Feat: Submit clean data operations form data to the backend. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 17:32:35 +08:00
24335485bf Fix: get_allowed_llm_factories() return type (#11031)
### What problem does this PR solve?

Fix: get_allowed_llm_factories() return type #11003

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

<img width="2880" height="215" alt="截图 2025-11-05 17-02-01"
src="https://github.com/user-attachments/assets/ee892077-21f9-4b1e-a1d2-b921fa7f6121"
/>
2025-11-05 17:32:12 +08:00
121c51661d Fix: Markdown table extractor (#11018)
### What problem does this PR solve?

The markdown table extractor now supports `<table ...>`. #10966
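
A rough sketch of what supporting `<table ...>` means: match the tag with optional attributes rather than only the bare `<table>` literal. The regex below is illustrative, not the PR's exact pattern:

```python
import re

# Illustrative pattern: accepts <table>, <table border="1">, <table class="x">, etc.
HTML_TABLE_RE = re.compile(r"<table(?:\s[^>]*)?>.*?</table>", re.IGNORECASE | re.DOTALL)

md = 'before <table border="1"><tr><td>1</td></tr></table> after'
print(HTML_TABLE_RE.search(md) is not None)  # True
```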

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 16:10:21 +08:00
02d10f8eda Move var from rag.settings to common.globals (#11022)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-05 15:48:50 +08:00
dddf766470 Feat: start data sync service. (#11026)
### What problem does this PR solve?

#10953 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 15:43:15 +08:00
8584d4b642 Fix: numeric string missed transformation. (#11025)
### What problem does this PR solve?

#11024

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 15:14:30 +08:00
b86e07088b Fix: escape multi-steps issues. (#11016)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 14:51:00 +08:00
1a9215bc6f Move some vars to globals (#11017)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-05 14:14:38 +08:00
cf9611c96f Feat: Support more chunking methods (#11000)
### What problem does this PR solve?

Feat: Support more chunking methods #10772 

This PR enables multiple chunking methods — including books, laws,
naive, one, and presentation — to be used with all existing PDF parsers
(DeepDOC, MinerU, Docling, TCADP, Plain Text, and Vision modes).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 13:00:42 +08:00
f126875ec6 Apply some tweaks on Admin UI (#11011)
### What problem does this PR solve?

- Fix selected radio button text misaligned with radio button dot
- Fix `<ScrollArea>` scrollbar z-index issue
- Add backdrop blur effect on scrollbar thumbs
- Adjust some styles to match the design 


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 12:58:43 +08:00
89410d2381 fix: api /factories wrong return (#11015)
### What problem does this PR solve?

change:
api /factories wrong return

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-05 12:50:11 +08:00
96c015fb85 Fix and refactor imports (#11010)
### What problem does this PR solve?

1. Move EMBEDDING_CFG to common.globals
2. Fix error imports
3. Move signal handles to common/signal_utils.py

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-05 11:07:54 +08:00
ca40b56839 Feat: Data Operations (#11002)
### What problem does this PR solve?

new component: Data Operations

#10427

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-05 10:49:41 +08:00
3654ae61c1 feat: add allowed factories variable to allow admins to restrict llms users can add (#11003)
### What problem does this PR solve?

Currently, if we want to restrict the factories users can choose from, we
need to delete rows from the database table manually. This PR proposes a
variable that, if set, restricts the LLM factories users can see and add.
That way we do not have to touch llm_factories.json or the database when
an LLM factory is already inserted.
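
A minimal sketch of the idea, assuming a comma-separated environment variable; the variable name and filtering logic below are hypothetical, not necessarily what the PR introduces:

```python
import os

# Hypothetical variable name and filtering logic, for illustration only.
def filter_allowed_factories(all_factories: list[str]) -> list[str]:
    raw = os.environ.get("ALLOWED_LLM_FACTORIES", "")
    if not raw:                  # unset: keep current behavior, allow everything
        return all_factories
    allowed = {name.strip() for name in raw.split(",")}
    return [f for f in all_factories if f in allowed]

print(filter_allowed_factories(["OpenAI", "Ollama", "Moonshot"]))
```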

Note: all the lint changes came from the pre-commit hook, which I did not
modify.

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-11-05 10:47:50 +08:00
bab3fce136 Move some constants to common (#11004)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-05 08:01:39 +08:00
4bbbf92331 Refa: link connector to KB. (#10991)
### What problem does this PR solve?

#10953

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-04 20:13:52 +08:00
db9fa3042b Feat: Add a form with data operations operators #10427 (#11001)
### What problem does this PR solve?

Feat: Add a form with data operations operators #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-04 19:42:59 +08:00
880a6a0428 Move some enumerate type to constants.py (#10998)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-04 19:25:25 +08:00
465a140727 Feat: refine Confluence connector (#10994)
### What problem does this PR solve?

Refine Confluence connector.
#10953

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring
2025-11-04 17:29:11 +08:00
2677617f93 Feat: supports MinerU http-client/server method (#10961)
### What problem does this PR solve?

Add support for MinerU http-client/server method.

To use MinerU with vLLM server:

1. Set up a vLLM server running MinerU:
   ```bash
   mineru-vllm-server --port 30000
   ```

2. Configure the following environment variables:
- `MINERU_EXECUTABLE=/ragflow/uv_tools/.venv/bin/mineru` (or the path to
your MinerU executable)
   - `MINERU_BACKEND="vlm-http-client"`
   - `MINERU_SERVER_URL="http://your-vllm-server-ip:30000"`

3. Follow the standard MinerU setup steps as described above.

With this configuration, RAGFlow will connect to your vLLM server to
perform document parsing, which can significantly improve parsing
performance for complex documents while reducing the resource
requirements on your RAGFlow server.
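
Putting the steps above together, a sketch of how the backend selection might be driven by those variables. This is illustrative only; the default backend name and dispatch logic are assumptions, not RAGFlow's actual MinerU integration:

```python
import os

# Illustrative dispatch based on the environment variables listed above.
executable = os.environ.get("MINERU_EXECUTABLE", "mineru")
backend = os.environ.get("MINERU_BACKEND", "pipeline")  # default name is an assumption

if backend == "vlm-http-client":
    server_url = os.environ["MINERU_SERVER_URL"]  # required in http-client mode
    print(f"{executable} will send pages to the vLLM server at {server_url}")
else:
    print(f"{executable} will parse documents locally with backend {backend!r}")
```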



![1](https://github.com/user-attachments/assets/46624a0c-0f3b-423e-ace8-81801e97a27d)

![2](https://github.com/user-attachments/assets/66ccc004-a598-47d4-93cb-fe176834f83b)


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

---------

Co-authored-by: writinwaters <cai.keith@gmail.com>
2025-11-04 16:03:30 +08:00
03038c7d3d Update RetCode to common.constants (#10984)
### What problem does this PR solve?

1. Update RetCode to common.constants
2. Decouple the admin and API modules

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-04 15:12:53 +08:00
16d2be623c Minor tweaks (#10987)
### What problem does this PR solve?

1. Rename identifier name
2. Fix some return statement
3. Fix some typos

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-04 14:15:31 +08:00
021b2ac51a Feat: Add data operation node #10427 (#10985)
### What problem does this PR solve?

Feat: Add data operation node #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-04 13:48:44 +08:00
19f71a961a Fix: Create dataset performance unmatched between HTTP api and web ui (#10960)
### What problem does this PR solve?

Fix: Create dataset performance unmatched between HTTP api and web ui
#10925

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-04 13:45:14 +08:00
1e45137284 Move 'timeout' to common folder (#10983)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-04 11:51:12 +08:00
5283a10387 Fix: wrong param in meta_data_filter (#10978)
### What problem does this PR solve?
change:
wrong param in meta_data_filter

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-04 11:22:10 +08:00
d55344bc11 Remove unused code (#10981)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-04 11:10:29 +08:00
640e8e3f3e Chore(docker): Remove outdated sandbox config (#10977)
### What problem does this PR solve?

Remove outdated sandbox config

### Type of change

- [x] Refactoring
2025-11-04 10:59:56 +08:00
c20f5675c6 Fix: elasticsearch connection hardcoded (#10975)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10930

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-04 10:59:35 +08:00
378bdfccfc Refactor log utils (#10973)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 20:25:02 +08:00
395ce16b3c Fix: correct MCP server authentication header format in frontend (#9819)
- Fix MCP test connection authentication issues by updating frontend
request format
- Add variables field with authorization_token for template substitution
- Change headers to use proper Authorization Bearer format with template
variable
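
As a sketch of the request shape the bullets above describe; the field names are inferred from the PR description, not from the frontend code, and the token value is a placeholder:

```python
# Sketch of the corrected test-connection payload: headers carry a template,
# and `variables` supplies the value substituted into it.
payload = {
    "headers": {"Authorization": "Bearer {authorization_token}"},  # template form
    "variables": {"authorization_token": "<your-token>"},          # placeholder value
}
print(payload)
```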

🤖 Generated with [Claude Code](https://claude.ai/code)

### What problem does this PR solve?

correct MCP server authentication header format in frontend
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Marvion <marvionliu@wukongjx.cn>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-03 20:00:27 +08:00
be3ae0eda9 Feat: Add variables to the metadata filtering function of the knowledge retrieval component. #10861 (#10974)
### What problem does this PR solve?

Feat: Add variables to the metadata filtering function of the knowledge
retrieval component. #10861

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 19:59:45 +08:00
3e5a39482e Feat: Support multiple data sources synchronizations (#10954)
### What problem does this PR solve?
#10953

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 19:59:18 +08:00
9a486e0f51 Move some funcs from api to rag module (#10972)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 19:26:09 +08:00
ee9ac15174 Feat: Fixed an issue where dragged operators within an iteration were not associated with the iteration. #10866 (#10969)
### What problem does this PR solve?

Feat: Fixed an issue where dragged operators within an iteration were
not associated with the iteration. #10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 19:19:26 +08:00
ac465ba2a6 Feat: add variables to the metadata filtering function of the knowledge retrieval component (#10967)

### What problem does this PR solve?

issue:
#10861 
change:
add variables to the metadata filtering function of the knowledge
retrieval component

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 19:19:09 +08:00
fd4aa79c07 Fix: missing embedding vector on Tokenizer (#10964)
### What problem does this PR solve?
issue:
[#10890](https://github.com/infiniflow/ragflow/issues/10890)
change:
missing embedding vector on Tokenizer
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-03 19:17:05 +08:00
2d83c64eed Fix: wrong describe_with_prompt() in ollama (#10963)
### What problem does this PR solve?

change:
wrong describe_with_prompt() in ollama

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-03 19:16:41 +08:00
1284647694 Refactor file utils (#10970)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 18:54:55 +08:00
076d811086 Introduce common/config_utils.py (#10968)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 17:25:06 +08:00
121d3fd815 Introduce common/constants.py (#10965)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 16:32:37 +08:00
d008a4df9f Move base64_image related functions to common directory (#10957)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 15:20:46 +08:00
5a88c01111 Feat: Filter structured output data directly during the rendering stage. #10866 (#10958)
### What problem does this PR solve?

Feat: Filter structured output data directly during the rendering stage.
#10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 14:48:35 +08:00
256b0fb19c Remove redundant ut (#10955)
### What problem does this PR solve?

Remove redundant unit-test cases.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 13:04:20 +08:00
78631a3fd3 Move some functions out of 'api/utils/common.py' (#10948)
### What problem does this PR solve?

as title.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 12:34:47 +08:00
4117f41758 Fix: decode error in email parser app (#10920)
### What problem does this PR solve?

Fix: UnicodeDecodeError: 'gb2312' codec can't decode byte 0xab in
position 560: illegal multibyte sequence.
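
A common remedy for this class of error, shown as a sketch (not necessarily this PR's exact change): decode with GB18030, a strict superset of GB2312, and use replacement for stray bytes:

```python
raw = b"\xd6\xd0\xce\xc4 \xab"  # GB-encoded text with a stray lead byte at the end
try:
    text = raw.decode("gb2312")
except UnicodeDecodeError:
    # gb18030 is a superset of gb2312; errors="replace" tolerates stray bytes
    text = raw.decode("gb18030", errors="replace")
print(text)
```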

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-03 12:31:06 +08:00
a52bdf0b7e Feat: The structured output of the variable query can also be clicked. #10866 (#10952)
### What problem does this PR solve?

Feat: The structured output of the variable query can also be clicked.
#10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 12:30:30 +08:00
b47361432a Fix: API: chunk.update does not update positions (#10945)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10944

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-03 11:01:44 +08:00
061d8f78e5 Feat: location rule for http (#10901)
### What problem does this PR solve?

Location rule for http.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 11:01:24 +08:00
7ec587fa9e Feat: Admin UI whitelist management and role management (#10910)
### What problem does this PR solve?

Add whitelist management and role management in Admin UI

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 09:52:23 +08:00
685311814f description (#10928)
### Type of change

- [x] Documentation Update
2025-11-03 09:50:21 +08:00
410c0a829d Feat: The query variable of a loop operator can be a nested array variable. #10866 (#10921)
### What problem does this PR solve?

Feat: The query variable of a loop operator can be a nested array
variable. #10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 09:40:47 +08:00
33371cda11 Fix: output_structure in agent (#10907)
### What problem does this PR solve?
change:
output_structure in agent

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-03 09:39:53 +08:00
fa210e7c58 Feat: parsing hyperlinks in docx and pdf & Fix: default parser config of toc extraction (#10877)
### What problem does this PR solve?

Feat: parsing hyperlinks in docx and pdf #10848
Fix: default parser config of toc extraction
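
For the docx side, hyperlink targets live in the document part's relationships. A sketch of how they can be read with python-docx (illustrative, not the PR's implementation; the file path is a placeholder):

```python
from docx import Document

doc = Document("sample.docx")
for rel in doc.part.rels.values():
    # External hyperlinks are stored as relationships of the hyperlink type.
    if "hyperlink" in rel.reltype:
        print(rel.target_ref)  # the URL the run points to
```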

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-03 09:34:12 +08:00
360f5c1179 Move token related functions to common (#10942)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-03 08:50:05 +08:00
44f2d6f5da Move 'get_project_base_directory' to common directory (#10940)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-02 21:05:28 +08:00
57a83eca8a Remove unused code (#10938)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-02 16:25:16 +08:00
6447b737ab Move singleton to common directory (#10935)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-11-02 12:24:08 +08:00
fe4852cb71 TEI auto truncate inputs (#10916)
### What problem does this PR solve?

Make TEI auto-truncate over-long inputs.
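
For context, HuggingFace text-embeddings-inference rejects inputs longer than the model's maximum length unless truncation is requested; its `/embed` endpoint accepts a `truncate` flag. A request sketch, with a placeholder URL:

```python
import requests

# Sketch of a TEI embed request with truncation enabled.
resp = requests.post(
    "http://tei-server:8080/embed",
    json={"inputs": ["some very long passage ..."], "truncate": True},
    timeout=30,
)
print(resp.json())  # list of embedding vectors
```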

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-31 16:46:20 +08:00
f52e56c2d6 Remove 'get_lan_ip' and add common misc_utils.py (#10880)
### What problem does this PR solve?

Add get_uuid, download_img and hash_str2int into misc_utils.py

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-31 16:42:01 +08:00
e9debfd74d Fix: The nodes on the canvas were not updated in time after the operator name was modified. #10866 (#10911)
### What problem does this PR solve?

Fix: The nodes on the canvas were not updated in time after the operator
name was modified. #10866

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-31 14:46:03 +08:00
d8a7fb6f2b Fix: Fixed the styling and logic issues on the model provider page #10703 (#10909)
### What problem does this PR solve?

Fix: Fixed the styling and logic issues on the model provider page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-31 13:42:28 +08:00
c8a82da722 Feat: Rename the files in the jsonjoy-builder directory to lowercase. #10866 (#10908)
### What problem does this PR solve?

Feat: Rename the files in the jsonjoy-builder directory to lowercase.
#10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-31 13:42:11 +08:00
09dd786674 Fix: KeyError 'table_body' in the MinerU parser (#10773)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/10769
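
The usual defensive fix for this kind of KeyError, as a sketch rather than the PR's exact patch, is to tolerate MinerU output blocks that lack a table body:

```python
# Hypothetical block from MinerU output; real blocks carry more fields.
block = {"type": "table", "table_caption": ["Table 1"]}

# block["table_body"] would raise KeyError: 'table_body'
body = block.get("table_body") or ""  # tolerate blocks without a table body
print(repr(body))
```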

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-31 10:07:56 +08:00
0ecccd27eb Refactor: improve the logic for rerank models to calculate the total token count (#10882)
### What problem does this PR solve?

Improve the logic for rerank models to calculate the total token count.

### Type of change

- [x] Refactoring
2025-10-31 09:46:16 +08:00
5a830ea68b Refactor(setting-model): Refactor the model management interface and optimize the component structure. #10703 (#10905)
### What problem does this PR solve?

Refactor(setting-model): Refactor the model management interface and
optimize the component structure. #10703

### Type of change

- [x] Refactoring
2025-10-31 09:27:30 +08:00
636 changed files with 31251 additions and 21197 deletions

View File

@@ -19,7 +19,7 @@ jobs:
     runs-on: [ "self-hosted", "ragflow-test" ]
     steps:
       - name: Ensure workspace ownership
-        run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
+        run: echo "chown -R ${USER} ${GITHUB_WORKSPACE}" && sudo chown -R ${USER} ${GITHUB_WORKSPACE}
       # https://github.com/actions/checkout/blob/v3/README.md
       - name: Check out code
@@ -31,37 +31,37 @@ jobs:
       - name: Prepare release body
         run: |
-          if [[ $GITHUB_EVENT_NAME == 'create' ]]; then
+          if [[ ${GITHUB_EVENT_NAME} == "create" ]]; then
            RELEASE_TAG=${GITHUB_REF#refs/tags/}
-            if [[ $RELEASE_TAG == 'nightly' ]]; then
+            if [[ ${RELEASE_TAG} == "nightly" ]]; then
              PRERELEASE=true
            else
              PRERELEASE=false
            fi
-            echo "Workflow triggered by create tag: $RELEASE_TAG"
+            echo "Workflow triggered by create tag: ${RELEASE_TAG}"
          else
            RELEASE_TAG=nightly
            PRERELEASE=true
            echo "Workflow triggered by schedule"
          fi
-          echo "RELEASE_TAG=$RELEASE_TAG" >> $GITHUB_ENV
-          echo "PRERELEASE=$PRERELEASE" >> $GITHUB_ENV
+          echo "RELEASE_TAG=${RELEASE_TAG}" >> ${GITHUB_ENV}
+          echo "PRERELEASE=${PRERELEASE}" >> ${GITHUB_ENV}
          RELEASE_DATETIME=$(date --rfc-3339=seconds)
-          echo Release $RELEASE_TAG created from $GITHUB_SHA at $RELEASE_DATETIME > release_body.md
+          echo Release ${RELEASE_TAG} created from ${GITHUB_SHA} at ${RELEASE_DATETIME} > release_body.md
       - name: Move the existing mutable tag
         # https://github.com/softprops/action-gh-release/issues/171
         run: |
          git fetch --tags
-          if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
+          if [[ ${GITHUB_EVENT_NAME} == "schedule" ]]; then
            # Determine if a given tag exists and matches a specific Git commit.
            # actions/checkout@v4 fetch-tags doesn't work when triggered by schedule
-            if [ "$(git rev-parse -q --verify "refs/tags/$RELEASE_TAG")" = "$GITHUB_SHA" ]; then
-              echo "mutable tag $RELEASE_TAG exists and matches $GITHUB_SHA"
+            if [ "$(git rev-parse -q --verify "refs/tags/${RELEASE_TAG}")" = "${GITHUB_SHA}" ]; then
+              echo "mutable tag ${RELEASE_TAG} exists and matches ${GITHUB_SHA}"
            else
-              git tag -f $RELEASE_TAG $GITHUB_SHA
-              git push -f origin $RELEASE_TAG:refs/tags/$RELEASE_TAG
-              echo "created/moved mutable tag $RELEASE_TAG to $GITHUB_SHA"
+              git tag -f ${RELEASE_TAG} ${GITHUB_SHA}
+              git push -f origin ${RELEASE_TAG}:refs/tags/${RELEASE_TAG}
+              echo "created/moved mutable tag ${RELEASE_TAG} to ${GITHUB_SHA}"
            fi
          fi
@@ -87,7 +87,7 @@ jobs:
       - name: Build and push image
         run: |
-          echo ${{ secrets.DOCKERHUB_TOKEN }} | sudo docker login --username infiniflow --password-stdin
+          sudo docker login --username infiniflow --password-stdin <<< ${{ secrets.DOCKERHUB_TOKEN }}
          sudo docker build --build-arg NEED_MIRROR=1 -t infiniflow/ragflow:${RELEASE_TAG} -f Dockerfile .
          sudo docker tag infiniflow/ragflow:${RELEASE_TAG} infiniflow/ragflow:latest
          sudo docker push infiniflow/ragflow:${RELEASE_TAG}

View File

@ -9,8 +9,11 @@ on:
- 'docs/**'
- '*.md'
- '*.mdx'
pull_request:
types: [ labeled, synchronize, reopened ]
# The only difference between pull_request and pull_request_target is the context in which the workflow runs:
# — pull_request_target workflows use the workflow files from the default branch, and secrets are available.
# — pull_request workflows use the workflow files from the pull request branch, and secrets are unavailable.
pull_request_target:
types: [ synchronize, ready_for_review ]
paths-ignore:
- 'docs/**'
- '*.md'
@ -28,7 +31,7 @@ jobs:
name: ragflow_tests
# https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution
# https://github.com/orgs/community/discussions/26261
if: ${{ github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci') }}
if: ${{ github.event_name != 'pull_request_target' || contains(github.event.pull_request.labels.*.name, 'ci') }}
runs-on: [ "self-hosted", "ragflow-test" ]
steps:
# https://github.com/hmarr/debug-action
@ -37,19 +40,20 @@ jobs:
- name: Ensure workspace ownership
run: |
echo "Workflow triggered by ${{ github.event_name }}"
echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
echo "chown -R ${USER} ${GITHUB_WORKSPACE}" && sudo chown -R ${USER} ${GITHUB_WORKSPACE}
# https://github.com/actions/checkout/issues/1781
- name: Check out code
uses: actions/checkout@v4
with:
ref: ${{ (github.event_name == 'pull_request' || github.event_name == 'pull_request_target') && format('refs/pull/{0}/merge', github.event.pull_request.number) || github.sha }}
fetch-depth: 0
fetch-tags: true
- name: Check workflow duplication
if: ${{ !cancelled() && !failure() && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci')) }}
if: ${{ !cancelled() && !failure() }}
run: |
if [[ "$GITHUB_EVENT_NAME" != "pull_request" && "$GITHUB_EVENT_NAME" != "schedule" ]]; then
if [[ ${GITHUB_EVENT_NAME} != "pull_request_target" && ${GITHUB_EVENT_NAME} != "schedule" ]]; then
HEAD=$(git rev-parse HEAD)
# Find a PR that introduced a given commit
gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"
@ -67,14 +71,14 @@ jobs:
gh run cancel ${GITHUB_RUN_ID}
while true; do
status=$(gh run view ${GITHUB_RUN_ID} --json status -q .status)
[ "$status" = "completed" ] && break
[ "${status}" = "completed" ] && break
sleep 5
done
exit 1
fi
fi
fi
else
elif [[ ${GITHUB_EVENT_NAME} == "pull_request_target" ]]; then
PR_NUMBER=${{ github.event.pull_request.number }}
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
# Calculate the hash of the current workspace content
@ -93,18 +97,18 @@ jobs:
- name: Build ragflow:nightly
run: |
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-${HOME}}
RAGFLOW_IMAGE=infiniflow/ragflow:${GITHUB_RUN_ID}
echo "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> $GITHUB_ENV
echo "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> ${GITHUB_ENV}
sudo docker pull ubuntu:22.04
sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t ${RAGFLOW_IMAGE} .
if [[ "$GITHUB_EVENT_NAME" == "schedule" ]]; then
if [[ ${GITHUB_EVENT_NAME} == "schedule" ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
echo "HTTP_API_TEST_LEVEL=${HTTP_API_TEST_LEVEL}" >> $GITHUB_ENV
echo "RAGFLOW_CONTAINER=${GITHUB_RUN_ID}-ragflow-cpu-1" >> $GITHUB_ENV
echo "HTTP_API_TEST_LEVEL=${HTTP_API_TEST_LEVEL}" >> ${GITHUB_ENV}
echo "RAGFLOW_CONTAINER=${GITHUB_RUN_ID}-ragflow-cpu-1" >> ${GITHUB_ENV}
- name: Start ragflow:nightly
run: |
@ -154,7 +158,7 @@ jobs:
echo -e "COMPOSE_PROFILES=\${COMPOSE_PROFILES},tei-cpu" >> docker/.env
echo -e "TEI_MODEL=BAAI/bge-small-en-v1.5" >> docker/.env
echo -e "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> docker/.env
echo "HOST_ADDRESS=http://host.docker.internal:${SVR_HTTP_PORT}" >> $GITHUB_ENV
echo "HOST_ADDRESS=http://host.docker.internal:${SVR_HTTP_PORT}" >> ${GITHUB_ENV}
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python
@ -189,7 +193,8 @@ jobs:
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v || true
sudo docker ps -a --filter "label=com.docker.compose.project=${GITHUB_RUN_ID}" -q | xargs -r sudo docker rm -f
- name: Start ragflow:nightly
run: |
@ -226,5 +231,9 @@ jobs:
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v
sudo docker rmi -f ${RAGFLOW_IMAGE:-NO_IMAGE} || true
# Sometimes `docker compose down` fails due to hung containers, heavy load, etc. Remove such containers to release resources (for example, listening ports).
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v || true
sudo docker ps -a --filter "label=com.docker.compose.project=${GITHUB_RUN_ID}" -q | xargs -r sudo docker rm -f
if [[ -n ${RAGFLOW_IMAGE} ]]; then
sudo docker rmi -f ${RAGFLOW_IMAGE}
fi

116
CLAUDE.md Normal file
View File

@ -0,0 +1,116 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It's a full-stack application with:
- Python backend (Flask-based API server)
- React/TypeScript frontend (built with UmiJS)
- Microservices architecture with Docker deployment
- Multiple data stores (MySQL, Elasticsearch/Infinity, Redis, MinIO)
## Architecture
### Backend (`/api/`)
- **Main Server**: `api/ragflow_server.py` - Flask application entry point
- **Apps**: Modular Flask blueprints in `api/apps/` for different functionalities:
- `kb_app.py` - Knowledge base management
- `dialog_app.py` - Chat/conversation handling
- `document_app.py` - Document processing
- `canvas_app.py` - Agent workflow canvas
- `file_app.py` - File upload/management
- **Services**: Business logic in `api/db/services/`
- **Models**: Database models in `api/db/db_models.py`
### Core Processing (`/rag/`)
- **Document Processing**: `deepdoc/` - PDF parsing, OCR, layout analysis
- **LLM Integration**: `rag/llm/` - Model abstractions for chat, embedding, reranking
- **RAG Pipeline**: `rag/flow/` - Chunking, parsing, tokenization
- **Graph RAG**: `graphrag/` - Knowledge graph construction and querying
### Agent System (`/agent/`)
- **Components**: Modular workflow components (LLM, retrieval, categorize, etc.)
- **Templates**: Pre-built agent workflows in `agent/templates/`
- **Tools**: External API integrations (Tavily, Wikipedia, SQL execution, etc.)
### Frontend (`/web/`)
- React/TypeScript with UmiJS framework
- Ant Design + shadcn/ui components
- State management with Zustand
- Tailwind CSS for styling
## Common Development Commands
### Backend Development
```bash
# Install Python dependencies
uv sync --python 3.10 --all-extras
uv run download_deps.py
pre-commit install
# Start dependent services
docker compose -f docker/docker-compose-base.yml up -d
# Run backend (requires services to be running)
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
# Run tests
uv run pytest
# Linting
ruff check
ruff format
```
### Frontend Development
```bash
cd web
npm install
npm run dev # Development server
npm run build # Production build
npm run lint # ESLint
npm run test # Jest tests
```
### Docker Development
```bash
# Full stack with Docker
cd docker
docker compose -f docker-compose.yml up -d
# Check server status
docker logs -f ragflow-server
# Rebuild images
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```
## Key Configuration Files
- `docker/.env` - Environment variables for Docker deployment
- `docker/service_conf.yaml.template` - Backend service configuration
- `pyproject.toml` - Python dependencies and project configuration
- `web/package.json` - Frontend dependencies and scripts
## Testing
- **Python**: pytest with markers (p1/p2/p3 priority levels; see the sketch after this list)
- **Frontend**: Jest with React Testing Library
- **API Tests**: HTTP API and SDK tests in `test/` and `sdk/python/test/`
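For example, a single priority level can be selected with pytest's marker expression syntax. A minimal sketch, assuming the p1/p2/p3 markers are registered in `pyproject.toml`:
```bash
# Run only priority-1 tests; -m filters by registered pytest markers
uv run pytest -m p1 test/
# Widen the run to both p1 and p2 tests
uv run pytest -m "p1 or p2" test/
```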
## Database Engines
RAGFlow supports switching between Elasticsearch (default) and Infinity:
- Set `DOC_ENGINE=infinity` in `docker/.env` to use Infinity
- Requires container restart: `docker compose down -v && docker compose up -d`
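A minimal sketch of the full switch, assuming the stock `docker/.env` layout; adjust paths if you run compose from elsewhere:
```bash
cd docker
# Point the document engine at Infinity; append the variable if it is absent
grep -q '^DOC_ENGINE=' .env \
  && sed -i 's/^DOC_ENGINE=.*/DOC_ENGINE=infinity/' .env \
  || echo 'DOC_ENGINE=infinity' >> .env
# Recreate the containers so the new engine takes effect
docker compose down -v && docker compose up -d
```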
## Development Environment Requirements
- Python 3.10-3.12
- Node.js >=18.20.4
- Docker & Docker Compose
- uv package manager
- 16GB+ RAM, 50GB+ disk space

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -61,8 +61,7 @@
- 🔎 [System Architecture](#-system-architecture)
- 🎬 [Get Started](#-get-started)
- 🔧 [Configurations](#-configurations)
- 🔧 [Build a docker image without embedding models](#-build-a-docker-image-without-embedding-models)
- 🔧 [Build a docker image including embedding models](#-build-a-docker-image-including-embedding-models)
- 🔧 [Build a Docker image](#-build-a-docker-image)
- 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
- 📚 [Documentation](#-documentation)
- 📜 [Roadmap](#-roadmap)
@ -86,6 +85,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 2025-11-12 Supports data synchronization from Confluence, AWS S3, Discord, Google Drive.
- 2025-10-23 Supports MinerU & Docling as document parsing methods.
- 2025-10-15 Supports orchestrable ingestion pipeline.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
@ -93,7 +93,6 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query.
- 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
- 2025-02-28 Combined with Internet search (Tavily), supports reasoning like Deep Research for any LLMs.
- 2024-12-18 Upgrades Document Layout Analysis model in DeepDoc.
- 2024-08-22 Support text to SQL statements through RAG.
@ -189,25 +188,29 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
> The command below downloads the `v0.22.0` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.22.0`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases), e.g.: git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build |
> Note: Prior to `v0.22.0`, we provided both images with embedding models and slim images without embedding models. Details as follows:
> Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
> Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
4. Check the server status after having the server up and running:
@ -288,7 +291,7 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
## 🔧 Build a Docker image without embedding models
## 🔧 Build a Docker image
This image is approximately 2 GB in size and relies on external LLM and embedding services.

View File

@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -61,8 +61,7 @@
- 🔎 [Arsitektur Sistem](#-arsitektur-sistem)
- 🎬 [Mulai](#-mulai)
- 🔧 [Konfigurasi](#-konfigurasi)
- 🔧 [Membangun Image Docker tanpa Model Embedding](#-membangun-image-docker-tanpa-model-embedding)
- 🔧 [Membangun Image Docker dengan Model Embedding](#-membangun-image-docker-dengan-model-embedding)
- 🔧 [Membangun Image Docker](#-membangun-docker-image)
- 🔨 [Meluncurkan aplikasi dari Sumber untuk Pengembangan](#-meluncurkan-aplikasi-dari-sumber-untuk-pengembangan)
- 📚 [Dokumentasi](#-dokumentasi)
- 📜 [Peta Jalan](#-peta-jalan)
@ -86,6 +85,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Pembaruan Terbaru
- 2025-11-12 Mendukung sinkronisasi data dari Confluence, AWS S3, Discord, Google Drive.
- 2025-10-23 Mendukung MinerU & Docling sebagai metode penguraian dokumen.
- 2025-10-15 Dukungan untuk jalur data yang terorkestrasi.
- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
@ -93,7 +93,6 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
- 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen.
- 2025-05-05 Mendukung kueri lintas bahasa.
- 2025-03-19 Mendukung penggunaan model multi-modal untuk memahami gambar di dalam file PDF atau DOCX.
- 2025-02-28 dikombinasikan dengan pencarian Internet (TAVILY), mendukung penelitian mendalam untuk LLM apa pun.
- 2024-12-18 Meningkatkan model Analisis Tata Letak Dokumen di DeepDoc.
- 2024-08-22 Dukungan untuk teks ke pernyataan SQL melalui RAG.
@ -187,25 +186,29 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.21.1 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.1, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server.
> Perintah di bawah ini mengunduh edisi v0.22.0 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.22.0, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# Opsional: gunakan tag stabil (lihat releases: https://github.com/infiniflow/ragflow/releases), contoh: git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build |
> Catatan: Sebelum `v0.22.0`, kami menyediakan image dengan model embedding dan image slim tanpa model embedding. Detailnya sebagai berikut:
> Catatan: Mulai dari `v0.22.0`, kami hanya menyediakan edisi slim dan tidak lagi menambahkan akhiran **-slim** pada tag image.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
> Mulai dari `v0.22.0`, kami hanya menyediakan edisi slim dan tidak lagi menambahkan akhiran **-slim** pada tag image.
1. Periksa status server setelah server aktif dan berjalan:
@ -260,7 +263,7 @@ Pembaruan konfigurasi ini memerlukan reboot semua kontainer agar efektif:
> $ docker compose -f docker-compose.yml up -d
> ```
## 🔧 Membangun Docker Image tanpa Model Embedding
## 🔧 Membangun Docker Image
Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding.

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -66,6 +66,7 @@
## 🔥 最新情報
- 2025-11-12 Confluence、AWS S3、Discord、Google Drive からのデータ同期をサポートします。
- 2025-10-23 ドキュメント解析方法として MinerU と Docling をサポートします。
- 2025-10-15 オーケストレーションされたデータパイプラインのサポート。
- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
@ -73,7 +74,6 @@
- 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。
- 2025-05-05 言語間クエリをサポートしました。
- 2025-03-19 PDFまたはDOCXファイル内の画像を理解するために、多モーダルモデルを使用することをサポートします。
- 2025-02-28 インターネット検索 (TAVILY) と組み合わせて、あらゆる LLM の詳細な調査をサポートします。
- 2024-12-18 DeepDoc のドキュメント レイアウト分析モデルをアップグレードします。
- 2024-08-22 RAG を介して SQL ステートメントへのテキストをサポートします。
@ -166,28 +166,32 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.1 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.1 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.22.0 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.22.0 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# 任意: 安定版タグを利用 (一覧: https://github.com/infiniflow/ragflow/releases) 例: git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build |
> 注意:`v0.22.0` より前のバージョンでは、embedding モデルを含むイメージと、embedding モデルを含まない slim イメージの両方を提供していました。詳細は以下の通りです:
> 注意:`v0.22.0` 以降、当プロジェクトでは slim エディションのみを提供し、イメージタグに **-slim** サフィックスを付けなくなりました。
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
1. サーバーを立ち上げた後、サーバーの状態を確認する:
> `v0.22.0` 以降、当プロジェクトでは slim エディションのみを提供し、イメージタグに **-slim** サフィックスを付けなくなりました。
1. サーバーを立ち上げた後、サーバーの状態を確認する:
```bash
$ docker logs -f docker-ragflow-cpu-1
```
@ -259,7 +263,7 @@ RAGFlow はデフォルトで Elasticsearch を使用して全文とベクトル
> Linux/arm64 マシンでの Infinity への切り替えは正式にサポートされていません。
>
## 🔧 ソースコードで Docker イメージを作成(埋め込みモデルなし)
## 🔧 ソースコードで Docker イメージを作成
この Docker イメージのサイズは約 1GB で、外部の大モデルと埋め込みサービスに依存しています。

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -67,6 +67,7 @@
## 🔥 업데이트
- 2025-11-12 Confluence, AWS S3, Discord, Google Drive에서 데이터 동기화를 지원합니다.
- 2025-10-23 문서 파싱 방법으로 MinerU 및 Docling을 지원합니다.
- 2025-10-15 조정된 데이터 파이프라인 지원.
- 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다.
@ -74,7 +75,6 @@
- 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다.
- 2025-05-05 언어 간 쿼리를 지원합니다.
- 2025-03-19 PDF 또는 DOCX 파일 내의 이미지를 이해하기 위해 다중 모드 모델을 사용하는 것을 지원합니다.
- 2025-02-28 인터넷 검색(TAVILY)과 결합되어 모든 LLM에 대한 심층 연구를 지원합니다.
- 2024-12-18 DeepDoc의 문서 레이아웃 분석 모델 업그레이드.
- 2024-08-22 RAG를 통해 SQL 문에 텍스트를 지원합니다.
@ -168,25 +168,29 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.21.1 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.21.1과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오.
> 아래 명령어는 RAGFlow Docker 이미지의 v0.22.0 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.22.0과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases), e.g.: git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build |
> 참고: `v0.22.0` 이전 버전에서는 embedding 모델이 포함된 이미지와 embedding 모델이 포함되지 않은 slim 이미지를 모두 제공했습니다. 자세한 내용은 다음과 같습니다:
> 참고: `v0.22.0`부터는 slim 에디션만 배포하며 이미지 태그에 **-slim** 접미사를 더 이상 붙이지 않습니다.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
> `v0.22.0`부터는 slim 에디션만 배포하며 이미지 태그에 **-slim** 접미사를 더 이상 붙이지 않습니다.
1. 서버가 시작된 후 서버 상태를 확인하세요:
@ -253,7 +257,7 @@ RAGFlow 는 기본적으로 Elasticsearch 를 사용하여 전체 텍스트 및
> [!WARNING]
> Linux/arm64 시스템에서 Infinity로 전환하는 것은 공식적으로 지원되지 않습니다.
## 🔧 소스 코드로 Docker 이미지를 컴파일합니다(임베딩 모델 포함하지 않음)
## 🔧 소스 코드로 Docker 이미지를 컴파일합니다
이 Docker 이미지의 크기는 약 1GB이며, 외부 대형 모델과 임베딩 서비스에 의존합니다.

View File

@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -86,6 +86,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Últimas Atualizações
- 12-11-2025 Suporta a sincronização de dados do Confluence, AWS S3, Discord e Google Drive.
- 23-10-2025 Suporta MinerU e Docling como métodos de análise de documentos.
- 15-10-2025 Suporte para pipelines de dados orquestrados.
- 08-08-2025 Suporta a mais recente série GPT-5 da OpenAI.
@ -93,7 +94,6 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
- 23-05-2025 Adicione o componente executor de código Python/JS ao Agente.
- 05-05-2025 Suporte a consultas entre idiomas.
- 19-03-2025 Suporta o uso de um modelo multi-modal para entender imagens dentro de arquivos PDF ou DOCX.
- 28-02-2025 Combinado com a pesquisa na Internet (Tavily), suporta pesquisas profundas para qualquer LLM.
- 18-12-2024 Atualiza o modelo de Análise de Layout de Documentos no DeepDoc.
- 22-08-2024 Suporta conversão de texto para comandos SQL via RAG.
@ -186,25 +186,29 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição `v0.21.1` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.21.1`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor.
> O comando abaixo baixa a edição `v0.22.0` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.22.0`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# Opcional: use uma tag estável (veja releases: https://github.com/infiniflow/ragflow/releases), ex.: git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | --------------------------------- | ------------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Lançamento estável |
| v0.21.1-slim | &approx;2 | ❌ | Lançamento estável |
| nightly | &approx;2 | ❌ | Construção noturna instável |
> Nota: Antes da `v0.22.0`, fornecíamos imagens com modelos de embedding e imagens slim sem modelos de embedding. Detalhes a seguir:
> Observação: A partir da `v0.22.0`, distribuímos apenas a edição slim e não adicionamos mais o sufixo **-slim** às tags das imagens.
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
> A partir da `v0.22.0`, distribuímos apenas a edição slim e não adicionamos mais o sufixo **-slim** às tags das imagens.
4. Verifique o status do servidor após tê-lo iniciado:
@ -274,9 +278,9 @@ O RAGFlow usa o Elasticsearch por padrão para armazenar texto completo e vetore
```
> [!ATENÇÃO]
> A mudança para o Infinity em uma máquina Linux/arm64 ainda não é oficialmente suportada.
> A mudança para o Infinity em uma máquina Linux/arm64 ainda não é oficialmente suportada.
## 🔧 Criar uma imagem Docker sem modelos de incorporação
## 🔧 Criar uma imagem Docker
Esta imagem tem cerca de 2 GB de tamanho e depende de serviços externos de LLM e incorporação.

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -85,6 +85,7 @@
## 🔥 近期更新
- 2025-11-12 支援從 Confluence、AWS S3、Discord、Google Drive 進行資料同步。
- 2025-10-23 支援 MinerU 和 Docling 作為文件解析方法。
- 2025-10-15 支援可編排的資料管道。
- 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。
@ -92,7 +93,6 @@
- 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。
- 2025-05-05 支援跨語言查詢。
- 2025-03-19 PDF和DOCX中的圖支持用多模態大模型去解析得到描述.
- 2025-02-28 結合網路搜尋Tavily對於任意大模型實現類似 Deep Research 的推理功能.
- 2024-12-18 升級了 DeepDoc 的文檔佈局分析模型。
- 2024-08-22 支援用 RAG 技術實現從自然語言到 SQL 語句的轉換。
@ -185,25 +185,29 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.21.1`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.21.1` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。
> 執行以下指令會自動下載 RAGFlow Docker 映像 `v0.22.0`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.22.0` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# 可選:使用穩定版標籤(查看發佈:https://github.com/infiniflow/ragflow/releases),例如:git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build |
> 注意:在 `v0.22.0` 之前的版本,我們會同時提供包含 embedding 模型的映像和不含 embedding 模型的 slim 映像。具體如下:
> 注意:自 `v0.22.0` 起,我們僅發佈 slim 版本,並且不再在映像標籤後附加 **-slim** 後綴。
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
> 從 `v0.22.0` 開始,我們只發佈 slim 版本,並且不再在映像標籤後附加 **-slim** 後綴。
> [!TIP]
> 如果你遇到 Docker 映像檔拉不下來的問題,可以在 **docker/.env** 檔案內根據變數 `RAGFLOW_IMAGE` 的註解提示選擇華為雲或阿里雲的對應映像。
@ -285,7 +289,7 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
> [!WARNING]
> Infinity 目前官方並未正式支援在 Linux/arm64 架構下的機器上運行.
## 🔧 原始碼編譯 Docker 映像(不含 embedding 模型)
## 🔧 原始碼編譯 Docker 映像
本 Docker 映像大小約 2 GB 左右並且依賴外部的大模型和 embedding 服務。

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.22.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -85,6 +85,7 @@
## 🔥 近期更新
- 2025-11-12 支持从 Confluence、AWS S3、Discord、Google Drive 进行数据同步。
- 2025-10-23 支持 MinerU 和 Docling 作为文档解析方法。
- 2025-10-15 支持可编排的数据管道。
- 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型。
@ -92,7 +93,6 @@
- 2025-05-23 Agent 新增 Python/JS 代码执行器组件。
- 2025-05-05 支持跨语言查询。
- 2025-03-19 PDF 和 DOCX 中的图支持用多模态大模型去解析得到描述.
- 2025-02-28 结合互联网搜索Tavily对于任意大模型实现类似 Deep Research 的推理功能.
- 2024-12-18 升级了 DeepDoc 的文档布局分析模型。
- 2024-08-22 支持用 RAG 技术实现从自然语言到 SQL 语句的转换。
@ -186,25 +186,29 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.21.1`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.21.1` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
> 运行以下命令会自动下载 RAGFlow Docker 镜像 `v0.22.0`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.22.0` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
# 可选:使用稳定版本标签(查看发布:https://github.com/infiniflow/ragflow/releases),例如:git checkout v0.22.0
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
> 注意:在 `v0.22.0` 之前的版本,我们会同时提供包含 embedding 模型的镜像和不含 embedding 模型的 slim 镜像。具体如下:
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build |
> 注意:从 `v0.22.0` 开始,我们只发布 slim 版本,并且不再在镜像标签后附加 **-slim** 后缀。
> 从 `v0.22.0` 开始,我们只发布 slim 版本,并且不再在镜像标签后附加 **-slim** 后缀。
> [!TIP]
> 如果你遇到 Docker 镜像拉不下来的问题,可以在 **docker/.env** 文件内根据变量 `RAGFLOW_IMAGE` 的注释提示选择华为云或者阿里云的相应镜像。
@ -284,7 +288,7 @@ RAGFlow 默认使用 Elasticsearch 存储文本和向量数据. 如果要切换
> [!WARNING]
> Infinity 目前官方并未正式支持在 Linux/arm64 架构下的机器上运行.
## 🔧 源码编译 Docker 镜像(不含 embedding 模型)
## 🔧 源码编译 Docker 镜像
本 Docker 镜像大小约 2 GB 左右并且依赖外部的大模型和 embedding 服务。

View File

@ -48,7 +48,7 @@ It consists of a server-side Service and a command-line client (CLI), both imple
1. Ensure the Admin Service is running.
2. Install ragflow-cli.
```bash
pip install ragflow-cli==0.21.1
pip install ragflow-cli==0.22.0
```
3. Launch the CLI client:
```bash

View File

@ -23,6 +23,7 @@ from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from typing import Dict, List, Any
from lark import Lark, Transformer, Tree
import requests
import getpass
GRAMMAR = r"""
start: command
@ -51,6 +52,7 @@ sql_command: list_services
| revoke_permission
| alter_user_role
| show_user_permission
| show_version
// meta command definition
meta_command: "\\" meta_command_name [meta_args]
@ -92,6 +94,7 @@ FOR: "FOR"i
RESOURCES: "RESOURCES"i
ON: "ON"i
SET: "SET"i
VERSION: "VERSION"i
list_services: LIST SERVICES ";"
show_service: SHOW SERVICE NUMBER ";"
@ -120,6 +123,8 @@ revoke_permission: REVOKE action_list ON identifier FROM ROLE identifier ";"
alter_user_role: ALTER USER quoted_string SET ROLE identifier ";"
show_user_permission: SHOW USER PERMISSION quoted_string ";"
show_version: SHOW VERSION ";"
action_list: identifier ("," identifier)*
identifier: WORD
@ -246,6 +251,9 @@ class AdminTransformer(Transformer):
user_name = items[3]
return {"type": "show_user_permission", "user_name": user_name}
def show_version(self, items):
return {"type": "show_version"}
def action_list(self, items):
return items
@ -359,7 +367,7 @@ class AdminCLI(Cmd):
if single_command:
admin_passwd = arguments['password']
else:
admin_passwd = input(f"password for {self.admin_account}: ").strip()
admin_passwd = getpass.getpass(f"password for {self.admin_account}: ").strip()
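# getpass hides the password as it is typed instead of echoing it to the terminal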
try:
self.admin_password = encrypt(admin_passwd)
response = self.session.post(url, json={'email': self.admin_account, 'password': self.admin_password})
@ -370,7 +378,7 @@ class AdminCLI(Cmd):
self.session.headers.update({
'Content-Type': 'application/json',
'Authorization': response.headers['Authorization'],
'User-Agent': 'RAGFlow-CLI/0.21.1'
'User-Agent': 'RAGFlow-CLI/0.22.0'
})
print("Authentication successful.")
return True
@ -384,6 +392,21 @@ class AdminCLI(Cmd):
print(str(e))
print(f"Can't access {self.host}, port: {self.port}")
def _format_service_detail_table(self, data):
if not any([isinstance(v, list) for v in data.values()]):
# normal table
return data
# handle the task_executor heartbeats map, for example {'name': [{'done': 2, 'now': timestamp1}, {'done': 3, 'now': timestamp2}]}
task_executor_list = []
for k, v in data.items():
# display latest status
heartbeats = sorted(v, key=lambda x: x["now"], reverse=True)
task_executor_list.append({
"task_executor_name": k,
**heartbeats[0],
})
return task_executor_list
def _print_table_simple(self, data):
if not data:
print("No data to print")
@ -555,6 +578,8 @@ class AdminCLI(Cmd):
self._alter_user_role(command_dict)
case 'show_user_permission':
self._show_user_permission(command_dict)
case 'show_version':
self._show_version(command_dict)
case 'meta':
self._handle_meta_command(command_dict)
case _:
@ -585,7 +610,8 @@ class AdminCLI(Cmd):
if isinstance(res_data['message'], str):
print(res_data['message'])
else:
self._print_table_simple(res_data['message'])
data = self._format_service_detail_table(res_data['message'])
self._print_table_simple(data)
else:
print(f"Service {res_data['service_name']} is down, {res_data['message']}")
else:
@ -622,7 +648,9 @@ class AdminCLI(Cmd):
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
table_data = res_json['data']
table_data.pop('avatar')
self._print_table_simple(table_data)
else:
print(f"Fail to get user {user_name}, code: {res_json['code']}, message: {res_json['message']}")
@ -695,7 +723,10 @@ class AdminCLI(Cmd):
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
table_data = res_json['data']
for t in table_data:
t.pop('avatar')
self._print_table_simple(table_data)
else:
print(f"Fail to get all datasets of {user_name}, code: {res_json['code']}, message: {res_json['message']}")
@ -707,7 +738,10 @@ class AdminCLI(Cmd):
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
table_data = res_json['data']
for t in table_data:
t.pop('avatar')
self._print_table_simple(table_data)
else:
print(f"Fail to get all agents of {user_name}, code: {res_json['code']}, message: {res_json['message']}")
@ -861,6 +895,16 @@ class AdminCLI(Cmd):
print(
f"Fail to show user: {user_name_str} permission, code: {res_json['code']}, message: {res_json['message']}")
def _show_version(self, command):
print("show_version")
url = f'http://{self.host}:{self.port}/api/v1/admin/version'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to show version, code: {res_json['code']}, message: {res_json['message']}")
def _handle_meta_command(self, command):
meta_command = command['command']
args = command.get('args', [])

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow-cli"
version = "0.21.1"
version = "0.22.0"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }

View File

@ -23,13 +23,15 @@ import traceback
from werkzeug.serving import run_simple
from flask import Flask
from routes import admin_bp
from api.utils.log_utils import init_root_logger
from api.constants import SERVICE_CONF
from api import settings
from common.log_utils import init_root_logger
from common.constants import SERVICE_CONF
from common.config_utils import show_configs
from common import settings
from config import load_configurations, SERVICE_CONFIGS
from auth import init_default_admin, setup_auth
from flask_session import Session
from flask_login import LoginManager
from common.versions import get_ragflow_version
stop_event = threading.Event()
@ -51,6 +53,8 @@ if __name__ == '__main__':
os.environ.get("MAX_CONTENT_LENGTH", 1024 * 1024 * 1024)
)
Session(app)
logging.info(f'RAGFlow version: {get_ragflow_version()}')
show_configs()
login_manager = LoginManager()
login_manager.init_app(app)
settings.init_settings()
@ -65,7 +69,7 @@ if __name__ == '__main__':
port=9381,
application=app,
threaded=True,
use_reloader=True,
use_reloader=False,
use_debugger=True,
)
except Exception:

View File

@ -23,17 +23,15 @@ from flask import request, jsonify
from flask_login import current_user, login_user
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api import settings
from api.common.exceptions import AdminException, UserNotFoundError
from api.db.init_data import encode_to_base64
from api.common.base64 import encode_to_base64
from api.db.services import UserService
from api.db import ActiveEnum, StatusEnum
from common.constants import ActiveEnum, StatusEnum
from api.utils.crypt import decrypt
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.time_utils import current_timestamp, datetime_format, get_format_time
from api.utils.api_utils import (
construct_response,
)
from common.connection_utils import construct_response
from common import settings
def setup_auth(login_manager):

View File

@ -21,7 +21,7 @@ from enum import Enum
from pydantic import BaseModel
from typing import Any
from api.utils.configs import read_config
from common.config_utils import read_config
from urllib.parse import urlparse
@ -183,11 +183,13 @@ class RAGFlowServerConfig(BaseConfig):
class TaskExecutorConfig(BaseConfig):
message_queue_type: str
def to_dict(self) -> dict[str, Any]:
result = super().to_dict()
if 'extra' not in result:
result['extra'] = dict()
result['extra']['message_queue_type'] = self.message_queue_type
return result
@ -299,6 +301,15 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
id_count += 1
case "admin":
pass
case "task_executor":
name: str = 'task_executor'
host: str = v.get('host', '')
port: int = v.get('port', 0)
message_queue_type: str = v.get('message_queue_type')
config = TaskExecutorConfig(id=id_count, name=name, host=host, port=port, message_queue_type=message_queue_type,
service_type="task_executor", detail_func_name="check_task_executor_alive")
configurations.append(config)
id_count += 1
case _:
logging.warning(f"Unknown configuration key: {k}")
continue

View File

@ -24,6 +24,7 @@ from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr
from roles import RoleMgr
from api.common.exceptions import AdminException
from common.versions import get_ragflow_version
admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
@ -369,3 +370,13 @@ def get_user_permission(user_name: str):
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/version', methods=['GET'])
@login_required
@check_admin_auth
def show_version():
try:
res = {"version": get_ragflow_version()}
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
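A quick smoke test for the new route; a sketch only, assuming the Admin Service's default port 9381 and an `ADMIN_TOKEN` captured from a prior login response:
```bash
# Fetch the server version from the new admin endpoint (hypothetical token variable)
curl -s -H "Authorization: ${ADMIN_TOKEN}" \
  http://localhost:9381/api/v1/admin/version
# On success, the JSON payload's data field should carry {"version": "<ragflow version>"}
```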

View File

@ -17,7 +17,7 @@
import re
from werkzeug.security import check_password_hash
from api.db import ActiveEnum
from common.constants import ActiveEnum
from api.db.services import UserService
from api.db.joint_services.user_account_service import create_new_user, delete_user_data
from api.db.services.canvas_service import UserCanvasService
@ -52,6 +52,7 @@ class UserMgr:
result = []
for user in users:
result.append({
'avatar': user.avatar,
'email': user.email,
'language': user.language,
'last_login_time': user.last_login_time,
@ -170,7 +171,8 @@ class UserServiceMgr:
return [{
'title': r['title'],
'permission': r['permission'],
'canvas_category': r['canvas_category'].split('_')[0]
'canvas_category': r['canvas_category'].split('_')[0],
'avatar': r['avatar']
} for r in res]
@ -190,6 +192,10 @@ class ServiceMgr:
config_dict['status'] = 'timeout'
except Exception:
config_dict['status'] = 'timeout'
if not config_dict['host']:
config_dict['host'] = '-'
if not config_dict['port']:
config_dict['port'] = '-'
result.append(config_dict)
return result

View File

@ -26,7 +26,9 @@ from typing import Any, Union, Tuple
from agent.component import component_class
from agent.component.base import ComponentBase
from api.db.services.file_service import FileService
from api.utils import get_uuid, hash_str2int
from api.db.services.task_service import has_canceled
from common.misc_utils import get_uuid, hash_str2int
from common.exceptions import TaskCanceledException
from rag.prompts.generator import chunks_format
from rag.utils.redis_conn import REDIS_CONN
@ -126,6 +128,7 @@ class Graph:
self.components[k]["obj"].reset()
try:
REDIS_CONN.delete(f"{self.task_id}-logs")
REDIS_CONN.delete(f"{self.task_id}-cancel")
except Exception as e:
logging.exception(e)
@ -153,6 +156,33 @@ class Graph:
def get_tenant_id(self):
return self._tenant_id
def get_value_with_variable(self, value: str) -> Any:
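# Expand every {component@var}, {sys.*} or {env.*} reference embedded in the string; streamed (partial) outputs are drained into plain text before substitution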
pat = re.compile(r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z0-9_.]+|sys\.[A-Za-z0-9_.]+|env\.[A-Za-z0-9_.]+)\} *\}*")
out_parts = []
last = 0
for m in pat.finditer(value):
out_parts.append(value[last:m.start()])
key = m.group(1)
v = self.get_variable_value(key)
if v is None:
rep = ""
elif isinstance(v, partial):
buf = []
for chunk in v():
buf.append(chunk)
rep = "".join(buf)
elif isinstance(v, str):
rep = v
else:
rep = json.dumps(v, ensure_ascii=False)
out_parts.append(rep)
last = m.end()
out_parts.append(value[last:])
return("".join(out_parts))
def get_variable_value(self, exp: str) -> Any:
exp = exp.strip("{").strip("}").strip(" ").strip("{").strip("}")
if exp.find("@") < 0:
@ -161,7 +191,43 @@ class Graph:
cpn = self.get_component(cpn_id)
if not cpn:
raise Exception(f"Can't find variable: '{cpn_id}@{var_nm}'")
return cpn["obj"].output(var_nm)
parts = var_nm.split(".", 1)
root_key = parts[0]
rest = parts[1] if len(parts) > 1 else ""
root_val = cpn["obj"].output(root_key)
if not rest:
return root_val
return self.get_variable_param_value(root_val, rest)
def get_variable_param_value(self, obj: Any, path: str) -> Any:
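# Walk a dotted path through nested dicts/attributes; JSON strings are parsed on demand, e.g. get_variable_param_value({'a': '{"b": 1}'}, 'a.b') -> 1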
cur = obj
if not path:
return cur
for key in path.split('.'):
if cur is None:
return None
if isinstance(cur, str):
try:
cur = json.loads(cur)
except Exception:
return None
if isinstance(cur, dict):
cur = cur.get(key)
else:
cur = getattr(cur, key, None)
return cur
def is_canceled(self) -> bool:
return has_canceled(self.task_id)
def cancel_task(self) -> bool:
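# Cooperative cancellation: set a Redis flag (cleared again in reset()) that has_canceled() is expected to poll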
try:
REDIS_CONN.set(f"{self.task_id}-cancel", "x")
except Exception as e:
logging.exception(e)
return False
return True
class Canvas(Graph):
@ -187,7 +253,7 @@ class Canvas(Graph):
"sys.conversation_turns": 0,
"sys.files": []
}
self.retrieval = self.dsl["retrieval"]
self.memory = self.dsl.get("memory", [])
@ -204,18 +270,19 @@ class Canvas(Graph):
self.retrieval = []
self.memory = []
for k in self.globals.keys():
if isinstance(self.globals[k], str):
self.globals[k] = ""
elif isinstance(self.globals[k], int):
self.globals[k] = 0
elif isinstance(self.globals[k], float):
self.globals[k] = 0
elif isinstance(self.globals[k], list):
self.globals[k] = []
elif isinstance(self.globals[k], dict):
self.globals[k] = {}
else:
self.globals[k] = None
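# Only built-in sys.* globals are reset between runs; env.* and user-defined globals keep their values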
if k.startswith("sys."):
if isinstance(self.globals[k], str):
self.globals[k] = ""
elif isinstance(self.globals[k], int):
self.globals[k] = 0
elif isinstance(self.globals[k], float):
self.globals[k] = 0
elif isinstance(self.globals[k], list):
self.globals[k] = []
elif isinstance(self.globals[k], dict):
self.globals[k] = {}
else:
self.globals[k] = None
def run(self, **kwargs):
st = time.perf_counter()
@ -225,6 +292,14 @@ class Canvas(Graph):
for k, cpn in self.components.items():
self.components[k]["obj"].reset(True)
if kwargs.get("webhook_payload"):
for k, cpn in self.components.items():
if self.components[k]["obj"].component_name.lower() == "webhook":
for kk, vv in kwargs["webhook_payload"].items():
self.components[k]["obj"].set_output(kk, vv)
self.components[k]["obj"].reset(True)
for k in kwargs.keys():
if k in ["query", "user_id", "files"] and kwargs[k]:
if k == "files":
@ -250,18 +325,37 @@ class Canvas(Graph):
self.path.append("begin")
self.retrieval.append({"chunks": [], "doc_aggs": []})
if self.is_canceled():
msg = f"Task {self.task_id} has been canceled before starting."
logging.info(msg)
raise TaskCanceledException(msg)
yield decorate("workflow_started", {"inputs": kwargs.get("inputs")})
self.retrieval.append({"chunks": {}, "doc_aggs": {}})
def _run_batch(f, t):
if self.is_canceled():
msg = f"Task {self.task_id} has been canceled during batch execution."
logging.info(msg)
raise TaskCanceledException(msg)
with ThreadPoolExecutor(max_workers=5) as executor:
thr = []
for i in range(f, t):
i = f
while i < t:
cpn = self.get_component_obj(self.path[i])
if cpn.component_name.lower() in ["begin", "userfillup"]:
thr.append(executor.submit(cpn.invoke, inputs=kwargs.get("inputs", {})))
i += 1
else:
thr.append(executor.submit(cpn.invoke, **cpn.get_input()))
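# Drop a component from this batch when one of its inputs references a component that has not yet run in the path (unless the run resumes from a userfillup)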
for _, ele in cpn.get_input_elements().items():
if isinstance(ele, dict) and ele.get("_cpn_id") and ele.get("_cpn_id") not in self.path[:i] and self.path[0].lower().find("userfillup") < 0:
self.path.pop(i)
t -= 1
break
else:
thr.append(executor.submit(cpn.invoke, **cpn.get_input()))
i += 1
for t in thr:
t.result()
@ -291,6 +385,7 @@ class Canvas(Graph):
"thoughts": self.get_component_thoughts(self.path[i])
})
_run_batch(idx, to)
to = len(self.path)
# post processing of components invocation
for i in range(idx, to):
cpn = self.get_component(self.path[i])
@ -385,9 +480,10 @@ class Canvas(Graph):
for c in path:
o = self.get_component_obj(c)
if o.component_name.lower() == "userfillup":
o.invoke()
another_inputs.update(o.get_input_elements())
if o.get_param("enable_tips"):
tips = o.get_param("tips")
tips = o.output("tips")
self.path = path
yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips})
return
@ -401,6 +497,14 @@ class Canvas(Graph):
"created_at": st,
})
self.history.append(("assistant", self.get_component_obj(self.path[-1]).output()))
elif "Task has been canceled" in self.error:
yield decorate("workflow_finished",
{
"inputs": kwargs.get("inputs"),
"outputs": "Task has been canceled",
"elapsed_time": time.perf_counter() - st,
"created_at": st,
})
def is_reff(self, exp: str) -> bool:
exp = exp.strip("{").strip("}")

View File

@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import importlib
import inspect
@ -50,9 +49,10 @@ del _package_path, _import_submodules, _extract_classes_from_module
def component_class(class_name):
for mdl in ["agent.component", "agent.tools", "rag.flow"]:
for module_name in ["agent.component", "agent.tools", "rag.flow"]:
try:
return getattr(importlib.import_module(mdl), class_name)
return getattr(importlib.import_module(module_name), class_name)
except Exception:
# logging.warning(f"Can't import module: {module_name}, error: {e}")
pass
assert False, f"Can't import {class_name}"

View File

@ -27,7 +27,7 @@ from agent.tools.base import LLMToolPluginCallSession, ToolParamBase, ToolBase,
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.mcp_server_service import MCPServerService
from api.utils.api_utils import timeout
from common.connection_utils import timeout
from rag.prompts.generator import next_step, COMPLETE_TASK, analyze_task, \
citation_prompt, reflect, rank_memories, kb_prompt, citation_plus, full_question, message_fit_in
from rag.utils.mcp_tool_call_conn import MCPToolCallSession, mcp_tool_metadata_to_openai_tool
@ -139,6 +139,9 @@ class Agent(LLM, ToolBase):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 20*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Agent processing"):
return
if kwargs.get("user_prompt"):
usr_pmt = ""
if kwargs.get("reasoning"):
@ -152,13 +155,20 @@ class Agent(LLM, ToolBase):
self._param.prompts = [{"role": "user", "content": usr_pmt}]
if not self.tools:
if self.check_if_canceled("Agent processing"):
return
return LLM._invoke(self, **kwargs)
prompt, msg, user_defined_prompt = self._prepare_prompt_variables()
downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
ex = self.exception_handler()
-if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
+output_structure=None
+try:
+output_structure=self._param.outputs['structured']
+except Exception:
+pass
+if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not output_structure and not (ex and ex["goto"]):
self.set_output("content", partial(self.stream_output_with_tools, prompt, msg, user_defined_prompt))
return
@ -166,6 +176,8 @@ class Agent(LLM, ToolBase):
use_tools = []
ans = ""
for delta_ans, tk in self._react_with_tools_streamly(prompt, msg, use_tools, user_defined_prompt):
if self.check_if_canceled("Agent processing"):
return
ans += delta_ans
if ans.find("**ERROR**") >= 0:
@ -186,12 +198,16 @@ class Agent(LLM, ToolBase):
answer_without_toolcall = ""
use_tools = []
for delta_ans,_ in self._react_with_tools_streamly(prompt, msg, use_tools, user_defined_prompt):
if self.check_if_canceled("Agent streaming"):
return
if delta_ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
yield self.get_exception_default_value()
else:
self.set_output("_ERROR", delta_ans)
return
answer_without_toolcall += delta_ans
yield delta_ans
@ -266,6 +282,8 @@ class Agent(LLM, ToolBase):
st = timer()
txt = ""
for delta_ans in self._gen_citations(entire_txt):
if self.check_if_canceled("Agent streaming"):
return
yield delta_ans, 0
txt += delta_ans
@ -281,6 +299,8 @@ class Agent(LLM, ToolBase):
task_desc = analyze_task(self.chat_mdl, prompt, user_request, tool_metas, user_defined_prompt)
self.callback("analyze_task", {}, task_desc, elapsed_time=timer()-st)
for _ in range(self._param.max_rounds + 1):
if self.check_if_canceled("Agent streaming"):
return
response, tk = next_step(self.chat_mdl, hist, tool_metas, task_desc, user_defined_prompt)
# self.callback("next_step", {}, str(response)[:256]+"...")
token_count += tk
@ -328,6 +348,8 @@ Instructions:
6. Focus on delivering VALUE with the information already gathered
Respond immediately with your final comprehensive answer.
"""
if self.check_if_canceled("Agent final instruction"):
return
append_user_content(hist, final_instruction)
for txt, tkcnt in complete():

View File

@ -25,7 +25,7 @@ from typing import Any, List, Union
import pandas as pd
import trio
from agent import settings
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
_FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params"
@ -393,7 +393,7 @@ class ComponentParamBase(ABC):
class ComponentBase(ABC):
component_name: str
thread_limiter = trio.CapacityLimiter(int(os.environ.get('MAX_CONCURRENT_CHATS', 10)))
variable_ref_patt = r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z:0-9_.-]+|sys\.[a-z_]+)\} *\}*"
variable_ref_patt = r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z0-9_.]+|sys\.[A-Za-z0-9_.]+|env\.[A-Za-z0-9_.]+)\} *\}*"
def __str__(self):
"""
@ -417,6 +417,20 @@ class ComponentBase(ABC):
self._param = param
self._param.check()
def is_canceled(self) -> bool:
return self._canvas.is_canceled()
def check_if_canceled(self, message: str = "") -> bool:
if self.is_canceled():
task_id = getattr(self._canvas, 'task_id', 'unknown')
log_message = f"Task {task_id} has been canceled"
if message:
log_message += f" during {message}"
logging.info(log_message)
self.set_output("_ERROR", "Task has been canceled")
return True
return False
def invoke(self, **kwargs) -> dict[str, Any]:
self.set_output("_created_time", time.perf_counter())
try:
@ -514,6 +528,7 @@ class ComponentBase(ABC):
def get_param(self, name):
if hasattr(self._param, name):
return getattr(self._param, name)
return None
def debug(self, **kwargs):
return self._invoke(**kwargs)
@ -521,7 +536,7 @@ class ComponentBase(ABC):
def get_parent(self) -> Union[object, None]:
pid = self._canvas.get_component(self._id).get("parent_id")
if not pid:
return
return None
return self._canvas.get_component(pid)["obj"]
def get_upstream(self) -> List[str]:
@ -546,7 +561,7 @@ class ComponentBase(ABC):
def exception_handler(self):
if not self._param.exception_method:
return
return None
return {
"goto": self._param.exception_goto,
"default_value": self._param.exception_default_value

View File

@ -37,7 +37,13 @@ class Begin(UserFillUp):
component_name = "Begin"
def _invoke(self, **kwargs):
if self.check_if_canceled("Begin processing"):
return
for k, v in kwargs.get("inputs", {}).items():
if self.check_if_canceled("Begin processing"):
return
if isinstance(v, dict) and v.get("type", "").lower().find("file") >=0:
if v.get("optional") and v.get("value", None) is None:
v = None

View File

@ -18,10 +18,10 @@ import os
import re
from abc import ABC
-from api.db import LLMType
+from common.constants import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component.llm import LLMParam, LLM
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
from rag.llm.chat_model import ERROR_PREFIX
@ -98,6 +98,9 @@ class Categorize(LLM, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Categorize processing"):
return
msg = self._canvas.get_history(self._param.message_history_window_size)
if not msg:
msg = [{"role": "user", "content": ""}]
@ -114,10 +117,18 @@ class Categorize(LLM, ABC):
---- Real Data ----
{}
""".format(" | ".join(["{}: \"{}\"".format(c["role"].upper(), re.sub(r"\n", "", c["content"], flags=re.DOTALL)) for c in msg]))
if self.check_if_canceled("Categorize processing"):
return
ans = chat_mdl.chat(self._param.sys_prompt, [{"role": "user", "content": user_prompt}], self._param.gen_conf())
logging.info(f"input: {user_prompt}, answer: {str(ans)}")
if ERROR_PREFIX in ans:
raise Exception(ans)
if self.check_if_canceled("Categorize processing"):
return
# Count the number of times each category appears in the answer.
category_counts = {}
for c in self._param.category_description.keys():

View File

@ -0,0 +1,203 @@
from abc import ABC
import ast
import os
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
class DataOperationsParam(ComponentParamBase):
"""
Define the Data Operations component parameters.
"""
def __init__(self):
super().__init__()
self.query = []
self.operations = "literal_eval"
self.select_keys = []
self.filter_values=[]
self.updates=[]
self.remove_keys=[]
self.rename_keys=[]
self.outputs = {
"result": {
"value": [],
"type": "Array of Object"
}
}
def check(self):
self.check_valid_value(self.operations, "Support operations", ["select_keys", "literal_eval","combine","filter_values","append_or_update","remove_keys","rename_keys"])
class DataOperations(ComponentBase,ABC):
component_name = "DataOperations"
def get_input_form(self) -> dict[str, dict]:
return {
k: {"name": o.get("name", ""), "type": "line"}
for input_item in (self._param.query or [])
for k, o in self.get_input_elements_from_text(input_item).items()
}
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
self.input_objects=[]
inputs = getattr(self._param, "query", None)
if not isinstance(inputs, (list, tuple)):
inputs = [inputs]
for input_ref in inputs:
input_object=self._canvas.get_variable_value(input_ref)
self.set_input_value(input_ref, input_object)
if input_object is None:
continue
if isinstance(input_object,dict):
self.input_objects.append(input_object)
elif isinstance(input_object,list):
self.input_objects.extend(x for x in input_object if isinstance(x, dict))
else:
continue
if self._param.operations == "select_keys":
self._select_keys()
elif self._param.operations == "recursive_eval":
self._literal_eval()
elif self._param.operations == "combine":
self._combine()
elif self._param.operations == "filter_values":
self._filter_values()
elif self._param.operations == "append_or_update":
self._append_or_update()
elif self._param.operations == "remove_keys":
self._remove_keys()
else:
self._rename_keys()
def _select_keys(self):
filter_criteria: list[str] = self._param.select_keys
results = [{key: value for key, value in data_dict.items() if key in filter_criteria} for data_dict in self.input_objects]
self.set_output("result", results)
def _recursive_eval(self, data):
if isinstance(data, dict):
return {k: self._recursive_eval(v) for k, v in data.items()}
if isinstance(data, list):
return [self._recursive_eval(item) for item in data]
if isinstance(data, str):
try:
if (
data.strip().startswith(("{", "[", "(", "'", '"'))
or data.strip().lower() in ("true", "false", "none")
or data.strip().replace(".", "").isdigit()
):
return ast.literal_eval(data)
except (ValueError, SyntaxError, TypeError, MemoryError):
return data
else:
return data
return data
def _literal_eval(self):
self.set_output("result", self._recursive_eval(self.input_objects))
def _combine(self):
result={}
for obj in self.input_objects:
for key, value in obj.items():
if key not in result:
result[key] = value
elif isinstance(result[key], list):
if isinstance(value, list):
result[key].extend(value)
else:
result[key].append(value)
else:
result[key] = (
[result[key], value] if not isinstance(value, list) else [result[key], *value]
)
self.set_output("result", result)
def norm(self,v):
s = "" if v is None else str(v)
return s
def match_rule(self, obj, rule):
key = rule.get("key")
op = (rule.get("operator") or "equals").lower()
target = self.norm(rule.get("value"))
target = self._canvas.get_value_with_variable(target) or target
if key not in obj:
return False
val = obj.get(key, None)
v = self.norm(val)
if op == "=":
return v == target
if op == "":
return v != target
if op == "contains":
return target in v
if op == "start with":
return v.startswith(target)
if op == "end with":
return v.endswith(target)
return False
def _filter_values(self):
results=[]
rules = (getattr(self._param, "filter_values", None) or [])
for obj in self.input_objects:
if not rules:
results.append(obj)
continue
if all(self.match_rule(obj, r) for r in rules):
results.append(obj)
self.set_output("result", results)
def _append_or_update(self):
results=[]
updates = getattr(self._param, "updates", []) or []
for obj in self.input_objects:
new_obj = dict(obj)
for item in updates:
if not isinstance(item, dict):
continue
k = (item.get("key") or "").strip()
if not k:
continue
new_obj[k] = self._canvas.get_value_with_variable(item.get("value")) or item.get("value")
results.append(new_obj)
self.set_output("result", results)
def _remove_keys(self):
results = []
remove_keys = getattr(self._param, "remove_keys", []) or []
for obj in (self.input_objects or []):
new_obj = dict(obj)
for k in remove_keys:
if not isinstance(k, str):
continue
new_obj.pop(k, None)
results.append(new_obj)
self.set_output("result", results)
def _rename_keys(self):
results = []
rename_pairs = getattr(self._param, "rename_keys", []) or []
for obj in (self.input_objects or []):
new_obj = dict(obj)
for pair in rename_pairs:
if not isinstance(pair, dict):
continue
old = (pair.get("old_key") or "").strip()
new = (pair.get("new_key") or "").strip()
if not old or not new or old == new:
continue
if old in new_obj:
new_obj[new] = new_obj.pop(old)
results.append(new_obj)
self.set_output("result", results)
def thoughts(self) -> str:
return "DataOperation in progress"

View File

@ -13,7 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-from agent.component.base import ComponentBase, ComponentParamBase
+import json
+import re
+from functools import partial
+from agent.component.base import ComponentParamBase, ComponentBase
class UserFillUpParam(ComponentParamBase):
@ -31,10 +35,35 @@ class UserFillUp(ComponentBase):
component_name = "UserFillUp"
def _invoke(self, **kwargs):
if self.check_if_canceled("UserFillUp processing"):
return
if self._param.enable_tips:
content = self._param.tips
for k, v in self.get_input_elements_from_text(self._param.tips).items():
v = v["value"]
ans = ""
if isinstance(v, partial):
for t in v():
ans += t
elif isinstance(v, list):
ans = ",".join([str(vv) for vv in v])
elif not isinstance(v, str):
try:
ans = json.dumps(v, ensure_ascii=False)
except Exception:
pass
else:
ans = v
if not ans:
ans = ""
content = re.sub(r"\{%s\}"%k, ans, content)
self.set_output("tips", content)
for k, v in kwargs.get("inputs", {}).items():
if self.check_if_canceled("UserFillUp processing"):
return
self.set_output(k, v)
def thoughts(self) -> str:
return "Waiting for your input..."

View File

@ -23,7 +23,7 @@ from abc import ABC
import requests
from agent.component.base import ComponentBase, ComponentParamBase
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
from deepdoc.parser import HtmlParser
@ -56,6 +56,9 @@ class Invoke(ComponentBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Invoke processing"):
return
args = {}
for para in self._param.variables:
if para.get("value"):
@ -89,6 +92,9 @@ class Invoke(ComponentBase, ABC):
last_e = ""
for _ in range(self._param.max_retries + 1):
if self.check_if_canceled("Invoke processing"):
return
try:
if method == "get":
response = requests.get(url=url, params=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
@ -121,6 +127,9 @@ class Invoke(ComponentBase, ABC):
return self.output("result")
except Exception as e:
if self.check_if_canceled("Invoke processing"):
return
last_e = e
logging.exception(f"Http request error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -16,6 +16,13 @@
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
"""
class VariableModel(BaseModel):
data_type: Annotated[Literal["string", "number", "Object", "Boolean", "Array<string>", "Array<number>", "Array<object>", "Array<boolean>"], Field(default="Array<string>")]
input_mode: Annotated[Literal["constant", "variable"], Field(default="constant")]
value: Annotated[Any, Field(default=None)]
model_config = ConfigDict(extra="forbid")
"""
class IterationParam(ComponentParamBase):
"""
@ -49,6 +56,9 @@ class Iteration(ComponentBase, ABC):
return cid
def _invoke(self, **kwargs):
if self.check_if_canceled("Iteration processing"):
return
arr = self._canvas.get_variable_value(self._param.items_ref)
if not isinstance(arr, list):
self.set_output("_ERROR", self._param.items_ref + " must be an array, but its type is "+str(type(arr)))

View File

@ -33,6 +33,9 @@ class IterationItem(ComponentBase, ABC):
self._idx = 0
def _invoke(self, **kwargs):
if self.check_if_canceled("IterationItem processing"):
return
parent = self.get_parent()
arr = self._canvas.get_variable_value(parent._param.items_ref)
if not isinstance(arr, list):
@ -40,12 +43,17 @@ class IterationItem(ComponentBase, ABC):
raise Exception(parent._param.items_ref + " must be an array, but its type is "+str(type(arr)))
if self._idx > 0:
if self.check_if_canceled("IterationItem processing"):
return
self.output_collation()
if self._idx >= len(arr):
self._idx = -1
return
if self.check_if_canceled("IterationItem processing"):
return
self.set_output("item", arr[self._idx])
self.set_output("index", self._idx)
@ -80,4 +88,4 @@ class IterationItem(ComponentBase, ABC):
return self._idx == -1
def thoughts(self) -> str:
return "Next turn..."
return "Next turn..."

View File

@ -21,12 +21,12 @@ from copy import deepcopy
from typing import Any, Generator
import json_repair
from functools import partial
-from api.db import LLMType
+from common.constants import LLMType
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from agent.component.base import ComponentBase, ComponentParamBase
-from api.utils.api_utils import timeout
-from rag.prompts.generator import tool_call_summary, message_fit_in, citation_prompt
+from common.connection_utils import timeout
+from rag.prompts.generator import tool_call_summary, message_fit_in, citation_prompt, structured_output_prompt
class LLMParam(ComponentParamBase):
@ -207,6 +207,9 @@ class LLM(ComponentBase):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("LLM processing"):
return
def clean_formated_answer(ans: str) -> str:
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
ans = re.sub(r"^.*```json", "", ans, flags=re.DOTALL)
@ -214,11 +217,18 @@ class LLM(ComponentBase):
prompt, msg, _ = self._prepare_prompt_variables()
error: str = ""
-if self._param.output_structure:
-prompt += "\nThe output MUST follow this JSON format:\n"+json.dumps(self._param.output_structure, ensure_ascii=False, indent=2)
-prompt += "\nRedundant information is FORBIDDEN."
+output_structure=None
+try:
+output_structure = self._param.outputs['structured']
+except Exception:
+pass
+if output_structure:
+schema=json.dumps(output_structure, ensure_ascii=False, indent=2)
+prompt += structured_output_prompt(schema)
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("LLM processing"):
return
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
error = ""
ans = self._generate(msg)
@ -228,7 +238,7 @@ class LLM(ComponentBase):
error = ans
continue
try:
self.set_output("structured_content", json_repair.loads(clean_formated_answer(ans)))
self.set_output("structured", json_repair.loads(clean_formated_answer(ans)))
return
except Exception:
msg.append({"role": "user", "content": "The answer can't not be parsed as JSON"})
@ -239,11 +249,14 @@ class LLM(ComponentBase):
downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
ex = self.exception_handler()
-if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
+if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not output_structure and not (ex and ex["goto"]):
self.set_output("content", partial(self._stream_output, prompt, msg))
return
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("LLM processing"):
return
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
error = ""
ans = self._generate(msg)
@ -265,6 +278,9 @@ class LLM(ComponentBase):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
answer = ""
for ans in self._generate_streamly(msg):
if self.check_if_canceled("LLM streaming"):
return
if ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
@ -283,4 +299,4 @@ class LLM(ComponentBase):
def thoughts(self) -> str:
_, msg,_ = self._prepare_prompt_variables()
return "⌛Give me a moment—starting from: \n\n" + re.sub(r"(User's query:|[\\]+)", '', msg[-1]['content'], flags=re.DOTALL) + "\n\nIll figure out our best next move."
return "⌛Give me a moment—starting from: \n\n" + re.sub(r"(User's query:|[\\]+)", '', msg[-1]['content'], flags=re.DOTALL) + "\n\nIll figure out our best next move."

View File

@ -23,7 +23,7 @@ from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase
from jinja2 import Template as Jinja2Template
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
class MessageParam(ComponentParamBase):
@ -49,6 +49,9 @@ class MessageParam(ComponentParamBase):
class Message(ComponentBase):
component_name = "Message"
def get_input_elements(self) -> dict[str, Any]:
return self.get_input_elements_from_text("".join(self._param.content))
def get_kwargs(self, script:str, kwargs:dict = {}, delimiter:str=None) -> tuple[str, dict[str, str | list | Any]]:
for k,v in self.get_input_elements_from_text(script).items():
if k in kwargs:
@ -86,6 +89,9 @@ class Message(ComponentBase):
all_content = ""
cache = {}
for r in re.finditer(self.variable_ref_patt, rand_cnt, flags=re.DOTALL):
if self.check_if_canceled("Message streaming"):
return
all_content += rand_cnt[s: r.start()]
yield rand_cnt[s: r.start()]
s = r.end()
@ -96,26 +102,33 @@ class Message(ComponentBase):
continue
v = self._canvas.get_variable_value(exp)
-if not v:
+if v is None:
v = ""
if isinstance(v, partial):
cnt = ""
for t in v():
if self.check_if_canceled("Message streaming"):
return
all_content += t
cnt += t
yield t
self.set_input_value(exp, cnt)
continue
elif not isinstance(v, str):
try:
-v = json.dumps(v, ensure_ascii=False, indent=2)
+v = json.dumps(v, ensure_ascii=False)
except Exception:
v = str(v)
yield v
self.set_input_value(exp, v)
all_content += v
cache[exp] = v
if s < len(rand_cnt):
if self.check_if_canceled("Message streaming"):
return
all_content += rand_cnt[s: ]
yield rand_cnt[s: ]
@ -129,6 +142,9 @@ class Message(ComponentBase):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Message processing"):
return
rand_cnt = random.choice(self._param.content)
if self._param.stream and not self._is_jinjia2(rand_cnt):
self.set_output("content", partial(self._stream, rand_cnt))
@ -141,6 +157,9 @@ class Message(ComponentBase):
except Exception:
pass
if self.check_if_canceled("Message processing"):
return
for n, v in kwargs.items():
content = re.sub(n, v, content)

View File

@ -16,9 +16,11 @@
import os
import re
from abc import ABC
from typing import Any
from jinja2 import Template as Jinja2Template
from agent.component.base import ComponentParamBase
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
from .message import Message
@ -43,6 +45,9 @@ class StringTransformParam(ComponentParamBase):
class StringTransform(Message, ABC):
component_name = "StringTransform"
def get_input_elements(self) -> dict[str, Any]:
return self.get_input_elements_from_text(self._param.script)
def get_input_form(self) -> dict[str, dict]:
if self._param.method == "split":
return {
@ -58,17 +63,24 @@ class StringTransform(Message, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("StringTransform processing"):
return
if self._param.method == "split":
self._split(kwargs.get("line"))
else:
self._merge(kwargs)
def _split(self, line:str|None = None):
if self.check_if_canceled("StringTransform split processing"):
return
var = self._canvas.get_variable_value(self._param.split_ref) if not line else line
if not var:
var = ""
assert isinstance(var, str), "The input variable is not a string: {}".format(type(var))
self.set_input_value(self._param.split_ref, var)
res = []
for i,s in enumerate(re.split(r"(%s)"%("|".join([re.escape(d) for d in self._param.delimiters])), var, flags=re.DOTALL)):
if i % 2 == 1:
@ -77,6 +89,9 @@ class StringTransform(Message, ABC):
self.set_output("result", res)
def _merge(self, kwargs:dict[str, str] = {}):
if self.check_if_canceled("StringTransform merge processing"):
return
script = self._param.script
script, kwargs = self.get_kwargs(script, kwargs, self._param.delimiters[0])

View File

@ -19,7 +19,7 @@ from abc import ABC
from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
class SwitchParam(ComponentParamBase):
@ -63,9 +63,18 @@ class Switch(ComponentBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Switch processing"):
return
for cond in self._param.conditions:
if self.check_if_canceled("Switch processing"):
return
res = []
for item in cond["items"]:
if self.check_if_canceled("Switch processing"):
return
if not item["cpn_id"]:
continue
cpn_v = self._canvas.get_variable_value(item["cpn_id"])
@ -128,4 +137,4 @@ class Switch(ComponentBase, ABC):
raise ValueError('Not supported operator: ' + operator)
def thoughts(self) -> str:
return "Im weighing a few options and will pick the next step shortly."
return "Im weighing a few options and will pick the next step shortly."

View File

@ -0,0 +1,84 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any
import os
from common.connection_utils import timeout
from agent.component.base import ComponentBase, ComponentParamBase
class VariableAggregatorParam(ComponentParamBase):
"""
Parameters for VariableAggregator
- groups: list of dicts {"group_name": str, "variables": [variable selectors]}
"""
def __init__(self):
super().__init__()
# each group expects: {"group_name": str, "variables": List[str]}
self.groups = []
def check(self):
self.check_empty(self.groups, "[VariableAggregator] groups")
for g in self.groups:
if not g.get("group_name"):
raise ValueError("[VariableAggregator] group_name can not be empty!")
if not g.get("variables"):
raise ValueError(
f"[VariableAggregator] variables of group `{g.get('group_name')}` can not be empty"
)
if not isinstance(g.get("variables"), list):
raise ValueError(
f"[VariableAggregator] variables of group `{g.get('group_name')}` should be a list of strings"
)
def get_input_form(self) -> dict[str, dict]:
return {
"variables": {
"name": "Variables",
"type": "list",
}
}
class VariableAggregator(ComponentBase):
component_name = "VariableAggregator"
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3)))
def _invoke(self, **kwargs):
# Group mode: for each group, pick the first available variable
for group in self._param.groups:
gname = group.get("group_name")
# record candidate selectors within this group
self.set_input_value(f"{gname}.variables", list(group.get("variables", [])))
for selector in group.get("variables", []):
val = self._canvas.get_variable_value(selector['value'])
if val:
self.set_output(gname, val)
break
@staticmethod
def _to_object(value: Any) -> Any:
# Try to convert value to serializable object if it has to_object()
try:
return value.to_object() # type: ignore[attr-defined]
except Exception:
return value
def thoughts(self) -> str:
return "Aggregating variables from canvas and grouping as configured."

View File

@ -0,0 +1,38 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from agent.component.base import ComponentParamBase, ComponentBase
class WebhookParam(ComponentParamBase):
"""
Define the Webhook component parameters.
"""
def __init__(self):
super().__init__()
def get_input_form(self) -> dict[str, dict]:
return getattr(self, "inputs")
class Webhook(ComponentBase):
component_name = "Webhook"
def _invoke(self, **kwargs):
pass
def thoughts(self) -> str:
return ""

View File

@ -0,0 +1,519 @@
{
"id": 27,
"title": {
"en": "Interactive Agent",
"zh": "可交互的 Agent"
},
"description": {
"en": "During the Agents execution, users can actively intervene and interact with the Agent to adjust or guide its output, ensuring the final result aligns with their intentions.",
"zh": "在 Agent 的运行过程中,用户可以随时介入,与 Agent 进行交互,以调整或引导生成结果,使最终输出更符合预期。"
},
"canvas_type": "Agent",
"dsl": {
"components": {
"Agent:LargeFliesMelt": {
"downstream": [
"UserFillUp:GoldBroomsRelate"
],
"obj": {
"component_name": "Agent",
"params": {
"cite": true,
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-turbo@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 1,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
},
"structured": {}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "User query:{sys.query}",
"role": "user"
}
],
"sys_prompt": "<role>\nYou are the Planning Agent in a multi-agent RAG workflow.\nYour sole job is to design a crisp, executable Search Plan for the next agent. Do not search or answer the users question.\n</role>\n<objectives>\nUnderstand the users task and decompose it into evidence-seeking steps.\nProduce high-quality queries and retrieval settings tailored to the task type (fact lookup, multi-hop reasoning, comparison, statistics, how-to, etc.).\nIdentify missing information that would materially change the plan (≤3 concise questions).\nOptimize for source trustworthiness, diversity, and recency; define stopping criteria to avoid over-searching.\nAnswer in 150 words.\n<objectives>",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"begin"
]
},
"Agent:TangyWordsType": {
"downstream": [
"Message:FreshWallsStudy"
],
"obj": {
"component_name": "Agent",
"params": {
"cite": true,
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-turbo@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 1,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
},
"structured": {}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "Search Plan: {Agent:LargeFliesMelt@content}\n\n\n\nAwait Response feedback:{UserFillUp:GoldBroomsRelate@instructions}\n",
"role": "user"
}
],
"sys_prompt": "<role>\nYou are the Search Agent.\nYour job is to execute the approved Search Plan, integrate the Await Response feedback, retrieve evidence, and produce a well-grounded answer.\n</role>\n<objectives>\nTranslate the plan + feedback into concrete searches.\nCollect diverse, trustworthy, and recent evidence meeting the plans evidence bar.\nSynthesize a concise answer; include citations next to claims they support.\nIf evidence is insufficient or conflicting, clearly state limitations and propose next steps.\n</objectives>\n <tools>\nRetrieval: You must use Retrieval to do the search.\n </tools>\n",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [
{
"component_name": "Retrieval",
"name": "Retrieval",
"params": {
"cross_languages": [],
"description": "",
"empty_response": "",
"kb_ids": [],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"rerank_id": "",
"similarity_threshold": 0.2,
"toc_enhance": false,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
}
],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"UserFillUp:GoldBroomsRelate"
]
},
"Message:FreshWallsStudy": {
"downstream": [],
"obj": {
"component_name": "Message",
"params": {
"content": [
"{Agent:TangyWordsType@content}"
]
}
},
"upstream": [
"Agent:TangyWordsType"
]
},
"UserFillUp:GoldBroomsRelate": {
"downstream": [
"Agent:TangyWordsType"
],
"obj": {
"component_name": "UserFillUp",
"params": {
"enable_tips": true,
"inputs": {
"instructions": {
"name": "instructions",
"optional": false,
"options": [],
"type": "paragraph"
}
},
"outputs": {
"instructions": {
"name": "instructions",
"optional": false,
"options": [],
"type": "paragraph"
}
},
"tips": "Here is my search plan:\n{Agent:LargeFliesMelt@content}\nAre you okay with it?"
}
},
"upstream": [
"Agent:LargeFliesMelt"
]
},
"begin": {
"downstream": [
"Agent:LargeFliesMelt"
],
"obj": {
"component_name": "Begin",
"params": {}
},
"upstream": []
}
},
"globals": {
"sys.conversation_turns": 0,
"sys.files": [],
"sys.query": "",
"sys.user_id": ""
},
"graph": {
"edges": [
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Agent:LargeFliesMeltend",
"source": "begin",
"sourceHandle": "start",
"target": "Agent:LargeFliesMelt",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:LargeFliesMeltstart-UserFillUp:GoldBroomsRelateend",
"source": "Agent:LargeFliesMelt",
"sourceHandle": "start",
"target": "UserFillUp:GoldBroomsRelate",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__UserFillUp:GoldBroomsRelatestart-Agent:TangyWordsTypeend",
"source": "UserFillUp:GoldBroomsRelate",
"sourceHandle": "start",
"target": "Agent:TangyWordsType",
"targetHandle": "end"
},
{
"id": "xy-edge__Agent:TangyWordsTypetool-Tool:NastyBatsGoend",
"source": "Agent:TangyWordsType",
"sourceHandle": "tool",
"target": "Tool:NastyBatsGo",
"targetHandle": "end"
},
{
"id": "xy-edge__Agent:TangyWordsTypestart-Message:FreshWallsStudyend",
"source": "Agent:TangyWordsType",
"sourceHandle": "start",
"target": "Message:FreshWallsStudy",
"targetHandle": "end"
}
],
"nodes": [
{
"data": {
"label": "Begin",
"name": "begin"
},
"dragging": false,
"id": "begin",
"measured": {
"height": 50,
"width": 200
},
"position": {
"x": 154.9008789064451,
"y": 119.51001744285344
},
"selected": false,
"sourcePosition": "left",
"targetPosition": "right",
"type": "beginNode"
},
{
"data": {
"form": {
"cite": true,
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-turbo@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 1,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
},
"structured": {}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "User query:{sys.query}",
"role": "user"
}
],
"sys_prompt": "<role>\nYou are the Planning Agent in a multi-agent RAG workflow.\nYour sole job is to design a crisp, executable Search Plan for the next agent. Do not search or answer the users question.\n</role>\n<objectives>\nUnderstand the users task and decompose it into evidence-seeking steps.\nProduce high-quality queries and retrieval settings tailored to the task type (fact lookup, multi-hop reasoning, comparison, statistics, how-to, etc.).\nIdentify missing information that would materially change the plan (≤3 concise questions).\nOptimize for source trustworthiness, diversity, and recency; define stopping criteria to avoid over-searching.\nAnswer in 150 words.\n<objectives>",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Planning Agent"
},
"dragging": false,
"id": "Agent:LargeFliesMelt",
"measured": {
"height": 90,
"width": 200
},
"position": {
"x": 443.96309330796714,
"y": 104.61370811205677
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"enable_tips": true,
"inputs": {
"instructions": {
"name": "instructions",
"optional": false,
"options": [],
"type": "paragraph"
}
},
"outputs": {
"instructions": {
"name": "instructions",
"optional": false,
"options": [],
"type": "paragraph"
}
},
"tips": "Here is my search plan:\n{Agent:LargeFliesMelt@content}\nAre you okay with it?"
},
"label": "UserFillUp",
"name": "Await Response"
},
"dragging": false,
"id": "UserFillUp:GoldBroomsRelate",
"measured": {
"height": 50,
"width": 200
},
"position": {
"x": 683.3409492927474,
"y": 116.76274137645598
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "ragNode"
},
{
"data": {
"form": {
"cite": true,
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-turbo@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 1,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
},
"structured": {}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "Search Plan: {Agent:LargeFliesMelt@content}\n\n\n\nAwait Response feedback:{UserFillUp:GoldBroomsRelate@instructions}\n",
"role": "user"
}
],
"sys_prompt": "<role>\nYou are the Search Agent.\nYour job is to execute the approved Search Plan, integrate the Await Response feedback, retrieve evidence, and produce a well-grounded answer.\n</role>\n<objectives>\nTranslate the plan + feedback into concrete searches.\nCollect diverse, trustworthy, and recent evidence meeting the plans evidence bar.\nSynthesize a concise answer; include citations next to claims they support.\nIf evidence is insufficient or conflicting, clearly state limitations and propose next steps.\n</objectives>\n <tools>\nRetrieval: You must use Retrieval to do the search.\n </tools>\n",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [
{
"component_name": "Retrieval",
"name": "Retrieval",
"params": {
"cross_languages": [],
"description": "",
"empty_response": "",
"kb_ids": [],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"rerank_id": "",
"similarity_threshold": 0.2,
"toc_enhance": false,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
}
],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Search Agent"
},
"dragging": false,
"id": "Agent:TangyWordsType",
"measured": {
"height": 90,
"width": 200
},
"position": {
"x": 944.6411255659472,
"y": 99.84499066368488
},
"selected": true,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"description": "This is an agent for a specific task.",
"user_prompt": "This is the order you need to send to the agent."
},
"label": "Tool",
"name": "flow.tool_0"
},
"id": "Tool:NastyBatsGo",
"measured": {
"height": 50,
"width": 200
},
"position": {
"x": 862.6411255659472,
"y": 239.84499066368488
},
"sourcePosition": "right",
"targetPosition": "left",
"type": "toolNode"
},
{
"data": {
"form": {
"content": [
"{Agent:TangyWordsType@content}"
]
},
"label": "Message",
"name": "Message"
},
"dragging": false,
"id": "Message:FreshWallsStudy",
"measured": {
"height": 50,
"width": 200
},
"position": {
"x": 1216.7057997987163,
"y": 120.48541298149814
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "messageNode"
}
]
},
"history": [],
"messages": [],
"path": [],
"retrieval": [],
"variables": {}
},
"avatar":
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAYAAABXAvmHAAAACXBIWXMAABYlAAAWJQFJUiTwAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA1FSURBVHgBzVppcFRVFv7e672TTjrprGTrkI1tgFjiAI4TmgEGcEFgVFAHxr2mLFe0cKxyBOeHljVjwVDqzDhVTqksOlOiuCBrCwk7hCAkLAGy70kn3eklvb03571e8rrTCQGi5em6ecu9fe85557znXNuh0EUHTGb5/Ast4TnmXsB3oifBTFVxEsVy7PrZ5lM9RE9oZvTZrPeAe51Gvh85ABGMuonJh6Ra9Mzw2KDm2PXm0ymPoS6ReZ5n5n6p4f45QOsj0ihEXxwdGg9Pta3Ax2R98ErH+wIzMFjFFTlhcwkCMEKT4qE+Nc5np8e5Aa8MAf94UNtmEl5DPYFrpHPER8+xn3U3KNkXqDpKtFaSGi1Wm188MH7655+6inoExLR0toW0N211R+8jJ198YHtiFyDH5EHE8Oy7Iccx/1BeJ4yZRI2bXgHSfokyGQsLH19YDgax4RXGJz450A8v1HGMMw62soMEgRxcVqcrDyG9X95E339fSidPh05Odnot9vB0Qe84CB8wMQglSe8HWPI3HAmJVmLYTOYJ55YxVedOYcTxyux59svsfqpJ5GWkorGhhZYLL0oKSnC2pfX4DemOaIg9n473B63ZCFGOu2YU4QIgikIJiF5yaxcuZTPzzfC42WQnZWJLdu2IiXFgG+/3oOs7Ey0t3fC7/cjMyMdU6dOwpyyMswtmwO9PhmdnYE+5hrcS9Eq8MzSHRfuvT7Dj5qbiKeGY4fKsWL1ShQWFKG1uQVz55Thh5pqVJ87D6fLDY/HA7/PD87PiYxMnToRLzz7LI0zoamphQTxYewpKIzUB6OUJcIoOTEOVpjhcroRH69FQ2MLHANOZKZn4PcPP4DikgKoVWrI5CzkcjmUSiUuXriKRx77I2bMuh0tba1IStKL84TXDTU26hp9z0QzJYVVbhBuJdDL8Vy4yeRK+TrBNHp7e9HR0QWXawDNTa3o7ulB2R2zER+nQXZ2FhRKBVwOl7BloO+IVwGpBtxebPv0v0hNN+CO2beHHT4cD6I9Pvo+iobElmtYk6xkQuE6nvOjpaWNmHeDJxPp7bVCp4vHvv0HsXfvQVSeOgOtRoOFi+aJu+C0O6FVa2AwJEOhkMPhcODEiVNwuJ0w5ubRLsbB7/UPMsvH5HTYxvNBwSUAIT5yQcG4wTlk3+/5bt2cX5chIzMdbW3tOH/+EtRaNex2B1RkKkuXLkJCog41NZdw8MBhFBUVICU1WTQDwZwMKcloJJN74snHMeByoq7uCr1XECBkEzB4I3ge6p4xkp3hiOFj+rdcYFpg5vaZs3H3nXehh0znyPFjWLv2NcyeNQPbtn0hDlRr1GKc2L1rP5QqJeQyMiMZRH8QqKujDa+9+gruWf4A4siPlLQzeXlG+Dl/hAjR2B7KnURQEnyDC4kyXPoS5dN7dn7FR0umJ4d8aNUjBKEdePSxh8iMDuDixcsiCglbK5PJ4PMFUEe4F6B01zfbMeD1iTx8vPUTMi0lXn7uJXT1dIr+Eph/EFViAMqoKGBag89sZFIVuMaTplsJWZYtvwv7yQdqai5CQWYhoMyECUWYv8BE2s0RJ8gk03v/3Q24UHsJ27/YjnaKDa4BD1INBpQfrgDlWoFZRRPgcdNBW4hlLBNGM3mEdMGrmzDfSNt/4OBhdLR1Yv26tXj77U0iEqUYUvDvf34AjUpOaUclssaNQ2cXoZdzAC++/Cds3vwZLl+oxtZPt2INPVefOYH6hsbALtwkDabtg8jG8pK0OURtbR34dMtHaKhrImlZvPXW35FsSMKKFfeivOIwsnKMWLDwbrjdbtEHOB8HFfnF8YoD+Md7m7Bv3y7s2rtfXOjkqUqMFYnMR5tQsCdCQwK+t5MQ+3d/g4cfXiHCZHeXBZ98/D/R5gX0sfRZsGTZAzDNW4SG5kbk5uSSL3DIzc1GaekMkNyQ0Z+PN29FYmICxooEPkNN5Hv3zh0jhgq9PhFXrtSh+nw1Kg4dIah04Oix02JfQUEepk2bgs8//0bcocWLfotXXlqD7u5uzF1wp7hIcnIyKr7fjSZKTyDV5HXkO9EkrfjksQZI62Brnw2pqQbMz5yH5fcug4/zUtCqxIcffSRmsI2NrSKjlp5ebNnyGWG/B2++8QYW3TlfTAhZGQOej8EAw0SYbcTaGLk6kxZR8miGY4wWGRAY6+7pFu1PSLH/88G/yAcG8NcNG3H85CkxSgrBT4gPDqcDE4sn4GjKCbBCsGAki7KDO8AybIBRugqLMFFMhuPDCALJI9CBGfxidKkYfi+Mp0mFGCEkUy888wy0Wq3YJyf/sFOa0W/rR093DzLS0tDXZw3WHkPnE9li5IhzdMCu0EGupnn8/sgxLEISBJTMISKSsVKnCDEtvYY+ovLIKQVGQk1waCF56yDs7+zsotjRDlu/DR6fF4sXL0RN9QWwFD/ik1JJwdwQ7XEspSKudqSqFUhsqcWAtUccH4tC/Ilrh/hlmbB84RcjtZgTSwQMPQtayhqXiTPvvIDHJyXjtWVlUCUkQ6ZQBHZDXIvMR6akOmMALtoxTUom9K2XaPesYl9wsmsSG2aQYTDih4ndQvl89LOb0o72wlkonTYZ9y25BwffWYOexstQJ+jhJ8377b1Ic9ngam4A53GC9zqgpCrPd7YcnFwZqB4l64aVHNXYmBoMDkbMgmMUJNqqH6rEFChvW0wMWjFv0RKom37AuX3bkcnbUKCOQ3LqOFgb69Gu1GGgs5HiiwYTbp0JS8UOMBpdxLrDRXI2WruICmrRGh5VEz9kIj4PtIZMqGdQ1HZZEZ+ZhxmFuVD73WT/bhx5/8+wZk1AvHEqrAYjCdoHH9UUxRMnoefIDggFHitjR9QVK9VyQIBYzNxAHhPEeI6EkJM2vfm3QSnzwpA/FZaOeuzdvAn9GUYUlC2Gj2zfkJoPRfpkNFdW0DgWeVlGKJqr4SKQkFNmOxwHbDSTTFjxTFAuaQ9z7Y8EIUK+RdgIXq1DUsFM2Pva0dfWDGu/C8ZxBhiaTyGXbF/ltsBSfRI+dQKhmB8KbTwUcYlgG2vQ095CZaxy0KwxaOLscEyEtyWYx4ey4ZhloFQ/XLD5o5pgDl46V+q4igOU6PVyGtxy10ooEtPQfKYc+z/cBBfrRM7EKZR3dYs7x/IK6JJT4K2vRq8QTxgpfwgaaizn4MN/pFsyfAsWKRFNFlCPCNRC3k7PjVZKCpvrULZ8BV7927vobmxG7dG9OHK4HDYqW6/W1cI3wCAtdyql8c0U/Z3wUuGUWzwFF8w7oIzTiZYptQ2WCUdoVnKVoFL4lgn3D+5UYKx45YVdCupEsmNhUyRUUhrywGWUoGhCKdpbWmGpO42LZ8/BR/EgnmqNxb97FCqFH9UHd8JG6YjT1U9w7IOP6pNJhUYc3f0VtCSElBt5KDeJFEKKnbwElQLh
X0oMI70O52rBBM3nRELRL9HQeh5x8KGy3IzjVbWwWfvxwc7dsFssuHz2NBo7W1Ciywdnt4KOQ+ChkxC1SoWmc+XwmBZRYaUWq0Oe80H26KpV60TNhTQW0fiAVoNFBBPSMs9E+ED08/CNxgm2HZcCy5XjkGt16Dp/Fms3vUfiyGFtv4KK/ftQWFxEhwEMvFQwyTihWFKIJ4Ie+u6V0+XkUjzcDhvcNJ9ckkxgSN6LoNfzsYA08t3ooTYA1UrjrXD+cAC/mj+X0CYJ/cT8oe++ppQiBf2OfjE5tFD2qzekIk9GdXpGDtRUSMnpN4wByptSUhOhdLVK6gFecu4SfY3Nxo0TbX18qhHujHqkjy9G84WT6GptwZUOC2qqziM7Lx16jRI54/MJwTzgKYdiBSgWYopSQ0edJXB6/NhyslYCo+F8Q5KDYGiUHVTiTYkAn9MGfcks9NFRpnAUqc/MIgY5LF1xH9Ioe1247H5MnjQNxZNvAUMptttuQyYdlrXRaYmccqnaLhvSEjWRFdlozEDK+I0KIVZiAsRSkaSdOBfeumPobWkUS9RcOqZJjtchJbsEjt42eEhQpS5ZFBiETHVX68jnPHR040YGHW1GFjQxXED67mbq2IhpJTvJu21QjZ8BFaXRPnq2kP3rE3RouUBVHsGzlQ4PEhLi4CB/aCLo9dLhmRDFs+k3jEt0GC3sQD01Y2jCoatJb6MGXEOe0QnMUPCyI+OWBdDk/AK1x3ejraFeLJYG6DTE7XIhjiC0z9IFJ9UNOq2AOz4UZ2bD6hyokj22enU+zTETN0LMyO16MliOzEmpVsH4i1lIzy2kn0+14CjAUSYHi80uFj8cOfL4wiJkpKfT4QKHLB37HVNuNs9hWc6MkVU5IiL9GCSccMsVKjrOl5HZuKnWtqKj4TI6as+AcffTkaUKfl6WL9rEoQP7NpAenhvNxEOO92LGjx+BGCGfUkKh0ZIv21Bz9vTGJ59+8XlRALPZrFexvLAL0zEGNNLJ80h910FVmoQkU2lpaeBfDYT/OXBzjIm0uhFjQMwN9o2KOH6jhngVmI853xGz2QgZ1lH2MI3yIXFHeP4nNP7YVE9cfMlw7BezTKbvpR3/Bx465XnKBextAAAAAElFTkSuQmCC"
}
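For orientation, the template's execution path is begin → Planning Agent → Await Response (UserFillUp) → Search Agent → Message, with the Retrieval tool attached to the Search Agent. A small, purely illustrative helper can sanity-check that the downstream/upstream links in such a DSL mirror each other:

def check_wiring(dsl: dict) -> None:
    comps = dsl["components"]
    for cid, comp in comps.items():
        for down in comp["downstream"]:
            # every forward edge must be mirrored by an upstream entry
            assert cid in comps[down]["upstream"], f"{cid} -> {down} not mirrored"
    print("downstream/upstream links are consistent")

# check_wiring(template["dsl"]) passes for the JSON above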

View File

@ -16,7 +16,7 @@
import argparse
import os
from agent.canvas import Canvas
-from api import settings
+from common import settings
if __name__ == '__main__':
parser = argparse.ArgumentParser()

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
import arxiv
from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
class ArXivParam(ToolParamBase):
@ -63,12 +63,18 @@ class ArXiv(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("ArXiv processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("ArXiv processing"):
return
try:
sort_choices = {"relevance": arxiv.SortCriterion.Relevance,
"lastUpdatedDate": arxiv.SortCriterion.LastUpdatedDate,
@ -79,12 +85,20 @@ class ArXiv(ToolBase, ABC):
max_results=self._param.top_n,
sort_by=sort_choices[self._param.sort_by]
)
-self._retrieve_chunks(list(arxiv_client.results(search)),
+results = list(arxiv_client.results(search))
+if self.check_if_canceled("ArXiv processing"):
+return
+self._retrieve_chunks(results,
get_title=lambda r: r.title,
get_url=lambda r: r.pdf_url,
get_content=lambda r: r.summary)
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("ArXiv processing"):
return
last_e = e
logging.exception(f"ArXiv error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -20,7 +20,7 @@ from copy import deepcopy
from functools import partial
from typing import TypedDict, List, Any
from agent.component.base import ComponentParamBase, ComponentBase
-from api.utils import hash_str2int
+from common.misc_utils import hash_str2int
from rag.llm.chat_model import ToolCallSession
from rag.prompts.generator import kb_prompt
from rag.utils.mcp_tool_call_conn import MCPToolCallSession
@ -125,6 +125,9 @@ class ToolBase(ComponentBase):
return self._param.get_meta()
def invoke(self, **kwargs):
if self.check_if_canceled("Tool processing"):
return
self.set_output("_created_time", time.perf_counter())
try:
res = self._invoke(**kwargs)
@ -170,4 +173,4 @@ class ToolBase(ComponentBase):
self.set_output("formalized_content", "\n".join(kb_prompt({"chunks": chunks, "doc_aggs": aggs}, 200000, True)))
def thoughts(self) -> str:
return self._canvas.get_component_name(self._id) + " is running..."

View File

@ -21,8 +21,8 @@ from strenum import StrEnum
from typing import Optional
from pydantic import BaseModel, Field, field_validator
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from api import settings
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
+from common import settings
class Language(StrEnum):
@ -131,10 +131,14 @@ class CodeExec(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("CodeExec processing"):
return
lang = kwargs.get("lang", self._param.lang)
script = kwargs.get("script", self._param.script)
arguments = {}
for k, v in self._param.arguments.items():
if kwargs.get(k):
arguments[k] = kwargs[k]
continue
@ -149,15 +153,28 @@ class CodeExec(ToolBase, ABC):
def _execute_code(self, language: str, code: str, arguments: dict):
import requests
if self.check_if_canceled("CodeExec execution"):
return
try:
code_b64 = self._encode_code(code)
code_req = CodeExecutionRequest(code_b64=code_b64, language=language, arguments=arguments).model_dump()
except Exception as e:
if self.check_if_canceled("CodeExec execution"):
return
self.set_output("_ERROR", "construct code request error: " + str(e))
try:
if self.check_if_canceled("CodeExec execution"):
return "Task has been canceled"
resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run, code_req: {code_req}, resp.status_code {resp.status_code}:")
if self.check_if_canceled("CodeExec execution"):
return "Task has been canceled"
if resp.status_code != 200:
resp.raise_for_status()
body = resp.json()
@ -173,16 +190,25 @@ class CodeExec(ToolBase, ABC):
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run -> {rt}")
if isinstance(rt, tuple):
for i, (k, o) in enumerate(self._param.outputs.items()):
if self.check_if_canceled("CodeExec execution"):
return
if k.find("_") == 0:
continue
o["value"] = rt[i]
elif isinstance(rt, dict):
for i, (k, o) in enumerate(self._param.outputs.items()):
if self.check_if_canceled("CodeExec execution"):
return
if k not in rt or k.find("_") == 0:
continue
o["value"] = rt[k]
else:
for i, (k, o) in enumerate(self._param.outputs.items()):
if self.check_if_canceled("CodeExec execution"):
return
if k.find("_") == 0:
continue
o["value"] = rt
@ -190,6 +216,9 @@ class CodeExec(ToolBase, ABC):
self.set_output("_ERROR", "There is no response from sandbox")
except Exception as e:
if self.check_if_canceled("CodeExec execution"):
return
self.set_output("_ERROR", "Exception executing code: " + str(e))
return self.output()

View File

@ -29,7 +29,7 @@ class CrawlerParam(ToolParamBase):
super().__init__()
self.proxy = None
self.extract_type = "markdown"
def check(self):
self.check_valid_value(self.extract_type, "Type of content from the crawler", ['html', 'markdown', 'content'])
@ -47,18 +47,24 @@ class Crawler(ToolBase, ABC):
result = asyncio.run(self.get_web(ans))
return Crawler.be_output(result)
except Exception as e:
return Crawler.be_output(f"An unexpected error occurred: {str(e)}")
async def get_web(self, url):
if self.check_if_canceled("Crawler async operation"):
return
proxy = self._param.proxy if self._param.proxy else None
async with AsyncWebCrawler(verbose=True, proxy=proxy) as crawler:
result = await crawler.arun(
url=url,
bypass_cache=True
)
if self.check_if_canceled("Crawler async operation"):
return
if self._param.extract_type == 'html':
return result.cleaned_html
elif self._param.extract_type == 'markdown':

View File

@ -46,11 +46,16 @@ class DeepL(ComponentBase, ABC):
component_name = "DeepL"
def _run(self, history, **kwargs):
if self.check_if_canceled("DeepL processing"):
return
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return DeepL.be_output("")
if self.check_if_canceled("DeepL processing"):
return
try:
translator = deepl.Translator(self._param.auth_key)
result = translator.translate_text(ans, source_lang=self._param.source_lang,
@ -58,4 +63,6 @@ class DeepL(ComponentBase, ABC):
return DeepL.be_output(result.text)
except Exception as e:
if self.check_if_canceled("DeepL processing"):
return
DeepL.be_output("**Error**:" + str(e))

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
from duckduckgo_search import DDGS
from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
class DuckDuckGoParam(ToolParamBase):
@ -75,17 +75,30 @@ class DuckDuckGo(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("DuckDuckGo processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("DuckDuckGo processing"):
return
try:
if kwargs.get("topic", "general") == "general":
with DDGS() as ddgs:
if self.check_if_canceled("DuckDuckGo processing"):
return
# {'title': '', 'href': '', 'body': ''}
duck_res = ddgs.text(kwargs["query"], max_results=self._param.top_n)
if self.check_if_canceled("DuckDuckGo processing"):
return
self._retrieve_chunks(duck_res,
get_title=lambda r: r["title"],
get_url=lambda r: r.get("href", r.get("url")),
@ -94,8 +107,15 @@ class DuckDuckGo(ToolBase, ABC):
return self.output("formalized_content")
else:
with DDGS() as ddgs:
if self.check_if_canceled("DuckDuckGo processing"):
return
# {'date': '', 'title': '', 'body': '', 'url': '', 'image': '', 'source': ''}
duck_res = ddgs.news(kwargs["query"], max_results=self._param.top_n)
if self.check_if_canceled("DuckDuckGo processing"):
return
self._retrieve_chunks(duck_res,
get_title=lambda r: r["title"],
get_url=lambda r: r.get("href", r.get("url")),
@ -103,6 +123,9 @@ class DuckDuckGo(ToolBase, ABC):
self.set_output("json", duck_res)
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("DuckDuckGo processing"):
return
last_e = e
logging.exception(f"DuckDuckGo error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -25,7 +25,7 @@ from email.header import Header
from email.utils import formataddr
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
class EmailParam(ToolParamBase):
@ -101,19 +101,27 @@ class Email(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Email processing"):
return
if not kwargs.get("to_email"):
self.set_output("success", False)
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("Email processing"):
return
try:
# Parse JSON string passed from upstream
email_data = kwargs
# Validate required fields
if "to_email" not in email_data:
return Email.be_output("Missing required field: to_email")
self.set_output("_ERROR", "Missing required field: to_email")
self.set_output("success", False)
return False
# Create email object
msg = MIMEMultipart('alternative')
@ -133,6 +141,9 @@ class Email(ToolBase, ABC):
# Connect to SMTP server and send
logging.info(f"Connecting to SMTP server {self._param.smtp_server}:{self._param.smtp_port}")
if self.check_if_canceled("Email processing"):
return
context = smtplib.ssl.create_default_context()
with smtplib.SMTP(self._param.smtp_server, self._param.smtp_port) as server:
server.ehlo()
@ -149,6 +160,10 @@ class Email(ToolBase, ABC):
# Send email
logging.info(f"Sending email to recipients: {recipients}")
if self.check_if_canceled("Email processing"):
return
try:
server.send_message(msg, self._param.email, recipients)
success = True

View File

@ -22,7 +22,7 @@ import pymysql
import psycopg2
import pyodbc
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from api.utils.api_utils import timeout
+from common.connection_utils import timeout
class ExeSQLParam(ToolParamBase):
@ -81,6 +81,8 @@ class ExeSQL(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("ExeSQL processing"):
return
def convert_decimals(obj):
from decimal import Decimal
@ -96,6 +98,9 @@ class ExeSQL(ToolBase, ABC):
if not sql:
raise Exception("SQL for `ExeSQL` MUST not be empty.")
if self.check_if_canceled("ExeSQL processing"):
return
vars = self.get_input_elements_from_text(sql)
args = {}
for k, o in vars.items():
@ -108,6 +113,9 @@ class ExeSQL(ToolBase, ABC):
self.set_input_value(k, args[k])
sql = self.string_format(sql, args)
if self.check_if_canceled("ExeSQL processing"):
return
sqls = sql.split(";")
if self._param.db_type in ["mysql", "mariadb"]:
db = pymysql.connect(db=self._param.database, user=self._param.username, host=self._param.host,
@ -181,6 +189,10 @@ class ExeSQL(ToolBase, ABC):
sql_res = []
formalized_content = []
for single_sql in sqls:
if self.check_if_canceled("ExeSQL processing"):
ibm_db.close(conn)
return
single_sql = single_sql.replace("```", "").strip()
if not single_sql:
continue
@ -190,6 +202,9 @@ class ExeSQL(ToolBase, ABC):
rows = []
row = ibm_db.fetch_assoc(stmt)
while row and len(rows) < self._param.max_records:
if self.check_if_canceled("ExeSQL processing"):
ibm_db.close(conn)
return
rows.append(row)
row = ibm_db.fetch_assoc(stmt)
@ -220,6 +235,11 @@ class ExeSQL(ToolBase, ABC):
sql_res = []
formalized_content = []
for single_sql in sqls:
if self.check_if_canceled("ExeSQL processing"):
cursor.close()
db.close()
return
single_sql = single_sql.replace('```','')
if not single_sql:
continue
@ -244,6 +264,9 @@ class ExeSQL(ToolBase, ABC):
sql_res.append(convert_decimals(single_res.to_dict(orient='records')))
formalized_content.append(single_res.to_markdown(index=False, floatfmt=".6f"))
cursor.close()
db.close()
self.set_output("json", sql_res)
self.set_output("formalized_content", "\n\n".join(formalized_content))
return self.output("formalized_content")
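
In ExeSQL the cancellation checks run inside loops that hold live database handles, so every early return first closes them (ibm_db.close(conn), cursor.close() and db.close()). A sketch of the same guarantee via contextlib.closing, using sqlite3 purely for illustration; the component itself drives pymysql, psycopg2, pyodbc, and ibm_db.

import sqlite3
from contextlib import closing

def run_statements(sqls, is_canceled, db_path=":memory:"):
    results = []
    # closing() releases the handles on every exit path, which is what the
    # explicit close() calls before each early return achieve above.
    with closing(sqlite3.connect(db_path)) as db, closing(db.cursor()) as cursor:
        for single_sql in sqls:
            if is_canceled():
                return None  # handles closed by the with-block
            single_sql = single_sql.replace("```", "").strip()
            if not single_sql:
                continue
            cursor.execute(single_sql)
            results.append(cursor.fetchall())
    return results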

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
import requests
from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class GitHubParam(ToolParamBase):
@ -59,17 +59,27 @@ class GitHub(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("GitHub processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("GitHub processing"):
return
try:
url = 'https://api.github.com/search/repositories?q=' + kwargs["query"] + '&sort=stars&order=desc&per_page=' + str(
self._param.top_n)
headers = {"Content-Type": "application/vnd.github+json", "X-GitHub-Api-Version": '2022-11-28'}
response = requests.get(url=url, headers=headers).json()
if self.check_if_canceled("GitHub processing"):
return
self._retrieve_chunks(response['items'],
get_title=lambda r: r["name"],
get_url=lambda r: r["html_url"],
@ -77,6 +87,9 @@ class GitHub(ToolBase, ABC):
self.set_output("json", response['items'])
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("GitHub processing"):
return
last_e = e
logging.exception(f"GitHub error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
from serpapi import GoogleSearch
from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class GoogleParam(ToolParamBase):
@ -118,6 +118,9 @@ class Google(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Google processing"):
return
if not kwargs.get("q"):
self.set_output("formalized_content", "")
return ""
@ -132,8 +135,15 @@ class Google(ToolBase, ABC):
}
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("Google processing"):
return
try:
search = GoogleSearch(params).get_dict()
if self.check_if_canceled("Google processing"):
return
self._retrieve_chunks(search["organic_results"],
get_title=lambda r: r["title"],
get_url=lambda r: r["link"],
@ -142,6 +152,9 @@ class Google(ToolBase, ABC):
self.set_output("json", search["organic_results"])
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("Google processing"):
return
last_e = e
logging.exception(f"Google error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
from scholarly import scholarly
from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class GoogleScholarParam(ToolParamBase):
@ -65,15 +65,25 @@ class GoogleScholar(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("GoogleScholar processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("GoogleScholar processing"):
return
try:
scholar_client = scholarly.search_pubs(kwargs["query"], patents=self._param.patents, year_low=self._param.year_low,
year_high=self._param.year_high, sort_by=self._param.sort_by)
if self.check_if_canceled("GoogleScholar processing"):
return
self._retrieve_chunks(scholar_client,
get_title=lambda r: r['bib']['title'],
get_url=lambda r: r["pub_url"],
@ -82,6 +92,9 @@ class GoogleScholar(ToolBase, ABC):
self.set_output("json", list(scholar_client))
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("GoogleScholar processing"):
return
last_e = e
logging.exception(f"GoogleScholar error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -50,6 +50,9 @@ class Jin10(ComponentBase, ABC):
component_name = "Jin10"
def _run(self, history, **kwargs):
if self.check_if_canceled("Jin10 processing"):
return
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
@ -58,6 +61,9 @@ class Jin10(ComponentBase, ABC):
jin10_res = []
headers = {'secret-key': self._param.secret_key}
try:
if self.check_if_canceled("Jin10 processing"):
return
if self._param.type == "flash":
params = {
'category': self._param.flash_type,
@ -69,6 +75,8 @@ class Jin10(ComponentBase, ABC):
headers=headers, data=json.dumps(params))
response = response.json()
for i in response['data']:
if self.check_if_canceled("Jin10 processing"):
return
jin10_res.append({"content": i['data']['content']})
if self._param.type == "calendar":
params = {
@ -79,6 +87,8 @@ class Jin10(ComponentBase, ABC):
headers=headers, data=json.dumps(params))
response = response.json()
if self.check_if_canceled("Jin10 processing"):
return
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
if self._param.type == "symbols":
params = {
@ -90,8 +100,12 @@ class Jin10(ComponentBase, ABC):
url='https://open-data-api.jin10.com/data-api/' + self._param.symbols_datatype + '?type=' + self._param.symbols_type,
headers=headers, data=json.dumps(params))
response = response.json()
if self.check_if_canceled("Jin10 processing"):
return
if self._param.symbols_datatype == "symbols":
for i in response['data']:
if self.check_if_canceled("Jin10 processing"):
return
i['Commodity Code'] = i['c']
i['Stock Exchange'] = i['e']
i['Commodity Name'] = i['n']
@ -99,6 +113,8 @@ class Jin10(ComponentBase, ABC):
del i['c'], i['e'], i['n'], i['t']
if self._param.symbols_datatype == "quotes":
for i in response['data']:
if self.check_if_canceled("Jin10 processing"):
return
i['Selling Price'] = i['a']
i['Buying Price'] = i['b']
i['Commodity Code'] = i['c']
@ -120,8 +136,12 @@ class Jin10(ComponentBase, ABC):
url='https://open-data-api.jin10.com/data-api/news',
headers=headers, data=json.dumps(params))
response = response.json()
if self.check_if_canceled("Jin10 processing"):
return
jin10_res.append({"content": pd.DataFrame(response['data']).to_markdown()})
except Exception as e:
if self.check_if_canceled("Jin10 processing"):
return
return Jin10.be_output("**ERROR**: " + str(e))
if not jin10_res:

View File

@ -21,7 +21,7 @@ from Bio import Entrez
import re
import xml.etree.ElementTree as ET
from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class PubMedParam(ToolParamBase):
@ -71,23 +71,40 @@ class PubMed(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("PubMed processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("PubMed processing"):
return
try:
Entrez.email = self._param.email
pubmedids = Entrez.read(Entrez.esearch(db='pubmed', retmax=self._param.top_n, term=kwargs["query"]))['IdList']
if self.check_if_canceled("PubMed processing"):
return
pubmedcnt = ET.fromstring(re.sub(r'<(/?)b>|<(/?)i>', '', Entrez.efetch(db='pubmed', id=",".join(pubmedids),
retmode="xml").read().decode("utf-8")))
if self.check_if_canceled("PubMed processing"):
return
self._retrieve_chunks(pubmedcnt.findall("PubmedArticle"),
get_title=lambda child: child.find("MedlineCitation").find("Article").find("ArticleTitle").text,
get_url=lambda child: "https://pubmed.ncbi.nlm.nih.gov/" + child.find("MedlineCitation").find("PMID").text,
get_content=lambda child: self._format_pubmed_content(child),)
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("PubMed processing"):
return
last_e = e
logging.exception(f"PubMed error: {e}")
time.sleep(self._param.delay_after_error)
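
For reference, the esearch-then-efetch round trip above, reduced to a standalone sketch with Biopython. The email is a placeholder (NCBI requires a real contact address), and the b/i-tag stripping mirrors the component's regex.

import re
import xml.etree.ElementTree as ET
from Bio import Entrez  # pip install biopython

def fetch_pubmed(query: str, top_n: int = 5, email: str = "you@example.com"):
    Entrez.email = email
    ids = Entrez.read(Entrez.esearch(db="pubmed", retmax=top_n, term=query))["IdList"]
    raw = Entrez.efetch(db="pubmed", id=",".join(ids), retmode="xml").read().decode("utf-8")
    root = ET.fromstring(re.sub(r"<(/?)b>|<(/?)i>", "", raw))  # strip stray markup
    for article in root.findall("PubmedArticle"):
        citation = article.find("MedlineCitation")
        pmid = citation.find("PMID").text
        title = citation.find("Article").find("ArticleTitle").text
        yield title, f"https://pubmed.ncbi.nlm.nih.gov/{pmid}"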

View File

@ -58,12 +58,18 @@ class QWeather(ComponentBase, ABC):
component_name = "QWeather"
def _run(self, history, **kwargs):
if self.check_if_canceled("Qweather processing"):
return
ans = self.get_input()
ans = "".join(ans["content"]) if "content" in ans else ""
if not ans:
return QWeather.be_output("")
try:
if self.check_if_canceled("Qweather processing"):
return
response = requests.get(
url="https://geoapi.qweather.com/v2/city/lookup?location=" + ans + "&key=" + self._param.web_apikey).json()
if response["code"] == "200":
@ -71,16 +77,23 @@ class QWeather(ComponentBase, ABC):
else:
return QWeather.be_output("**Error**" + self._param.error_code[response["code"]])
if self.check_if_canceled("Qweather processing"):
return
base_url = "https://api.qweather.com/v7/" if self._param.user_type == 'paid' else "https://devapi.qweather.com/v7/"
if self._param.type == "weather":
url = base_url + "weather/" + self._param.time_period + "?location=" + location_id + "&key=" + self._param.web_apikey + "&lang=" + self._param.lang
response = requests.get(url=url).json()
if self.check_if_canceled("Qweather processing"):
return
if response["code"] == "200":
if self._param.time_period == "now":
return QWeather.be_output(str(response["now"]))
else:
qweather_res = [{"content": str(i) + "\n"} for i in response["daily"]]
if self.check_if_canceled("Qweather processing"):
return
if not qweather_res:
return QWeather.be_output("")
@ -92,6 +105,8 @@ class QWeather(ComponentBase, ABC):
elif self._param.type == "indices":
url = base_url + "indices/1d?type=0&location=" + location_id + "&key=" + self._param.web_apikey + "&lang=" + self._param.lang
response = requests.get(url=url).json()
if self.check_if_canceled("Qweather processing"):
return
if response["code"] == "200":
indices_res = response["daily"][0]["date"] + "\n" + "\n".join(
[i["name"] + ": " + i["category"] + ", " + i["text"] for i in response["daily"]])
@ -103,9 +118,13 @@ class QWeather(ComponentBase, ABC):
elif self._param.type == "airquality":
url = base_url + "air/now?location=" + location_id + "&key=" + self._param.web_apikey + "&lang=" + self._param.lang
response = requests.get(url=url).json()
if self.check_if_canceled("Qweather processing"):
return
if response["code"] == "200":
return QWeather.be_output(str(response["now"]))
else:
return QWeather.be_output("**Error**" + self._param.error_code[response["code"]])
except Exception as e:
if self.check_if_canceled("Qweather processing"):
return
return QWeather.be_output("**Error**" + str(e))

View File

@ -13,17 +13,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from functools import partial
import json
import os
import re
from abc import ABC
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api.db import LLMType
from common.constants import LLMType
from api.db.services.document_service import DocumentService
from api.db.services.dialog_service import meta_filter
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
from api.utils.api_utils import timeout
from common import settings
from common.connection_utils import timeout
from rag.app.tag import label_question
from rag.prompts.generator import cross_languages, kb_prompt, gen_meta_filter
@ -80,8 +82,12 @@ class Retrieval(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Retrieval processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", self._param.empty_response)
return
kb_ids: list[str] = []
for id in self._param.kb_ids:
@ -120,7 +126,7 @@ class Retrieval(ToolBase, ABC):
vars = self.get_input_elements_from_text(kwargs["query"])
vars = {k:o["value"] for k,o in vars.items()}
query = self.string_format(kwargs["query"], vars)
doc_ids=[]
if self._param.meta_data_filter!={}:
metas = DocumentService.get_meta_by_kbs(kb_ids)
@ -131,7 +137,35 @@ class Retrieval(ToolBase, ABC):
if not doc_ids:
doc_ids = None
elif self._param.meta_data_filter.get("method") == "manual":
doc_ids.extend(meta_filter(metas, self._param.meta_data_filter["manual"]))
filters=self._param.meta_data_filter["manual"]
for flt in filters:
pat = re.compile(self.variable_ref_patt)
s = flt["value"]
out_parts = []
last = 0
for m in pat.finditer(s):
out_parts.append(s[last:m.start()])
key = m.group(1)
v = self._canvas.get_variable_value(key)
if v is None:
rep = ""
elif isinstance(v, partial):
buf = []
for chunk in v():
buf.append(chunk)
rep = "".join(buf)
elif isinstance(v, str):
rep = v
else:
rep = json.dumps(v, ensure_ascii=False)
out_parts.append(rep)
last = m.end()
out_parts.append(s[last:])
flt["value"] = "".join(out_parts)
doc_ids.extend(meta_filter(metas, filters))
if not doc_ids:
doc_ids = None
@ -154,9 +188,14 @@ class Retrieval(ToolBase, ABC):
rerank_mdl=rerank_mdl,
rank_feature=label_question(query, kbs),
)
if self.check_if_canceled("Retrieval processing"):
return
if self._param.toc_enhance:
chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT)
cks = settings.retriever.retrieval_by_toc(query, kbinfos["chunks"], [kb.tenant_id for kb in kbs], chat_mdl, self._param.top_n)
if self.check_if_canceled("Retrieval processing"):
return
if cks:
kbinfos["chunks"] = cks
if self._param.use_kg:
@ -165,6 +204,8 @@ class Retrieval(ToolBase, ABC):
kb_ids,
embd_mdl,
LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT))
if self.check_if_canceled("Retrieval processing"):
return
if ck["content_with_weight"]:
kbinfos["chunks"].insert(0, ck)
else:
@ -172,6 +213,8 @@ class Retrieval(ToolBase, ABC):
if self._param.use_kg and kbs:
ck = settings.kg_retriever.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
if self.check_if_canceled("Retrieval processing"):
return
if ck["content_with_weight"]:
ck["content"] = ck["content_with_weight"]
del ck["content_with_weight"]
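
The new manual meta-filter branch resolves {variable} references in filter values against live canvas values before calling meta_filter: None becomes an empty string, a functools.partial (a streamed value) is drained chunk by chunk, strings pass through, and anything else is JSON-encoded. A standalone sketch of that substitution; the regex is illustrative, since the component's own variable_ref_patt is not shown in this hunk.

import json
import re
from functools import partial

VAR_REF = re.compile(r"\{([A-Za-z_][\w.@-]*)\}")  # illustrative pattern only

def substitute_refs(text: str, get_value):
    out_parts, last = [], 0
    for m in VAR_REF.finditer(text):
        out_parts.append(text[last:m.start()])
        v = get_value(m.group(1))
        if v is None:
            rep = ""                                 # unresolved -> empty string
        elif isinstance(v, partial):
            rep = "".join(v())                       # streamed value -> drain the chunks
        elif isinstance(v, str):
            rep = v
        else:
            rep = json.dumps(v, ensure_ascii=False)  # structured -> JSON text
        out_parts.append(rep)
        last = m.end()
    out_parts.append(text[last:])
    return "".join(out_parts)

# e.g. substitute_refs("dept = {sys.query}", {"sys.query": "sales"}.get)
# -> "dept = sales"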

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
import requests
from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class SearXNGParam(ToolParamBase):
@ -79,6 +79,9 @@ class SearXNG(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("SearXNG processing"):
return
# Gracefully handle try-run without inputs
query = kwargs.get("query")
if not query or not isinstance(query, str) or not query.strip():
@ -93,6 +96,9 @@ class SearXNG(ToolBase, ABC):
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("SearXNG processing"):
return
try:
search_params = {
'q': query,
@ -110,6 +116,9 @@ class SearXNG(ToolBase, ABC):
)
response.raise_for_status()
if self.check_if_canceled("SearXNG processing"):
return
data = response.json()
if not data or not isinstance(data, dict):
@ -121,6 +130,9 @@ class SearXNG(ToolBase, ABC):
results = results[:self._param.top_n]
if self.check_if_canceled("SearXNG processing"):
return
self._retrieve_chunks(results,
get_title=lambda r: r.get("title", ""),
get_url=lambda r: r.get("url", ""),
@ -130,10 +142,16 @@ class SearXNG(ToolBase, ABC):
return self.output("formalized_content")
except requests.RequestException as e:
if self.check_if_canceled("SearXNG processing"):
return
last_e = f"Network error: {e}"
logging.exception(f"SearXNG network error: {e}")
time.sleep(self._param.delay_after_error)
except Exception as e:
if self.check_if_canceled("SearXNG processing"):
return
last_e = str(e)
logging.exception(f"SearXNG error: {e}")
time.sleep(self._param.delay_after_error)
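
SearXNG's _invoke() stands apart in two small ways: an empty query is a graceful no-op (useful for try-runs), and requests.RequestException is retried separately from generic errors. The request/validation core, as a sketch assuming a self-hosted instance with the JSON API enabled:

import requests

def searxng_query(base_url: str, query: str, top_n: int = 10, timeout: int = 10):
    # Graceful no-op for try-runs without inputs, as in _invoke() above.
    if not query or not isinstance(query, str) or not query.strip():
        return []
    response = requests.get(f"{base_url}/search",
                            params={"q": query, "format": "json"},
                            timeout=timeout)
    response.raise_for_status()  # network failures surface as RequestException
    data = response.json()
    if not data or not isinstance(data, dict):
        raise ValueError("unexpected SearXNG payload")
    return data.get("results", [])[:top_n]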

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
from tavily import TavilyClient
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class TavilySearchParam(ToolParamBase):
@ -103,6 +103,9 @@ class TavilySearch(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("TavilySearch processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
@ -113,10 +116,16 @@ class TavilySearch(ToolBase, ABC):
if fld not in kwargs:
kwargs[fld] = getattr(self._param, fld)
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("TavilySearch processing"):
return
try:
kwargs["include_images"] = False
kwargs["include_raw_content"] = False
res = self.tavily_client.search(**kwargs)
if self.check_if_canceled("TavilySearch processing"):
return
self._retrieve_chunks(res["results"],
get_title=lambda r: r["title"],
get_url=lambda r: r["url"],
@ -125,6 +134,9 @@ class TavilySearch(ToolBase, ABC):
self.set_output("json", res["results"])
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("TavilySearch processing"):
return
last_e = e
logging.exception(f"Tavily error: {e}")
time.sleep(self._param.delay_after_error)
@ -201,6 +213,9 @@ class TavilyExtract(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("TavilyExtract processing"):
return
self.tavily_client = TavilyClient(api_key=self._param.api_key)
last_e = None
for fld in ["urls", "extract_depth", "format"]:
@ -209,12 +224,21 @@ class TavilyExtract(ToolBase, ABC):
if kwargs.get("urls") and isinstance(kwargs["urls"], str):
kwargs["urls"] = kwargs["urls"].split(",")
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("TavilyExtract processing"):
return
try:
kwargs["include_images"] = False
res = self.tavily_client.extract(**kwargs)
if self.check_if_canceled("TavilyExtract processing"):
return
self.set_output("json", res["results"])
return self.output("json")
except Exception as e:
if self.check_if_canceled("TavilyExtract processing"):
return
last_e = e
logging.exception(f"Tavily error: {e}")
if last_e:

View File

@ -43,12 +43,18 @@ class TuShare(ComponentBase, ABC):
component_name = "TuShare"
def _run(self, history, **kwargs):
if self.check_if_canceled("TuShare processing"):
return
ans = self.get_input()
ans = ",".join(ans["content"]) if "content" in ans else ""
if not ans:
return TuShare.be_output("")
try:
if self.check_if_canceled("TuShare processing"):
return
tus_res = []
params = {
"api_name": "news",
@ -58,12 +64,18 @@ class TuShare(ComponentBase, ABC):
}
response = requests.post(url="http://api.tushare.pro", data=json.dumps(params).encode('utf-8'))
response = response.json()
if self.check_if_canceled("TuShare processing"):
return
if response['code'] != 0:
return TuShare.be_output(response['msg'])
df = pd.DataFrame(response['data']['items'])
df.columns = response['data']['fields']
if self.check_if_canceled("TuShare processing"):
return
tus_res.append({"content": (df[df['content'].str.contains(self._param.keyword, case=False)]).to_markdown()})
except Exception as e:
if self.check_if_canceled("TuShare processing"):
return
return TuShare.be_output("**ERROR**: " + str(e))
if not tus_res:

View File

@ -21,7 +21,7 @@ import pandas as pd
import pywencai
from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class WenCaiParam(ToolParamBase):
@ -70,19 +70,31 @@ class WenCai(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12)))
def _invoke(self, **kwargs):
if self.check_if_canceled("WenCai processing"):
return
if not kwargs.get("query"):
self.set_output("report", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("WenCai processing"):
return
try:
wencai_res = []
res = pywencai.get(query=kwargs["query"], query_type=self._param.query_type, perpage=self._param.top_n)
if self.check_if_canceled("WenCai processing"):
return
if isinstance(res, pd.DataFrame):
wencai_res.append(res.to_markdown())
elif isinstance(res, dict):
for item in res.items():
if self.check_if_canceled("WenCai processing"):
return
if isinstance(item[1], list):
wencai_res.append(item[0] + "\n" + pd.DataFrame(item[1]).to_markdown())
elif isinstance(item[1], str):
@ -100,6 +112,9 @@ class WenCai(ToolBase, ABC):
self.set_output("report", "\n\n".join(wencai_res))
return self.output("report")
except Exception as e:
if self.check_if_canceled("WenCai processing"):
return
last_e = e
logging.exception(f"WenCai error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -19,7 +19,7 @@ import time
from abc import ABC
import wikipedia
from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class WikipediaParam(ToolParamBase):
@ -66,17 +66,26 @@ class Wikipedia(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("Wikipedia processing"):
return
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("Wikipedia processing"):
return
try:
wikipedia.set_lang(self._param.language)
wiki_engine = wikipedia
pages = []
for p in wiki_engine.search(kwargs["query"], results=self._param.top_n):
if self.check_if_canceled("Wikipedia processing"):
return
try:
pages.append(wikipedia.page(p))
except Exception:
@ -87,6 +96,9 @@ class Wikipedia(ToolBase, ABC):
get_content=lambda r: r.summary)
return self.output("formalized_content")
except Exception as e:
if self.check_if_canceled("Wikipedia processing"):
return
last_e = e
logging.exception(f"Wikipedia error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -20,7 +20,7 @@ from abc import ABC
import pandas as pd
import yfinance as yf
from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
from api.utils.api_utils import timeout
from common.connection_utils import timeout
class YahooFinanceParam(ToolParamBase):
@ -74,15 +74,24 @@ class YahooFinance(ToolBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("YahooFinance processing"):
return
if not kwargs.get("stock_code"):
self.set_output("report", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
if self.check_if_canceled("YahooFinance processing"):
return
yohoo_res = []
try:
msft = yf.Ticker(kwargs["stock_code"])
if self.check_if_canceled("YahooFinance processing"):
return
if self._param.info:
yohoo_res.append("# Information:\n" + pd.Series(msft.info).to_markdown() + "\n")
if self._param.history:
@ -100,6 +109,9 @@ class YahooFinance(ToolBase, ABC):
self.set_output("report", "\n\n".join(yohoo_res))
return self.output("report")
except Exception as e:
if self.check_if_canceled("YahooFinance processing"):
return
last_e = e
logging.exception(f"YahooFinance error: {e}")
time.sleep(self._param.delay_after_error)

View File

@ -24,7 +24,7 @@ from flask_cors import CORS
from flasgger import Swagger
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api.db import StatusEnum
from common.constants import StatusEnum
from api.db.db_models import close_connection
from api.db.services import UserService
from api.utils.json_encode import CustomJSONEncoder
@ -33,7 +33,7 @@ from api.utils import commands
from flask_mail import Mail
from flask_session import Session
from flask_login import LoginManager
from api import settings
from common import settings
from api.utils.api_utils import server_error_response
from api.constants import API_VERSION

View File

@ -21,7 +21,7 @@ from flask import request, Response
from api.db.services.llm_service import LLMBundle
from flask_login import login_required, current_user
from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileType, LLMType, ParserType, FileSource
from api.db import VALID_FILE_TYPES, FileType
from api.db.db_models import APIToken, Task, File
from api.db.services import duplicate_name
from api.db.services.api_service import APITokenService, API4ConversationService
@ -32,21 +32,21 @@ from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.task_service import queue_tasks, TaskService
from api.db.services.user_service import UserTenantService
from api import settings
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.constants import RetCode, VALID_TASK_STATUS, LLMType, ParserType, FileSource
from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request, \
generate_confirmation_token
from api.utils.file_utils import filename_type, thumbnail
from rag.app.tag import label_question
from rag.prompts.generator import keyword_extraction
from rag.utils.storage_factory import STORAGE_IMPL
from common.time_utils import current_timestamp, datetime_format
from api.db.services.canvas_service import UserCanvasService
from agent.canvas import Canvas
from functools import partial
from pathlib import Path
from common import settings
@manager.route('/new_token', methods=['POST']) # noqa: F821
@ -145,7 +145,7 @@ def set_conversation():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
try:
if objs[0].source == "agent":
e, cvs = UserCanvasService.get_by_id(objs[0].dialog_id)
@ -186,7 +186,7 @@ def completion():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
req = request.json
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
if not e:
@ -352,7 +352,7 @@ def get_conversation(conversation_id):
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
try:
e, conv = API4ConversationService.get_by_id(conversation_id)
@ -362,7 +362,7 @@ def get_conversation(conversation_id):
conv = conv.to_dict()
if token != APIToken.query(dialog_id=conv['dialog_id'])[0].token:
return get_json_result(data=False, message='Authentication error: API key is invalid for this conversation_id!"',
code=settings.RetCode.AUTHENTICATION_ERROR)
code=RetCode.AUTHENTICATION_ERROR)
for referenct_i in conv['reference']:
if referenct_i is None or len(referenct_i) == 0:
@ -383,7 +383,7 @@ def upload():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
kb_name = request.form.get("kb_name").strip()
tenant_id = objs[0].tenant_id
@ -399,12 +399,12 @@ def upload():
if 'file' not in request.files:
return get_json_result(
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=RetCode.ARGUMENT_ERROR)
file = request.files['file']
if file.filename == '':
return get_json_result(
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=RetCode.ARGUMENT_ERROR)
root_folder = FileService.get_root_folder(tenant_id)
pf_id = root_folder["id"]
@ -427,10 +427,10 @@ def upload():
message="This type of file has not been supported yet!")
location = filename
while STORAGE_IMPL.obj_exist(kb_id, location):
while settings.STORAGE_IMPL.obj_exist(kb_id, location):
location += "_"
blob = request.files['file'].read()
STORAGE_IMPL.put(kb_id, location, blob)
settings.STORAGE_IMPL.put(kb_id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": kb.id,
@ -466,10 +466,7 @@ def upload():
if "run" in form_data.keys():
if request.form.get("run").strip() == "1":
try:
info = {"run": 1, "progress": 0}
info["progress_msg"] = ""
info["chunk_num"] = 0
info["token_num"] = 0
info = {"run": 1, "progress": 0, "progress_msg": "", "chunk_num": 0, "token_num": 0}
DocumentService.update_by_id(doc["id"], info)
# if str(req["run"]) == TaskStatus.CANCEL.value:
tenant_id = DocumentService.get_tenant_id(doc["id"])
@ -496,17 +493,17 @@ def upload_parse():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
if 'file' not in request.files:
return get_json_result(
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=RetCode.ARGUMENT_ERROR)
doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, objs[0].tenant_id)
return get_json_result(data=doc_ids)
@ -519,7 +516,7 @@ def list_chunks():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
req = request.json
@ -559,7 +556,7 @@ def get_chunk(chunk_id):
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
try:
tenant_id = objs[0].tenant_id
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
@ -584,7 +581,7 @@ def list_kb_docs():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
req = request.json
tenant_id = objs[0].tenant_id
@ -637,7 +634,7 @@ def docinfos():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
req = request.json
doc_ids = req["doc_ids"]
docs = DocumentService.get_by_ids(doc_ids)
@ -651,7 +648,7 @@ def document_rm():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
tenant_id = objs[0].tenant_id
req = request.json
@ -698,12 +695,12 @@ def document_rm():
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc_id)
STORAGE_IMPL.rm(b, n)
settings.STORAGE_IMPL.rm(b, n)
except Exception as e:
errors += str(e)
if errors:
return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
return get_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
return get_json_result(data=True)
@ -718,7 +715,7 @@ def completion_faq():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
if not e:
@ -726,8 +723,7 @@ def completion_faq():
if "quote" not in req:
req["quote"] = True
msg = []
msg.append({"role": "user", "content": req["word"]})
msg = [{"role": "user", "content": req["word"]}]
if not msg[-1].get("id"):
msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
@ -791,7 +787,7 @@ def completion_faq():
if ans["reference"]["chunks"][chunk_idx]["img_id"]:
try:
bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
response = STORAGE_IMPL.get(bkt, nm)
response = settings.STORAGE_IMPL.get(bkt, nm)
data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
data.append(data_type_picture)
break
@ -836,7 +832,7 @@ def completion_faq():
if ans["reference"]["chunks"][chunk_idx]["img_id"]:
try:
bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
response = STORAGE_IMPL.get(bkt, nm)
response = settings.STORAGE_IMPL.get(bkt, nm)
data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
data.append(data_type_picture)
break
@ -857,7 +853,7 @@ def retrieval():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
req = request.json
kb_ids = req.get("kb_id", [])
@ -876,7 +872,7 @@ def retrieval():
if len(embd_nms) != 1:
return get_json_result(
data=False, message='Knowledge bases use different embedding models or does not exist."',
code=settings.RetCode.AUTHENTICATION_ERROR)
code=RetCode.AUTHENTICATION_ERROR)
embd_mdl = LLMBundle(kbs[0].tenant_id, LLMType.EMBEDDING, llm_name=kbs[0].embd_id)
rerank_mdl = None
@ -895,5 +891,5 @@ def retrieval():
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
return server_error_response(e)
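
Every route in this file repeats the same guard: query the APIToken, bail out with RetCode.AUTHENTICATION_ERROR (now imported from common.constants rather than reached through settings) when no row matches. A hypothetical decorator that could factor it out; the token store and error-code value here are toy stand-ins, not the repo's actual implementation.

from functools import wraps
from flask import jsonify, request

AUTHENTICATION_ERROR = 109  # illustrative; the real value lives in common.constants.RetCode
_VALID_TOKENS = {"demo-token": {"tenant_id": "t1"}}  # toy stand-in for APIToken.query(token=...)

def require_api_token(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        obj = _VALID_TOKENS.get(token)
        if obj is None:
            return jsonify({"code": AUTHENTICATION_ERROR,
                            "message": "Authentication error: API key is invalid!"})
        return view(obj, *args, **kwargs)  # hand the token object to the view
    return wrapper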

View File

@ -34,7 +34,7 @@ class GithubOAuthClient(OAuthClient):
def fetch_user_info(self, access_token, **kwargs):
"""
Fetch github user info.
Fetch GitHub user info.
"""
user_info = {}
try:

View File

@ -43,7 +43,8 @@ class OIDCClient(OAuthClient):
self.jwks_uri = config['jwks_uri']
def _load_oidc_metadata(self, issuer):
@staticmethod
def _load_oidc_metadata(issuer):
"""
Load OIDC metadata from `/.well-known/openid-configuration`.
"""

View File

@ -25,7 +25,6 @@ from flask import request, Response
from flask_login import login_required, current_user
from agent.component import LLM
from api import settings
from api.db import CanvasCategory, FileType
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService, API4ConversationService
from api.db.services.document_service import DocumentService
@ -34,8 +33,8 @@ from api.db.services.pipeline_operation_log_service import PipelineOperationLogS
from api.db.services.task_service import queue_dataflow, CANVAS_DEBUG_DOC_ID, TaskService
from api.db.services.user_service import TenantService
from api.db.services.user_canvas_version import UserCanvasVersionService
from api.settings import RetCode
from api.utils import get_uuid
from common.constants import RetCode
from common.misc_utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
from agent.canvas import Canvas
from peewee import MySQLDatabase, PostgresqlDatabase
@ -46,6 +45,7 @@ from api.utils.file_utils import filename_type, read_potential_broken_pdf
from rag.flow.pipeline import Pipeline
from rag.nlp import search
from rag.utils.redis_conn import REDIS_CONN
from common import settings
@manager.route('/templates', methods=['GET']) # noqa: F821
@ -156,7 +156,7 @@ def run():
return get_json_result(data={"message_id": task_id})
try:
canvas = Canvas(cvs.dsl, current_user.id, req["id"])
canvas = Canvas(cvs.dsl, current_user.id)
except Exception as e:
return server_error_response(e)
@ -168,8 +168,10 @@ def run():
cvs.dsl = json.loads(str(canvas))
UserCanvasService.update_by_id(req["id"], cvs.to_dict())
except Exception as e:
logging.exception(e)
canvas.cancel_task()
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": False}, ensure_ascii=False) + "\n\n"
resp = Response(sse(), mimetype="text/event-stream")
@ -177,6 +179,7 @@ def run():
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
resp.call_on_close(lambda: canvas.cancel_task())
return resp
@ -410,27 +413,27 @@ def test_db_connect():
ibm_db.close(conn)
return get_json_result(data="Database Connection Successful!")
elif req["db_type"] == 'trino':
def _parse_catalog_schema(db: str):
if not db:
def _parse_catalog_schema(db_name: str):
if not db_name:
return None, None
if "." in db:
c, s = db.split(".", 1)
elif "/" in db:
c, s = db.split("/", 1)
if "." in db_name:
catalog_name, schema_name = db_name.split(".", 1)
elif "/" in db_name:
catalog_name, schema_name = db_name.split("/", 1)
else:
c, s = db, "default"
return c, s
catalog_name, schema_name = db_name, "default"
return catalog_name, schema_name
try:
import trino
import os
from trino.auth import BasicAuthentication
except Exception:
return server_error_response("Missing dependency 'trino'. Please install: pip install trino")
except Exception as e:
return server_error_response(f"Missing dependency 'trino'. Please install: pip install trino, detail: {e}")
catalog, schema = _parse_catalog_schema(req["database"])
if not catalog:
return server_error_response("For Trino, 'database' must be 'catalog.schema' or at least 'catalog'.")
http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
auth = None
@ -479,7 +482,6 @@ def getlistversion(canvas_id):
@login_required
def getversion( version_id):
try:
e, version = UserCanvasVersionService.get_by_id(version_id)
if version:
return get_json_result(data=version.to_dict())
@ -546,11 +548,11 @@ def trace():
cvs_id = request.args.get("canvas_id")
msg_id = request.args.get("message_id")
try:
bin = REDIS_CONN.get(f"{cvs_id}-{msg_id}-logs")
if not bin:
binary = REDIS_CONN.get(f"{cvs_id}-{msg_id}-logs")
if not binary:
return get_json_result(data={})
return get_json_result(data=json.loads(bin.encode("utf-8")))
return get_json_result(data=json.loads(binary.encode("utf-8")))
except Exception as e:
logging.exception(e)
@ -604,4 +606,4 @@ def download():
id = request.args.get("id")
created_by = request.args.get("created_by")
blob = FileService.get_blob(created_by, id)
return flask.make_response(blob)
return flask.make_response(blob)
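
The run() route now has two cancellation paths: an exception inside the SSE generator cancels the task before the 500 event is emitted, and call_on_close() cancels it when the client disconnects mid-stream. A sketch of that lifecycle, with run_generator standing in for the route's sse() body:

import json
from flask import Response

def stream_canvas(canvas, run_generator):
    def sse():
        try:
            yield from run_generator()  # stands in for the route's sse() body
        except Exception as e:
            canvas.cancel_task()        # new: stop the run before reporting
            yield "data:" + json.dumps({"code": 500, "message": str(e),
                                        "data": False}, ensure_ascii=False) + "\n\n"

    resp = Response(sse(), mimetype="text/event-stream")
    resp.headers.add_header("Connection", "keep-alive")
    resp.headers.add_header("X-Accel-Buffering", "no")
    resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
    # New: if the client disconnects mid-stream, cancel the task instead of
    # letting the agent keep running against a dead socket.
    resp.call_on_close(lambda: canvas.cancel_task())
    return resp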

View File

@ -21,8 +21,6 @@ import xxhash
from flask import request
from flask_login import current_user, login_required
from api import settings
from api.db import LLMType, ParserType
from api.db.services.dialog_service import meta_filter
from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
@ -34,8 +32,9 @@ from rag.app.qa import beAdoc, rmPrefix
from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts.generator import gen_meta_filter, cross_languages, keyword_extraction
from rag.settings import PAGERANK_FLD
from common.string_utils import remove_redundant_spaces
from common.constants import RetCode, LLMType, ParserType, PAGERANK_FLD
from common import settings
@manager.route('/list', methods=['POST']) # noqa: F821
@ -83,7 +82,7 @@ def list_chunk():
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message='No chunk found!',
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
return server_error_response(e)
@ -115,7 +114,7 @@ def get():
except Exception as e:
if str(e).find("NotFoundError") >= 0:
return get_json_result(data=False, message='Chunk not found!',
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
return server_error_response(e)
@ -200,7 +199,6 @@ def switch():
@login_required
@validate_request("chunk_ids", "doc_id")
def rm():
from rag.utils.storage_factory import STORAGE_IMPL
req = request.json
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
@ -214,8 +212,8 @@ def rm():
chunk_number = len(deleted_chunk_ids)
DocumentService.decrement_chunk_num(doc.id, doc.kb_id, 1, chunk_number, 0)
for cid in deleted_chunk_ids:
if STORAGE_IMPL.obj_exist(doc.kb_id, cid):
STORAGE_IMPL.rm(doc.kb_id, cid)
if settings.STORAGE_IMPL.obj_exist(doc.kb_id, cid):
settings.STORAGE_IMPL.rm(doc.kb_id, cid)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@ -292,7 +290,7 @@ def retrieval_test():
kb_ids = [kb_ids]
if not kb_ids:
return get_json_result(data=False, message='Please specify dataset firstly.',
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
doc_ids = req.get("doc_ids", [])
use_kg = req.get("use_kg", False)
@ -326,7 +324,7 @@ def retrieval_test():
else:
return get_json_result(
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
@ -371,7 +369,7 @@ def retrieval_test():
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
return server_error_response(e)
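
These hunks (like the ones in api_app above) swap the module-level STORAGE_IMPL from rag.utils.storage_factory for settings.STORAGE_IMPL, with the call surface unchanged. A toy in-memory stand-in that makes the (bucket, name) interface explicit; only the four calls visible in the diff are modeled.

class InMemoryStorage:
    # Toy stand-in for settings.STORAGE_IMPL: everything keyed by (bucket, name).
    def __init__(self):
        self._objects = {}

    def obj_exist(self, bucket, name):
        return (bucket, name) in self._objects

    def put(self, bucket, name, blob):
        self._objects[(bucket, name)] = blob

    def get(self, bucket, name):
        return self._objects[(bucket, name)]

    def rm(self, bucket, name):
        self._objects.pop((bucket, name), None)

storage = InMemoryStorage()
storage.put("kb-1", "chunk-1", b"...")
for cid in ["chunk-1", "chunk-2"]:  # same shape as the rm() loop above
    if storage.obj_exist("kb-1", cid):
        storage.rm("kb-1", cid)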

api/apps/connector_app.py Normal file (+295 lines)
View File

@ -0,0 +1,295 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import logging
import time
import uuid
from html import escape
from typing import Any
from flask import make_response, request
from flask_login import current_user, login_required
from google_auth_oauthlib.flow import Flow
from api.db import InputType
from api.db.services.connector_service import ConnectorService, SyncLogsService
from api.utils.api_utils import get_data_error_result, get_json_result, validate_request
from common.constants import RetCode, TaskStatus
from common.data_source.config import GOOGLE_DRIVE_WEB_OAUTH_REDIRECT_URI, DocumentSource
from common.data_source.google_util.constant import GOOGLE_DRIVE_WEB_OAUTH_POPUP_TEMPLATE, GOOGLE_SCOPES
from common.misc_utils import get_uuid
from rag.utils.redis_conn import REDIS_CONN
@manager.route("/set", methods=["POST"]) # noqa: F821
@login_required
def set_connector():
req = request.json
if req.get("id"):
conn = {fld: req[fld] for fld in ["prune_freq", "refresh_freq", "config", "timeout_secs"] if fld in req}
ConnectorService.update_by_id(req["id"], conn)
else:
req["id"] = get_uuid()
conn = {
"id": req["id"],
"tenant_id": current_user.id,
"name": req["name"],
"source": req["source"],
"input_type": InputType.POLL,
"config": req["config"],
"refresh_freq": int(req.get("refresh_freq", 30)),
"prune_freq": int(req.get("prune_freq", 720)),
"timeout_secs": int(req.get("timeout_secs", 60 * 29)),
"status": TaskStatus.SCHEDULE,
}
conn["status"] = TaskStatus.SCHEDULE
ConnectorService.save(**conn)
time.sleep(1)
e, conn = ConnectorService.get_by_id(req["id"])
return get_json_result(data=conn.to_dict())
@manager.route("/list", methods=["GET"]) # noqa: F821
@login_required
def list_connector():
return get_json_result(data=ConnectorService.list(current_user.id))
@manager.route("/<connector_id>", methods=["GET"]) # noqa: F821
@login_required
def get_connector(connector_id):
e, conn = ConnectorService.get_by_id(connector_id)
if not e:
return get_data_error_result(message="Can't find this Connector!")
return get_json_result(data=conn.to_dict())
@manager.route("/<connector_id>/logs", methods=["GET"]) # noqa: F821
@login_required
def list_logs(connector_id):
req = request.args.to_dict(flat=True)
arr, total = SyncLogsService.list_sync_tasks(connector_id, int(req.get("page", 1)), int(req.get("page_size", 15)))
return get_json_result(data={"total": total, "logs": arr})
@manager.route("/<connector_id>/resume", methods=["PUT"]) # noqa: F821
@login_required
def resume(connector_id):
req = request.json
if req.get("resume"):
ConnectorService.resume(connector_id, TaskStatus.SCHEDULE)
else:
ConnectorService.resume(connector_id, TaskStatus.CANCEL)
return get_json_result(data=True)
@manager.route("/<connector_id>/rebuild", methods=["PUT"]) # noqa: F821
@login_required
@validate_request("kb_id")
def rebuild(connector_id):
req = request.json
err = ConnectorService.rebuild(req["kb_id"], connector_id, current_user.id)
if err:
return get_json_result(data=False, message=err, code=RetCode.SERVER_ERROR)
return get_json_result(data=True)
@manager.route("/<connector_id>/rm", methods=["POST"]) # noqa: F821
@login_required
def rm_connector(connector_id):
ConnectorService.resume(connector_id, TaskStatus.CANCEL)
ConnectorService.delete_by_id(connector_id)
return get_json_result(data=True)
GOOGLE_WEB_FLOW_STATE_PREFIX = "google_drive_web_flow_state"
GOOGLE_WEB_FLOW_RESULT_PREFIX = "google_drive_web_flow_result"
WEB_FLOW_TTL_SECS = 15 * 60
def _web_state_cache_key(flow_id: str) -> str:
return f"{GOOGLE_WEB_FLOW_STATE_PREFIX}:{flow_id}"
def _web_result_cache_key(flow_id: str) -> str:
return f"{GOOGLE_WEB_FLOW_RESULT_PREFIX}:{flow_id}"
def _load_credentials(payload: str | dict[str, Any]) -> dict[str, Any]:
if isinstance(payload, dict):
return payload
try:
return json.loads(payload)
except json.JSONDecodeError as exc: # pragma: no cover - defensive
raise ValueError("Invalid Google credentials JSON.") from exc
def _get_web_client_config(credentials: dict[str, Any]) -> dict[str, Any]:
web_section = credentials.get("web")
if not isinstance(web_section, dict):
raise ValueError("Google OAuth JSON must include a 'web' client configuration to use browser-based authorization.")
return {"web": web_section}
def _render_web_oauth_popup(flow_id: str, success: bool, message: str):
status = "success" if success else "error"
auto_close = "window.close();" if success else ""
escaped_message = escape(message)
payload_json = json.dumps(
{
"type": "ragflow-google-drive-oauth",
"status": status,
"flowId": flow_id or "",
"message": message,
}
)
html = GOOGLE_DRIVE_WEB_OAUTH_POPUP_TEMPLATE.format(
heading="Authorization complete" if success else "Authorization failed",
message=escaped_message,
payload_json=payload_json,
auto_close=auto_close,
)
response = make_response(html, 200)
response.headers["Content-Type"] = "text/html; charset=utf-8"
return response
@manager.route("/google-drive/oauth/web/start", methods=["POST"]) # noqa: F821
@login_required
@validate_request("credentials")
def start_google_drive_web_oauth():
if not GOOGLE_DRIVE_WEB_OAUTH_REDIRECT_URI:
return get_json_result(
code=RetCode.SERVER_ERROR,
message="Google Drive OAuth redirect URI is not configured on the server.",
)
req = request.json or {}
raw_credentials = req.get("credentials", "")
try:
credentials = _load_credentials(raw_credentials)
except ValueError as exc:
return get_json_result(code=RetCode.ARGUMENT_ERROR, message=str(exc))
if credentials.get("refresh_token"):
return get_json_result(
code=RetCode.ARGUMENT_ERROR,
message="Uploaded credentials already include a refresh token.",
)
try:
client_config = _get_web_client_config(credentials)
except ValueError as exc:
return get_json_result(code=RetCode.ARGUMENT_ERROR, message=str(exc))
flow_id = str(uuid.uuid4())
try:
flow = Flow.from_client_config(client_config, scopes=GOOGLE_SCOPES[DocumentSource.GOOGLE_DRIVE])
flow.redirect_uri = GOOGLE_DRIVE_WEB_OAUTH_REDIRECT_URI
authorization_url, _ = flow.authorization_url(
access_type="offline",
include_granted_scopes="true",
prompt="consent",
state=flow_id,
)
except Exception as exc: # pragma: no cover - defensive
logging.exception("Failed to create Google OAuth flow: %s", exc)
return get_json_result(
code=RetCode.SERVER_ERROR,
message="Failed to initialize Google OAuth flow. Please verify the uploaded client configuration.",
)
cache_payload = {
"user_id": current_user.id,
"client_config": client_config,
"created_at": int(time.time()),
}
REDIS_CONN.set_obj(_web_state_cache_key(flow_id), cache_payload, WEB_FLOW_TTL_SECS)
return get_json_result(
data={
"flow_id": flow_id,
"authorization_url": authorization_url,
"expires_in": WEB_FLOW_TTL_SECS,
}
)
@manager.route("/google-drive/oauth/web/callback", methods=["GET"]) # noqa: F821
def google_drive_web_oauth_callback():
state_id = request.args.get("state")
error = request.args.get("error")
error_description = request.args.get("error_description") or error
if not state_id:
return _render_web_oauth_popup("", False, "Missing OAuth state parameter.")
state_cache = REDIS_CONN.get(_web_state_cache_key(state_id))
if not state_cache:
return _render_web_oauth_popup(state_id, False, "Authorization session expired. Please restart from the main window.")
state_obj = json.loads(state_cache)
client_config = state_obj.get("client_config")
if not client_config:
REDIS_CONN.delete(_web_state_cache_key(state_id))
return _render_web_oauth_popup(state_id, False, "Authorization session was invalid. Please retry.")
if error:
REDIS_CONN.delete(_web_state_cache_key(state_id))
return _render_web_oauth_popup(state_id, False, error_description or "Authorization was cancelled.")
code = request.args.get("code")
if not code:
return _render_web_oauth_popup(state_id, False, "Missing authorization code from Google.")
try:
flow = Flow.from_client_config(client_config, scopes=GOOGLE_SCOPES[DocumentSource.GOOGLE_DRIVE])
flow.redirect_uri = GOOGLE_DRIVE_WEB_OAUTH_REDIRECT_URI
flow.fetch_token(code=code)
except Exception as exc: # pragma: no cover - defensive
logging.exception("Failed to exchange Google OAuth code: %s", exc)
REDIS_CONN.delete(_web_state_cache_key(state_id))
return _render_web_oauth_popup(state_id, False, "Failed to exchange tokens with Google. Please retry.")
creds_json = flow.credentials.to_json()
result_payload = {
"user_id": state_obj.get("user_id"),
"credentials": creds_json,
}
REDIS_CONN.set_obj(_web_result_cache_key(state_id), result_payload, WEB_FLOW_TTL_SECS)
REDIS_CONN.delete(_web_state_cache_key(state_id))
return _render_web_oauth_popup(state_id, True, "Authorization completed successfully.")
@manager.route("/google-drive/oauth/web/result", methods=["POST"]) # noqa: F821
@login_required
@validate_request("flow_id")
def poll_google_drive_web_result():
req = request.json or {}
flow_id = req.get("flow_id")
cache_raw = REDIS_CONN.get(_web_result_cache_key(flow_id))
if not cache_raw:
return get_json_result(code=RetCode.RUNNING, message="Authorization is still pending.")
result = json.loads(cache_raw)
if result.get("user_id") != current_user.id:
return get_json_result(code=RetCode.PERMISSION_ERROR, message="You are not allowed to access this authorization result.")
REDIS_CONN.delete(_web_result_cache_key(flow_id))
return get_json_result(data={"credentials": result.get("credentials")})
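
Taken together, the three Google Drive endpoints implement a popup OAuth flow with Redis-backed state: /start caches the web client config and returns a consent URL, /callback exchanges the authorization code and parks the credentials under a 15-minute TTL, and /result hands them to the original user exactly once (the key is deleted on read). A hedged client-side walk of that flow; the base URL, route prefix, and JSON response envelope are assumptions.

import time
import webbrowser
import requests

BASE = "http://localhost:9380/v1/connector"  # assumed host and route prefix
session = requests.Session()                 # assumed to carry a logged-in cookie

def authorize_google_drive(credentials_json: str) -> str:
    start = session.post(f"{BASE}/google-drive/oauth/web/start",
                         json={"credentials": credentials_json}).json()["data"]
    webbrowser.open(start["authorization_url"])  # user grants consent in a popup
    deadline = time.time() + start["expires_in"]
    while time.time() < deadline:
        rsp = session.post(f"{BASE}/google-drive/oauth/web/result",
                           json={"flow_id": start["flow_id"]}).json()
        if rsp.get("data"):  # one-shot: the server deletes the key on read
            return rsp["data"]["credentials"]
        time.sleep(2)        # RetCode.RUNNING means authorization still pending
    raise TimeoutError("Google OAuth flow expired")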

View File

@ -19,8 +19,6 @@ import logging
from copy import deepcopy
from flask import Response, request
from flask_login import current_user, login_required
from api import settings
from api.db import LLMType
from api.db.db_models import APIToken
from api.db.services.conversation_service import ConversationService, structure_answer
from api.db.services.dialog_service import DialogService, ask, chat, gen_mindmap
@ -31,6 +29,7 @@ from api.db.services.user_service import TenantService, UserTenantService
from api.utils.api_utils import get_data_error_result, get_json_result, server_error_response, validate_request
from rag.prompts.template import load_prompt
from rag.prompts.generator import chunks_format
from common.constants import RetCode, LLMType
@manager.route("/set", methods=["POST"]) # noqa: F821
@ -93,7 +92,7 @@ def get():
avatar = dialog[0].icon
break
else:
return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=RetCode.OPERATING_ERROR)
for ref in conv.reference:
if isinstance(ref, list):
@ -142,7 +141,7 @@ def rm():
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
break
else:
return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=RetCode.OPERATING_ERROR)
ConversationService.delete_by_id(cid)
return get_json_result(data=True)
except Exception as e:
@ -155,7 +154,7 @@ def list_conversation():
dialog_id = request.args["dialog_id"]
try:
if not DialogService.query(tenant_id=current_user.id, id=dialog_id):
return get_json_result(data=False, message="Only owner of dialog authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Only owner of dialog authorized for this operation.", code=RetCode.OPERATING_ERROR)
convs = ConversationService.query(dialog_id=dialog_id, order_by=ConversationService.model.create_time, reverse=True)
convs = [d.to_dict() for d in convs]

View File

@ -18,13 +18,13 @@ from flask import request
from flask_login import login_required, current_user
from api.db.services import duplicate_name
from api.db.services.dialog_service import DialogService
from api.db import StatusEnum
from common.constants import StatusEnum
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService, UserTenantService
from api import settings
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.constants import RetCode
from api.utils.api_utils import get_json_result
@ -219,7 +219,7 @@ def rm():
else:
return get_json_result(
data=False, message='Only owner of dialog authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
dialog_list.append({"id": id,"status":StatusEnum.INVALID.value})
DialogService.update_many_by_id(dialog_list)
return get_json_result(data=True)

View File

@ -23,30 +23,31 @@ import flask
from flask import request
from flask_login import current_user, login_required
from api import settings
from api.common.check_team_permission import check_kb_team_permission
from api.constants import FILE_NAME_LEN_LIMIT, IMG_BASE64_PREFIX
from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileSource, FileType, ParserType, TaskStatus
from api.db.db_models import File, Task
from api.db import VALID_FILE_TYPES, FileType
from api.db.db_models import Task
from api.db.services import duplicate_name
from api.db.services.document_service import DocumentService, doc_upload_and_parse
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.task_service import TaskService, cancel_all_task_of, queue_tasks, queue_dataflow
from api.db.services.task_service import TaskService, cancel_all_task_of
from api.db.services.user_service import UserTenantService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from api.utils.api_utils import (
get_data_error_result,
get_json_result,
server_error_response,
validate_request,
)
from api.utils.file_utils import filename_type, get_project_base_directory, thumbnail
from api.utils.file_utils import filename_type, thumbnail
from common.file_utils import get_project_base_directory
from common.constants import RetCode, VALID_TASK_STATUS, ParserType, TaskStatus
from api.utils.web_utils import CONTENT_TYPE_MAP, html2pdf, is_valid_url
from deepdoc.parser.html_parser import RAGFlowHtmlParser
from rag.nlp import search, rag_tokenizer
from rag.utils.storage_factory import STORAGE_IMPL
from common import settings
@manager.route("/upload", methods=["POST"]) # noqa: F821
@ -55,29 +56,29 @@ from rag.utils.storage_factory import STORAGE_IMPL
def upload():
kb_id = request.form.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
if "file" not in request.files:
return get_json_result(data=False, message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="No file part!", code=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist("file")
for file_obj in file_objs:
if file_obj.filename == "":
return get_json_result(data=False, message="No file selected!", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="No file selected!", code=RetCode.ARGUMENT_ERROR)
if len(file_obj.filename.encode("utf-8")) > FILE_NAME_LEN_LIMIT:
return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
raise LookupError("Can't find this knowledgebase!")
if not check_kb_team_permission(kb, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
err, files = FileService.upload_document(kb, file_objs, current_user.id)
if err:
return get_json_result(data=files, message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
return get_json_result(data=files, message="\n".join(err), code=RetCode.SERVER_ERROR)
if not files:
return get_json_result(data=files, message="There seems to be an issue with your file format. Please verify it is correct and not corrupted.", code=settings.RetCode.DATA_ERROR)
return get_json_result(data=files, message="There seems to be an issue with your file format. Please verify it is correct and not corrupted.", code=RetCode.DATA_ERROR)
files = [f[0] for f in files] # remove the blob
return get_json_result(data=files)
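For context, a minimal sketch of exercising the upload route above from a client. The /v1/document prefix, host/port, and the Authorization header are assumptions for illustration, not confirmed by this diff:

import requests

# Hypothetical client call; BASE prefix and auth scheme are assumed.
BASE = "http://localhost:9380/v1/document"

def upload_file(kb_id: str, path: str, auth: str) -> dict:
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{BASE}/upload",
            headers={"Authorization": auth},
            data={"kb_id": kb_id},   # omitting kb_id triggers ARGUMENT_ERROR above
            files=[("file", fh)],    # an empty filename triggers ARGUMENT_ERROR
        )
    return resp.json()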
@ -89,16 +90,16 @@ def upload():
def web_crawl():
kb_id = request.form.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
name = request.form.get("name")
url = request.form.get("url")
if not is_valid_url(url):
return get_json_result(data=False, message="The URL format is invalid", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="The URL format is invalid", code=RetCode.ARGUMENT_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
raise LookupError("Can't find this knowledgebase!")
if not check_kb_team_permission(kb, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
blob = html2pdf(url)
if not blob:
@ -117,9 +118,9 @@ def web_crawl():
raise RuntimeError("This type of file has not been supported yet!")
location = filename
while STORAGE_IMPL.obj_exist(kb_id, location):
while settings.STORAGE_IMPL.obj_exist(kb_id, location):
location += "_"
STORAGE_IMPL.put(kb_id, location, blob)
settings.STORAGE_IMPL.put(kb_id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": kb.id,
@ -155,12 +156,12 @@ def create():
req = request.json
kb_id = req["kb_id"]
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
if len(req["name"].encode("utf-8")) > FILE_NAME_LEN_LIMIT:
return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
if req["name"].strip() == "":
return get_json_result(data=False, message="File name can't be empty.", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="File name can't be empty.", code=RetCode.ARGUMENT_ERROR)
req["name"] = req["name"].strip()
try:
@ -210,13 +211,13 @@ def create():
def list_docs():
kb_id = request.args.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if KnowledgebaseService.query(tenant_id=tenant.tenant_id, id=kb_id):
break
else:
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=RetCode.OPERATING_ERROR)
keywords = request.args.get("keywords", "")
page_number = int(request.args.get("page", 0))
@ -259,6 +260,8 @@ def list_docs():
for doc_item in docs:
if doc_item["thumbnail"] and not doc_item["thumbnail"].startswith(IMG_BASE64_PREFIX):
doc_item["thumbnail"] = f"/v1/document/image/{kb_id}-{doc_item['thumbnail']}"
if doc_item.get("source_type"):
doc_item["source_type"] = doc_item["source_type"].split("/")[0]
return get_json_result(data={"total": tol, "docs": docs})
except Exception as e:
@ -272,13 +275,13 @@ def get_filter():
kb_id = req.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if KnowledgebaseService.query(tenant_id=tenant.tenant_id, id=kb_id):
break
else:
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=RetCode.OPERATING_ERROR)
keywords = req.get("keywords", "")
@ -310,7 +313,7 @@ def docinfos():
doc_ids = req["doc_ids"]
for doc_id in doc_ids:
if not DocumentService.accessible(doc_id, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
docs = DocumentService.get_by_ids(doc_ids)
return get_json_result(data=list(docs.dicts()))
@ -320,7 +323,7 @@ def docinfos():
def thumbnails():
doc_ids = request.args.getlist("doc_ids")
if not doc_ids:
return get_json_result(data=False, message='Lack of "Document ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "Document ID"', code=RetCode.ARGUMENT_ERROR)
try:
docs = DocumentService.get_thumbnails(doc_ids)
@ -343,7 +346,7 @@ def change_status():
status = str(req.get("status", ""))
if status not in ["0", "1"]:
return get_json_result(data=False, message='"Status" must be either 0 or 1!', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='"Status" must be either 0 or 1!', code=RetCode.ARGUMENT_ERROR)
result = {}
for doc_id in doc_ids:
@ -385,50 +388,12 @@ def rm():
for doc_id in doc_ids:
if not DocumentService.accessible4deletion(doc_id, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
root_folder = FileService.get_root_folder(current_user.id)
pf_id = root_folder["id"]
FileService.init_knowledgebase_docs(pf_id, current_user.id)
errors = ""
kb_table_num_map = {}
for doc_id in doc_ids:
try:
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(message="Tenant not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
TaskService.filter_delete([Task.doc_id == doc_id])
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(message="Database error (Document removal)!")
f2d = File2DocumentService.get_by_document_id(doc_id)
deleted_file_count = 0
if f2d:
deleted_file_count = FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc_id)
if deleted_file_count > 0:
STORAGE_IMPL.rm(b, n)
doc_parser = doc.parser_id
if doc_parser == ParserType.TABLE:
kb_id = doc.kb_id
if kb_id not in kb_table_num_map:
counts = DocumentService.count_by_kb_id(kb_id=kb_id, keywords="", run_status=[TaskStatus.DONE], types=[])
kb_table_num_map[kb_id] = counts
kb_table_num_map[kb_id] -= 1
if kb_table_num_map[kb_id] <= 0:
KnowledgebaseService.delete_field_map(kb_id)
except Exception as e:
errors += str(e)
errors = FileService.delete_docs(doc_ids, current_user.id)
if errors:
return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
return get_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
return get_json_result(data=True)
@ -440,7 +405,7 @@ def run():
req = request.json
for doc_id in req["doc_ids"]:
if not DocumentService.accessible(doc_id, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
try:
kb_table_num_map = {}
for id in req["doc_ids"]:
@ -473,23 +438,7 @@ def run():
if str(req["run"]) == TaskStatus.RUNNING.value:
doc = doc.to_dict()
doc["tenant_id"] = tenant_id
doc_parser = doc.get("parser_id", ParserType.NAIVE)
if doc_parser == ParserType.TABLE:
kb_id = doc.get("kb_id")
if not kb_id:
continue
if kb_id not in kb_table_num_map:
count = DocumentService.count_by_kb_id(kb_id=kb_id, keywords="", run_status=[TaskStatus.DONE], types=[])
kb_table_num_map[kb_id] = count
if kb_table_num_map[kb_id] <= 0:
KnowledgebaseService.delete_field_map(kb_id)
if doc.get("pipeline_id", ""):
queue_dataflow(tenant_id, flow_id=doc["pipeline_id"], task_id=get_uuid(), doc_id=id)
else:
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name, 0)
DocumentService.run(tenant_id, doc, kb_table_num_map)
return get_json_result(data=True)
except Exception as e:
@ -502,15 +451,15 @@ def run():
def rename():
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(message="Document not found!")
if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(doc.name.lower()).suffix:
return get_json_result(data=False, message="The extension of file can't be changed", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="The extension of file can't be changed", code=RetCode.ARGUMENT_ERROR)
if len(req["name"].encode("utf-8")) > FILE_NAME_LEN_LIMIT:
return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
if d.name == req["name"]:
@ -553,7 +502,7 @@ def get(doc_id):
return get_data_error_result(message="Document not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
response = flask.make_response(settings.STORAGE_IMPL.get(b, n))
ext = re.search(r"\.([^.]+)$", doc.name.lower())
ext = ext.group(1) if ext else None
@ -575,7 +524,7 @@ def change_parser():
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
@ -629,7 +578,7 @@ def get_image(image_id):
if len(arr) != 2:
return get_data_error_result(message="Image not found.")
bkt, nm = image_id.split("-")
response = flask.make_response(STORAGE_IMPL.get(bkt, nm))
response = flask.make_response(settings.STORAGE_IMPL.get(bkt, nm))
response.headers.set("Content-Type", "image/JPEG")
return response
except Exception as e:
@ -641,12 +590,12 @@ def get_image(image_id):
@validate_request("conversation_id")
def upload_and_parse():
if "file" not in request.files:
return get_json_result(data=False, message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="No file part!", code=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist("file")
for file_obj in file_objs:
if file_obj.filename == "":
return get_json_result(data=False, message="No file selected!", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="No file selected!", code=RetCode.ARGUMENT_ERROR)
doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, current_user.id)
@ -659,7 +608,7 @@ def parse():
url = request.json.get("url") if request.json else ""
if url:
if not is_valid_url(url):
return get_json_result(data=False, message="The URL format is invalid", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="The URL format is invalid", code=RetCode.ARGUMENT_ERROR)
download_path = os.path.join(get_project_base_directory(), "logs/downloads")
os.makedirs(download_path, exist_ok=True)
from seleniumwire.webdriver import Chrome, ChromeOptions
@ -692,13 +641,13 @@ def parse():
r = re.search(r"filename=\"([^\"]+)\"", str(res_headers))
if not r or not r.group(1):
return get_json_result(data=False, message="Can't not identify downloaded file", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="Can't not identify downloaded file", code=RetCode.ARGUMENT_ERROR)
f = File(r.group(1), os.path.join(download_path, r.group(1)))
txt = FileService.parse_docs([f], current_user.id)
return get_json_result(data=txt)
if "file" not in request.files:
return get_json_result(data=False, message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="No file part!", code=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist("file")
txt = FileService.parse_docs(file_objs, current_user.id)
@ -712,18 +661,18 @@ def parse():
def set_meta():
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
try:
meta = json.loads(req["meta"])
if not isinstance(meta, dict):
return get_json_result(data=False, message="Only dictionary type supported.", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message="Only dictionary type supported.", code=RetCode.ARGUMENT_ERROR)
for k, v in meta.items():
if not isinstance(v, str) and not isinstance(v, int) and not isinstance(v, float):
return get_json_result(data=False, message=f"The type is not supported: {v}", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message=f"The type is not supported: {v}", code=RetCode.ARGUMENT_ERROR)
except Exception as e:
return get_json_result(data=False, message=f"Json syntax error: {e}", code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message=f"Json syntax error: {e}", code=RetCode.ARGUMENT_ERROR)
if not isinstance(meta, dict):
return get_json_result(data=False, message='Meta data should be in Json map format, like {"key": "value"}', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Meta data should be in Json map format, like {"key": "value"}', code=RetCode.ARGUMENT_ERROR)
try:
e, doc = DocumentService.get_by_id(req["doc_id"])

View File

@ -23,10 +23,10 @@ from flask import request
from flask_login import login_required, current_user
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.constants import RetCode
from api.db import FileType
from api.db.services.document_service import DocumentService
from api import settings
from api.utils.api_utils import get_json_result
@ -108,7 +108,7 @@ def rm():
file_ids = req["file_ids"]
if not file_ids:
return get_json_result(
data=False, message='Lack of "Files ID"', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "Files ID"', code=RetCode.ARGUMENT_ERROR)
try:
for file_id in file_ids:
informs = File2DocumentService.get_by_file_id(file_id)

View File

@ -26,15 +26,15 @@ from api.common.check_team_permission import check_file_team_permission
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.db import FileType, FileSource
from common.misc_utils import get_uuid
from common.constants import RetCode, FileSource
from api.db import FileType
from api.db.services import duplicate_name
from api.db.services.file_service import FileService
from api import settings
from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type
from api.utils.web_utils import CONTENT_TYPE_MAP
from rag.utils.storage_factory import STORAGE_IMPL
from common import settings
@manager.route('/upload', methods=['POST']) # noqa: F821
@ -49,21 +49,21 @@ def upload():
if 'file' not in request.files:
return get_json_result(
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=RetCode.ARGUMENT_ERROR)
file_res = []
try:
e, pf_folder = FileService.get_by_id(pf_id)
if not e:
return get_data_error_result( message="Can't find this folder!")
for file_obj in file_objs:
MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(current_user.id) >= MAX_FILE_NUM_PER_USER:
MAX_FILE_NUM_PER_USER: int = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
if 0 < MAX_FILE_NUM_PER_USER <= DocumentService.get_doc_count(current_user.id):
return get_data_error_result( message="Exceed the maximum file number of a free user!")
# split file name path
@ -95,14 +95,14 @@ def upload():
# file type
filetype = filename_type(file_obj_names[file_len - 1])
location = file_obj_names[file_len - 1]
while STORAGE_IMPL.obj_exist(last_folder.id, location):
while settings.STORAGE_IMPL.obj_exist(last_folder.id, location):
location += "_"
blob = file_obj.read()
filename = duplicate_name(
FileService.query,
name=file_obj_names[file_len - 1],
parent_id=last_folder.id)
STORAGE_IMPL.put(last_folder.id, location, blob)
settings.STORAGE_IMPL.put(last_folder.id, location, blob)
file = {
"id": get_uuid(),
"parent_id": last_folder.id,
@ -134,7 +134,7 @@ def create():
try:
if not FileService.is_parent_folder_exist(pf_id):
return get_json_result(
data=False, message="Parent Folder Doesn't Exist!", code=settings.RetCode.OPERATING_ERROR)
data=False, message="Parent Folder Doesn't Exist!", code=RetCode.OPERATING_ERROR)
if FileService.query(name=req["name"], parent_id=pf_id):
return get_data_error_result(
message="Duplicated folder name in the same folder.")
@ -245,7 +245,7 @@ def rm():
def _delete_single_file(file):
try:
if file.location:
STORAGE_IMPL.rm(file.parent_id, file.location)
settings.STORAGE_IMPL.rm(file.parent_id, file.location)
except Exception:
logging.exception(f"Fail to remove object: {file.parent_id}/{file.location}")
@ -279,7 +279,7 @@ def rm():
if not file.tenant_id:
return get_data_error_result(message="Tenant not found!")
if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
if file.source_type == FileSource.KNOWLEDGEBASE:
continue
@ -306,14 +306,14 @@ def rename():
if not e:
return get_data_error_result(message="File not found!")
if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message='No authorization.', code=RetCode.AUTHENTICATION_ERROR)
if file.type != FileType.FOLDER.value \
and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
file.name.lower()).suffix:
return get_json_result(
data=False,
message="The extension of file can't be changed",
code=settings.RetCode.ARGUMENT_ERROR)
code=RetCode.ARGUMENT_ERROR)
for file in FileService.query(name=req["name"], pf_id=file.parent_id):
if file.name == req["name"]:
return get_data_error_result(
@ -344,12 +344,12 @@ def get(file_id):
if not e:
return get_data_error_result(message="Document not found!")
if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message='No authorization.', code=RetCode.AUTHENTICATION_ERROR)
blob = STORAGE_IMPL.get(file.parent_id, file.location)
blob = settings.STORAGE_IMPL.get(file.parent_id, file.location)
if not blob:
b, n = File2DocumentService.get_storage_address(file_id=file_id)
blob = STORAGE_IMPL.get(b, n)
blob = settings.STORAGE_IMPL.get(b, n)
response = flask.make_response(blob)
ext = re.search(r"\.([^.]+)$", file.name.lower())
@ -376,7 +376,7 @@ def move():
ok, dest_folder = FileService.get_by_id(dest_parent_id)
if not ok or not dest_folder:
return get_data_error_result(message="Parent Folder not found!")
return get_data_error_result(message="Parent folder not found!")
files = FileService.get_by_ids(file_ids)
if not files:
@ -387,14 +387,14 @@ def move():
for file_id in file_ids:
file = files_dict.get(file_id)
if not file:
return get_data_error_result(message="File or Folder not found!")
return get_data_error_result(message="File or folder not found!")
if not file.tenant_id:
return get_data_error_result(message="Tenant not found!")
if not check_file_team_permission(file, current_user.id):
return get_json_result(
data=False,
message="No authorization.",
code=settings.RetCode.AUTHENTICATION_ERROR,
code=RetCode.AUTHENTICATION_ERROR,
)
def _move_entry_recursive(source_file_entry, dest_folder):
@ -428,11 +428,11 @@ def move():
filename = source_file_entry.name
new_location = filename
while STORAGE_IMPL.obj_exist(dest_folder.id, new_location):
while settings.STORAGE_IMPL.obj_exist(dest_folder.id, new_location):
new_location += "_"
try:
STORAGE_IMPL.move(old_parent_id, old_location, dest_folder.id, new_location)
settings.STORAGE_IMPL.move(old_parent_id, old_location, dest_folder.id, new_location)
except Exception as storage_err:
raise RuntimeError(f"Move file failed at storage layer: {str(storage_err)}")

View File

@ -21,8 +21,8 @@ from flask import request
from flask_login import login_required, current_user
import numpy as np
from api.db import LLMType
from api.db.services import duplicate_name
from api.db.services.connector_service import Connector2KbService
from api.db.services.llm_service import LLMBundle
from api.db.services.document_service import DocumentService, queue_raptor_o_graphrag_tasks
from api.db.services.file2document_service import File2DocumentService
@ -31,82 +31,33 @@ from api.db.services.pipeline_operation_log_service import PipelineOperationLogS
from api.db.services.task_service import TaskService, GRAPH_RAPTOR_FAKE_DOC_ID
from api.db.services.user_service import TenantService, UserTenantService
from api.utils.api_utils import get_error_data_result, server_error_response, get_data_error_result, validate_request, not_allowed_parameters
from api.utils import get_uuid
from api.db import PipelineTaskType, StatusEnum, FileSource, VALID_FILE_TYPES, VALID_TASK_STATUS
from api.db import VALID_FILE_TYPES
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.db_models import File
from api.utils.api_utils import get_json_result
from api import settings
from rag.nlp import search
from api.constants import DATASET_NAME_LIMIT
from rag.settings import PAGERANK_FLD
from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.doc_store_conn import OrderByExpr
from rag.utils.doc_store_conn import OrderByExpr
from common.constants import RetCode, PipelineTaskType, StatusEnum, VALID_TASK_STATUS, FileSource, LLMType, PAGERANK_FLD
from common import settings
@manager.route('/create', methods=['post']) # noqa: F821
@login_required
@validate_request("name")
def create():
req = request.json
dataset_name = req["name"]
if not isinstance(dataset_name, str):
return get_data_error_result(message="Dataset name must be string.")
if dataset_name.strip() == "":
return get_data_error_result(message="Dataset name can't be empty.")
if len(dataset_name.encode("utf-8")) > DATASET_NAME_LIMIT:
return get_data_error_result(
message=f"Dataset name length is {len(dataset_name)} which is larger than {DATASET_NAME_LIMIT}")
req = KnowledgebaseService.create_with_name(
name=req.pop("name", None),
tenant_id=current_user.id,
parser_id=req.pop("parser_id", None),
**req
)
dataset_name = dataset_name.strip()
dataset_name = duplicate_name(
KnowledgebaseService.query,
name=dataset_name,
tenant_id=current_user.id,
status=StatusEnum.VALID.value)
try:
req["id"] = get_uuid()
req["name"] = dataset_name
req["tenant_id"] = current_user.id
req["created_by"] = current_user.id
if not req.get("parser_id"):
req["parser_id"] = "naive"
e, t = TenantService.get_by_id(current_user.id)
if not e:
return get_data_error_result(message="Tenant not found.")
req["parser_config"] = {
"layout_recognize": "DeepDOC",
"chunk_token_num": 512,
"delimiter": "\n",
"auto_keywords": 0,
"auto_questions": 0,
"html4excel": False,
"topn_tags": 3,
"raptor": {
"use_raptor": True,
"prompt": "Please summarize the following paragraphs. Be careful with the numbers, do not make things up. Paragraphs as following:\n {cluster_content}\nThe above is the content you need to summarize.",
"max_token": 256,
"threshold": 0.1,
"max_cluster": 64,
"random_seed": 0
},
"graphrag": {
"use_graphrag": True,
"entity_types": [
"organization",
"person",
"geo",
"event",
"category"
],
"method": "light"
}
}
if not KnowledgebaseService.save(**req):
return get_data_error_result()
return get_json_result(data={"kb_id": req["id"]})
return get_json_result(data={"kb_id":req["id"]})
except Exception as e:
return server_error_response(e)
@ -130,14 +81,14 @@ def update():
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
try:
if not KnowledgebaseService.query(
created_by=current_user.id, id=req["kb_id"]):
return get_json_result(
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(req["kb_id"])
if not e:
@ -151,6 +102,10 @@ def update():
message="Duplicated knowledgebase name.")
del req["kb_id"]
connectors = []
if "connectors" in req:
connectors = req["connectors"]
del req["connectors"]
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_data_error_result()
@ -167,8 +122,12 @@ def update():
if not e:
return get_data_error_result(
message="Database error (Knowledgebase rename)!")
errors = Connector2KbService.link_connectors(kb.id, connectors, current_user.id)
if errors:
logging.error("Link KB errors: %s", errors)
kb = kb.to_dict()
kb.update(req)
kb["connectors"] = connectors
return get_json_result(data=kb)
except Exception as e:
@ -188,12 +147,14 @@ def detail():
else:
return get_json_result(
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
kb = KnowledgebaseService.get_detail(kb_id)
if not kb:
return get_data_error_result(
message="Can't find this knowledgebase!")
kb["size"] = DocumentService.get_total_size_by_kb_id(kb_id=kb["id"],keywords="", run_status=[], types=[])
kb["connectors"] = Connector2KbService.list_connectors(kb_id)
for key in ["graphrag_task_finish_at", "raptor_task_finish_at", "mindmap_task_finish_at"]:
if finish_at := kb.get(key):
kb[key] = finish_at.strftime("%Y-%m-%d %H:%M:%S")
@ -246,7 +207,7 @@ def rm():
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
try:
kbs = KnowledgebaseService.query(
@ -254,7 +215,7 @@ def rm():
if not kbs:
return get_json_result(
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
for doc in DocumentService.query(kb_id=req["kb_id"]):
if not DocumentService.remove_document(doc, kbs[0].tenant_id):
@ -272,8 +233,8 @@ def rm():
for kb in kbs:
settings.docStoreConn.delete({"kb_id": kb.id}, search.index_name(kb.tenant_id), kb.id)
settings.docStoreConn.deleteIdx(search.index_name(kb.tenant_id), kb.id)
if hasattr(STORAGE_IMPL, 'remove_bucket'):
STORAGE_IMPL.remove_bucket(kb.id)
if hasattr(settings.STORAGE_IMPL, 'remove_bucket'):
settings.STORAGE_IMPL.remove_bucket(kb.id)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@ -286,7 +247,7 @@ def list_tags(kb_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
@ -305,7 +266,7 @@ def list_tags_from_kbs():
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
@ -323,7 +284,7 @@ def rm_tags(kb_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
e, kb = KnowledgebaseService.get_by_id(kb_id)
@ -343,7 +304,7 @@ def rename_tags(kb_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
e, kb = KnowledgebaseService.get_by_id(kb_id)
@ -361,7 +322,7 @@ def knowledge_graph(kb_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
_, kb = KnowledgebaseService.get_by_id(kb_id)
req = {
@ -401,7 +362,7 @@ def delete_knowledge_graph(kb_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
_, kb = KnowledgebaseService.get_by_id(kb_id)
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
@ -418,7 +379,7 @@ def get_meta():
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
return get_json_result(data=DocumentService.get_meta_by_kbs(kb_ids))
@ -431,7 +392,7 @@ def get_basic_info():
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
basic_info = DocumentService.knowledgebase_basic_info(kb_id)
@ -444,7 +405,7 @@ def get_basic_info():
def list_pipeline_logs():
kb_id = request.args.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
keywords = request.args.get("keywords", "")
@ -488,7 +449,7 @@ def list_pipeline_logs():
def list_pipeline_dataset_logs():
kb_id = request.args.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
page_number = int(request.args.get("page", 0))
items_per_page = int(request.args.get("page_size", 0))
@ -522,7 +483,7 @@ def list_pipeline_dataset_logs():
def delete_pipeline_logs():
kb_id = request.args.get("kb_id")
if not kb_id:
return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
req = request.get_json()
log_ids = req.get("log_ids", [])
@ -537,7 +498,7 @@ def delete_pipeline_logs():
def pipeline_log_detail():
log_id = request.args.get("log_id")
if not log_id:
return get_json_result(data=False, message='Lack of "Pipeline log ID"', code=settings.RetCode.ARGUMENT_ERROR)
return get_json_result(data=False, message='Lack of "Pipeline log ID"', code=RetCode.ARGUMENT_ERROR)
ok, log = PipelineOperationLogService.get_by_id(log_id)
if not ok:
@ -610,7 +571,7 @@ def trace_graphrag():
ok, task = TaskService.get_by_id(task_id)
if not ok:
return get_error_data_result(message="GraphRAG Task Not Found or Error Occurred")
return get_json_result(data={})
return get_json_result(data=task.to_dict())
@ -767,26 +728,30 @@ def delete_kb_task():
if not pipeline_task_type or pipeline_task_type not in [PipelineTaskType.GRAPH_RAG, PipelineTaskType.RAPTOR, PipelineTaskType.MINDMAP]:
return get_error_data_result(message="Invalid task type")
def cancel_task(task_id):
REDIS_CONN.set(f"{task_id}-cancel", "x")
match pipeline_task_type:
case PipelineTaskType.GRAPH_RAG:
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
kb_task_id_field = "graphrag_task_id"
task_id = kb.graphrag_task_id
kb_task_finish_at = "graphrag_task_finish_at"
cancel_task(task_id)
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
case PipelineTaskType.RAPTOR:
kb_task_id_field = "raptor_task_id"
task_id = kb.raptor_task_id
kb_task_finish_at = "raptor_task_finish_at"
cancel_task(task_id)
settings.docStoreConn.delete({"raptor_kwd": ["raptor"]}, search.index_name(kb.tenant_id), kb_id)
case PipelineTaskType.MINDMAP:
kb_task_id_field = "mindmap_task_id"
task_id = kb.mindmap_task_id
kb_task_finish_at = "mindmap_task_finish_at"
cancel_task(task_id)
case _:
return get_error_data_result(message="Internal Error: Invalid task type")
def cancel_task(task_id):
REDIS_CONN.set(f"{task_id}-cancel", "x")
cancel_task(task_id)
ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id_field: "", kb_task_finish_at: None})
if not ok:
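A minimal sketch of the consumer side of the cancel flag set above, assuming a worker polls Redis between pipeline steps; the helper names and loop are illustrative, and REDIS_CONN.get mirroring the REDIS_CONN.set call shown above is an assumption:

from rag.utils.redis_conn import REDIS_CONN

def should_cancel(task_id: str) -> bool:
    # The endpoint above writes "x" under "<task_id>-cancel" to request cancellation.
    return bool(REDIS_CONN.get(f"{task_id}-cancel"))

def run_steps(task_id: str, steps) -> str:
    for step in steps:
        if should_cancel(task_id):
            return "cancelled"   # stop early; the API then clears the kb task fields
        step()
    return "done"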
@ -815,14 +780,14 @@ def check_embedding():
def _to_1d(x):
a = np.asarray(x, dtype=np.float32)
return a.reshape(-1)
return a.reshape(-1)
def _cos_sim(a, b, eps=1e-12):
a = _to_1d(a)
b = _to_1d(b)
na = np.linalg.norm(a)
nb = np.linalg.norm(b)
if na < eps or nb < eps:
if na < eps or nb < eps:
return 0.0
return float(np.dot(a, b) / (na * nb))
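A standalone restatement of the helpers above with a few sanity checks (illustrative only; the in-tree versions are nested inside check_embedding):

import numpy as np

def cos_sim(a, b, eps=1e-12):
    a = np.asarray(a, dtype=np.float32).reshape(-1)
    b = np.asarray(b, dtype=np.float32).reshape(-1)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na < eps or nb < eps:
        return 0.0   # zero vectors are guarded rather than dividing by ~0
    return float(np.dot(a, b) / (na * nb))

assert abs(cos_sim([1, 0], [2, 0]) - 1.0) < 1e-6   # parallel vectors -> 1.0
assert abs(cos_sim([1, 0], [0, 3])) < 1e-6         # orthogonal vectors -> 0.0
assert cos_sim([1, 0], [0, 0]) == 0.0              # zero vector -> eps guard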
@ -860,7 +825,7 @@ def check_embedding():
indexNames=index_nm, knowledgebaseIds=[kb_id]
)
ids = docStoreConn.getChunkIds(res1)
if not ids:
if not ids:
continue
cid = ids[0]
@ -904,7 +869,7 @@ def check_embedding():
continue
try:
qv, _ = emb_mdl.encode_queries(txt)
qv, _ = emb_mdl.encode_queries(txt)
sim = _cos_sim(qv, ck["vector"])
except Exception:
return get_error_data_result(message="embedding failure")
@ -930,4 +895,6 @@ def check_embedding():
}
if summary["avg_cos_sim"] > 0.99:
return get_json_result(data={"summary": summary, "results": results})
return get_json_result(code=settings.RetCode.NOT_EFFECTIVE, message="failed", data={"summary": summary, "results": results})
return get_json_result(code=RetCode.NOT_EFFECTIVE, message="failed", data={"summary": summary, "results": results})

View File

@ -21,19 +21,19 @@ from flask_login import login_required, current_user
from api.db.services.tenant_llm_service import LLMFactoriesService, TenantLLMService
from api.db.services.llm_service import LLMService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db import StatusEnum, LLMType
from common.constants import StatusEnum, LLMType
from api.db.db_models import TenantLLM
from api.utils.api_utils import get_json_result
from api.utils.base64_image import test_image
from api.utils.api_utils import get_json_result, get_allowed_llm_factories
from rag.utils.base64_image import test_image
from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel
@manager.route('/factories', methods=['GET']) # noqa: F821
@manager.route("/factories", methods=["GET"]) # noqa: F821
@login_required
def factories():
try:
fac = LLMFactoriesService.get_all()
fac = [f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI"]]
fac = get_allowed_llm_factories()
fac = [f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI", "Builtin"]]
llms = LLMService.get_all()
mdl_types = {}
for m in llms:
@ -43,14 +43,13 @@ def factories():
mdl_types[m.fid] = set([])
mdl_types[m.fid].add(m.model_type)
for f in fac:
f["model_types"] = list(mdl_types.get(f["name"], [LLMType.CHAT, LLMType.EMBEDDING, LLMType.RERANK,
LLMType.IMAGE2TEXT, LLMType.SPEECH2TEXT, LLMType.TTS]))
f["model_types"] = list(mdl_types.get(f["name"], [LLMType.CHAT, LLMType.EMBEDDING, LLMType.RERANK, LLMType.IMAGE2TEXT, LLMType.SPEECH2TEXT, LLMType.TTS]))
return get_json_result(data=fac)
except Exception as e:
return server_error_response(e)
@manager.route('/set_api_key', methods=['POST']) # noqa: F821
@manager.route("/set_api_key", methods=["POST"]) # noqa: F821
@login_required
@validate_request("llm_factory", "api_key")
def set_api_key():
@ -63,8 +62,7 @@ def set_api_key():
for llm in LLMService.query(fid=factory):
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet."
mdl = EmbeddingModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
mdl = EmbeddingModel[factory](req["api_key"], llm.llm_name, base_url=req.get("base_url"))
try:
arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0:
@ -74,52 +72,40 @@ def set_api_key():
msg += f"\nFail to access embedding model({llm.llm_name}) using this api key." + str(e)
elif not chat_passed and llm.model_type == LLMType.CHAT.value:
assert factory in ChatModel, f"Chat model from {factory} is not supported yet."
mdl = ChatModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"), **extra)
mdl = ChatModel[factory](req["api_key"], llm.llm_name, base_url=req.get("base_url"), **extra)
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}],
{"temperature": 0.9, 'max_tokens': 50})
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {"temperature": 0.9, "max_tokens": 50})
if m.find("**ERROR**") >= 0:
raise Exception(m)
chat_passed = True
except Exception as e:
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(
e)
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(e)
elif not rerank_passed and llm.model_type == LLMType.RERANK:
assert factory in RerankModel, f"Re-rank model from {factory} is not supported yet."
mdl = RerankModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
mdl = RerankModel[factory](req["api_key"], llm.llm_name, base_url=req.get("base_url"))
try:
arr, tc = mdl.similarity("What's the weather?", ["Is it sunny today?"])
if len(arr) == 0 or tc == 0:
raise Exception("Fail")
rerank_passed = True
logging.debug(f'passed model rerank {llm.llm_name}')
logging.debug(f"passed model rerank {llm.llm_name}")
except Exception as e:
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(
e)
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(e)
if any([embd_passed, chat_passed, rerank_passed]):
msg = ''
msg = ""
break
if msg:
return get_data_error_result(message=msg)
llm_config = {
"api_key": req["api_key"],
"api_base": req.get("base_url", "")
}
llm_config = {"api_key": req["api_key"], "api_base": req.get("base_url", "")}
for n in ["model_type", "llm_name"]:
if n in req:
llm_config[n] = req[n]
for llm in LLMService.query(fid=factory):
llm_config["max_tokens"]=llm.max_tokens
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id,
TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm.llm_name],
llm_config):
llm_config["max_tokens"] = llm.max_tokens
if not TenantLLMService.filter_update([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory, TenantLLM.llm_name == llm.llm_name], llm_config):
TenantLLMService.save(
tenant_id=current_user.id,
llm_factory=factory,
@ -127,13 +113,13 @@ def set_api_key():
model_type=llm.model_type,
api_key=llm_config["api_key"],
api_base=llm_config["api_base"],
max_tokens=llm_config["max_tokens"]
max_tokens=llm_config["max_tokens"],
)
return get_json_result(data=True)
@manager.route('/add_llm', methods=['POST']) # noqa: F821
@manager.route("/add_llm", methods=["POST"]) # noqa: F821
@login_required
@validate_request("llm_factory")
def add_llm():
@ -142,6 +128,9 @@ def add_llm():
api_key = req.get("api_key", "x")
llm_name = req.get("llm_name")
if factory not in [f.name for f in get_allowed_llm_factories()]:
return get_data_error_result(message=f"LLM factory {factory} is not allowed")
def apikey_json(keys):
nonlocal req
return json.dumps({k: req.get(k, "") for k in keys})
@ -204,7 +193,7 @@ def add_llm():
"llm_name": llm_name,
"api_base": req.get("api_base", ""),
"api_key": api_key,
"max_tokens": req.get("max_tokens")
"max_tokens": req.get("max_tokens"),
}
msg = ""
@ -212,10 +201,7 @@ def add_llm():
extra = {"provider": factory}
if llm["model_type"] == LLMType.EMBEDDING.value:
assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet."
mdl = EmbeddingModel[factory](
key=llm['api_key'],
model_name=mdl_nm,
base_url=llm["api_base"])
mdl = EmbeddingModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"])
try:
arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0:
@ -225,42 +211,31 @@ def add_llm():
elif llm["model_type"] == LLMType.CHAT.value:
assert factory in ChatModel, f"Chat model from {factory} is not supported yet."
mdl = ChatModel[factory](
key=llm['api_key'],
key=llm["api_key"],
model_name=mdl_nm,
base_url=llm["api_base"],
**extra,
)
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {
"temperature": 0.9})
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {"temperature": 0.9})
if not tc and m.find("**ERROR**:") >= 0:
raise Exception(m)
except Exception as e:
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(
e)
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
elif llm["model_type"] == LLMType.RERANK:
assert factory in RerankModel, f"RE-rank model from {factory} is not supported yet."
try:
mdl = RerankModel[factory](
key=llm["api_key"],
model_name=mdl_nm,
base_url=llm["api_base"]
)
mdl = RerankModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"])
arr, tc = mdl.similarity("Hello~ RAGFlower!", ["Hi, there!", "Ohh, my friend!"])
if len(arr) == 0:
raise Exception("Not known.")
except KeyError:
msg += f"{factory} dose not support this model({factory}/{mdl_nm})"
except Exception as e:
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(
e)
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
elif llm["model_type"] == LLMType.IMAGE2TEXT.value:
assert factory in CvModel, f"Image to text model from {factory} is not supported yet."
mdl = CvModel[factory](
key=llm["api_key"],
model_name=mdl_nm,
base_url=llm["api_base"]
)
mdl = CvModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"])
try:
image_data = test_image
m, tc = mdl.describe(image_data)
@ -270,9 +245,7 @@ def add_llm():
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
elif llm["model_type"] == LLMType.TTS:
assert factory in TTSModel, f"TTS model from {factory} is not supported yet."
mdl = TTSModel[factory](
key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"]
)
mdl = TTSModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"])
try:
for resp in mdl.tts("Hello~ RAGFlower!"):
pass
@ -285,40 +258,46 @@ def add_llm():
if msg:
return get_data_error_result(message=msg)
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm["llm_name"]], llm):
if not TenantLLMService.filter_update([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory, TenantLLM.llm_name == llm["llm_name"]], llm):
TenantLLMService.save(**llm)
return get_json_result(data=True)
@manager.route('/delete_llm', methods=['POST']) # noqa: F821
@manager.route("/delete_llm", methods=["POST"]) # noqa: F821
@login_required
@validate_request("llm_factory", "llm_name")
def delete_llm():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"],
TenantLLM.llm_name == req["llm_name"]])
TenantLLMService.filter_delete([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"], TenantLLM.llm_name == req["llm_name"]])
return get_json_result(data=True)
@manager.route('/delete_factory', methods=['POST']) # noqa: F821
@manager.route("/enable_llm", methods=["POST"]) # noqa: F821
@login_required
@validate_request("llm_factory", "llm_name")
def enable_llm():
req = request.json
TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"], TenantLLM.llm_name == req["llm_name"]], {"status": str(req.get("status", "1"))}
)
return get_json_result(data=True)
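A hedged example of toggling a model through the new enable_llm route above; the URL prefix and Bearer auth scheme are assumptions:

import requests

requests.post(
    "http://localhost:9380/v1/llm/enable_llm",  # mount point assumed
    json={"llm_factory": "OpenAI", "llm_name": "gpt-4o", "status": "0"},  # "1" enables, "0" disables
    headers={"Authorization": "Bearer <TOKEN>"},
)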
@manager.route("/delete_factory", methods=["POST"]) # noqa: F821
@login_required
@validate_request("llm_factory")
def delete_factory():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
TenantLLMService.filter_delete([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
return get_json_result(data=True)
@manager.route('/my_llms', methods=['GET']) # noqa: F821
@manager.route("/my_llms", methods=["GET"]) # noqa: F821
@login_required
def my_llms():
try:
include_details = request.args.get('include_details', 'false').lower() == 'true'
include_details = request.args.get("include_details", "false").lower() == "true"
if include_details:
res = {}
@ -334,38 +313,31 @@ def my_llms():
break
if o_dict["llm_factory"] not in res:
res[o_dict["llm_factory"]] = {
"tags": factory_tags,
"llm": []
}
res[o_dict["llm_factory"]] = {"tags": factory_tags, "llm": []}
res[o_dict["llm_factory"]]["llm"].append({
"type": o_dict["model_type"],
"name": o_dict["llm_name"],
"used_token": o_dict["used_tokens"],
"api_base": o_dict["api_base"] or "",
"max_tokens": o_dict["max_tokens"] or 8192
})
res[o_dict["llm_factory"]]["llm"].append(
{
"type": o_dict["model_type"],
"name": o_dict["llm_name"],
"used_token": o_dict["used_tokens"],
"api_base": o_dict["api_base"] or "",
"max_tokens": o_dict["max_tokens"] or 8192,
"status": o_dict["status"] or "1",
}
)
else:
res = {}
for o in TenantLLMService.get_my_llms(current_user.id):
if o["llm_factory"] not in res:
res[o["llm_factory"]] = {
"tags": o["tags"],
"llm": []
}
res[o["llm_factory"]]["llm"].append({
"type": o["model_type"],
"name": o["llm_name"],
"used_token": o["used_tokens"]
})
res[o["llm_factory"]] = {"tags": o["tags"], "llm": []}
res[o["llm_factory"]]["llm"].append({"type": o["model_type"], "name": o["llm_name"], "used_token": o["used_tokens"], "status": o["status"]})
return get_json_result(data=res)
except Exception as e:
return server_error_response(e)
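For reference, a hedged sketch of the response shape the non-details branch of my_llms assembles above (all values invented; the include_details branch additionally carries api_base and max_tokens):

my_llms_example = {
    "OpenAI": {
        "tags": "LLM,TEXT EMBEDDING",
        "llm": [
            {"type": "chat", "name": "gpt-4o", "used_token": 1234, "status": "1"},
        ],
    },
}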
@manager.route('/list', methods=['GET']) # noqa: F821
@manager.route("/list", methods=["GET"]) # noqa: F821
@login_required
def list_app():
self_deployed = ["FastEmbed", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
@ -373,20 +345,20 @@ def list_app():
model_type = request.args.get("model_type")
try:
objs = TenantLLMService.query(tenant_id=current_user.id)
facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key])
facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key and o.status == StatusEnum.VALID.value])
status = {(o.llm_name + "@" + o.llm_factory) for o in objs if o.status == StatusEnum.VALID.value}
llms = LLMService.get_all()
llms = [m.to_dict()
for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted]
llms = [m.to_dict() for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted and (m.fid == 'Builtin' or (m.llm_name + "@" + m.fid) in status)]
for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deployed
if "tei-" in os.getenv("COMPOSE_PROFILES", "") and m["model_type"]==LLMType.EMBEDDING and m["fid"]=="Builtin" and m["llm_name"]==os.getenv('TEI_MODEL', ''):
if "tei-" in os.getenv("COMPOSE_PROFILES", "") and m["model_type"] == LLMType.EMBEDDING and m["fid"] == "Builtin" and m["llm_name"] == os.getenv("TEI_MODEL", ""):
m["available"] = True
llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms])
for o in objs:
if o.llm_name + "@" + o.llm_factory in llm_set:
continue
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True})
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True, "status": StatusEnum.VALID.value})
res = {}
for m in llms:

View File

@ -16,13 +16,12 @@
from flask import Response, request
from flask_login import current_user, login_required
from api.db import VALID_MCP_SERVER_TYPES
from api.db.db_models import MCPServer
from api.db.services.mcp_server_service import MCPServerService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from common.constants import RetCode, VALID_MCP_SERVER_TYPES
from api.utils import get_uuid
from common.misc_utils import get_uuid
from api.utils.api_utils import get_data_error_result, get_json_result, server_error_response, validate_request, \
get_mcp_tools
from api.utils.web_utils import get_float, safe_json_parse

View File

@ -15,15 +15,19 @@
#
import json
import logging
import time
from typing import Any, cast
from agent.canvas import Canvas
from api.db import CanvasCategory
from api.db.services.canvas_service import UserCanvasService
from api.db.services.user_canvas_version import UserCanvasVersionService
from api.settings import RetCode
from api.utils import get_uuid
from common.constants import RetCode
from common.misc_utils import get_uuid
from api.utils.api_utils import get_data_error_result, get_error_data_result, get_json_result, token_required
from api.utils.api_utils import get_result
from flask import request
from flask import request, Response
@manager.route('/agents', methods=['GET']) # noqa: F821
@ -127,3 +131,49 @@ def delete_agent(tenant_id: str, agent_id: str):
UserCanvasService.delete_by_id(agent_id)
return get_json_result(data=True)
@manager.route('/webhook/<agent_id>', methods=['POST']) # noqa: F821
@token_required
def webhook(tenant_id: str, agent_id: str):
req = request.json
if not UserCanvasService.accessible(req["id"], tenant_id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
e, cvs = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
if cvs.canvas_category == CanvasCategory.DataFlow:
return get_data_error_result(message="Dataflow can not be triggered by webhook.")
try:
canvas = Canvas(cvs.dsl, tenant_id, agent_id)
except Exception as e:
return get_json_result(
data=False, message=str(e),
code=RetCode.EXCEPTION_ERROR)
def sse():
nonlocal canvas
try:
for ans in canvas.run(query=req.get("query", ""), files=req.get("files", []), user_id=req.get("user_id", tenant_id), webhook_payload=req):
yield "data:" + json.dumps(ans, ensure_ascii=False) + "\n\n"
cvs.dsl = json.loads(str(canvas))
UserCanvasService.update_by_id(req["id"], cvs.to_dict())
except Exception as e:
logging.exception(e)
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": False}, ensure_ascii=False) + "\n\n"
resp = Response(sse(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
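A minimal sketch of consuming the webhook SSE stream above; the URL prefix, agent id, and Bearer auth are assumptions for illustration:

import json
import requests

url = "http://localhost:9380/v1/webhook/<AGENT_ID>"  # mount point assumed
payload = {"id": "<AGENT_ID>", "query": "hello", "files": [], "user_id": "u1"}
with requests.post(url, json=payload, stream=True,
                   headers={"Authorization": "Bearer <API_KEY>"}) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print(json.loads(line[len("data:"):]))  # one event per SSE "data:" frame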

View File

@ -17,13 +17,12 @@ import logging
from flask import request
from api import settings
from api.db import StatusEnum
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.user_service import TenantService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.constants import RetCode, StatusEnum
from api.utils.api_utils import check_duplicate_ids, get_error_data_result, get_result, token_required
@ -45,7 +44,7 @@ def create(tenant_id):
embd_ids = [TenantLLMService.split_model_name_and_factory(kb.embd_id)[0] for kb in kbs] # remove vendor suffix for comparison
embd_count = list(set(embd_ids))
if len(embd_count) > 1:
return get_result(message='Datasets use different embedding models."', code=settings.RetCode.AUTHENTICATION_ERROR)
return get_result(message='Datasets use different embedding models.', code=RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
# llm
llm = req.get("llm")
@ -167,7 +166,7 @@ def update(tenant_id, chat_id):
embd_ids = [TenantLLMService.split_model_name_and_factory(kb.embd_id)[0] for kb in kbs] # remove vendor suffix for comparison
embd_count = list(set(embd_ids))
if len(embd_count) > 1:
return get_result(message='Datasets use different embedding models."', code=settings.RetCode.AUTHENTICATION_ERROR)
return get_result(message='Datasets use different embedding models.', code=RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
else:
req["kb_ids"] = []

View File

@ -20,20 +20,17 @@ import os
import json
from flask import request
from peewee import OperationalError
from api import settings
from api.db import FileSource, StatusEnum
from api.db.db_models import File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.utils import get_uuid
from common.constants import RetCode, FileSource, StatusEnum
from api.utils.api_utils import (
deep_merge,
get_error_argument_result,
get_error_data_result,
get_error_operating_result,
get_error_permission_result,
get_parser_config,
get_result,
@ -50,7 +47,8 @@ from api.utils.validation_utils import (
validate_and_parse_request_args,
)
from rag.nlp import search
from rag.settings import PAGERANK_FLD
from common.constants import PAGERANK_FLD
from common import settings
@manager.route("/datasets", methods=["POST"]) # noqa: F821
@ -80,29 +78,28 @@ def create(tenant_id):
properties:
name:
type: string
description: Name of the dataset.
description: Dataset name (required).
avatar:
type: string
description: Base64 encoding of the avatar.
description: Optional base64-encoded avatar image.
description:
type: string
description: Description of the dataset.
description: Optional dataset description.
embedding_model:
type: string
description: Embedding model Name.
description: Optional embedding model name; if omitted, the tenant's default embedding model is used.
permission:
type: string
enum: ['me', 'team']
description: Dataset permission.
description: Visibility of the dataset (private to me or shared with team).
chunk_method:
type: string
enum: ["naive", "book", "email", "laws", "manual", "one", "paper",
"picture", "presentation", "qa", "table", "tag"
]
description: Chunking method.
"picture", "presentation", "qa", "table", "tag"]
description: Chunking method; if omitted, defaults to "naive".
parser_config:
type: object
description: Parser configuration.
description: Optional parser configuration; server-side defaults will be applied.
responses:
200:
description: Successful operation.
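Given the schema above, a hedged request sketch (base URL and API key are placeholders; omitted fields fall back to the documented defaults):

import requests

BASE = "http://localhost:9380/api/v1"  # assumed deployment URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

resp = requests.post(
    f"{BASE}/datasets",
    headers=HEADERS,
    json={
        "name": "release-notes",   # required
        "permission": "me",        # or "team"
        "chunk_method": "naive",   # default when omitted
        # "embedding_model" omitted: the tenant's default embedding model is used
    },
)
print(resp.json())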
@ -117,44 +114,43 @@ def create(tenant_id):
# |----------------|-------------|
# | embedding_model| embd_id |
# | chunk_method | parser_id |
req, err = validate_and_parse_json_request(request, CreateDatasetReq)
if err is not None:
return get_error_argument_result(err)
req = KnowledgebaseService.create_with_name(
name=req.pop("name", None),
tenant_id=tenant_id,
parser_id=req.pop("parser_id", None),
**req
)
# Insert embedding model(embd id)
ok, t = TenantService.get_by_id(tenant_id)
if not ok:
return get_error_permission_result(message="Tenant not found")
if not req.get("embd_id"):
req["embd_id"] = t.embd_id
else:
ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok:
return err
try:
if KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_operating_result(message=f"Dataset name '{req['name']}' already exists")
req["parser_config"] = get_parser_config(req["parser_id"], req["parser_config"])
req["id"] = get_uuid()
req["tenant_id"] = tenant_id
req["created_by"] = tenant_id
ok, t = TenantService.get_by_id(tenant_id)
if not ok:
return get_error_permission_result(message="Tenant not found")
if not req.get("embd_id"):
req["embd_id"] = t.embd_id
else:
ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok:
return err
if not KnowledgebaseService.save(**req):
return get_error_data_result(message="Create dataset error.(Database error)")
ok, k = KnowledgebaseService.get_by_id(req["id"])
if not ok:
return get_error_data_result(message="Dataset created failed")
response_data = remap_dictionary_keys(k.to_dict())
return get_result(data=response_data)
except OperationalError as e:
if not KnowledgebaseService.save(**req):
return get_error_data_result()
ok, k = KnowledgebaseService.get_by_id(req["id"])
if not ok:
return get_error_data_result(message="Dataset created failed")
response_data = remap_dictionary_keys(k.to_dict())
return get_result(data=response_data)
except Exception as e:
logging.exception(e)
return get_error_data_result(message="Database operation failed")
@manager.route("/datasets", methods=["DELETE"]) # noqa: F821
@token_required
def delete(tenant_id):
@ -488,7 +484,7 @@ def knowledge_graph(tenant_id, dataset_id):
return get_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
_, kb = KnowledgebaseService.get_by_id(dataset_id)
req = {
@ -529,7 +525,7 @@ def delete_knowledge_graph(tenant_id, dataset_id):
return get_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
code=RetCode.AUTHENTICATION_ERROR
)
_, kb = KnowledgebaseService.get_by_id(dataset_id)
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]},

View File

@ -17,15 +17,14 @@ import logging
from flask import request, jsonify
from api.db import LLMType
from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
from api.utils.api_utils import validate_request, build_error_result, apikey_required
from rag.app.tag import label_question
from api.db.services.dialog_service import meta_filter, convert_conditions
from common.constants import RetCode, LLMType
from common import settings
@manager.route('/dify/retrieval', methods=['POST']) # noqa: F821
@apikey_required
@ -129,7 +128,7 @@ def retrieval(tenant_id):
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
return build_error_result(message="Knowledgebase not found!", code=settings.RetCode.NOT_FOUND)
return build_error_result(message="Knowledgebase not found!", code=RetCode.NOT_FOUND)
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
logging.debug(metadata_condition)
@ -179,7 +178,7 @@ def retrieval(tenant_id):
if str(e).find("not_found") > 0:
return build_error_result(
message='No chunk found! Check the chunk status please!',
code=settings.RetCode.NOT_FOUND
code=RetCode.NOT_FOUND
)
logging.exception(e)
return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)
return build_error_result(message=str(e), code=RetCode.SERVER_ERROR)

View File

@ -24,9 +24,8 @@ from flask import request, send_file
from peewee import OperationalError
from pydantic import BaseModel, Field, validator
from api import settings
from api.constants import FILE_NAME_LEN_LIMIT
from api.db import FileSource, FileType, LLMType, ParserType, TaskStatus
from api.db import FileType
from api.db.db_models import File, Task
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
@ -41,8 +40,9 @@ from rag.app.qa import beAdoc, rmPrefix
from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts.generator import cross_languages, keyword_extraction
from rag.utils.storage_factory import STORAGE_IMPL
from common.string_utils import remove_redundant_spaces
from common.constants import RetCode, LLMType, ParserType, TaskStatus, FileSource
from common import settings
MAXIMUM_OF_UPLOADING_FILES = 256
@ -127,13 +127,13 @@ def upload(dataset_id, tenant_id):
description: Processing status.
"""
if "file" not in request.files:
return get_error_data_result(message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
return get_error_data_result(message="No file part!", code=RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist("file")
for file_obj in file_objs:
if file_obj.filename == "":
return get_result(message="No file selected!", code=settings.RetCode.ARGUMENT_ERROR)
return get_result(message="No file selected!", code=RetCode.ARGUMENT_ERROR)
if len(file_obj.filename.encode("utf-8")) > FILE_NAME_LEN_LIMIT:
return get_result(message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
return get_result(message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
"""
# total size
total_size = 0
@ -145,7 +145,7 @@ def upload(dataset_id, tenant_id):
if total_size > MAX_TOTAL_FILE_SIZE:
return get_result(
message=f"Total file size exceeds 10MB limit! ({total_size / (1024 * 1024):.2f} MB)",
code=settings.RetCode.ARGUMENT_ERROR,
code=RetCode.ARGUMENT_ERROR,
)
"""
e, kb = KnowledgebaseService.get_by_id(dataset_id)
@ -153,7 +153,7 @@ def upload(dataset_id, tenant_id):
raise LookupError(f"Can't find the dataset with ID {dataset_id}!")
err, files = FileService.upload_document(kb, file_objs, tenant_id)
if err:
return get_result(message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
return get_result(message="\n".join(err), code=RetCode.SERVER_ERROR)
# rename key's name
renamed_doc_list = []
for file in files:
@ -253,12 +253,12 @@ def update_doc(tenant_id, dataset_id, document_id):
if len(req["name"].encode("utf-8")) > FILE_NAME_LEN_LIMIT:
return get_result(
message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.",
code=settings.RetCode.ARGUMENT_ERROR,
code=RetCode.ARGUMENT_ERROR,
)
if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(doc.name.lower()).suffix:
return get_result(
message="The extension of file can't be changed",
code=settings.RetCode.ARGUMENT_ERROR,
code=RetCode.ARGUMENT_ERROR,
)
for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
if d.name == req["name"]:
@ -400,9 +400,9 @@ def download(tenant_id, dataset_id, document_id):
return get_error_data_result(message=f"The dataset not own the document {document_id}.")
# The process of downloading
doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
file_stream = settings.STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=settings.RetCode.DATA_ERROR)
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
file = BytesIO(file_stream)
# Use send_file with a proper filename and MIME type
return send_file(
@ -670,16 +670,16 @@ def delete(tenant_id, dataset_id):
)
File2DocumentService.delete_by_document_id(doc_id)
STORAGE_IMPL.rm(b, n)
settings.STORAGE_IMPL.rm(b, n)
success_count += 1
except Exception as e:
errors += str(e)
if not_found:
return get_result(message=f"Documents not found: {not_found}", code=settings.RetCode.DATA_ERROR)
return get_result(message=f"Documents not found: {not_found}", code=RetCode.DATA_ERROR)
if errors:
return get_result(message=errors, code=settings.RetCode.SERVER_ERROR)
return get_result(message=errors, code=RetCode.SERVER_ERROR)
if duplicate_messages:
if success_count > 0:
@ -763,7 +763,7 @@ def parse(tenant_id, dataset_id):
queue_tasks(doc, bucket, name, 0)
success_count += 1
if not_found:
return get_result(message=f"Documents not found: {not_found}", code=settings.RetCode.DATA_ERROR)
return get_result(message=f"Documents not found: {not_found}", code=RetCode.DATA_ERROR)
if duplicate_messages:
if success_count > 0:
return get_result(
@ -969,7 +969,7 @@ def list_chunks(tenant_id, dataset_id, document_id):
if req.get("id"):
chunk = settings.docStoreConn.get(req.get("id"), search.index_name(tenant_id), [dataset_id])
if not chunk:
return get_result(message=f"Chunk not found: {dataset_id}/{req.get('id')}", code=settings.RetCode.NOT_FOUND)
return get_result(message=f"Chunk not found: {dataset_id}/{req.get('id')}", code=RetCode.NOT_FOUND)
k = []
for n in chunk.keys():
if re.search(r"(_vec$|_sm_|_tks|_ltks)", n):
@ -1301,6 +1301,10 @@ def update_chunk(tenant_id, dataset_id, document_id, chunk_id):
d["question_tks"] = rag_tokenizer.tokenize("\n".join(req["questions"]))
if "available" in req:
d["available_int"] = int(req["available"])
if "positions" in req:
if not isinstance(req["positions"], list):
return get_error_data_result("`positions` should be a list")
d["position_int"] = req["positions"]
embd_id = DocumentService.get_embd_id(document_id)
embd_mdl = TenantLLMService.model_instance(tenant_id, LLMType.EMBEDDING.value, embd_id)
if doc.parser_id == ParserType.QA:
@ -1414,7 +1418,7 @@ def retrieval_test(tenant_id):
if len(embd_nms) != 1:
return get_result(
message='Datasets use different embedding models."',
code=settings.RetCode.DATA_ERROR,
code=RetCode.DATA_ERROR,
)
if "question" not in req:
return get_error_data_result("`question` is required.")
@ -1505,6 +1509,6 @@ def retrieval_test(tenant_id):
if str(e).find("not_found") > 0:
return get_result(
message="No chunk found! Check the chunk status please!",
code=settings.RetCode.DATA_ERROR,
code=RetCode.DATA_ERROR,
)
return server_error_response(e)

View File

@ -26,13 +26,13 @@ from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import server_error_response, token_required
from api.utils import get_uuid
from common.misc_utils import get_uuid
from api.db import FileType
from api.db.services import duplicate_name
from api.db.services.file_service import FileService
from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type
from rag.utils.storage_factory import STORAGE_IMPL
from common import settings
@manager.route('/file/upload', methods=['POST']) # noqa: F821
@ -126,7 +126,7 @@ def upload(tenant_id):
filetype = filename_type(file_obj_names[file_len - 1])
location = file_obj_names[file_len - 1]
while STORAGE_IMPL.obj_exist(last_folder.id, location):
while settings.STORAGE_IMPL.obj_exist(last_folder.id, location):
location += "_"
blob = file_obj.read()
filename = duplicate_name(FileService.query, name=file_obj_names[file_len - 1], parent_id=last_folder.id)
@ -142,7 +142,7 @@ def upload(tenant_id):
"size": len(blob),
}
file = FileService.insert(file)
STORAGE_IMPL.put(last_folder.id, location, blob)
settings.STORAGE_IMPL.put(last_folder.id, location, blob)
file_res.append(file.to_json())
return get_json_result(data=file_res)
except Exception as e:
@ -497,10 +497,10 @@ def rm(tenant_id):
e, file = FileService.get_by_id(inner_file_id)
if not e:
return get_json_result(message="File not found!", code=404)
STORAGE_IMPL.rm(file.parent_id, file.location)
settings.STORAGE_IMPL.rm(file.parent_id, file.location)
FileService.delete_folder_by_pf_id(tenant_id, file_id)
else:
STORAGE_IMPL.rm(file.parent_id, file.location)
settings.STORAGE_IMPL.rm(file.parent_id, file.location)
if not FileService.delete(file):
return get_json_result(message="Database error (File removal)!", code=500)
@ -614,10 +614,10 @@ def get(tenant_id, file_id):
if not e:
return get_json_result(message="Document not found!", code=404)
blob = STORAGE_IMPL.get(file.parent_id, file.location)
blob = settings.STORAGE_IMPL.get(file.parent_id, file.location)
if not blob:
b, n = File2DocumentService.get_storage_address(file_id=file_id)
blob = STORAGE_IMPL.get(b, n)
blob = settings.STORAGE_IMPL.get(b, n)
response = flask.make_response(blob)
ext = re.search(r"\.([^.]+)$", file.name)

View File

@ -21,11 +21,9 @@ import tiktoken
from flask import Response, jsonify, request
from agent.canvas import Canvas
from api import settings
from api.db import LLMType, StatusEnum
from api.db.db_models import APIToken
from api.db.services.api_service import API4ConversationService
from api.db.services.canvas_service import UserCanvasService, completionOpenAI
from api.db.services.canvas_service import UserCanvasService, completion_openai
from api.db.services.canvas_service import completion as agent_completion
from api.db.services.conversation_service import ConversationService, iframe_completion
from api.db.services.conversation_service import completion as rag_completion
@ -35,13 +33,14 @@ from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api.db.services.search_service import SearchService
from api.db.services.user_service import UserTenantService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_json_result, \
get_result, server_error_response, token_required, validate_request
from rag.app.tag import label_question
from rag.prompts.template import load_prompt
from rag.prompts.generator import cross_languages, gen_meta_filter, keyword_extraction, chunks_format
from common.constants import RetCode, LLMType, StatusEnum
from common import settings
@manager.route("/chats/<chat_id>/sessions", methods=["POST"]) # noqa: F821
@token_required
@ -412,7 +411,7 @@ def agents_completion_openai_compatibility(tenant_id, agent_id):
stream = req.pop("stream", False)
if stream:
resp = Response(
completionOpenAI(
completion_openai(
tenant_id,
agent_id,
question,
@ -430,7 +429,7 @@ def agents_completion_openai_compatibility(tenant_id, agent_id):
else:
# For non-streaming, just return the response directly
response = next(
completionOpenAI(
completion_openai(
tenant_id,
agent_id,
question,
@ -959,7 +958,7 @@ def retrieval_test_embedded():
kb_ids = [kb_ids]
if not kb_ids:
return get_json_result(data=False, message='Please specify dataset firstly.',
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.0))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
@ -996,7 +995,7 @@ def retrieval_test_embedded():
break
else:
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.",
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
@ -1034,7 +1033,7 @@ def retrieval_test_embedded():
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message="No chunk found! Check the chunk status please!",
code=settings.RetCode.DATA_ERROR)
code=RetCode.DATA_ERROR)
return server_error_response(e)
@ -1104,7 +1103,7 @@ def detail_share_embedded():
break
else:
return get_json_result(data=False, message="Has no permission for this operation.",
code=settings.RetCode.OPERATING_ERROR)
code=RetCode.OPERATING_ERROR)
search = SearchService.get_detail(search_id)
if not search:

View File

@ -17,14 +17,13 @@
from flask import request
from flask_login import current_user, login_required
from api import settings
from api.constants import DATASET_NAME_LIMIT
from api.db import StatusEnum
from api.db.db_models import DB
from api.db.services import duplicate_name
from api.db.services.search_service import SearchService
from api.db.services.user_service import TenantService, UserTenantService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.constants import RetCode, StatusEnum
from api.utils.api_utils import get_data_error_result, get_json_result, not_allowed_parameters, server_error_response, validate_request
@ -82,12 +81,12 @@ def update():
search_id = req["search_id"]
if not SearchService.accessible4deletion(search_id, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
try:
search_app = SearchService.query(tenant_id=tenant_id, id=search_id)[0]
if not search_app:
return get_json_result(data=False, message=f"Cannot find search {search_id}", code=settings.RetCode.DATA_ERROR)
return get_json_result(data=False, message=f"Cannot find search {search_id}", code=RetCode.DATA_ERROR)
if req["name"].lower() != search_app.name.lower() and len(SearchService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)) >= 1:
return get_data_error_result(message="Duplicated search name.")
@ -129,7 +128,7 @@ def detail():
if SearchService.query(tenant_id=tenant.tenant_id, id=search_id):
break
else:
return get_json_result(data=False, message="Has no permission for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Has no permission for this operation.", code=RetCode.OPERATING_ERROR)
search = SearchService.get_detail(search_id)
if not search:
@ -178,7 +177,7 @@ def rm():
req = request.get_json()
search_id = req["search_id"]
if not SearchService.accessible4deletion(search_id, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
try:
if not SearchService.delete_by_id(search_id):

View File

@ -23,21 +23,20 @@ from api.db.db_models import APIToken
from api.db.services.api_service import APITokenService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import UserTenantService
from api import settings
from api.utils.api_utils import (
get_json_result,
get_data_error_result,
server_error_response,
generate_confirmation_token,
)
from api.versions import get_ragflow_version
from common.versions import get_ragflow_version
from common.time_utils import current_timestamp, datetime_format
from rag.utils.storage_factory import STORAGE_IMPL, STORAGE_IMPL_TYPE
from timeit import default_timer as timer
from rag.utils.redis_conn import REDIS_CONN
from flask import jsonify
from api.utils.health_utils import run_health_checks
from common import settings
@manager.route("/version", methods=["GET"]) # noqa: F821
@ -112,15 +111,15 @@ def status():
st = timer()
try:
STORAGE_IMPL.health()
settings.STORAGE_IMPL.health()
res["storage"] = {
"storage": STORAGE_IMPL_TYPE.lower(),
"storage": settings.STORAGE_IMPL_TYPE.lower(),
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["storage"] = {
"storage": STORAGE_IMPL_TYPE.lower(),
"storage": settings.STORAGE_IMPL_TYPE.lower(),
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),

View File

@ -17,16 +17,17 @@
from flask import request
from flask_login import login_required, current_user
from api import settings
from api.apps import smtp_mail_server
from api.db import UserTenantRole, StatusEnum
from api.db import UserTenantRole
from api.db.db_models import UserTenant
from api.db.services.user_service import UserTenantService, UserService
from api.utils import get_uuid
from common.constants import RetCode, StatusEnum
from common.misc_utils import get_uuid
from common.time_utils import delta_seconds
from api.utils.api_utils import get_json_result, validate_request, server_error_response, get_data_error_result
from api.utils.web_utils import send_invite_email
from common import settings
@manager.route("/<tenant_id>/user/list", methods=["GET"]) # noqa: F821
@ -36,7 +37,7 @@ def user_list(tenant_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
code=RetCode.AUTHENTICATION_ERROR)
try:
users = UserTenantService.get_by_tenant_id(tenant_id)
@ -55,7 +56,7 @@ def create(tenant_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
code=RetCode.AUTHENTICATION_ERROR)
req = request.json
invite_user_email = req["email"]
@ -109,7 +110,7 @@ def rm(tenant_id, user_id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
code=RetCode.AUTHENTICATION_ERROR)
try:
UserTenantService.filter_delete([UserTenant.tenant_id == tenant_id, UserTenant.user_id == user_id])

View File

@ -26,7 +26,6 @@ from flask import redirect, request, session, make_response
from flask_login import current_user, login_required, login_user, logout_user
from werkzeug.security import check_password_hash, generate_password_hash
from api import settings
from api.apps.auth import get_auth_client
from api.db import FileType, UserTenantRole
from api.db.db_models import TenantLLM
@ -35,9 +34,10 @@ from api.db.services.llm_service import get_init_tenant_llm
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.user_service import TenantService, UserService, UserTenantService
from common.time_utils import current_timestamp, datetime_format, get_format_time
from api.utils import download_img, get_uuid
from common.misc_utils import download_img, get_uuid
from common.constants import RetCode
from common.connection_utils import construct_response
from api.utils.api_utils import (
construct_response,
get_data_error_result,
get_json_result,
server_error_response,
@ -57,6 +57,7 @@ from api.utils.web_utils import (
hash_code,
captcha_key,
)
from common import settings
@manager.route("/login", methods=["POST", "GET"]) # noqa: F821
@ -91,14 +92,14 @@ def login():
type: object
"""
if not request.json:
return get_json_result(data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="Unauthorized!")
return get_json_result(data=False, code=RetCode.AUTHENTICATION_ERROR, message="Unauthorized!")
email = request.json.get("email", "")
users = UserService.query(email=email)
if not users:
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
code=RetCode.AUTHENTICATION_ERROR,
message=f"Email: {email} is not registered!",
)
@ -106,14 +107,14 @@ def login():
try:
password = decrypt(password)
except BaseException:
return get_json_result(data=False, code=settings.RetCode.SERVER_ERROR, message="Fail to crypt password")
return get_json_result(data=False, code=RetCode.SERVER_ERROR, message="Fail to crypt password")
user = UserService.query_user(email, password)
if user and hasattr(user, 'is_active') and user.is_active == "0":
return get_json_result(
data=False,
code=settings.RetCode.FORBIDDEN,
code=RetCode.FORBIDDEN,
message="This account has been disabled, please contact the administrator!",
)
elif user:
@ -128,7 +129,7 @@ def login():
else:
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
code=RetCode.AUTHENTICATION_ERROR,
message="Email and password do not match!",
)
@ -151,7 +152,7 @@ def get_login_channels():
return get_json_result(data=channels)
except Exception as e:
logging.exception(e)
return get_json_result(data=[], message=f"Load channels failure, error: {str(e)}", code=settings.RetCode.EXCEPTION_ERROR)
return get_json_result(data=[], message=f"Load channels failure, error: {str(e)}", code=RetCode.EXCEPTION_ERROR)
@manager.route("/login/<channel>", methods=["GET"]) # noqa: F821
@ -535,7 +536,7 @@ def setting_user():
if not check_password_hash(current_user.password, decrypt(request_data["password"])):
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
code=RetCode.AUTHENTICATION_ERROR,
message="Password error!",
)
@ -563,7 +564,7 @@ def setting_user():
return get_json_result(data=True)
except Exception as e:
logging.exception(e)
return get_json_result(data=False, message="Update failure!", code=settings.RetCode.EXCEPTION_ERROR)
return get_json_result(data=False, message="Update failure!", code=RetCode.EXCEPTION_ERROR)
@manager.route("/info", methods=["GET"]) # noqa: F821
@ -693,7 +694,7 @@ def user_add():
return get_json_result(
data=False,
message="User registration is disabled!",
code=settings.RetCode.OPERATING_ERROR,
code=RetCode.OPERATING_ERROR,
)
req = request.json
@ -704,7 +705,7 @@ def user_add():
return get_json_result(
data=False,
message=f"Invalid email address: {email_address}!",
code=settings.RetCode.OPERATING_ERROR,
code=RetCode.OPERATING_ERROR,
)
# Check if the email address is already used
@ -712,7 +713,7 @@ def user_add():
return get_json_result(
data=False,
message=f"Email: {email_address} has already registered!",
code=settings.RetCode.OPERATING_ERROR,
code=RetCode.OPERATING_ERROR,
)
# Construct user info data
@ -747,7 +748,7 @@ def user_add():
return get_json_result(
data=False,
message=f"User registration failure, error: {str(e)}",
code=settings.RetCode.EXCEPTION_ERROR,
code=RetCode.EXCEPTION_ERROR,
)
@ -847,11 +848,11 @@ def forget_get_captcha():
"""
email = (request.args.get("email") or "")
if not email:
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="email is required")
return get_json_result(data=False, code=RetCode.ARGUMENT_ERROR, message="email is required")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False, code=settings.RetCode.DATA_ERROR, message="invalid email")
return get_json_result(data=False, code=RetCode.DATA_ERROR, message="invalid email")
# Generate captcha text
allowed = string.ascii_uppercase + string.digits
@ -878,17 +879,17 @@ def forget_send_otp():
captcha = (req.get("captcha") or "").strip()
if not email or not captcha:
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="email and captcha required")
return get_json_result(data=False, code=RetCode.ARGUMENT_ERROR, message="email and captcha required")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False, code=settings.RetCode.DATA_ERROR, message="invalid email")
return get_json_result(data=False, code=RetCode.DATA_ERROR, message="invalid email")
stored_captcha = REDIS_CONN.get(captcha_key(email))
if not stored_captcha:
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message="invalid or expired captcha")
return get_json_result(data=False, code=RetCode.NOT_EFFECTIVE, message="invalid or expired captcha")
if (stored_captcha or "").strip().lower() != captcha.lower():
return get_json_result(data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="invalid or expired captcha")
return get_json_result(data=False, code=RetCode.AUTHENTICATION_ERROR, message="invalid or expired captcha")
# Delete captcha to prevent reuse
REDIS_CONN.delete(captcha_key(email))
@ -903,7 +904,7 @@ def forget_send_otp():
elapsed = RESEND_COOLDOWN_SECONDS
remaining = RESEND_COOLDOWN_SECONDS - elapsed
if remaining > 0:
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message=f"you still have to wait {remaining} seconds")
return get_json_result(data=False, code=RetCode.NOT_EFFECTIVE, message=f"you still have to wait {remaining} seconds")
# Generate OTP (uppercase letters only) and store hashed
otp = "".join(secrets.choice(string.ascii_uppercase) for _ in range(OTP_LENGTH))
@ -928,9 +929,9 @@ def forget_send_otp():
ttl_min=ttl_min,
)
except Exception:
return get_json_result(data=False, code=settings.RetCode.SERVER_ERROR, message="failed to send email")
return get_json_result(data=False, code=RetCode.SERVER_ERROR, message="failed to send email")
return get_json_result(data=True, code=settings.RetCode.SUCCESS, message="verification passed, email sent")
return get_json_result(data=True, code=RetCode.SUCCESS, message="verification passed, email sent")
@manager.route("/forget", methods=["POST"]) # noqa: F821
@ -946,31 +947,31 @@ def forget():
new_pwd2 = req.get("confirm_new_password")
if not all([email, otp, new_pwd, new_pwd2]):
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="email, otp and passwords are required")
return get_json_result(data=False, code=RetCode.ARGUMENT_ERROR, message="email, otp and passwords are required")
# For reset, passwords are provided as-is (no decrypt needed)
if new_pwd != new_pwd2:
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="passwords do not match")
return get_json_result(data=False, code=RetCode.ARGUMENT_ERROR, message="passwords do not match")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False, code=settings.RetCode.DATA_ERROR, message="invalid email")
return get_json_result(data=False, code=RetCode.DATA_ERROR, message="invalid email")
user = users[0]
# Verify OTP from Redis
k_code, k_attempts, k_last, k_lock = otp_keys(email)
if REDIS_CONN.get(k_lock):
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message="too many attempts, try later")
return get_json_result(data=False, code=RetCode.NOT_EFFECTIVE, message="too many attempts, try later")
stored = REDIS_CONN.get(k_code)
if not stored:
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message="expired otp")
return get_json_result(data=False, code=RetCode.NOT_EFFECTIVE, message="expired otp")
try:
stored_hash, salt_hex = str(stored).split(":", 1)
salt = bytes.fromhex(salt_hex)
except Exception:
return get_json_result(data=False, code=settings.RetCode.EXCEPTION_ERROR, message="otp storage corrupted")
return get_json_result(data=False, code=RetCode.EXCEPTION_ERROR, message="otp storage corrupted")
# Case-insensitive verification: OTP generated uppercase
calc = hash_code(otp.upper(), salt)
@ -983,7 +984,7 @@ def forget():
REDIS_CONN.set(k_attempts, attempts, OTP_TTL_SECONDS)
if attempts >= ATTEMPT_LIMIT:
REDIS_CONN.set(k_lock, int(time.time()), ATTEMPT_LOCK_SECONDS)
return get_json_result(data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="expired otp")
return get_json_result(data=False, code=RetCode.AUTHENTICATION_ERROR, message="expired otp")
# Success: consume OTP and reset password
REDIS_CONN.delete(k_code)
@ -995,7 +996,7 @@ def forget():
UserService.update_user_password(user.id, new_pwd)
except Exception as e:
logging.exception(e)
return get_json_result(data=False, code=settings.RetCode.EXCEPTION_ERROR, message="failed to reset password")
return get_json_result(data=False, code=RetCode.EXCEPTION_ERROR, message="failed to reset password")
# Auto login (reuse login flow)
user.access_token = get_uuid()
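The reset flow above stores the OTP as "hash:salt_hex" and verifies it case-insensitively; a standalone sketch of that storage format, with a stand-in hash_code (the real helper lives in api.utils.web_utils and may differ):

import hashlib
import os
import secrets
import string

OTP_LENGTH = 6  # assumed constant, mirroring the flow above

def hash_code(code: str, salt: bytes) -> str:
    # Illustrative salted hash; the real implementation may differ.
    return hashlib.sha256(salt + code.encode("utf-8")).hexdigest()

# Store: generate an uppercase OTP, hash it with a random salt, keep "hash:salt_hex".
otp = "".join(secrets.choice(string.ascii_uppercase) for _ in range(OTP_LENGTH))
salt = os.urandom(16)
stored = f"{hash_code(otp, salt)}:{salt.hex()}"

# Verify: split the stored value and re-hash the submitted code in uppercase,
# which makes the comparison case-insensitive since OTPs are generated uppercase.
submitted = otp.lower()
stored_hash, salt_hex = stored.split(":", 1)
assert hash_code(submitted.upper(), bytes.fromhex(salt_hex)) == stored_hash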

View File

@ -17,8 +17,6 @@ NAME_LENGTH_LIMIT = 2**10
IMG_BASE64_PREFIX = "data:image/png;base64,"
SERVICE_CONF = "service_conf.yaml"
API_VERSION = "v1"
RAG_FLOW_SERVICE_NAME = "ragflow"
REQUEST_WAIT_SEC = 2

View File

@ -13,21 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from enum import Enum
from enum import IntEnum
from strenum import StrEnum
class StatusEnum(Enum):
VALID = "1"
INVALID = "0"
class ActiveEnum(Enum):
ACTIVE = "1"
INACTIVE = "0"
class UserTenantRole(StrEnum):
OWNER = 'owner'
ADMIN = 'admin'
@ -56,76 +46,18 @@ class FileType(StrEnum):
VALID_FILE_TYPES = {FileType.PDF, FileType.DOC, FileType.VISUAL, FileType.AURAL, FileType.VIRTUAL, FileType.FOLDER, FileType.OTHER}
class LLMType(StrEnum):
CHAT = 'chat'
EMBEDDING = 'embedding'
SPEECH2TEXT = 'speech2text'
IMAGE2TEXT = 'image2text'
RERANK = 'rerank'
TTS = 'tts'
class ChatStyle(StrEnum):
CREATIVE = 'Creative'
PRECISE = 'Precise'
EVENLY = 'Evenly'
CUSTOM = 'Custom'
class TaskStatus(StrEnum):
UNSTART = "0"
RUNNING = "1"
CANCEL = "2"
DONE = "3"
FAIL = "4"
VALID_TASK_STATUS = {TaskStatus.UNSTART, TaskStatus.RUNNING, TaskStatus.CANCEL, TaskStatus.DONE, TaskStatus.FAIL}
class ParserType(StrEnum):
PRESENTATION = "presentation"
LAWS = "laws"
MANUAL = "manual"
PAPER = "paper"
RESUME = "resume"
BOOK = "book"
QA = "qa"
TABLE = "table"
NAIVE = "naive"
PICTURE = "picture"
ONE = "one"
AUDIO = "audio"
EMAIL = "email"
KG = "knowledge_graph"
TAG = "tag"
class FileSource(StrEnum):
LOCAL = ""
KNOWLEDGEBASE = "knowledgebase"
S3 = "s3"
class CanvasType(StrEnum):
ChatBot = "chatbot"
DocBot = "docbot"
class InputType(StrEnum):
LOAD_STATE = "load_state" # e.g. loading a current full state or a save state, such as from a file
POLL = "poll" # e.g. calling an API to get all documents in the last hour
EVENT = "event" # e.g. registered an endpoint as a listener, and processing connector events
SLIM_RETRIEVAL = "slim_retrieval"
class CanvasCategory(StrEnum):
Agent = "agent_canvas"
DataFlow = "dataflow_canvas"
VALID_CANVAS_CATEGORIES = {CanvasCategory.Agent, CanvasCategory.DataFlow}
class MCPServerType(StrEnum):
SSE = "sse"
STREAMABLE_HTTP = "streamable-http"
VALID_MCP_SERVER_TYPES = {MCPServerType.SSE, MCPServerType.STREAMABLE_HTTP}
class PipelineTaskType(StrEnum):
PARSE = "Parse"
@ -138,4 +70,7 @@ class PipelineTaskType(StrEnum):
VALID_PIPELINE_TASK_TYPES = {PipelineTaskType.PARSE, PipelineTaskType.DOWNLOAD, PipelineTaskType.RAPTOR, PipelineTaskType.GRAPH_RAG, PipelineTaskType.MINDMAP}
PIPELINE_SPECIAL_PROGRESS_FREEZE_TASK_TYPES = {PipelineTaskType.RAPTOR.lower(), PipelineTaskType.GRAPH_RAG.lower(), PipelineTaskType.MINDMAP.lower()}
KNOWLEDGEBASE_FOLDER_NAME=".knowledgebase"

View File

@ -21,6 +21,7 @@ import os
import sys
import time
import typing
from datetime import datetime, timezone
from enum import Enum
from functools import wraps
@ -30,24 +31,15 @@ from peewee import InterfaceError, OperationalError, BigIntegerField, BooleanFie
from playhouse.migrate import MySQLMigrator, PostgresqlMigrator, migrate
from playhouse.pool import PooledMySQLDatabase, PooledPostgresqlDatabase
from api import settings, utils
from api.db import ParserType, SerializedType
from api import utils
from api.db import SerializedType
from api.utils.json_encode import json_dumps, json_loads
from api.utils.configs import deserialize_b64, serialize_b64
from common.time_utils import current_timestamp, timestamp_to_date, date_string_to_timestamp
def singleton(cls, *args, **kw):
instances = {}
def _singleton():
key = str(cls) + str(os.getpid())
if key not in instances:
instances[key] = cls(*args, **kw)
return instances[key]
return _singleton
from common.decorator import singleton
from common.constants import ParserType
from common import settings
CONTINUOUS_FIELD_TYPE = {IntegerField, FloatField, DateTimeField}
@ -379,6 +371,7 @@ class RetryingPooledPostgresqlDatabase(PooledPostgresqlDatabase):
time.sleep(self.retry_delay * (2 ** attempt))
else:
raise
return None
class PooledDatabase(Enum):
@ -676,6 +669,7 @@ class LLMFactories(DataBaseModel):
name = CharField(max_length=128, null=False, help_text="LLM factory name", primary_key=True)
logo = TextField(null=True, help_text="llm logo base64")
tags = CharField(max_length=255, null=False, help_text="LLM, Text Embedding, Image2Text, ASR", index=True)
rank = IntegerField(default=0, index=False)
status = CharField(max_length=1, null=True, help_text="is it validate(0: wasted, 1: validate)", default="1", index=True)
def __str__(self):
@ -713,6 +707,7 @@ class TenantLLM(DataBaseModel):
api_base = CharField(max_length=255, null=True, help_text="API Base")
max_tokens = IntegerField(default=8192, index=True)
used_tokens = IntegerField(default=0, index=True)
status = CharField(max_length=1, null=False, help_text="is it validate(0: wasted, 1: validate)", default="1", index=True)
def __str__(self):
return self.llm_name
@ -1046,6 +1041,77 @@ class PipelineOperationLog(DataBaseModel):
db_table = "pipeline_operation_log"
class Connector(DataBaseModel):
id = CharField(max_length=32, primary_key=True)
tenant_id = CharField(max_length=32, null=False, index=True)
name = CharField(max_length=128, null=False, help_text="Search name", index=False)
source = CharField(max_length=128, null=False, help_text="Data source", index=True)
input_type = CharField(max_length=128, null=False, help_text="poll/event/..", index=True)
config = JSONField(null=False, default={})
refresh_freq = IntegerField(default=0, index=False)
prune_freq = IntegerField(default=0, index=False)
timeout_secs = IntegerField(default=3600, index=False)
indexing_start = DateTimeField(null=True, index=True)
status = CharField(max_length=16, null=True, help_text="schedule", default="schedule", index=True)
def __str__(self):
return self.name
class Meta:
db_table = "connector"
class Connector2Kb(DataBaseModel):
id = CharField(max_length=32, primary_key=True)
connector_id = CharField(max_length=32, null=False, index=True)
kb_id = CharField(max_length=32, null=False, index=True)
auto_parse = CharField(max_length=1, null=False, default="1", index=False)
class Meta:
db_table = "connector2kb"
class DateTimeTzField(CharField):
field_type = 'VARCHAR'
def db_value(self, value: datetime|None) -> str|None:
if value is not None:
if value.tzinfo is not None:
return value.isoformat()
else:
return value.replace(tzinfo=timezone.utc).isoformat()
return value
def python_value(self, value: str|None) -> datetime|None:
if value is not None:
dt = datetime.fromisoformat(value)
if dt.tzinfo is None:
return dt.replace(tzinfo=timezone.utc)
return dt
return value
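A quick round-trip sketch of the field above, assuming the class is in scope: naive datetimes are coerced to UTC on write and always come back timezone-aware on read.

from datetime import datetime

field = DateTimeTzField()
stored = field.db_value(datetime(2025, 11, 12, 14, 0))  # naive input
print(stored)                                            # "2025-11-12T14:00:00+00:00"
restored = field.python_value(stored)
print(restored.tzinfo is not None)                       # True: tz-aware on read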
class SyncLogs(DataBaseModel):
id = CharField(max_length=32, primary_key=True)
connector_id = CharField(max_length=32, index=True)
status = CharField(max_length=128, null=False, help_text="Processing status", index=True)
from_beginning = CharField(max_length=1, null=True, help_text="", default="0", index=False)
new_docs_indexed = IntegerField(default=0, index=False)
total_docs_indexed = IntegerField(default=0, index=False)
docs_removed_from_index = IntegerField(default=0, index=False)
error_msg = TextField(null=False, help_text="process message", default="")
error_count = IntegerField(default=0, index=False)
full_exception_trace = TextField(null=True, help_text="process message", default="")
time_started = DateTimeField(null=True, index=True)
poll_range_start = DateTimeTzField(max_length=255, null=True, index=True)
poll_range_end = DateTimeTzField(max_length=255, null=True, index=True)
kb_id = CharField(max_length=32, null=False, index=True)
class Meta:
db_table = "sync_logs"
def migrate_db():
logging.disable(logging.ERROR)
migrator = DatabaseMigrator[settings.DATABASE_TYPE.upper()].value(DB)
@ -1214,4 +1280,16 @@ def migrate_db():
migrate(migrator.alter_column_type("tenant_llm", "api_key", TextField(null=True, help_text="API KEY")))
except Exception:
pass
try:
migrate(migrator.add_column("tenant_llm", "status", CharField(max_length=1, null=False, help_text="is it validate(0: wasted, 1: validate)", default="1", index=True)))
except Exception:
pass
try:
migrate(migrator.add_column("connector2kb", "auto_parse", CharField(max_length=1, null=False, default="1", index=False)))
except Exception:
pass
try:
migrate(migrator.add_column("llm_factories", "rank", IntegerField(default=0, index=False)))
except Exception:
pass
logging.disable(logging.NOTSET)
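Each block above follows the same idempotent pattern: attempt the migration and swallow the failure if the column already exists, so migrate_db can run on every startup. A minimal standalone sketch, with placeholder credentials:

from peewee import IntegerField, MySQLDatabase
from playhouse.migrate import MySQLMigrator, migrate

db = MySQLDatabase("ragflow", user="root", password="...", host="localhost")  # placeholders
migrator = MySQLMigrator(db)

try:
    # Fails harmlessly when the column is already present, keeping re-runs safe.
    migrate(migrator.add_column("llm_factories", "rank", IntegerField(default=0)))
except Exception:
    pass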

View File

@ -20,7 +20,7 @@ import time
import uuid
from copy import deepcopy
from api.db import LLMType, UserTenantRole
from api.db import UserTenantRole
from api.db.db_models import init_database_tables as init_web_db, LLMFactories, LLM, TenantLLM
from api.db.services import UserService
from api.db.services.canvas_service import CanvasTemplateService
@ -29,8 +29,9 @@ from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.tenant_llm_service import LLMFactoriesService, TenantLLMService
from api.db.services.llm_service import LLMService, LLMBundle, get_init_tenant_llm
from api.db.services.user_service import TenantService, UserTenantService
from api import settings
from api.utils.file_utils import get_project_base_directory
from common.constants import LLMType
from common.file_utils import get_project_base_directory
from common import settings
from api.common.base64 import encode_to_base64
@ -88,13 +89,7 @@ def init_superuser():
def init_llm_factory():
try:
LLMService.filter_delete([(LLM.fid == "MiniMax" or LLM.fid == "Minimax")])
LLMService.filter_delete([(LLM.fid == "cohere")])
LLMFactoriesService.filter_delete([LLMFactories.name == "cohere"])
except Exception:
pass
LLMFactoriesService.filter_delete([1 == 1])
factory_llm_infos = settings.FACTORY_LLM_INFOS
for factory_llm_info in factory_llm_infos:
info = deepcopy(factory_llm_info)

View File

@ -16,9 +16,8 @@
import logging
import uuid
from api import settings
from api.utils.api_utils import group_by
from api.db import FileType, UserTenantRole, ActiveEnum
from api.db import FileType, UserTenantRole
from api.db.services.api_service import APITokenService, API4ConversationService
from api.db.services.canvas_service import UserCanvasService
from api.db.services.conversation_service import ConversationService
@ -35,9 +34,9 @@ from api.db.services.task_service import TaskService
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.user_canvas_version import UserCanvasVersionService
from api.db.services.user_service import TenantService, UserService, UserTenantService
from rag.utils.storage_factory import STORAGE_IMPL
from rag.nlp import search
from common.constants import ActiveEnum
from common import settings
def create_new_user(user_info: dict) -> dict:
"""
@ -158,8 +157,8 @@ def delete_user_data(user_id: str) -> dict:
if kb_ids:
# step1.1.1 delete files in storage, remove bucket
for kb_id in kb_ids:
if STORAGE_IMPL.bucket_exists(kb_id):
STORAGE_IMPL.remove_bucket(kb_id)
if settings.STORAGE_IMPL.bucket_exists(kb_id):
settings.STORAGE_IMPL.remove_bucket(kb_id)
done_msg += f"- Removed {len(kb_ids)} dataset's buckets.\n"
# step1.1.2 delete file and document info in db
doc_ids = DocumentService.get_all_doc_ids_by_kb_ids(kb_ids)
@ -218,7 +217,7 @@ def delete_user_data(user_id: str) -> dict:
if created_files:
# step2.1.1.1 delete file in storage
for f in created_files:
STORAGE_IMPL.rm(f.parent_id, f.location)
settings.STORAGE_IMPL.rm(f.parent_id, f.location)
done_msg += f"- Deleted {len(created_files)} uploaded file.\n"
# step2.1.1.2 delete file record
file_delete_res = FileService.delete_by_ids([f.id for f in created_files])

View File

@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from api.versions import get_ragflow_version
from common.versions import get_ragflow_version
from .reload_config_base import ReloadConfigBase

View File

@ -22,7 +22,7 @@ from api.db import CanvasCategory, TenantPermission
from api.db.db_models import DB, CanvasTemplate, User, UserCanvas, API4Conversation
from api.db.services.api_service import API4ConversationService
from api.db.services.common_service import CommonService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from api.utils.api_utils import get_data_openai
import tiktoken
from peewee import fn
@ -67,6 +67,7 @@ class UserCanvasService(CommonService):
# will get all permitted agents, be cautious
fields = [
cls.model.id,
cls.model.avatar,
cls.model.title,
cls.model.permission,
cls.model.canvas_type,
@ -232,9 +233,9 @@ def completion(tenant_id, agent_id, session_id=None, **kwargs):
API4ConversationService.append_message(conv["id"], conv)
def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True, **kwargs):
tiktokenenc = tiktoken.get_encoding("cl100k_base")
prompt_tokens = len(tiktokenenc.encode(str(question)))
def completion_openai(tenant_id, agent_id, question, session_id=None, stream=True, **kwargs):
tiktoken_encoder = tiktoken.get_encoding("cl100k_base")
prompt_tokens = len(tiktoken_encoder.encode(str(question)))
user_id = kwargs.get("user_id", "")
if stream:
@ -252,7 +253,7 @@ def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True
try:
ans = json.loads(ans[5:]) # remove "data:"
except Exception as e:
logging.exception(f"Agent OpenAI-Compatible completionOpenAI parse answer failed: {e}")
logging.exception(f"Agent OpenAI-Compatible completion_openai parse answer failed: {e}")
continue
if ans.get("event") not in ["message", "message_end"]:
continue
@ -261,7 +262,7 @@ def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True
if ans["event"] == "message":
content_piece = ans["data"]["content"]
completion_tokens += len(tiktokenenc.encode(content_piece))
completion_tokens += len(tiktoken_encoder.encode(content_piece))
openai_data = get_data_openai(
id=session_id or str(uuid4()),
@ -288,7 +289,7 @@ def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True
content=f"**ERROR**: {str(e)}",
finish_reason="stop",
prompt_tokens=prompt_tokens,
completion_tokens=len(tiktokenenc.encode(f"**ERROR**: {str(e)}")),
completion_tokens=len(tiktoken_encoder.encode(f"**ERROR**: {str(e)}")),
stream=True
),
ensure_ascii=False
@ -318,7 +319,7 @@ def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True
if ans.get("data", {}).get("reference", None):
reference.update(ans["data"]["reference"])
completion_tokens = len(tiktokenenc.encode(all_content))
completion_tokens = len(tiktoken_encoder.encode(all_content))
openai_data = get_data_openai(
id=session_id or str(uuid4()),
@ -340,7 +341,7 @@ def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True
id=session_id or str(uuid4()),
model=agent_id,
prompt_tokens=prompt_tokens,
completion_tokens=len(tiktokenenc.encode(f"**ERROR**: {str(e)}")),
completion_tokens=len(tiktoken_encoder.encode(f"**ERROR**: {str(e)}")),
content=f"**ERROR**: {str(e)}",
finish_reason="stop",
param=None

View File

@ -19,7 +19,7 @@ import peewee
from peewee import InterfaceError, OperationalError
from api.db.db_models import DB
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.time_utils import current_timestamp, datetime_format
def retry_db_operation(func):
@ -90,7 +90,7 @@ class CommonService:
else:
query_records = cls.model.select()
if reverse is not None:
if not order_by or not hasattr(cls, order_by):
if not order_by or not hasattr(cls.model, order_by):
order_by = "create_time"
if reverse is True:
query_records = query_records.order_by(cls.model.getter_by(order_by).desc())
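The one-character fix above matters because peewee fields are attributes of the model class, not of the service wrapper; a self-contained illustration with hypothetical names:

from peewee import CharField, Model

class Dialog(Model):
    name = CharField()

class DialogService:
    model = Dialog

# The old guard hasattr(cls, order_by) inspected the service class, so a valid
# field like "name" always fell back to "create_time"; checking cls.model fixes it.
assert not hasattr(DialogService, "name")
assert hasattr(DialogService.model, "name")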

View File

@ -0,0 +1,283 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from datetime import datetime
from typing import Tuple, List
from pydantic import BaseModel
from peewee import SQL, fn
from api.db import InputType
from api.db.db_models import Connector, SyncLogs, Connector2Kb, Knowledgebase
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.db.services.file_service import FileService
from common.misc_utils import get_uuid
from common.constants import TaskStatus
from common.time_utils import current_timestamp, timestamp_to_date
class ConnectorService(CommonService):
model = Connector
@classmethod
def resume(cls, connector_id, status):
for c2k in Connector2KbService.query(connector_id=connector_id):
task = SyncLogsService.get_latest_task(connector_id, c2k.kb_id)
if not task:
if status == TaskStatus.SCHEDULE:
SyncLogsService.schedule(connector_id, c2k.kb_id)
ConnectorService.update_by_id(connector_id, {"status": status})
return
if task.status == TaskStatus.DONE:
if status == TaskStatus.SCHEDULE:
SyncLogsService.schedule(connector_id, c2k.kb_id, task.poll_range_end, total_docs_indexed=task.total_docs_indexed)
ConnectorService.update_by_id(connector_id, {"status": status})
return
task = task.to_dict()
task["status"] = status
SyncLogsService.update_by_id(task["id"], task)
ConnectorService.update_by_id(connector_id, {"status": status})
@classmethod
def list(cls, tenant_id):
fields = [
cls.model.id,
cls.model.name,
cls.model.source,
cls.model.status
]
return list(cls.model.select(*fields).where(
cls.model.tenant_id == tenant_id
).dicts())
@classmethod
def rebuild(cls, kb_id:str, connector_id: str, tenant_id:str):
e, conn = cls.get_by_id(connector_id)
if not e:
return
SyncLogsService.filter_delete([SyncLogs.connector_id==connector_id, SyncLogs.kb_id==kb_id])
docs = DocumentService.query(source_type=f"{conn.source}/{conn.id}", kb_id=kb_id)
err = FileService.delete_docs([d.id for d in docs], tenant_id)
SyncLogsService.schedule(connector_id, kb_id, reindex=True)
return err
class SyncLogsService(CommonService):
model = SyncLogs
@classmethod
def list_sync_tasks(cls, connector_id=None, page_number=None, items_per_page=15) -> Tuple[List[dict], int]:
fields = [
cls.model.id,
cls.model.connector_id,
cls.model.kb_id,
cls.model.update_date,
cls.model.poll_range_start,
cls.model.poll_range_end,
cls.model.new_docs_indexed,
cls.model.total_docs_indexed,
cls.model.error_msg,
cls.model.full_exception_trace,
cls.model.error_count,
Connector.name,
Connector.source,
Connector.tenant_id,
Connector.timeout_secs,
Knowledgebase.name.alias("kb_name"),
Knowledgebase.avatar.alias("kb_avatar"),
Connector2Kb.auto_parse,
cls.model.from_beginning.alias("reindex"),
cls.model.status
]
if not connector_id:
fields.append(Connector.config)
query = cls.model.select(*fields)\
.join(Connector, on=(cls.model.connector_id==Connector.id))\
.join(Connector2Kb, on=(cls.model.kb_id==Connector2Kb.kb_id))\
.join(Knowledgebase, on=(cls.model.kb_id==Knowledgebase.id))
if connector_id:
query = query.where(cls.model.connector_id == connector_id)
else:
interval_expr = SQL("INTERVAL `t2`.`refresh_freq` MINUTE")
query = query.where(
Connector.input_type == InputType.POLL,
Connector.status == TaskStatus.SCHEDULE,
cls.model.status == TaskStatus.SCHEDULE,
cls.model.update_date < (fn.NOW() - interval_expr)
)
query = query.distinct().order_by(cls.model.update_time.desc())
total = query.count()
if page_number:
query = query.paginate(page_number, items_per_page)
return list(query.dicts()), total
@classmethod
def start(cls, id, connector_id):
cls.update_by_id(id, {"status": TaskStatus.RUNNING, "time_started": datetime.now().strftime('%Y-%m-%d %H:%M:%S') })
ConnectorService.update_by_id(connector_id, {"status": TaskStatus.RUNNING})
@classmethod
def done(cls, id, connector_id):
cls.update_by_id(id, {"status": TaskStatus.DONE})
ConnectorService.update_by_id(connector_id, {"status": TaskStatus.DONE})
@classmethod
def schedule(cls, connector_id, kb_id, poll_range_start=None, reindex=False, total_docs_indexed=0):
try:
if cls.model.select().where(cls.model.kb_id == kb_id, cls.model.connector_id == connector_id).count() > 100:
rm_ids = [m.id for m in cls.model.select(cls.model.id).where(cls.model.kb_id == kb_id, cls.model.connector_id == connector_id).order_by(cls.model.update_time.asc()).limit(70)]
deleted = cls.model.delete().where(cls.model.id.in_(rm_ids)).execute()
logging.info(f"[SyncLogService] Cleaned {deleted} old logs.")
except Exception as e:
logging.exception(e)
try:
e = cls.query(kb_id=kb_id, connector_id=connector_id, status=TaskStatus.SCHEDULE)
if e:
logging.warning(f"{kb_id}--{connector_id} has already had a scheduling sync task which is abnormal.")
return None
reindex = "1" if reindex else "0"
ConnectorService.update_by_id(connector_id, {"status": TaskStatus.SCHEDULE})
return cls.save(**{
"id": get_uuid(),
"kb_id": kb_id, "status": TaskStatus.SCHEDULE, "connector_id": connector_id,
"poll_range_start": poll_range_start, "from_beginning": reindex,
"total_docs_indexed": total_docs_indexed
})
except Exception as e:
logging.exception(e)
task = cls.get_latest_task(connector_id, kb_id)
if task:
cls.model.update(status=TaskStatus.SCHEDULE,
poll_range_start=poll_range_start,
error_msg=cls.model.error_msg + str(e),
full_exception_trace=cls.model.full_exception_trace + str(e)
) \
.where(cls.model.id == task.id).execute()
ConnectorService.update_by_id(connector_id, {"status": TaskStatus.SCHEDULE})
@classmethod
def increase_docs(cls, id, min_update, max_update, doc_num, err_msg="", error_count=0):
cls.model.update(new_docs_indexed=cls.model.new_docs_indexed + doc_num,
total_docs_indexed=cls.model.total_docs_indexed + doc_num,
poll_range_start=fn.COALESCE(fn.LEAST(cls.model.poll_range_start,min_update), min_update),
poll_range_end=fn.COALESCE(fn.GREATEST(cls.model.poll_range_end, max_update), max_update),
error_msg=cls.model.error_msg + err_msg,
error_count=cls.model.error_count + error_count,
update_time=current_timestamp(),
update_date=timestamp_to_date(current_timestamp())
)\
.where(cls.model.id == id).execute()
@classmethod
def duplicate_and_parse(cls, kb, docs, tenant_id, src, auto_parse=True):
if not docs:
return None
class FileObj(BaseModel):
filename: str
blob: bytes
def read(self) -> bytes:
return self.blob
errs = []
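# Append the file extension only if the identifier does not already end with it
# (the reversed-substring find below checks for that suffix).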
files = [FileObj(filename=d["semantic_identifier"]+(f"{d['extension']}" if d["semantic_identifier"][::-1].find(d['extension'][::-1])<0 else ""), blob=d["blob"]) for d in docs]
doc_ids = []
err, doc_blob_pairs = FileService.upload_document(kb, files, tenant_id, src)
errs.extend(err)
kb_table_num_map = {}
for doc, _ in doc_blob_pairs:
doc_ids.append(doc["id"])
if not auto_parse or auto_parse == "0":
continue
DocumentService.run(tenant_id, doc, kb_table_num_map)
return errs, doc_ids
@classmethod
def get_latest_task(cls, connector_id, kb_id):
return cls.model.select().where(
cls.model.connector_id==connector_id,
cls.model.kb_id == kb_id
).order_by(cls.model.update_time.desc()).first()
class Connector2KbService(CommonService):
model = Connector2Kb
@classmethod
def link_connectors(cls, kb_id:str, connectors: list[dict], tenant_id:str):
arr = cls.query(kb_id=kb_id)
old_conn_ids = [a.connector_id for a in arr]
connector_ids = []
for conn in connectors:
conn_id = conn["id"]
connector_ids.append(conn_id)
if conn_id in old_conn_ids:
cls.filter_update([cls.model.connector_id==conn_id, cls.model.kb_id==kb_id], {"auto_parse": conn.get("auto_parse", "1")})
continue
cls.save(**{
"id": get_uuid(),
"connector_id": conn_id,
"kb_id": kb_id,
"auto_parse": conn.get("auto_parse", "1")
})
SyncLogsService.schedule(conn_id, kb_id, reindex=True)
errs = []
for conn_id in old_conn_ids:
if conn_id in connector_ids:
continue
cls.filter_delete([cls.model.kb_id==kb_id, cls.model.connector_id==conn_id])
e, conn = ConnectorService.get_by_id(conn_id)
if not e:
continue
#SyncLogsService.filter_delete([SyncLogs.connector_id==conn_id, SyncLogs.kb_id==kb_id])
# Do not delete docs while unlinking.
SyncLogsService.filter_update([SyncLogs.connector_id==conn_id, SyncLogs.kb_id==kb_id, SyncLogs.status.in_([TaskStatus.SCHEDULE, TaskStatus.RUNNING])], {"status": TaskStatus.CANCEL})
#docs = DocumentService.query(source_type=f"{conn.source}/{conn.id}")
#err = FileService.delete_docs([d.id for d in docs], tenant_id)
#if err:
# errs.append(err)
return "\n".join(errs)
@classmethod
def list_connectors(cls, kb_id):
fields = [
Connector.id,
Connector.source,
Connector.name,
cls.model.auto_parse,
Connector.status
]
return list(cls.model.select(*fields)\
.join(Connector, on=(cls.model.connector_id==Connector.id))\
.where(
cls.model.kb_id==kb_id
).dicts()
)

View File

@@ -15,12 +15,12 @@
#
import time
from uuid import uuid4
from api.db import StatusEnum
from common.constants import StatusEnum
from api.db.db_models import Conversation, DB
from api.db.services.api_service import API4ConversationService
from api.db.services.common_service import CommonService
from api.db.services.dialog_service import DialogService, chat
from api.utils import get_uuid
from common.misc_utils import get_uuid
import json
from rag.prompts.generator import chunks_format

View File

@@ -25,8 +25,7 @@ import trio
from langfuse import Langfuse
from peewee import fn
from agentic_reasoning import DeepResearcher
from api import settings
from api.db import LLMType, ParserType, StatusEnum
from common.constants import LLMType, ParserType, StatusEnum
from api.db.db_models import DB, Dialog
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
@@ -41,9 +40,10 @@ from rag.app.tag import label_question
from rag.nlp.search import index_name
from rag.prompts.generator import chunks_format, citation_prompt, cross_languages, full_question, kb_prompt, keyword_extraction, message_fit_in, \
gen_meta_filter, PROMPT_JINJA_ENV, ASK_SUMMARY
from rag.utils import num_tokens_from_string
from common.token_utils import num_tokens_from_string
from rag.utils.tavily_conn import Tavily
from common.string_utils import remove_redundant_spaces
from common import settings
class DialogService(CommonService):
@@ -293,12 +293,13 @@ def meta_filter(metas: dict, filters: list[dict]):
def filter_out(v2docs, operator, value):
ids = []
for input, docids in v2docs.items():
try:
input = float(input)
value = float(value)
except Exception:
input = str(input)
value = str(value)
if operator in ["=", "≠", ">", "<", "≥", "≤"]:
try:
input = float(input)
value = float(value)
except Exception:
input = str(input)
value = str(value)
for conds in [
(operator == "contains", str(value).lower() in str(input).lower()),
@@ -618,7 +619,12 @@ def chat(dialog, messages, stream=True, **kwargs):
def use_sql(question, field_map, tenant_id, chat_mdl, quota=True, kb_ids=None):
sys_prompt = "You are a Database Administrator. You need to check the fields of the following tables based on the user's list of questions and write the SQL corresponding to the last question."
sys_prompt = """
You are a Database Administrator. You need to check the fields of the following tables based on the user's list of questions and write the SQL corresponding to the last question.
Ensure that:
1. Field names should not start with a digit. If any field name starts with a digit, use double quotes around it.
2. Write only the SQL, no explanations or additional text.
"""
user_prompt = """
Table name: {};
Table of database fields are as follows:
@@ -639,6 +645,7 @@ Please write the SQL, only SQL, without any other explanations or text.
sql = re.sub(r".*select ", "select ", sql.lower())
sql = re.sub(r" +", " ", sql)
sql = re.sub(r"([;]|```).*", "", sql)
sql = re.sub(r"&", "and", sql)
if sql[: len("select ")] != "select ":
return None, None
if not re.search(r"((sum|avg|max|min)\(|group by )", sql.lower()):
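# --- Illustrative sketch (not part of the diff): the normalization chain above,
# applied in order to a raw LLM answer. The helper name is hypothetical.
import re

def _sanitize_sql(sql: str) -> str:
    sql = re.sub(r".*select ", "select ", sql.lower())  # keep from the first SELECT on
    sql = re.sub(r" +", " ", sql)                       # collapse repeated spaces
    sql = re.sub(r"([;]|```).*", "", sql)               # drop trailing semicolon / code fence
    sql = re.sub(r"&", "and", sql)                      # '&' breaks execution, swap for 'and'
    return sql

# _sanitize_sql("SELECT COUNT(1) FROM tbl WHERE a & b; -- note")
# -> 'select count(1) from tbl where a and b'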

View File

@@ -26,22 +26,20 @@ import trio
import xxhash
from peewee import fn, Case, JOIN
from api import settings
from api.constants import IMG_BASE64_PREFIX, FILE_NAME_LEN_LIMIT
from api.db import FileType, LLMType, ParserType, StatusEnum, TaskStatus, UserTenantRole, CanvasCategory
from api.db import PIPELINE_SPECIAL_PROGRESS_FREEZE_TASK_TYPES, FileType, UserTenantRole, CanvasCategory
from api.db.db_models import DB, Document, Knowledgebase, Task, Tenant, UserTenant, File2Document, File, UserCanvas, \
User
from api.db.db_utils import bulk_insert_into_db
from api.db.services.common_service import CommonService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.time_utils import current_timestamp, get_format_time
from common.constants import LLMType, ParserType, StatusEnum, TaskStatus, SVR_CONSUMER_GROUP_NAME
from rag.nlp import rag_tokenizer, search
from rag.settings import get_svr_queue_name, SVR_CONSUMER_GROUP_NAME
from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.doc_store_conn import OrderByExpr
from common import settings
class DocumentService(CommonService):
model = Document
@@ -317,11 +315,11 @@ class DocumentService(CommonService):
all_chunk_ids.extend(chunk_ids)
page += 1
for cid in all_chunk_ids:
if STORAGE_IMPL.obj_exist(doc.kb_id, cid):
STORAGE_IMPL.rm(doc.kb_id, cid)
if settings.STORAGE_IMPL.obj_exist(doc.kb_id, cid):
settings.STORAGE_IMPL.rm(doc.kb_id, cid)
if doc.thumbnail and not doc.thumbnail.startswith(IMG_BASE64_PREFIX):
if STORAGE_IMPL.obj_exist(doc.kb_id, doc.thumbnail):
STORAGE_IMPL.rm(doc.kb_id, doc.thumbnail)
if settings.STORAGE_IMPL.obj_exist(doc.kb_id, doc.thumbnail):
settings.STORAGE_IMPL.rm(doc.kb_id, doc.thumbnail)
settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)
graph_source = settings.docStoreConn.getFields(
@@ -374,12 +372,16 @@ class DocumentService(CommonService):
def get_unfinished_docs(cls):
fields = [cls.model.id, cls.model.process_begin_at, cls.model.parser_config, cls.model.progress_msg,
cls.model.run, cls.model.parser_id]
unfinished_task_query = Task.select(Task.doc_id).where(
(Task.progress >= 0) & (Task.progress < 1)
)
docs = cls.model.select(*fields) \
.where(
cls.model.status == StatusEnum.VALID.value,
~(cls.model.type == FileType.VIRTUAL.value),
cls.model.progress < 1,
cls.model.progress > 0)
(((cls.model.progress < 1) & (cls.model.progress > 0)) |
(cls.model.id.in_(unfinished_task_query)))) # including unfinished tasks like GraphRAG, RAPTOR and Mindmap
return list(docs.dicts())
@classmethod
@@ -621,12 +623,17 @@ class DocumentService(CommonService):
@classmethod
@DB.connection_context()
def begin2parse(cls, docid):
cls.update_by_id(
docid, {"progress": random.random() * 1 / 100.,
"progress_msg": "Task is queued...",
"process_begin_at": get_format_time()
})
def begin2parse(cls, doc_id, keep_progress=False):
info = {
"progress_msg": "Task is queued...",
"process_begin_at": get_format_time(),
}
if not keep_progress:
info["progress"] = random.random() * 1 / 100.
info["run"] = TaskStatus.RUNNING.value
# keep the doc in DONE state when keep_progress=True for GraphRAG, RAPTOR and Mindmap tasks
cls.update_by_id(doc_id, info)
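# --- Illustrative usage (not part of the diff): the two begin2parse modes.
DocumentService.begin2parse(doc_id)                      # fresh parse: reset progress, mark RUNNING
DocumentService.begin2parse(doc_id, keep_progress=True)  # GraphRAG/RAPTOR/Mindmap: keep the DONE state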
@classmethod
@DB.connection_context()
@@ -685,8 +692,13 @@ class DocumentService(CommonService):
bad = 0
e, doc = DocumentService.get_by_id(d["id"])
status = doc.run # TaskStatus.RUNNING.value
doc_progress = doc.progress if doc and doc.progress else 0.0
special_task_running = False
priority = 0
for t in tsks:
task_type = (t.task_type or "").lower()
if task_type in PIPELINE_SPECIAL_PROGRESS_FREEZE_TASK_TYPES:
special_task_running = True
if 0 <= t.progress < 1:
finished = False
if t.progress == -1:
@@ -703,13 +715,15 @@ class DocumentService(CommonService):
prg = 1
status = TaskStatus.DONE.value
# freeze progress only for special tasks on docs that are already parsed but not yet finished
freeze_progress = special_task_running and doc_progress >= 1 and not finished
msg = "\n".join(sorted(msg))
info = {
"process_duration": datetime.timestamp(
datetime.now()) -
d["process_begin_at"].timestamp(),
"run": status}
if prg != 0:
if prg != 0 and not freeze_progress:
info["progress"] = prg
if msg:
info["progress_msg"] = msg
@@ -756,6 +770,14 @@ class DocumentService(CommonService):
.where((cls.model.kb_id == kb_id) & (cls.model.run == TaskStatus.CANCEL))
.scalar()
)
downloaded = (
cls.model.select(fn.COUNT(1))
.where(
cls.model.kb_id == kb_id,
cls.model.source_type != "local"
)
.scalar()
)
row = (
cls.model.select(
@@ -792,8 +814,32 @@ class DocumentService(CommonService):
"finished": int(row["finished"]),
"failed": int(row["failed"]),
"cancelled": int(cancelled),
"downloaded": int(downloaded)
}
@classmethod
def run(cls, tenant_id:str, doc:dict, kb_table_num_map:dict):
from api.db.services.task_service import queue_dataflow, queue_tasks
from api.db.services.file2document_service import File2DocumentService
doc["tenant_id"] = tenant_id
doc_parser = doc.get("parser_id", ParserType.NAIVE)
if doc_parser == ParserType.TABLE:
kb_id = doc.get("kb_id")
if not kb_id:
return
if kb_id not in kb_table_num_map:
count = DocumentService.count_by_kb_id(kb_id=kb_id, keywords="", run_status=[TaskStatus.DONE], types=[])
kb_table_num_map[kb_id] = count
if kb_table_num_map[kb_id] <= 0:
KnowledgebaseService.delete_field_map(kb_id)
if doc.get("pipeline_id", ""):
queue_dataflow(tenant_id, flow_id=doc["pipeline_id"], task_id=get_uuid(), doc_id=doc["id"])
else:
bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
queue_tasks(doc, bucket, name, 0)
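# --- Illustrative usage (not part of the diff): dispatching freshly uploaded docs.
# `kb_table_num_map` caches per-KB counts of finished TABLE documents across a batch;
# duplicate_and_parse() above drives run() exactly this way.
kb_table_num_map = {}
for doc, _ in doc_blob_pairs:  # pairs as returned by FileService.upload_document
    DocumentService.run(tenant_id, doc, kb_table_num_map)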
def queue_raptor_o_graphrag_tasks(sample_doc_id, ty, priority, fake_doc_id="", doc_ids=[]):
"""
You can provide a fake_doc_id to bypass the knowledgebase-level restriction on tasks.
@@ -815,7 +861,7 @@ def queue_raptor_o_graphrag_tasks(sample_doc_id, ty, priority, fake_doc_id="", d
"to_page": 100000000,
"task_type": ty,
"progress_msg": datetime.now().strftime("%H:%M:%S") + " created task " + ty,
"begin_at": datetime.now(),
"begin_at": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
}
task = new_task()
@@ -827,13 +873,13 @@ def queue_raptor_o_graphrag_tasks(sample_doc_id, ty, priority, fake_doc_id="", d
task["doc_id"] = fake_doc_id
task["doc_ids"] = doc_ids
DocumentService.begin2parse(sample_doc_id["id"])
assert REDIS_CONN.queue_product(get_svr_queue_name(priority), message=task), "Can't access Redis. Please check the Redis' status."
DocumentService.begin2parse(sample_doc_id["id"], keep_progress=True)
assert REDIS_CONN.queue_product(settings.get_svr_queue_name(priority), message=task), "Can't access Redis. Please check the Redis' status."
return task["id"]
def get_queue_length(priority):
group_info = REDIS_CONN.queue_info(get_svr_queue_name(priority), SVR_CONSUMER_GROUP_NAME)
group_info = REDIS_CONN.queue_info(settings.get_svr_queue_name(priority), SVR_CONSUMER_GROUP_NAME)
if not group_info:
return 0
return int(group_info.get("lag", 0) or 0)
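# --- Note (not part of the diff): "lag" appears to map to the Redis stream
# consumer-group field counting entries not yet delivered to the group,
# i.e. the pending sync backlog.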
@@ -915,7 +961,7 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
else:
d["image"].save(output_buffer, format='JPEG')
STORAGE_IMPL.put(kb.id, d["id"], output_buffer.getvalue())
settings.STORAGE_IMPL.put(kb.id, d["id"], output_buffer.getvalue())
d["img_id"] = "{}-{}".format(kb.id, d["id"])
d.pop("image", None)
docs.append(d)
@@ -962,7 +1008,7 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
"content_with_weight": mind_map,
"knowledge_graph_kwd": "mind_map"
})
except Exception as e:
except Exception:
logging.exception("Mind map generation error")
vects = embedding(doc_id, [c["content_with_weight"] for c in cks])
@@ -981,4 +1027,3 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
doc_id, kb.id, token_counts[doc_id], chunk_counts[doc_id], 0)
return [d["id"] for d, _ in files]

View File

@@ -15,7 +15,7 @@
#
from datetime import datetime
from api.db import FileSource
from common.constants import FileSource
from api.db.db_models import DB
from api.db.db_models import File, File2Document
from api.db.services.common_service import CommonService

View File

@@ -21,16 +21,19 @@ from pathlib import Path
from flask_login import current_user
from peewee import fn
from api.db import KNOWLEDGEBASE_FOLDER_NAME, FileSource, FileType, ParserType
from api.db.db_models import DB, Document, File, File2Document, Knowledgebase
from api.db import KNOWLEDGEBASE_FOLDER_NAME, FileType
from api.db.db_models import DB, Document, File, File2Document, Knowledgebase, Task
from api.db.services import duplicate_name
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.utils import get_uuid
from common.misc_utils import get_uuid
from common.constants import TaskStatus, FileSource, ParserType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.task_service import TaskService
from api.utils.file_utils import filename_type, read_potential_broken_pdf, thumbnail_img
from rag.llm.cv_model import GptV4
from rag.utils.storage_factory import STORAGE_IMPL
from common import settings
class FileService(CommonService):
@@ -420,7 +423,7 @@ class FileService(CommonService):
@classmethod
@DB.connection_context()
def upload_document(self, kb, file_objs, user_id):
def upload_document(self, kb, file_objs, user_id, src="local"):
root_folder = self.get_root_folder(user_id)
pf_id = root_folder["id"]
self.init_knowledgebase_docs(pf_id, user_id)
@@ -437,13 +440,13 @@ class FileService(CommonService):
raise RuntimeError("This type of file has not been supported yet!")
location = filename
while STORAGE_IMPL.obj_exist(kb.id, location):
while settings.STORAGE_IMPL.obj_exist(kb.id, location):
location += "_"
blob = file.read()
if filetype == FileType.PDF.value:
blob = read_potential_broken_pdf(blob)
STORAGE_IMPL.put(kb.id, location, blob)
settings.STORAGE_IMPL.put(kb.id, location, blob)
doc_id = get_uuid()
@@ -451,7 +454,7 @@ class FileService(CommonService):
thumbnail_location = ""
if img is not None:
thumbnail_location = f"thumbnail_{doc_id}.png"
STORAGE_IMPL.put(kb.id, thumbnail_location, img)
settings.STORAGE_IMPL.put(kb.id, thumbnail_location, img)
doc = {
"id": doc_id,
@@ -462,6 +465,7 @@ class FileService(CommonService):
"created_by": user_id,
"type": filetype,
"name": filename,
"source_type": src,
"suffix": Path(filename).suffix.lstrip("."),
"location": location,
"size": len(blob),
@@ -530,9 +534,54 @@ class FileService(CommonService):
@staticmethod
def get_blob(user_id, location):
bname = f"{user_id}-downloads"
return STORAGE_IMPL.get(bname, location)
return settings.STORAGE_IMPL.get(bname, location)
@staticmethod
def put_blob(user_id, location, blob):
bname = f"{user_id}-downloads"
return STORAGE_IMPL.put(bname, location, blob)
return settings.STORAGE_IMPL.put(bname, location, blob)
@classmethod
@DB.connection_context()
def delete_docs(cls, doc_ids, tenant_id):
root_folder = FileService.get_root_folder(tenant_id)
pf_id = root_folder["id"]
FileService.init_knowledgebase_docs(pf_id, tenant_id)
errors = ""
kb_table_num_map = {}
for doc_id in doc_ids:
try:
e, doc = DocumentService.get_by_id(doc_id)
if not e:
raise Exception("Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
raise Exception("Tenant not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
TaskService.filter_delete([Task.doc_id == doc_id])
if not DocumentService.remove_document(doc, tenant_id):
raise Exception("Database error (Document removal)!")
f2d = File2DocumentService.get_by_document_id(doc_id)
deleted_file_count = 0
if f2d:
deleted_file_count = FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc_id)
if deleted_file_count > 0:
settings.STORAGE_IMPL.rm(b, n)
doc_parser = doc.parser_id
if doc_parser == ParserType.TABLE:
kb_id = doc.kb_id
if kb_id not in kb_table_num_map:
counts = DocumentService.count_by_kb_id(kb_id=kb_id, keywords="", run_status=[TaskStatus.DONE], types=[])
kb_table_num_map[kb_id] = counts
kb_table_num_map[kb_id] -= 1
if kb_table_num_map[kb_id] <= 0:
KnowledgebaseService.delete_field_map(kb_id)
except Exception as e:
errors += str(e)
return errors
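# --- Illustrative usage (not part of the diff): batch deletion with error aggregation.
errors = FileService.delete_docs(doc_ids, tenant_id)
if errors:
    logging.warning("Some documents could not be deleted: %s", errors)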

View File

@@ -17,11 +17,16 @@ from datetime import datetime
from peewee import fn, JOIN
from api.db import StatusEnum, TenantPermission
from api.db import TenantPermission
from api.db.db_models import DB, Document, Knowledgebase, User, UserTenant, UserCanvas
from api.db.services.common_service import CommonService
from common.time_utils import current_timestamp, datetime_format
from api.db.services import duplicate_name
from api.db.services.user_service import TenantService
from common.misc_utils import get_uuid
from common.constants import StatusEnum
from api.constants import DATASET_NAME_LIMIT
from api.utils.api_utils import get_parser_config, get_data_error_result
class KnowledgebaseService(CommonService):
"""Service class for managing knowledge base operations.
@@ -87,7 +92,7 @@ class KnowledgebaseService(CommonService):
# Returns:
# If all documents are parsed successfully, returns (True, None)
# If any document is not fully parsed, returns (False, error_message)
from api.db import TaskStatus
from common.constants import TaskStatus
from api.db.services.document_service import DocumentService
# Get knowledge base information
@@ -196,6 +201,7 @@ class KnowledgebaseService(CommonService):
# will get all permitted kb, be cautious.
fields = [
cls.model.name,
cls.model.avatar,
cls.model.language,
cls.model.permission,
cls.model.doc_num,
@@ -281,7 +287,7 @@ class KnowledgebaseService(CommonService):
(cls.model.status == StatusEnum.VALID.value)
).dicts()
if not kbs:
return
return None
return kbs[0]
@classmethod
@@ -363,6 +369,64 @@ class KnowledgebaseService(CommonService):
# List of all knowledge base IDs
return [m["id"] for m in cls.model.select(cls.model.id).dicts()]
@classmethod
@DB.connection_context()
def create_with_name(
cls,
*,
name: str,
tenant_id: str,
parser_id: str | None = None,
**kwargs
):
"""Create a dataset (knowledgebase) by name with kb_app defaults.
This encapsulates the creation logic used in kb_app.create so other callers
(including RESTful endpoints) can reuse the same behavior.
Returns:
(ok: bool, model_or_msg): On success, returns (True, Knowledgebase model instance);
on failure, returns (False, error_message).
"""
# Validate name
if not isinstance(name, str):
return get_data_error_result(message="Dataset name must be string.")
dataset_name = name.strip()
if dataset_name == "":
return get_data_error_result(message="Dataset name can't be empty.")
if len(dataset_name.encode("utf-8")) > DATASET_NAME_LIMIT:
return get_data_error_result(message=f"Dataset name length is {len(dataset_name)} which is larger than {DATASET_NAME_LIMIT}")
# Deduplicate name within tenant
dataset_name = duplicate_name(
cls.query,
name=dataset_name,
tenant_id=tenant_id,
status=StatusEnum.VALID.value,
)
# Verify tenant exists
ok, _t = TenantService.get_by_id(tenant_id)
if not ok:
return False, "Tenant not found."
# Build payload
kb_id = get_uuid()
payload = {
"id": kb_id,
"name": dataset_name,
"tenant_id": tenant_id,
"created_by": tenant_id,
"parser_id": (parser_id or "naive"),
**kwargs
}
# Default parser_config (align with kb_app.create) — do not accept external overrides
payload["parser_config"] = get_parser_config(parser_id, kwargs.get("parser_config"))
return payload
@classmethod
@DB.connection_context()
def get_list(cls, joined_tenant_ids, user_id,

View File

@@ -16,7 +16,7 @@
import inspect
import logging
import re
from rag.utils import num_tokens_from_string
from common.token_utils import num_tokens_from_string
from functools import partial
from typing import Generator
from api.db.db_models import LLM
@@ -29,7 +29,7 @@ class LLMService(CommonService):
def get_init_tenant_llm(user_id):
from api import settings
from common import settings
tenant_llm = []
seen = set()

Some files were not shown because too many files have changed in this diff.