Compare commits

..

66 Commits

Author SHA1 Message Date
32dbed36e3 Fix: Unified terminology to "Pipeline" and optimized related component logic. #9869 (#10394)
### What problem does this PR solve?

Fix: Unified terminology to "Pipeline" and optimized related component
logic. #9869

- Added logic to clear pipeline_id when parseType changes in the chunk method dialog (sketched below).
- Fixed an issue in the Tooltip form component that prevented clicks from triggering saves.
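
A minimal sketch of that clearing logic, assuming a react-hook-form based dialog; the hook name, form shape, and field names are illustrative, not the actual component code:

```ts
import { useEffect } from 'react';
import { UseFormReturn } from 'react-hook-form';

// Hypothetical form shape for the chunk method dialog.
type ChunkMethodFormValues = { parseType: string; pipeline_id?: string };

export function useClearPipelineIdOnParseTypeChange(
  form: UseFormReturn<ChunkMethodFormValues>,
) {
  const parseType = form.watch('parseType');

  useEffect(() => {
    // A pipeline chosen for the previous parse type may no longer be valid,
    // so drop it whenever the parse type changes.
    form.setValue('pipeline_id', undefined);
  }, [parseType, form]);
}
```
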
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 19:53:15 +08:00
7f62ab8eb3 Feat: View data flow test results #9869 (#10392)
### What problem does this PR solve?

Feat: View data flow test results #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-30 18:55:55 +08:00
e87987785c fix(web): add data stream selection component (#10387)
### What problem does this PR solve?

fix(web): add data stream selection component

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 17:35:06 +08:00
b3b0be832a Fix: input (#10386)
### What problem does this PR solve?

Fix input of some parsers.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 15:39:09 +08:00
20b577a72c Fix: Merge main branch (#10377)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-09-30 13:13:15 +08:00
4d6ff672eb Fix: Added read-only mode support and optimized navigation logic #9869 (#10370)
### What problem does this PR solve?

Fix: Added read-only mode support and optimized navigation logic #9869

- Added the `isReadonly` property to the parseResult component to control the enabled state of editing and interactive features
- Added the `navigateToDataFile` navigation method to navigate to the data file details page
- Refactored the `navigateToDataflowResult` method to take an object parameter, supporting more flexible query parameter configuration (sketched below)
- Unified the `var(--accent-primary)` CSS variable format to `rgb(var(--accent-primary))` to accommodate more styling scenarios
- Extracted the parser initialization logic into a separate hook (`useParserInit`)
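
An illustrative sketch of the object-parameter refactor; the route path, query keys, and types are assumptions, not the project's actual code:

```ts
// Hypothetical query shape for the data flow result page.
type DataflowResultQuery = {
  id: string;
  documentId?: string;
  isReadonly?: boolean;
};

function navigateToDataflowResult(
  navigate: (to: string) => void,
  { id, documentId, isReadonly }: DataflowResultQuery,
) {
  const params = new URLSearchParams({ id });
  if (documentId) params.set('documentId', documentId);
  if (isReadonly) params.set('readonly', '1');
  // An object parameter lets callers add query fields later without
  // changing the call signature, unlike positional arguments.
  navigate(`/dataflow-result?${params.toString()}`);
}
```
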

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 12:00:29 +08:00
fb19e24f8a Feat: Delete flow related code. #9869 (#10371)
### What problem does this PR solve?

Feat: Delete flow related code. #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-30 12:00:17 +08:00
9989e06abb Fix: debug PDF positions.. (#10365)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 09:24:44 +08:00
c49e81882c Feat: Remove the copy icon from the toolbar for the Splitter and Parser nodes #9869 (#10367)
### What problem does this PR solve?
Feat: Remove the copy icon from the toolbar for the Splitter and Parser
nodes #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 18:55:53 +08:00
63cdce660e Feat: Limit the number of Splitter and Parser operators on the canvas to only one #9869 (#10362)
### What problem does this PR solve?

Feat: Limit the number of Splitter and Parser operators on the canvas to
only one #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 17:22:40 +08:00
8bc8126848 Feat: Move the github icon to the right #9869 (#10355)
### What problem does this PR solve?

Feat: Move the github icon to the right #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 11:50:58 +08:00
71f69cdb75 Fix: debug hierarchical merging... (#10337)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-29 09:29:33 +08:00
664bc0b961 Feat: Displays the loading status of the data flow log #9869 (#10347)
### What problem does this PR solve?

Feat: Displays the loading status of the data flow log #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 19:38:46 +08:00
f4cc4dbd30 Fix: Interoperate with the pipeline rerun and unbindTask interfaces. #9869 (#10346)
### What problem does this PR solve?

Fix: Interoperate with the pipeline rerun and unbindTask interfaces.
#9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 19:32:19 +08:00
cce361d774 Feat: Filter the agent list by owner and category #9869 (#10344)
### What problem does this PR solve?

Feat: Filter the agent list by owner and category #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 18:43:20 +08:00
7a63b6386e Feat: limit pipeline operation logs to 1000 records (#10341)
### What problem does this PR solve?

Limit pipeline operation logs to 1000 records.
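
A minimal sketch of such a cap, assuming newest-first ordering and an in-memory list; the names are illustrative:

```ts
const MAX_PIPELINE_LOG_RECORDS = 1000;

function appendPipelineLog<T>(logs: T[], entry: T): T[] {
  // Prepend the new entry, then keep only the newest 1000 records.
  return [entry, ...logs].slice(0, MAX_PIPELINE_LOG_RECORDS);
}
```
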

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 18:42:19 +08:00
4996dcb0eb Fix bug of image parser and prompt of parser supports customization (#10319)
### What problem does this PR solve?
BugFix: ERROR: KeyError: 'llm_id'
Feat: The describe-picture prompt in cv_model supports customization #10320


### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 12:47:36 +08:00
3521eb61fe Feat: add support for deleting KB tasks (#10335)
### What problem does this PR solve?

Add support for deleting KB tasks.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 12:46:00 +08:00
6b9b785b5c Feat: Fixed the issue where the cursor would go to the end when changing its own data #9869 (#10316)
### What problem does this PR solve?

Feat: Fixed the issue where the cursor would go to the end when changing
its own data #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 19:55:42 +08:00
4c0a89f262 Feat: add initial support for Mindmap (#10310)
### What problem does this PR solve?

Add initial support for Mindmap.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-09-26 19:45:01 +08:00
76b1ee2a00 Fix: debug pipeline... (#10311)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-26 19:11:30 +08:00
771a38434f Feat: Bring the parser operator when creating a new data flow #9869 (#10309)
### What problem does this PR solve?

Feat: Bring the parser operator when creating a new data flow #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 19:09:27 +08:00
886d38620e Fix: Improved knowledge base configuration and related logic #9869 (#10315)
### What problem does this PR solve?

Fix: Improved knowledge base configuration and related logic #9869
- Optimized the display logic of the Generate Log button to support displaying completion time and task ID
- Implemented the ability to pause task generation, wired to the data flow cancellation interface (sketched below)
- Fixed issues with type definitions and optional-chaining calls in some components
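
A hedged sketch of wiring the pause action to a cancel endpoint; the URL and error handling are assumptions, not the actual interface:

```ts
async function cancelDataflowTask(taskId: string): Promise<void> {
  // Hypothetical endpoint; the real cancellation route may differ.
  const res = await fetch(`/v1/pipeline/task/${taskId}/cancel`, {
    method: 'POST',
  });
  if (!res.ok) {
    throw new Error(`Failed to cancel task ${taskId}: HTTP ${res.status}`);
  }
}
```
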
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-26 19:09:11 +08:00
c7efaab30e Feat: debug extractor... (#10294)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 10:51:05 +08:00
ff49454501 Feat: fetch KB config for GraphRAG and RAPTOR (#10288)
### What problem does this PR solve?

Fetch KB config for GraphRAG and RAPTOR.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 09:39:58 +08:00
14273b4595 Fix: Optimized knowledge base file parsing and display #9869 (#10292)
### What problem does this PR solve?

Fix: Optimized knowledge base file parsing and display #9869

- Optimized the ChunkMethodDialog component logic and adjusted FormSchema validation rules
- Updated the document information interface definition, adding pipeline_id, pipeline_name, and suffix fields
- Refactored the ChunkResultBar component, removing filter-related logic and simplifying the input box and chunk creation functionality
- Improved FormatPreserveEditor to support text-mode switching (full/omitted) for display control (sketched after this list)
- Updated timeline node titles to more accurate semantic descriptions (e.g., character splitters)
- Optimized the data flow result page structure and style, dynamically adjusting height and content display
- Fixed the table sorting function on the dataset overview page and enhanced the display of task type icons and status mapping
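
An illustrative sketch of the full/omitted display modes; the truncation length and names are assumed:

```ts
type TextDisplayMode = 'full' | 'omitted';

function formatDisplayText(
  text: string,
  mode: TextDisplayMode,
  limit = 200,
): string {
  if (mode === 'full' || text.length <= limit) {
    return text;
  }
  // Omitted mode shows only the beginning and marks the cut point.
  return `${text.slice(0, limit)}…`;
}
```
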

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 19:53:49 +08:00
abe7132630 Feat: Change the corresponding prompt word according to the value of fieldName #9869 (#10291)
### What problem does this PR solve?

Feat: Change the corresponding prompt word according to the value of
fieldName #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 19:53:37 +08:00
c1151519a0 Feat: add foundational support for RAPTOR dataset pipeline logs (#10277)
### What problem does this PR solve?

Add foundational support for RAPTOR dataset pipeline logs.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 16:46:24 +08:00
a1147ce609 Feat: Allows the extractor operator's prompt to reference the output of an upstream operator #9869 (#10279)
### What problem does this PR solve?

Feat: Allows the extractor operator's prompt to reference the output of
an upstream operator #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 15:24:24 +08:00
d907e79893 Refa: fake doc ID. (#10276)
### What problem does this PR solve?
#10273
### Type of change

- [x] Refactoring
2025-09-25 13:52:50 +08:00
1b19d302c5 Feat: add extractor component. (#10271)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 11:34:47 +08:00
840b2b5809 Feat: add foundational support for GraphRAG dataset pipeline logs (#10264)
### What problem does this PR solve?

Add foundational support for GraphRAG dataset pipeline logs

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 09:35:50 +08:00
a6039cf563 Fix: Optimized the timeline component and parser editing features #9869 (#10268)
### What problem does this PR solve?

Fix: Optimized the timeline component and parser editing features #9869

- Introduced the TimelineNodeType type, restructured the timeline node structure, and supported dynamic node generation (sketched below)
- Enhanced the FormatPreserveEditor component to support editing and line wrapping of JSON-formatted content
- Added a rerun function and loading state to the parser and splitter components
- Adjusted the timeline style and interaction logic to enhance the user experience
- Improved the modal component and added a destroy method to support more flexible control
- Optimized the chunk result display and operation logic, supporting batch deletion and selection
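
A hedged sketch of the dynamic node generation; the TimelineNodeType fields below are assumptions based on the description above:

```ts
interface TimelineNodeType {
  id: string;
  title: string;
  status: 'pending' | 'running' | 'done' | 'failed';
}

function buildTimelineNodes(stepTitles: string[]): TimelineNodeType[] {
  // Nodes are derived from the pipeline's steps instead of being
  // hard-coded, so the timeline adapts to each data flow.
  return stepTitles.map((title, index) => ({
    id: `node-${index}`,
    title,
    status: 'pending',
  }));
}
```
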
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-24 19:58:30 +08:00
8be7380b79 Feat: Added the context operator form for data flow #9869 (#10270)
### What problem does this PR solve?
Feat: Added the context operator form for data flow #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 19:58:16 +08:00
afb8a84f7b Feat: Add context node #9869 (#10266)
### What problem does this PR solve?

Feat: Add context node #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 18:48:31 +08:00
6bf0cda16f Feat: Cancel a running data flow test #9869 (#10257)
### What problem does this PR solve?

Feat: Cancel a running data flow test #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 16:33:33 +08:00
5715ca6b74 Fix: pipeline debug... (#10206)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 11:12:08 +08:00
8f465525f7 Feat: Display the log after the data flow runs #9869 (#10232)
### What problem does this PR solve?

Feat: Display the log after the data flow runs #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-23 19:30:47 +08:00
f20dca2895 Fix: Interface integration for the file log page in the overview #9869 (#10222)
### What problem does this PR solve?

Fix: Interface integration for the file log page in the overview

- Support for selecting data pipeline parsing types
- Use the RunningStatus enumeration instead of numeric status codes (sketched below)
- Obtain and display data pipeline file log details
- Replace existing mock data with new interface data on the page
- Link the file log list to the real data source
- Optimize log information display
- Fixed a typo in the "pipeline_id" field name
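
A sketch of the enum-over-magic-numbers idea; the member names and values are assumptions, not the project's actual RunningStatus definition:

```ts
enum RunningStatus {
  UNSTART = '0',
  RUNNING = '1',
  CANCEL = '2',
  DONE = '3',
  FAIL = '4',
}

function isFinished(status: RunningStatus): boolean {
  // Named members read better than comparisons against bare numbers.
  return status === RunningStatus.DONE || status === RunningStatus.FAIL;
}
```
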

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-23 10:33:17 +08:00
0c557e37ad Feat: add support for pipeline logs operation (#10207)
### What problem does this PR solve?

Add support for pipeline logs operation

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-23 09:46:31 +08:00
d0bfe8b10c Feat: Display the data flow log on the far right. #9869 (#10214)
### What problem does this PR solve?

Feat: Display the data flow log on the far right. #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 19:13:18 +08:00
28afc7e67d Feat: Exporting the results of data flow tests #9869 (#10209)
### What problem does this PR solve?

Feat: Exporting the results of data flow tests #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 18:08:04 +08:00
73c33bc8d2 Fix: Fixed the issue where the drop-down box could not be displayed after selecting a large model #9869 (#10205)
### What problem does this PR solve?

Fix: Fixed the issue where the drop-down box could not be displayed
after selecting a large model #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-22 17:16:34 +08:00
476852e8f1 Feat: Remove useless files from the data flow #9869 (#10198)
### What problem does this PR solve?

Feat: Remove useless files from the data flow #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 15:48:39 +08:00
e6cf00cb33 Feat: Add suffix field to all operators #9869 (#10195)
### What problem does this PR solve?

Feat: Add suffix field to all operators #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 14:37:06 +08:00
d039d1e73d fix: Added dataset generation logging functionality #9869 (#10180)
### What problem does this PR solve?

fix: Added dataset generation logging functionality #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-22 10:01:34 +08:00
d050ef568d Feat: support dataflow run. (#10182)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 09:36:21 +08:00
028c2d83e9 Feat: parse email (#10181)
### What problem does this PR solve?

- Dataflow support email.
- Fix old email parser.
- Add new depends to parse msg file.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Other (please describe): add new depends.
2025-09-22 09:29:38 +08:00
b5d6a6e8f2 Feat: Remove unnecessary data from the dsl #9869 (#10177)
### What problem does this PR solve?
Feat: Remove unnecessary data from the dsl #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 19:06:33 +08:00
5dfdbcce3a Feat: pipeline supports PPTX (#10167)
### What problem does this PR solve?

Pipeline supports parsing PPTX naively (text only).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 12:14:35 +08:00
4fae40f66a Feat: Translate the splitter operator field #9869 (#10166)
### What problem does this PR solve?

Feat: Translate the splitter operator field #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 11:11:22 +08:00
a1b947ffd6 Feat: add splitter (#10161)
### What problem does this PR solve?


### Type of change
- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
2025-09-19 10:15:19 +08:00
f9c7404bee Fix: Updated color parsing functions and optimized component logic. (#10159)
### What problem does this PR solve?

refactor(timeline, modal, dataflow-result, dataset-overview): Updated
color parsing functions and optimized component logic.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 09:57:44 +08:00
5c1791d7f0 Feat: Upload files on the data flow page #9869 (#10153)
### What problem does this PR solve?

Feat: Upload files on the data flow page #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 16:19:53 +08:00
e82617f6de feat(dataset): Added data pipeline configuration functionality #9869 (#10132)
### What problem does this PR solve?

feat(dataset): Added data pipeline configuration functionality #9869

- Added a data pipeline selection component to link data pipelines with knowledge bases
- Added file filtering functionality, supporting custom file-filtering rules (sketched after this list)
- Optimized the configuration interface layout, adjusting style and spacing
- Introduced new icons and buttons to enhance the user experience
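
A hedged sketch of a custom file-filtering rule, assuming suffix and size checks; the rule shape is illustrative:

```ts
type FileFilterRule = {
  suffixes: string[]; // e.g. ['.pdf', '.docx'], matched case-insensitively
  maxSizeMb?: number;
};

function fileMatchesRule(
  name: string,
  sizeMb: number,
  rule: FileFilterRule,
): boolean {
  const lower = name.toLowerCase();
  const suffixOk = rule.suffixes.some((s) => lower.endsWith(s.toLowerCase()));
  const sizeOk = rule.maxSizeMb === undefined || sizeMb <= rule.maxSizeMb;
  return suffixOk && sizeOk;
}
```
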

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 09:31:57 +08:00
a7abc57f68 Feat: Add SliderInputFormField story #9869 (#10138)
### What problem does this PR solve?

Feat: Add SliderInputFormField story #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 09:29:33 +08:00
cf1f523d03 Feat: Create a data flow #9869 (#10131)
### What problem does this PR solve?

Feat: Create a data flow #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-17 17:54:21 +08:00
ccb255919a Feat: Add HierarchicalMergerForm #9869 (#10122)
### What problem does this PR solve?
Feat: Add HierarchicalMergerForm #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-17 13:47:50 +08:00
b68c84b52e Feat: Add splitter form #9869 (#10115)
### What problem does this PR solve?

Feat: Add splitter form #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-17 09:36:54 +08:00
93cf0258c3 Feat: Add splitter node component #9869 (#10114)
### What problem does this PR solve?

Feat: Add splitter node component #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 17:53:48 +08:00
b79fef1ca8 fix: Modify icon file, knowledge base display style (#10104)
### What problem does this PR solve?

fix: Modify icon file, knowledge base display style #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-16 10:37:08 +08:00
2b50de3186 Feat: Translate the fields of the parsing operator #9869 (#10079)
### What problem does this PR solve?

Feat: Translate the fields of the parsing operator #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-15 11:24:19 +08:00
d8ef22db68 Fix(dataset): Optimized the dataset configuration page UI #9869 (#10066)
### What problem does this PR solve?
fix(dataset): Optimized the dataset configuration page UI

- Added the DataPipelineSelect component for selecting data pipelines
- Restructured the layout and style of the dataset settings page
- Removed unnecessary components and code
- Optimized data pipeline configuration
- Adjusted the Create Dataset dialog box
- Updated the processing log modal style

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-12 16:01:37 +08:00
592f3b1555 Feat: Bind options to the parser operator form. #9869 (#10069)
### What problem does this PR solve?

Feat: Bind options to the parser operator form. #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-12 16:01:24 +08:00
3404469e2a Feat: Dynamically increase the configuration of the parser operator #9869 (#10060)
### What problem does this PR solve?

Feat: Dynamically increase the configuration of the parser operator
#9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-12 10:14:26 +08:00
63d7382dc9 fix: Displays the dataset creation and settings page #9869 (#10052)
### What problem does this PR solve?

fix: Displays the dataset creation and settings page #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 17:25:07 +08:00
914 changed files with 21300 additions and 49592 deletions

@@ -16,7 +16,7 @@ concurrency:
 jobs:
   release:
-    runs-on: [ "self-hosted", "ragflow-test" ]
+    runs-on: [ "self-hosted", "overseas" ]
     steps:
       - name: Ensure workspace ownership
         run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
@@ -25,7 +25,7 @@ jobs:
       - name: Check out code
         uses: actions/checkout@v4
         with:
-          token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable
+          token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
          fetch-depth: 0
          fetch-tags: true
@@ -69,26 +69,54 @@ jobs:
         # https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release
         uses: softprops/action-gh-release@v2
         with:
-          token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable
+          token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
           prerelease: ${{ env.PRERELEASE }}
           tag_name: ${{ env.RELEASE_TAG }}
           # The body field does not support environment variable substitution directly.
           body_path: release_body.md
-      - name: Build and push ragflow-sdk
+      # https://github.com/marketplace/actions/docker-login
+      - name: Login to Docker Hub
+        uses: docker/login-action@v3
+        with:
+          username: infiniflow
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      # https://github.com/marketplace/actions/build-and-push-docker-images
+      - name: Build and push full image
+        uses: docker/build-push-action@v6
+        with:
+          context: .
+          push: true
+          tags: |
+            infiniflow/ragflow:${{ env.RELEASE_TAG }}
+            infiniflow/ragflow:latest-full
+          file: Dockerfile
+          platforms: linux/amd64
+      # https://github.com/marketplace/actions/build-and-push-docker-images
+      - name: Build and push slim image
+        uses: docker/build-push-action@v6
+        with:
+          context: .
+          push: true
+          tags: |
+            infiniflow/ragflow:${{ env.RELEASE_TAG }}-slim
+            infiniflow/ragflow:latest-slim
+          file: Dockerfile
+          build-args: LIGHTEN=1
+          platforms: linux/amd64
+      - name: Build ragflow-sdk
         if: startsWith(github.ref, 'refs/tags/v')
         run: |
-          cd sdk/python && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }}
+          cd sdk/python && \
+          uv build
-      - name: Build and push ragflow-cli
+      - name: Publish package distributions to PyPI
         if: startsWith(github.ref, 'refs/tags/v')
-        run: |
-          cd admin/client && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }}
-      - name: Build and push image
-        run: |
-          echo ${{ secrets.DOCKERHUB_TOKEN }} | sudo docker login --username infiniflow --password-stdin
-          sudo docker build --build-arg NEED_MIRROR=1 -t infiniflow/ragflow:${RELEASE_TAG} -f Dockerfile .
-          sudo docker tag infiniflow/ragflow:${RELEASE_TAG} infiniflow/ragflow:latest
-          sudo docker push infiniflow/ragflow:${RELEASE_TAG}
-          sudo docker push infiniflow/ragflow:latest
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          packages-dir: sdk/python/dist/
+          password: ${{ secrets.PYPI_API_TOKEN }}
+          verbose: true

@@ -10,7 +10,7 @@ on:
       - '*.md'
       - '*.mdx'
   pull_request:
-    types: [ labeled, synchronize, reopened ]
+    types: [ opened, synchronize, reopened, labeled ]
     paths-ignore:
       - 'docs/**'
       - '*.md'
@@ -29,15 +29,17 @@ jobs:
     # https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution
     # https://github.com/orgs/community/discussions/26261
     if: ${{ github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci') }}
-    runs-on: [ "self-hosted", "ragflow-test" ]
+    runs-on: [ "self-hosted", "debug" ]
     steps:
       # https://github.com/hmarr/debug-action
       #- uses: hmarr/debug-action@v2
-      - name: Ensure workspace ownership
+      - name: Show who triggered this workflow
         run: |
           echo "Workflow triggered by ${{ github.event_name }}"
-          echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
+      - name: Ensure workspace ownership
+        run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
       # https://github.com/actions/checkout/issues/1781
       - name: Check out code
@@ -46,44 +48,6 @@ jobs:
           fetch-depth: 0
           fetch-tags: true
-      - name: Check workflow duplication
-        if: ${{ !cancelled() && !failure() && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci')) }}
-        run: |
-          if [[ "$GITHUB_EVENT_NAME" != "pull_request" && "$GITHUB_EVENT_NAME" != "schedule" ]]; then
-            HEAD=$(git rev-parse HEAD)
-            # Find a PR that introduced a given commit
-            gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"
-            PR_NUMBER=$(gh pr list --search ${HEAD} --state merged --json number --jq .[0].number)
-            echo "HEAD=${HEAD}"
-            echo "PR_NUMBER=${PR_NUMBER}"
-            if [[ -n "${PR_NUMBER}" ]]; then
-              PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
-              if [[ -f "${PR_SHA_FP}" ]]; then
-                read -r PR_SHA PR_RUN_ID < "${PR_SHA_FP}"
-                # Calculate the hash of the current workspace content
-                HEAD_SHA=$(git rev-parse HEAD^{tree})
-                if [[ "${HEAD_SHA}" == "${PR_SHA}" ]]; then
-                  echo "Cancel myself since the workspace content hash is the same with PR #${PR_NUMBER} merged. See ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${PR_RUN_ID} for details."
-                  gh run cancel ${GITHUB_RUN_ID}
-                  while true; do
-                    status=$(gh run view ${GITHUB_RUN_ID} --json status -q .status)
-                    [ "$status" = "completed" ] && break
-                    sleep 5
-                  done
-                  exit 1
-                fi
-              fi
-            fi
-          else
-            PR_NUMBER=${{ github.event.pull_request.number }}
-            PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
-            # Calculate the hash of the current workspace content
-            PR_SHA=$(git rev-parse HEAD^{tree})
-            echo "PR #${PR_NUMBER} workspace content hash: ${PR_SHA}"
-            mkdir -p ${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}
-            echo "${PR_SHA} ${GITHUB_RUN_ID}" > ${PR_SHA_FP}
-          fi
       # https://github.com/astral-sh/ruff-action
       - name: Static check with Ruff
         uses: astral-sh/ruff-action@v3
@@ -91,140 +55,122 @@ jobs:
           version: ">=0.11.x"
           args: "check"
-      - name: Build ragflow:nightly
+      - name: Build ragflow:nightly-slim
         run: |
           RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
-          RAGFLOW_IMAGE=infiniflow/ragflow:${GITHUB_RUN_ID}
-          echo "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> $GITHUB_ENV
           sudo docker pull ubuntu:22.04
-          sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t ${RAGFLOW_IMAGE} .
-          if [[ "$GITHUB_EVENT_NAME" == "schedule" ]]; then
-            export HTTP_API_TEST_LEVEL=p3
-          else
-            export HTTP_API_TEST_LEVEL=p2
-          fi
-          echo "HTTP_API_TEST_LEVEL=${HTTP_API_TEST_LEVEL}" >> $GITHUB_ENV
-          echo "RAGFLOW_CONTAINER=${GITHUB_RUN_ID}-ragflow-cpu-1" >> $GITHUB_ENV
+          sudo docker build --progress=plain --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
+      - name: Build ragflow:nightly
+        run: |
+          sudo docker build --progress=plain --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
+      - name: Start ragflow:nightly-slim
+        run: |
+          sudo docker compose -f docker/docker-compose.yml down --volumes --remove-orphans
+          echo -e "\nRAGFLOW_IMAGE=infiniflow/ragflow:nightly-slim" >> docker/.env
+          sudo docker compose -f docker/docker-compose.yml up -d
+      - name: Stop ragflow:nightly-slim
+        if: always() # always run this step even if previous steps failed
+        run: |
+          sudo docker compose -f docker/docker-compose.yml down -v
       - name: Start ragflow:nightly
         run: |
-          # Determine runner number (default to 1 if not found)
-          RUNNER_NUM=$(sudo docker inspect $(hostname) --format '{{index .Config.Labels "com.docker.compose.container-number"}}' 2>/dev/null || true)
-          RUNNER_NUM=${RUNNER_NUM:-1}
-          # Compute port numbers using bash arithmetic
-          ES_PORT=$((1200 + RUNNER_NUM * 10))
-          OS_PORT=$((1201 + RUNNER_NUM * 10))
-          INFINITY_THRIFT_PORT=$((23817 + RUNNER_NUM * 10))
-          INFINITY_HTTP_PORT=$((23820 + RUNNER_NUM * 10))
-          INFINITY_PSQL_PORT=$((5432 + RUNNER_NUM * 10))
-          MYSQL_PORT=$((5455 + RUNNER_NUM * 10))
-          MINIO_PORT=$((9000 + RUNNER_NUM * 10))
-          MINIO_CONSOLE_PORT=$((9001 + RUNNER_NUM * 10))
-          REDIS_PORT=$((6379 + RUNNER_NUM * 10))
-          TEI_PORT=$((6380 + RUNNER_NUM * 10))
-          KIBANA_PORT=$((6601 + RUNNER_NUM * 10))
-          SVR_HTTP_PORT=$((9380 + RUNNER_NUM * 10))
-          ADMIN_SVR_HTTP_PORT=$((9381 + RUNNER_NUM * 10))
-          SVR_MCP_PORT=$((9382 + RUNNER_NUM * 10))
-          SANDBOX_EXECUTOR_MANAGER_PORT=$((9385 + RUNNER_NUM * 10))
-          SVR_WEB_HTTP_PORT=$((80 + RUNNER_NUM * 10))
-          SVR_WEB_HTTPS_PORT=$((443 + RUNNER_NUM * 10))
-          # Persist computed ports into docker/.env so docker-compose uses the correct host bindings
-          echo "" >> docker/.env
-          echo -e "ES_PORT=${ES_PORT}" >> docker/.env
-          echo -e "OS_PORT=${OS_PORT}" >> docker/.env
-          echo -e "INFINITY_THRIFT_PORT=${INFINITY_THRIFT_PORT}" >> docker/.env
-          echo -e "INFINITY_HTTP_PORT=${INFINITY_HTTP_PORT}" >> docker/.env
-          echo -e "INFINITY_PSQL_PORT=${INFINITY_PSQL_PORT}" >> docker/.env
-          echo -e "MYSQL_PORT=${MYSQL_PORT}" >> docker/.env
-          echo -e "MINIO_PORT=${MINIO_PORT}" >> docker/.env
-          echo -e "MINIO_CONSOLE_PORT=${MINIO_CONSOLE_PORT}" >> docker/.env
-          echo -e "REDIS_PORT=${REDIS_PORT}" >> docker/.env
-          echo -e "TEI_PORT=${TEI_PORT}" >> docker/.env
-          echo -e "KIBANA_PORT=${KIBANA_PORT}" >> docker/.env
-          echo -e "SVR_HTTP_PORT=${SVR_HTTP_PORT}" >> docker/.env
-          echo -e "ADMIN_SVR_HTTP_PORT=${ADMIN_SVR_HTTP_PORT}" >> docker/.env
-          echo -e "SVR_MCP_PORT=${SVR_MCP_PORT}" >> docker/.env
-          echo -e "SANDBOX_EXECUTOR_MANAGER_PORT=${SANDBOX_EXECUTOR_MANAGER_PORT}" >> docker/.env
-          echo -e "SVR_WEB_HTTP_PORT=${SVR_WEB_HTTP_PORT}" >> docker/.env
-          echo -e "SVR_WEB_HTTPS_PORT=${SVR_WEB_HTTPS_PORT}" >> docker/.env
-          echo -e "COMPOSE_PROFILES=\${COMPOSE_PROFILES},tei-cpu" >> docker/.env
-          echo -e "TEI_MODEL=BAAI/bge-small-en-v1.5" >> docker/.env
-          echo -e "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> docker/.env
-          echo "HOST_ADDRESS=http://host.docker.internal:${SVR_HTTP_PORT}" >> $GITHUB_ENV
-          sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
-          uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python
+          echo -e "\nRAGFLOW_IMAGE=infiniflow/ragflow:nightly" >> docker/.env
+          sudo docker compose -f docker/docker-compose.yml up -d
       - name: Run sdk tests against Elasticsearch
         run: |
           export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
-          until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
+          export HOST_ADDRESS=http://host.docker.internal:9380
+          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
             echo "Waiting for service to be available..."
             sleep 5
           done
-          source .venv/bin/activate && pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
+          if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
+            export HTTP_API_TEST_LEVEL=p3
+          else
+            export HTTP_API_TEST_LEVEL=p2
+          fi
+          UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python && uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
       - name: Run frontend api tests against Elasticsearch
         run: |
           export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
-          until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
+          export HOST_ADDRESS=http://host.docker.internal:9380
+          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
           done
-          source .venv/bin/activate && pytest -s --tb=short sdk/python/test/test_frontend_api/get_email.py sdk/python/test/test_frontend_api/test_dataset.py
+          cd sdk/python && UV_LINK_MODE=copy uv sync --python 3.10 --group test --frozen && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
       - name: Run http api tests against Elasticsearch
         run: |
           export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
-          until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
+          export HOST_ADDRESS=http://host.docker.internal:9380
+          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
           done
-          source .venv/bin/activate && pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
+          if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
+            export HTTP_API_TEST_LEVEL=p3
+          else
+            export HTTP_API_TEST_LEVEL=p2
+          fi
+          UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
       - name: Stop ragflow:nightly
         if: always() # always run this step even if previous steps failed
         run: |
-          sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v
+          sudo docker compose -f docker/docker-compose.yml down -v
       - name: Start ragflow:nightly
         run: |
-          sed -i '1i DOC_ENGINE=infinity' docker/.env
-          sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
+          sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml up -d
       - name: Run sdk tests against Infinity
         run: |
           export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
-          until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
+          export HOST_ADDRESS=http://host.docker.internal:9380
+          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
           done
-          source .venv/bin/activate && DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
+          if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
+            export HTTP_API_TEST_LEVEL=p3
+          else
+            export HTTP_API_TEST_LEVEL=p2
+          fi
+          UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python && DOC_ENGINE=infinity uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
       - name: Run frontend api tests against Infinity
         run: |
           export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
-          until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
+          export HOST_ADDRESS=http://host.docker.internal:9380
+          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
           done
-          source .venv/bin/activate && DOC_ENGINE=infinity pytest -s --tb=short sdk/python/test/test_frontend_api/get_email.py sdk/python/test/test_frontend_api/test_dataset.py
+          cd sdk/python && UV_LINK_MODE=copy uv sync --python 3.10 --group test --frozen && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
       - name: Run http api tests against Infinity
         run: |
           export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
-          until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
+          export HOST_ADDRESS=http://host.docker.internal:9380
+          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
           done
-          source .venv/bin/activate && DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
+          if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
+            export HTTP_API_TEST_LEVEL=p3
+          else
+            export HTTP_API_TEST_LEVEL=p2
+          fi
+          UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && DOC_ENGINE=infinity uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
       - name: Stop ragflow:nightly
         if: always() # always run this step even if previous steps failed
         run: |
-          sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v
-          sudo docker rmi -f ${RAGFLOW_IMAGE:-NO_IMAGE} || true
+          sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml down -v

.gitignore

@@ -149,7 +149,7 @@ out
 # Nuxt.js build / generate output
 .nuxt
 dist
-ragflow_cli.egg-info

 # Gatsby files
 .cache/
 # Comment in the public line in if your project uses Gatsby and not Next.js

CLAUDE.md

@@ -1,116 +0,0 @@
-# CLAUDE.md
-
-This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
-
-## Project Overview
-
-RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It's a full-stack application with:
-- Python backend (Flask-based API server)
-- React/TypeScript frontend (built with UmiJS)
-- Microservices architecture with Docker deployment
-- Multiple data stores (MySQL, Elasticsearch/Infinity, Redis, MinIO)
-
-## Architecture
-
-### Backend (`/api/`)
-- **Main Server**: `api/ragflow_server.py` - Flask application entry point
-- **Apps**: Modular Flask blueprints in `api/apps/` for different functionalities:
-  - `kb_app.py` - Knowledge base management
-  - `dialog_app.py` - Chat/conversation handling
-  - `document_app.py` - Document processing
-  - `canvas_app.py` - Agent workflow canvas
-  - `file_app.py` - File upload/management
-- **Services**: Business logic in `api/db/services/`
-- **Models**: Database models in `api/db/db_models.py`
-
-### Core Processing (`/rag/`)
-- **Document Processing**: `deepdoc/` - PDF parsing, OCR, layout analysis
-- **LLM Integration**: `rag/llm/` - Model abstractions for chat, embedding, reranking
-- **RAG Pipeline**: `rag/flow/` - Chunking, parsing, tokenization
-- **Graph RAG**: `graphrag/` - Knowledge graph construction and querying
-
-### Agent System (`/agent/`)
-- **Components**: Modular workflow components (LLM, retrieval, categorize, etc.)
-- **Templates**: Pre-built agent workflows in `agent/templates/`
-- **Tools**: External API integrations (Tavily, Wikipedia, SQL execution, etc.)
-
-### Frontend (`/web/`)
-- React/TypeScript with UmiJS framework
-- Ant Design + shadcn/ui components
-- State management with Zustand
-- Tailwind CSS for styling
-
-## Common Development Commands
-
-### Backend Development
-```bash
-# Install Python dependencies
-uv sync --python 3.10 --all-extras
-uv run download_deps.py
-pre-commit install
-
-# Start dependent services
-docker compose -f docker/docker-compose-base.yml up -d
-
-# Run backend (requires services to be running)
-source .venv/bin/activate
-export PYTHONPATH=$(pwd)
-bash docker/launch_backend_service.sh
-
-# Run tests
-uv run pytest
-
-# Linting
-ruff check
-ruff format
-```
-
-### Frontend Development
-```bash
-cd web
-npm install
-npm run dev    # Development server
-npm run build  # Production build
-npm run lint   # ESLint
-npm run test   # Jest tests
-```
-
-### Docker Development
-```bash
-# Full stack with Docker
-cd docker
-docker compose -f docker-compose.yml up -d
-
-# Check server status
-docker logs -f ragflow-server
-
-# Rebuild images
-docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
-```
-
-## Key Configuration Files
-- `docker/.env` - Environment variables for Docker deployment
-- `docker/service_conf.yaml.template` - Backend service configuration
-- `pyproject.toml` - Python dependencies and project configuration
-- `web/package.json` - Frontend dependencies and scripts
-
-## Testing
-- **Python**: pytest with markers (p1/p2/p3 priority levels)
-- **Frontend**: Jest with React Testing Library
-- **API Tests**: HTTP API and SDK tests in `test/` and `sdk/python/test/`
-
-## Database Engines
-RAGFlow supports switching between Elasticsearch (default) and Infinity:
-- Set `DOC_ENGINE=infinity` in `docker/.env` to use Infinity
-- Requires container restart: `docker compose down -v && docker compose up -d`
-
-## Development Environment Requirements
-- Python 3.10-3.12
-- Node.js >=18.20.4
-- Docker & Docker Compose
-- uv package manager
-- 16GB+ RAM, 50GB+ disk space

@@ -4,6 +4,8 @@ USER root
 SHELL ["/bin/bash", "-c"]
 ARG NEED_MIRROR=0
+ARG LIGHTEN=0
+ENV LIGHTEN=${LIGHTEN}
 WORKDIR /ragflow
@@ -15,6 +17,13 @@ RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co
     /huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
     /huggingface.co/InfiniFlow/deepdoc \
     | tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
+RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
+    if [ "$LIGHTEN" != "1" ]; then \
+        (tar -cf - \
+            /huggingface.co/BAAI/bge-large-zh-v1.5 \
+            /huggingface.co/maidalun1020/bce-embedding-base_v1 \
+        | tar -xf - --strip-components=2 -C /root/.ragflow) \
+    fi
 # https://github.com/chrismattmann/tika-python
 # This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
@@ -54,11 +63,11 @@ RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
     apt install -y ghostscript
 RUN if [ "$NEED_MIRROR" == "1" ]; then \
-        pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && \
-        pip3 config set global.trusted-host pypi.tuna.tsinghua.edu.cn; \
+        pip3 config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
+        pip3 config set global.trusted-host mirrors.aliyun.com; \
         mkdir -p /etc/uv && \
         echo "[[index]]" > /etc/uv/uv.toml && \
-        echo 'url = "https://pypi.tuna.tsinghua.edu.cn/simple"' >> /etc/uv/uv.toml && \
+        echo 'url = "https://mirrors.aliyun.com/pypi/simple"' >> /etc/uv/uv.toml && \
         echo "default = true" >> /etc/uv/uv.toml; \
     fi; \
     pipx install uv
@@ -142,11 +151,15 @@ COPY pyproject.toml uv.lock ./
 # uv records index url into uv.lock but doesn't failover among multiple indexes
 RUN --mount=type=cache,id=ragflow_uv,target=/root/.cache/uv,sharing=locked \
     if [ "$NEED_MIRROR" == "1" ]; then \
-        sed -i 's|pypi.org|pypi.tuna.tsinghua.edu.cn|g' uv.lock; \
+        sed -i 's|pypi.org|mirrors.aliyun.com/pypi|g' uv.lock; \
     else \
-        sed -i 's|pypi.tuna.tsinghua.edu.cn|pypi.org|g' uv.lock; \
+        sed -i 's|mirrors.aliyun.com/pypi|pypi.org|g' uv.lock; \
     fi; \
-    uv sync --python 3.10 --frozen
+    if [ "$LIGHTEN" == "1" ]; then \
+        uv sync --python 3.10 --frozen; \
+    else \
+        uv sync --python 3.10 --frozen --all-extras; \
+    fi
 COPY web web
 COPY docs docs
@@ -156,7 +169,11 @@ RUN --mount=type=cache,id=ragflow_npm,target=/root/.npm,sharing=locked \
 COPY .git /ragflow/.git
 RUN version_info=$(git describe --tags --match=v* --first-parent --always); \
-    version_info="$version_info"; \
+    if [ "$LIGHTEN" == "1" ]; then \
+        version_info="$version_info slim"; \
+    else \
+        version_info="$version_info full"; \
+    fi; \
     echo "RAGFlow version: $version_info"; \
     echo $version_info > /ragflow/VERSION
@@ -174,7 +191,6 @@ ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
 ENV PYTHONPATH=/ragflow/
 COPY web web
-COPY admin admin
 COPY api api
 COPY conf conf
 COPY deepdoc deepdoc
@@ -185,7 +201,6 @@ COPY agentic_reasoning agentic_reasoning
 COPY pyproject.toml uv.lock ./
 COPY mcp mcp
 COPY plugin plugin
-COPY common common
 COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
 COPY docker/entrypoint.sh ./

@@ -1,14 +0,0 @@
-FROM ghcr.io/huggingface/text-embeddings-inference:cpu-1.8
-
-# uv tool install huggingface_hub
-# hf download --local-dir tei_data/BAAI/bge-small-en-v1.5 BAAI/bge-small-en-v1.5
-# hf download --local-dir tei_data/BAAI/bge-m3 BAAI/bge-m3
-# hf download --local-dir tei_data/Qwen/Qwen3-Embedding-0.6B Qwen/Qwen3-Embedding-0.6B
-COPY tei_data /data
-
-# curl -X POST http://localhost:6380/embed -H "Content-Type: application/json" -d '{"inputs": "Hello, world! This is a test sentence."}'
-# curl -X POST http://tei:80/embed -H "Content-Type: application/json" -d '{"inputs": "Hello, world! This is a test sentence."}'
-# [[-0.058816575,0.019564206,0.026697718,...]]
-# curl -X POST http://localhost:6380/v1/embeddings -H "Content-Type: application/json" -d '{"input": "Hello, world! This is a test sentence."}'
-# {"object":"list","data":[{"object":"embedding","embedding":[-0.058816575,0.019564206,...],"index":0}],"model":"BAAI/bge-small-en-v1.5","usage":{"prompt_tokens":12,"total_tokens":12}}

View File

@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo"> <img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -43,9 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center"> <div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> <a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@ -86,9 +84,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates ## 🔥 Latest Updates
- 2025-10-23 Supports MinerU & Docling as document parsing methods.
- 2025-10-15 Supports orchestrable ingestion pipeline.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models. - 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
- 2025-08-01 Supports agentic workflow and MCP. - 2025-08-01 Supports agentic workflow and MCP.
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent. - 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query. - 2025-05-05 Supports cross-language query.
@ -138,7 +135,7 @@ releases! 🌟
## 🔎 System Architecture ## 🔎 System Architecture
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 Get Started ## 🎬 Get Started
@ -177,42 +174,41 @@ releases! 🌟
> ```bash > ```bash
> vm.max_map_count=262144 > vm.max_map_count=262144
> ``` > ```
>
2. Clone the repo: 2. Clone the repo:
```bash ```bash
$ git clone https://github.com/infiniflow/ragflow.git $ git clone https://github.com/infiniflow/ragflow.git
``` ```
3. Start up the server using the pre-built Docker images: 3. Start up the server using the pre-built Docker images:
> [!CAUTION] > [!CAUTION]
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system. > If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. > The command below downloads the `v0.20.5-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.5-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` for the full edition `v0.20.5`.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks: # Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d ```
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- | |-------------------|-----------------|-----------------------|--------------------------|
| v0.21.1 | &approx;9 | ✔️ | Stable release | | v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release | | v0.20.5-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
4. Check the server status after having the server up and running: 4. Check the server status after having the server up and running:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_The following output confirms a successful launch of the system:_ _The following output confirms a successful launch of the system:_
@ -230,17 +226,14 @@ releases! 🌟
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anormal` > If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anormal`
> error because, at that moment, your RAGFlow may not be fully initialized. > error because, at that moment, your RAGFlow may not be fully initialized.
>
5. In your web browser, enter the IP address of your server and log in to RAGFlow. 5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default > With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default
> HTTP serving port `80` can be omitted when using the default configurations. > HTTP serving port `80` can be omitted when using the default configurations.
>
6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update 6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update
the `API_KEY` field with the corresponding API key. the `API_KEY` field with the corresponding API key.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information. > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
>
_The show is on!_ _The show is on!_
@ -279,6 +272,7 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
> `-v` will delete the docker container volumes, and the existing data will be cleared. > `-v` will delete the docker container volumes, and the existing data will be cleared.
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`. 2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers: 3. Start the containers:
```bash ```bash
@ -292,6 +286,16 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
This image is approximately 2 GB in size and relies on external LLM and embedding services. This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker image including embedding models
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
@ -305,15 +309,17 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash ```bash
pipx install uv pre-commit pipx install uv pre-commit
``` ```
2. Clone the source code and install Python dependencies: 2. Clone the source code and install Python dependencies:
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # install RAGFlow dependent python modules uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose: 3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash ```bash
@ -325,23 +331,24 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
``` ```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
``` ```
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site: 4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash ```bash
export HF_ENDPOINT=https://hf-mirror.com export HF_ENDPOINT=https://hf-mirror.com
``` ```
5. If your operating system does not have jemalloc, please install it as follows: 5. If your operating system does not have jemalloc, please install it as follows:
```bash ```bash
# Ubuntu # ubuntu
sudo apt-get install libjemalloc-dev sudo apt-get install libjemalloc-dev
# CentOS # centos
sudo yum install jemalloc sudo yum install jemalloc
# OpenSUSE # mac
sudo zypper install jemalloc
# macOS
sudo brew install jemalloc sudo brew install jemalloc
``` ```
6. Launch backend service: 6. Launch backend service:
```bash ```bash
@ -349,12 +356,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd) export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh bash docker/launch_backend_service.sh
``` ```
7. Install frontend dependencies: 7. Install frontend dependencies:
```bash ```bash
cd web cd web
npm install npm install
``` ```
8. Launch frontend service: 8. Launch frontend service:
```bash ```bash
@ -364,12 +373,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_The following output confirms a successful launch of the system:_ _The following output confirms a successful launch of the system:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187) ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Stop RAGFlow front-end and back-end service after development is complete: 9. Stop RAGFlow front-end and back-end service after development is complete:
```bash ```bash
pkill -f "ragflow_server.py|task_executor.py" pkill -f "ragflow_server.py|task_executor.py"
``` ```
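A quick way to confirm the stop step worked is to look for leftover processes; a minimal check along these lines, reusing the same process names as the `pkill` pattern above:

```bash
# List any RAGFlow processes still running; falls through once both are stopped.
pgrep -fl "ragflow_server.py|task_executor.py" || echo "backend fully stopped"
```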
## 📚 Documentation ## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/) - [Quickstart](https://ragflow.io/docs/dev/)
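One recurring change in this diff is the image-tag guidance in step 3: the `RAGFLOW_IMAGE` variable in **docker/.env** selects the edition, and the **-slim** suffix disappears from `v0.22.0` onward. A sketch of pinning a tag before bringing the stack up (the tag value is only an example, and the variable is assumed to already exist in **.env**; see the table above for current editions):

```bash
cd ragflow/docker
# Pin the edition to pull before starting the server.
sed -i 's|^RAGFLOW_IMAGE=.*|RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1|' .env
docker compose -f docker-compose.yml up -d
```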

View File

@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="Logo ragflow"> <img src="web/src/assets/logo-with-text.png" width="520" alt="Logo ragflow">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -43,13 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
<details open> <details open>
<summary><b>📕 Daftar Isi </b> </summary> <summary><b>📕 Daftar Isi </b> </summary>
@ -86,9 +80,8 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Pembaruan Terbaru ## 🔥 Pembaruan Terbaru
- 2025-10-23 Mendukung MinerU & Docling sebagai metode penguraian dokumen.
- 2025-10-15 Dukungan untuk jalur data yang terorkestrasi.
- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI. - 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
- 2025-08-04 Mendukung model baru, termasuk Kimi K2 dan Grok 4.
- 2025-08-01 Mendukung alur kerja agen dan MCP. - 2025-08-01 Mendukung alur kerja agen dan MCP.
- 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen. - 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen.
- 2025-05-05 Mendukung kueri lintas bahasa. - 2025-05-05 Mendukung kueri lintas bahasa.
@ -136,7 +129,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔎 Arsitektur Sistem ## 🔎 Arsitektur Sistem
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 Mulai ## 🎬 Mulai
@ -175,42 +168,41 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> ```bash > ```bash
> vm.max_map_count=262144 > vm.max_map_count=262144
> ``` > ```
>
2. Clone repositori: 2. Clone repositori:
```bash ```bash
$ git clone https://github.com/infiniflow/ragflow.git $ git clone https://github.com/infiniflow/ragflow.git
``` ```
3. Bangun image Docker pre-built dan jalankan server: 3. Bangun image Docker pre-built dan jalankan server:
> [!CAUTION] > [!CAUTION]
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64. > Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image). > Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.21.1 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.1, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. > Perintah di bawah ini mengunduh edisi v0.20.5-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.5-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5 untuk edisi lengkap v0.20.5.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks: # Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d
``` ```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release | | v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release | | v0.20.5-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> Catatan: Mulai dari `v0.22.0`, kami hanya menyediakan edisi slim dan tidak lagi menambahkan akhiran **-slim** pada tag image.
1. Periksa status server setelah server aktif dan berjalan: 1. Periksa status server setelah server aktif dan berjalan:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_Output berikut menandakan bahwa sistem berhasil diluncurkan:_ _Output berikut menandakan bahwa sistem berhasil diluncurkan:_
@ -228,17 +220,14 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Jika Anda melewatkan langkah ini dan langsung login ke RAGFlow, browser Anda mungkin menampilkan error `network anormal` > Jika Anda melewatkan langkah ini dan langsung login ke RAGFlow, browser Anda mungkin menampilkan error `network anormal`
> karena RAGFlow mungkin belum sepenuhnya siap. > karena RAGFlow mungkin belum sepenuhnya siap.
>
2. Buka browser web Anda, masukkan alamat IP server Anda, dan login ke RAGFlow. 2. Buka browser web Anda, masukkan alamat IP server Anda, dan login ke RAGFlow.
> Dengan pengaturan default, Anda hanya perlu memasukkan `http://IP_DEVICE_ANDA` (**tanpa** nomor port) karena > Dengan pengaturan default, Anda hanya perlu memasukkan `http://IP_DEVICE_ANDA` (**tanpa** nomor port) karena
> port HTTP default `80` bisa dihilangkan saat menggunakan konfigurasi default. > port HTTP default `80` bisa dihilangkan saat menggunakan konfigurasi default.
>
3. Dalam [service_conf.yaml.template](./docker/service_conf.yaml.template), pilih LLM factory yang diinginkan di `user_default_llm` dan perbarui 3. Dalam [service_conf.yaml.template](./docker/service_conf.yaml.template), pilih LLM factory yang diinginkan di `user_default_llm` dan perbarui
bidang `API_KEY` dengan kunci API yang sesuai. bidang `API_KEY` dengan kunci API yang sesuai.
> Lihat [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) untuk informasi lebih lanjut. > Lihat [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) untuk informasi lebih lanjut.
>
_Sistem telah siap digunakan!_ _Sistem telah siap digunakan!_
@ -264,6 +253,16 @@ Pembaruan konfigurasi ini memerlukan reboot semua kontainer agar efektif:
Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding. Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Membangun Docker Image Termasuk Model Embedding
Image ini berukuran sekitar 9 GB. Karena sudah termasuk model embedding, ia hanya bergantung pada aplikasi LLM eksternal.
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
@ -277,15 +276,17 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash ```bash
pipx install uv pre-commit pipx install uv pre-commit
``` ```
2. Clone kode sumber dan instal dependensi Python: 2. Clone kode sumber dan instal dependensi Python:
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # install RAGFlow dependent python modules uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```
3. Jalankan aplikasi yang diperlukan (MinIO, Elasticsearch, Redis, dan MySQL) menggunakan Docker Compose: 3. Jalankan aplikasi yang diperlukan (MinIO, Elasticsearch, Redis, dan MySQL) menggunakan Docker Compose:
```bash ```bash
@ -297,11 +298,13 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
``` ```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
``` ```
4. Jika Anda tidak dapat mengakses HuggingFace, atur variabel lingkungan `HF_ENDPOINT` untuk menggunakan situs mirror: 4. Jika Anda tidak dapat mengakses HuggingFace, atur variabel lingkungan `HF_ENDPOINT` untuk menggunakan situs mirror:
```bash ```bash
export HF_ENDPOINT=https://hf-mirror.com export HF_ENDPOINT=https://hf-mirror.com
``` ```
5. Jika sistem operasi Anda tidak memiliki jemalloc, instal sebagai berikut: 5. Jika sistem operasi Anda tidak memiliki jemalloc, instal sebagai berikut:
```bash ```bash
@ -312,6 +315,7 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
# mac # mac
sudo brew install jemalloc sudo brew install jemalloc
``` ```
6. Jalankan aplikasi backend: 6. Jalankan aplikasi backend:
```bash ```bash
@ -319,12 +323,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd) export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh bash docker/launch_backend_service.sh
``` ```
7. Instal dependensi frontend: 7. Instal dependensi frontend:
```bash ```bash
cd web cd web
npm install npm install
``` ```
8. Jalankan aplikasi frontend: 8. Jalankan aplikasi frontend:
```bash ```bash
@ -334,12 +340,15 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_Output berikut menandakan bahwa sistem berhasil diluncurkan:_ _Output berikut menandakan bahwa sistem berhasil diluncurkan:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187) ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Hentikan layanan front-end dan back-end RAGFlow setelah pengembangan selesai: 9. Hentikan layanan front-end dan back-end RAGFlow setelah pengembangan selesai:
```bash ```bash
pkill -f "ragflow_server.py|task_executor.py" pkill -f "ragflow_server.py|task_executor.py"
``` ```
## 📚 Dokumentasi ## 📚 Dokumentasi
- [Quickstart](https://ragflow.io/docs/dev/) - [Quickstart](https://ragflow.io/docs/dev/)
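Another change repeated across every translation is the GPU path: the separate `docker-compose-gpu.yml` is gone, replaced by a `DEVICE=gpu` line prepended to **docker/.env** (the `sed -i '1i DEVICE=gpu' .env` shown in these diffs). An idempotent variant of that step, assuming an NVIDIA container runtime is already configured:

```bash
cd ragflow/docker
# Add DEVICE=gpu once, then start the stack with GPU-accelerated embedding/DeepDoc.
grep -q '^DEVICE=gpu' .env || sed -i '1i DEVICE=gpu' .env
docker compose -f docker-compose.yml up -d
```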

View File

@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo"> <img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -43,13 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
## 💡 RAGFlow とは? ## 💡 RAGFlow とは?
@ -66,9 +60,8 @@
## 🔥 最新情報 ## 🔥 最新情報
- 2025-10-23 ドキュメント解析方法として MinerU と Docling をサポートします。
- 2025-10-15 オーケストレーションされたデータパイプラインのサポート。
- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。 - 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
- 2025-08-04 新モデル、キミK2およびGrok 4をサポート。
- 2025-08-01 エージェントワークフローとMCPをサポート。 - 2025-08-01 エージェントワークフローとMCPをサポート。
- 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。 - 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。
- 2025-05-05 言語間クエリをサポートしました。 - 2025-05-05 言語間クエリをサポートしました。
@ -116,7 +109,7 @@
## 🔎 システム構成 ## 🔎 システム構成
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 初期設定 ## 🎬 初期設定
@ -154,42 +147,41 @@
> ```bash > ```bash
> vm.max_map_count=262144 > vm.max_map_count=262144
> ``` > ```
>
2. リポジトリをクローンする: 2. リポジトリをクローンする:
```bash ```bash
$ git clone https://github.com/infiniflow/ragflow.git $ git clone https://github.com/infiniflow/ragflow.git
``` ```
3. ビルド済みの Docker イメージをビルドし、サーバーを起動する: 3. ビルド済みの Docker イメージをビルドし、サーバーを起動する:
> [!CAUTION] > [!CAUTION]
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。 > 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。 > ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.1 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.1 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。 > 以下のコマンドは、RAGFlow Docker イメージの v0.20.5-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.5-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.5 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5 と設定します。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks: # Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d ```
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release | | v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release | | v0.20.5-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> 注意:`v0.22.0` 以降、当プロジェクトでは slim エディションのみを提供し、イメージタグに **-slim** サフィックスを付けなくなりました。
1. サーバーを立ち上げた後、サーバーの状態を確認する: 1. サーバーを立ち上げた後、サーバーの状態を確認する:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_以下の出力は、システムが正常に起動したことを確認するものです:_ _以下の出力は、システムが正常に起動したことを確認するものです:_
@ -205,15 +197,12 @@
``` ```
> もし確認ステップをスキップして直接 RAGFlow にログインした場合、その時点で RAGFlow が完全に初期化されていない可能性があるため、ブラウザーがネットワーク異常エラーを表示するかもしれません。 > もし確認ステップをスキップして直接 RAGFlow にログインした場合、その時点で RAGFlow が完全に初期化されていない可能性があるため、ブラウザーがネットワーク異常エラーを表示するかもしれません。
>
2. ウェブブラウザで、プロンプトに従ってサーバーの IP アドレスを入力し、RAGFlow にログインします。 2. ウェブブラウザで、プロンプトに従ってサーバーの IP アドレスを入力し、RAGFlow にログインします。
> デフォルトの設定を使用する場合、デフォルトの HTTP サービングポート `80` は省略できるので、与えられたシナリオでは、`http://IP_OF_YOUR_MACHINE`(ポート番号は省略)だけを入力すればよい。 > デフォルトの設定を使用する場合、デフォルトの HTTP サービングポート `80` は省略できるので、与えられたシナリオでは、`http://IP_OF_YOUR_MACHINE`(ポート番号は省略)だけを入力すればよい。
>
3. [service_conf.yaml.template](./docker/service_conf.yaml.template) で、`user_default_llm` で希望の LLM ファクトリを選択し、`API_KEY` フィールドを対応する API キーで更新する。 3. [service_conf.yaml.template](./docker/service_conf.yaml.template) で、`user_default_llm` で希望の LLM ファクトリを選択し、`API_KEY` フィールドを対応する API キーで更新する。
> 詳しくは [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) を参照してください。 > 詳しくは [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) を参照してください。
>
_これで初期設定完了ショーの開幕です_ _これで初期設定完了ショーの開幕です_
@ -242,27 +231,33 @@
RAGFlow はデフォルトで Elasticsearch を使用して全文とベクトルを保存します。[Infinity](https://github.com/infiniflow/infinity/) に切り替えるには、次の手順に従います。 RAGFlow はデフォルトで Elasticsearch を使用して全文とベクトルを保存します。[Infinity](https://github.com/infiniflow/infinity/) に切り替えるには、次の手順に従います。
1. 実行中のすべてのコンテナを停止するには: 1. 実行中のすべてのコンテナを停止するには:
```bash ```bash
$ docker compose -f docker/docker-compose.yml down -v $ docker compose -f docker/docker-compose.yml down -v
``` ```
Note: `-v` は docker コンテナのボリュームを削除し、既存のデータをクリアします。 Note: `-v` は docker コンテナのボリュームを削除し、既存のデータをクリアします。
2. **docker/.env** の「DOC_ENGINE」を「infinity」に設定します。 2. **docker/.env** の「DOC_ENGINE」を「infinity」に設定します。
3. 起動コンテナ: 3. 起動コンテナ:
```bash ```bash
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
``` ```
> [!WARNING] > [!WARNING]
> Linux/arm64 マシンでの Infinity への切り替えは正式にサポートされていません。 > Linux/arm64 マシンでの Infinity への切り替えは正式にサポートされていません。
>
## 🔧 ソースコードで Docker イメージを作成(埋め込みモデルなし) ## 🔧 ソースコードで Docker イメージを作成(埋め込みモデルなし)
この Docker イメージのサイズは約 1GB で、外部の大モデルと埋め込みサービスに依存しています。 この Docker イメージのサイズは約 1GB で、外部の大モデルと埋め込みサービスに依存しています。
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 ソースコードをコンパイルした Docker イメージ(埋め込みモデルを含む)
この Docker のサイズは約 9GB で、埋め込みモデルを含むため、外部の大モデルサービスのみが必要です。
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
@ -276,15 +271,17 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash ```bash
pipx install uv pre-commit pipx install uv pre-commit
``` ```
2. ソースコードをクローンし、Python の依存関係をインストールする: 2. ソースコードをクローンし、Python の依存関係をインストールする:
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # install RAGFlow dependent python modules uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```
3. Docker Compose を使用して依存サービス(MinIO、Elasticsearch、Redis、MySQL)を起動する: 3. Docker Compose を使用して依存サービス(MinIO、Elasticsearch、Redis、MySQL)を起動する:
```bash ```bash
@ -296,11 +293,13 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
``` ```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
``` ```
4. HuggingFace にアクセスできない場合は、`HF_ENDPOINT` 環境変数を設定してミラーサイトを使用してください: 4. HuggingFace にアクセスできない場合は、`HF_ENDPOINT` 環境変数を設定してミラーサイトを使用してください:
```bash ```bash
export HF_ENDPOINT=https://hf-mirror.com export HF_ENDPOINT=https://hf-mirror.com
``` ```
5. オペレーティングシステムにjemallocがない場合は、次のようにインストールします: 5. オペレーティングシステムにjemallocがない場合は、次のようにインストールします:
```bash ```bash
@ -311,6 +310,7 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
# mac # mac
sudo brew install jemalloc sudo brew install jemalloc
``` ```
6. バックエンドサービスを起動する: 6. バックエンドサービスを起動する:
```bash ```bash
@ -318,12 +318,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd) export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh bash docker/launch_backend_service.sh
``` ```
7. フロントエンドの依存関係をインストールする: 7. フロントエンドの依存関係をインストールする:
```bash ```bash
cd web cd web
npm install npm install
``` ```
8. フロントエンドサービスを起動する: 8. フロントエンドサービスを起動する:
```bash ```bash
@ -333,12 +335,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_以下の画面で、システムが正常に起動したことを示します:_ _以下の画面で、システムが正常に起動したことを示します:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187) ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. 開発が完了したら、RAGFlow のフロントエンド サービスとバックエンド サービスを停止します: 9. 開発が完了したら、RAGFlow のフロントエンド サービスとバックエンド サービスを停止します:
```bash ```bash
pkill -f "ragflow_server.py|task_executor.py" pkill -f "ragflow_server.py|task_executor.py"
``` ```
## 📚 ドキュメンテーション ## 📚 ドキュメンテーション
- [Quickstart](https://ragflow.io/docs/dev/) - [Quickstart](https://ragflow.io/docs/dev/)
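All of these READMEs share the same `vm.max_map_count` prerequisite quoted in the hunks above. Condensed into one sequence for Linux hosts (a sketch; the persistence step appends to **/etc/sysctl.conf**, as the READMEs describe):

```bash
# Check the current value; Elasticsearch needs at least 262144.
sysctl vm.max_map_count
# Raise it for the running kernel ...
sudo sysctl -w vm.max_map_count=262144
# ... and persist the setting across reboots.
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```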

View File

@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo"> <img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -43,14 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
## 💡 RAGFlow란? ## 💡 RAGFlow란?
@ -67,9 +60,8 @@
## 🔥 업데이트 ## 🔥 업데이트
- 2025-10-23 문서 파싱 방법으로 MinerU 및 Docling을 지원합니다.
- 2025-10-15 조정된 데이터 파이프라인 지원.
- 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다. - 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다.
- 2025-08-04 새로운 모델인 Kimi K2와 Grok 4를 포함하여 지원합니다.
- 2025-08-01 에이전트 워크플로우와 MCP를 지원합니다. - 2025-08-01 에이전트 워크플로우와 MCP를 지원합니다.
- 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다. - 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다.
- 2025-05-05 언어 간 쿼리를 지원합니다. - 2025-05-05 언어 간 쿼리를 지원합니다.
@ -117,7 +109,7 @@
## 🔎 시스템 아키텍처 ## 🔎 시스템 아키텍처
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 시작하기 ## 🎬 시작하기
@ -168,7 +160,7 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다. > 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image). > ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.21.1 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.21.1과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. > 아래 명령어는 RAGFlow Docker 이미지의 v0.20.5-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.5-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.5을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5로 설정합니다.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -176,22 +168,20 @@
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d
``` ```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release | | v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release | | v0.20.5-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | ❌ | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> 참고: `v0.22.0`부터는 slim 에디션만 배포하며 이미지 태그에 **-slim** 접미사를 더 이상 붙이지 않습니다.
1. 서버가 시작된 후 서버 상태를 확인하세요: 1. 서버가 시작된 후 서버 상태를 확인하세요:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_다음 출력 결과로 시스템이 성공적으로 시작되었음을 확인합니다:_ _다음 출력 결과로 시스템이 성공적으로 시작되었음을 확인합니다:_
@ -257,6 +247,16 @@ RAGFlow 는 기본적으로 Elasticsearch 를 사용하여 전체 텍스트 및
이 Docker 이미지의 크기는 약 1GB이며, 외부 대형 모델과 임베딩 서비스에 의존합니다. 이 Docker 이미지의 크기는 약 1GB이며, 외부 대형 모델과 임베딩 서비스에 의존합니다.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 소스 코드로 Docker 이미지를 컴파일합니다(임베딩 모델 포함)
이 Docker의 크기는 약 9GB이며, 이미 임베딩 모델을 포함하고 있으므로 외부 대형 모델 서비스에만 의존하면 됩니다.
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
@ -276,7 +276,7 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # install RAGFlow dependent python modules uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```
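The `uv sync` hunk above (the `--all-extras` flag is dropped in the new revision) is the core of the source install. Put together with the optional HuggingFace mirror from step 4 of the other sections, the sequence looks roughly like this (assumes `pipx install uv pre-commit` has already run):

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export HF_ENDPOINT=https://hf-mirror.com  # only if huggingface.co is unreachable
uv sync --python 3.10                     # install RAGFlow's Python dependencies
uv run download_deps.py
pre-commit install
```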

View File

@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo"> <img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -43,13 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
<details open> <details open>
<summary><b>📕 Índice</b></summary> <summary><b>📕 Índice</b></summary>
@ -86,9 +80,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Últimas Atualizações ## 🔥 Últimas Atualizações
- 23-10-2025 Suporta MinerU e Docling como métodos de análise de documentos.
- 15-10-2025 Suporte para pipelines de dados orquestrados.
- 08-08-2025 Suporta a mais recente série GPT-5 da OpenAI. - 08-08-2025 Suporta a mais recente série GPT-5 da OpenAI.
- 04-08-2025 Suporta novos modelos, incluindo Kimi K2 e Grok 4.
- 01-08-2025 Suporta fluxo de trabalho agente e MCP. - 01-08-2025 Suporta fluxo de trabalho agente e MCP.
- 23-05-2025 Adicione o componente executor de código Python/JS ao Agente. - 23-05-2025 Adicione o componente executor de código Python/JS ao Agente.
- 05-05-2025 Suporte a consultas entre idiomas. - 05-05-2025 Suporte a consultas entre idiomas.
@ -136,7 +129,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔎 Arquitetura do Sistema ## 🔎 Arquitetura do Sistema
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 Primeiros Passos ## 🎬 Primeiros Passos
@ -154,86 +147,84 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
### 🚀 Iniciar o servidor ### 🚀 Iniciar o servidor
1. Certifique-se de que `vm.max_map_count` >= 262144: 1. Certifique-se de que `vm.max_map_count` >= 262144:
> Para verificar o valor de `vm.max_map_count`: > Para verificar o valor de `vm.max_map_count`:
> >
> ```bash > ```bash
> $ sysctl vm.max_map_count > $ sysctl vm.max_map_count
> ``` > ```
> >
> Se necessário, redefina `vm.max_map_count` para um valor de pelo menos 262144: > Se necessário, redefina `vm.max_map_count` para um valor de pelo menos 262144:
> >
> ```bash > ```bash
> # Neste caso, defina para 262144: > # Neste caso, defina para 262144:
> $ sudo sysctl -w vm.max_map_count=262144 > $ sudo sysctl -w vm.max_map_count=262144
> ``` > ```
> >
> Essa mudança será resetada após a reinicialização do sistema. Para garantir que a alteração permaneça permanente, adicione ou atualize o valor de `vm.max_map_count` em **/etc/sysctl.conf**: > Essa mudança será resetada após a reinicialização do sistema. Para garantir que a alteração permaneça permanente, adicione ou atualize o valor de `vm.max_map_count` em **/etc/sysctl.conf**:
> >
> ```bash > ```bash
> vm.max_map_count=262144 > vm.max_map_count=262144
> ``` > ```
>
2. Clone o repositório: 2. Clone o repositório:
```bash ```bash
$ git clone https://github.com/infiniflow/ragflow.git $ git clone https://github.com/infiniflow/ragflow.git
``` ```
3. Inicie o servidor usando as imagens Docker pré-compiladas: 3. Inicie o servidor usando as imagens Docker pré-compiladas:
> [!CAUTION] > [!CAUTION]
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64. > Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema. > Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição `v0.21.1` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.21.1`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. > O comando abaixo baixa a edição `v0.20.5-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.20.5-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` para a edição completa `v0.20.5`.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks: # Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d ```
```
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? | | Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | --------------------------------- | ------------------------------ | | --------------------- | ---------------------- | ------------------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Lançamento estável | | v0.20.5 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.21.1-slim | &approx;2 | ❌ | Lançamento estável | | v0.20.5-slim | ~2 | ❌ | Lançamento estável |
| nightly | &approx;2 | ❌ | Construção noturna instável | | nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno |
| nightly-slim | ~2 | ❌ | _Instável_ build noturno |
> Observação: A partir da `v0.22.0`, distribuímos apenas a edição slim e não adicionamos mais o sufixo **-slim** às tags das imagens.
4. Verifique o status do servidor após tê-lo iniciado: 4. Verifique o status do servidor após tê-lo iniciado:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_O seguinte resultado confirma o lançamento bem-sucedido do sistema:_ _O seguinte resultado confirma o lançamento bem-sucedido do sistema:_
```bash ```bash
____ ___ ______ ______ __ ____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __ / __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / / / /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ / / _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/ /_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Rodando em todos os endereços (0.0.0.0) * Rodando em todos os endereços (0.0.0.0)
``` ```
> Se você pular essa etapa de confirmação e acessar diretamente o RAGFlow, seu navegador pode exibir um erro `network anormal`, pois, nesse momento, seu RAGFlow pode não estar totalmente inicializado. > Se você pular essa etapa de confirmação e acessar diretamente o RAGFlow, seu navegador pode exibir um erro `network anormal`, pois, nesse momento, seu RAGFlow pode não estar totalmente inicializado.
>
5. No seu navegador, insira o endereço IP do seu servidor e faça login no RAGFlow. 5. No seu navegador, insira o endereço IP do seu servidor e faça login no RAGFlow.
> Com as configurações padrão, você só precisa digitar `http://IP_DO_SEU_MÁQUINA` (**sem** o número da porta), pois a porta HTTP padrão `80` pode ser omitida ao usar as configurações padrão. > Com as configurações padrão, você só precisa digitar `http://IP_DO_SEU_MÁQUINA` (**sem** o número da porta), pois a porta HTTP padrão `80` pode ser omitida ao usar as configurações padrão.
>
6. Em [service_conf.yaml.template](./docker/service_conf.yaml.template), selecione a fábrica LLM desejada em `user_default_llm` e atualize o campo `API_KEY` com a chave de API correspondente. 6. Em [service_conf.yaml.template](./docker/service_conf.yaml.template), selecione a fábrica LLM desejada em `user_default_llm` e atualize o campo `API_KEY` com a chave de API correspondente.
> Consulte [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) para mais informações. > Consulte [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) para mais informações.
>
_O show está no ar!_ _O show está no ar!_
@ -264,9 +255,9 @@ O RAGFlow usa o Elasticsearch por padrão para armazenar texto completo e vetore
```bash ```bash
$ docker compose -f docker/docker-compose.yml down -v $ docker compose -f docker/docker-compose.yml down -v
``` ```
Note: `-v` irá deletar os volumes do contêiner, e os dados existentes serão apagados. Note: `-v` irá deletar os volumes do contêiner, e os dados existentes serão apagados.
2. Defina `DOC_ENGINE` no **docker/.env** para `infinity`. 2. Defina `DOC_ENGINE` no **docker/.env** para `infinity`.
3. Inicie os contêineres: 3. Inicie os contêineres:
```bash ```bash
@ -280,6 +271,16 @@ O RAGFlow usa o Elasticsearch por padrão para armazenar texto completo e vetore
Esta imagem tem cerca de 2 GB de tamanho e depende de serviços externos de LLM e incorporação. Esta imagem tem cerca de 2 GB de tamanho e depende de serviços externos de LLM e incorporação.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Criar uma imagem Docker incluindo modelos de incorporação
Esta imagem tem cerca de 9 GB de tamanho. Como inclui modelos de incorporação, depende apenas de serviços externos de LLM.
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
@ -293,15 +294,17 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash ```bash
pipx install uv pre-commit pipx install uv pre-commit
``` ```
2. Clone o código-fonte e instale as dependências Python: 2. Clone o código-fonte e instale as dependências Python:
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # instala os módulos Python dependentes do RAGFlow uv sync --python 3.10 --all-extras # instala os módulos Python dependentes do RAGFlow
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```
3. Inicie os serviços dependentes (MinIO, Elasticsearch, Redis e MySQL) usando Docker Compose: 3. Inicie os serviços dependentes (MinIO, Elasticsearch, Redis e MySQL) usando Docker Compose:
```bash ```bash
@ -313,21 +316,24 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
``` ```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
``` ```
4. Se não conseguir acessar o HuggingFace, defina a variável de ambiente `HF_ENDPOINT` para usar um site espelho: 4. Se não conseguir acessar o HuggingFace, defina a variável de ambiente `HF_ENDPOINT` para usar um site espelho:
```bash ```bash
export HF_ENDPOINT=https://hf-mirror.com export HF_ENDPOINT=https://hf-mirror.com
``` ```
5. Se o seu sistema operacional não tiver jemalloc, instale-o da seguinte maneira: 5. Se o seu sistema operacional não tiver jemalloc, instale-o da seguinte maneira:
```bash ```bash
# ubuntu # ubuntu
sudo apt-get install libjemalloc-dev sudo apt-get install libjemalloc-dev
# centos # centos
sudo yum install jemalloc sudo yum install jemalloc
# mac # mac
sudo brew install jemalloc sudo brew install jemalloc
``` ```
6. Lance o serviço de back-end: 6. Lance o serviço de back-end:
```bash ```bash
@ -335,12 +341,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd) export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh bash docker/launch_backend_service.sh
``` ```
7. Instale as dependências do front-end: 7. Instale as dependências do front-end:
```bash ```bash
cd web cd web
npm install npm install
``` ```
8. Lance o serviço de front-end: 8. Lance o serviço de front-end:
```bash ```bash
@ -350,11 +358,13 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_O seguinte resultado confirma o lançamento bem-sucedido do sistema:_ _O seguinte resultado confirma o lançamento bem-sucedido do sistema:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187) ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Pare os serviços de front-end e back-end do RAGFlow após a conclusão do desenvolvimento: 9. Pare os serviços de front-end e back-end do RAGFlow após a conclusão do desenvolvimento:
```bash ```bash
pkill -f "ragflow_server.py|task_executor.py" pkill -f "ragflow_server.py|task_executor.py"
``` ```
## 📚 Documentação ## 📚 Documentação
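The Elasticsearch-to-Infinity switch described in this README is three commands plus one variable edit; sketched end to end below (assumes `DOC_ENGINE` already exists in **docker/.env**, and note that `-v` wipes existing volume data):

```bash
cd ragflow
docker compose -f docker/docker-compose.yml down -v         # -v clears existing data
sed -i 's|^DOC_ENGINE=.*|DOC_ENGINE=infinity|' docker/.env  # switch the doc engine
docker compose -f docker/docker-compose.yml up -d
```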

View File

@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo"> <img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -43,9 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center"> <div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> <a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@ -85,9 +83,8 @@
## 🔥 近期更新 ## 🔥 近期更新
- 2025-10-23 支援 MinerU 和 Docling 作為文件解析方法。
- 2025-10-15 支援可編排的資料管道。
- 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。 - 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。
- 2025-08-04 支援 Kimi K2 和 Grok 4 等模型.
- 2025-08-01 支援 agentic workflow 和 MCP - 2025-08-01 支援 agentic workflow 和 MCP
- 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。 - 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。
- 2025-05-05 支援跨語言查詢。 - 2025-05-05 支援跨語言查詢。
@ -135,7 +132,7 @@
## 🔎 系統架構 ## 🔎 系統架構
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 快速開始 ## 🎬 快速開始
@ -173,48 +170,47 @@
> ```bash > ```bash
> vm.max_map_count=262144 > vm.max_map_count=262144
> ``` > ```
>
2. 克隆倉庫: 2. 克隆倉庫:
```bash ```bash
$ git clone https://github.com/infiniflow/ragflow.git $ git clone https://github.com/infiniflow/ragflow.git
``` ```
3. 進入 **docker** 資料夾,利用事先編譯好的 Docker 映像啟動伺服器: 3. 進入 **docker** 資料夾,利用事先編譯好的 Docker 映像啟動伺服器:
> [!CAUTION] > [!CAUTION]
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。 > 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。 > 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.21.1`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.21.1` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。 > 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.20.5-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.20.5-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` 來下載 RAGFlow 鏡像的 `v0.20.5` 完整發行版。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks: # Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d ```
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release | | v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release | | v0.20.5-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> 注意:自 `v0.22.0` 起,我們僅發佈 slim 版本,並且不再在映像標籤後附加 **-slim** 後綴。 > [!TIP]
> 如果你遇到 Docker 映像檔拉不下來的問題,可以在 **docker/.env** 檔案內根據變數 `RAGFLOW_IMAGE` 的註解提示選擇華為雲或阿里雲的對應映像。
> [!TIP] >
> 如果你遇到 Docker 映像檔拉不下來的問題,可以在 **docker/.env** 檔案內根據變數 `RAGFLOW_IMAGE` 的註解提示選擇華為雲或阿里雲的對應映像。 > - 華為雲鏡像名:`swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
> > - 阿里雲鏡像名:`registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
> - 華為雲鏡像名:`swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
> - 阿里雲鏡像名:`registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
4. 伺服器啟動成功後再次確認伺服器狀態: 4. 伺服器啟動成功後再次確認伺服器狀態:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_出現以下介面提示說明伺服器啟動成功_ _出現以下介面提示說明伺服器啟動成功_
@ -230,15 +226,12 @@
``` ```
> 如果您跳過這一步驟系統確認步驟就登入 RAGFlow你的瀏覽器有可能會提示 `network anormal` 或 `網路異常`,因為 RAGFlow 可能並未完全啟動成功。 > 如果您跳過這一步驟系統確認步驟就登入 RAGFlow你的瀏覽器有可能會提示 `network anormal` 或 `網路異常`,因為 RAGFlow 可能並未完全啟動成功。
>
5. 在你的瀏覽器中輸入你的伺服器對應的 IP 位址並登入 RAGFlow。
5. 在你的瀏覽器中輸入你的伺服器對應的 IP 位址並登入 RAGFlow。
> 上面這個範例中,您只需輸入 http://IP_OF_YOUR_MACHINE 即可:未改動過設定則無需輸入連接埠(預設的 HTTP 服務連接埠 80 > 上面這個範例中,您只需輸入 http://IP_OF_YOUR_MACHINE 即可:未改動過設定則無需輸入連接埠(預設的 HTTP 服務連接埠 80
>
6. 在 [service_conf.yaml.template](./docker/service_conf.yaml.template) 檔案的 `user_default_llm` 欄位設定 LLM factory並在 `API_KEY` 欄填入和你選擇的大模型相對應的 API key。 6. 在 [service_conf.yaml.template](./docker/service_conf.yaml.template) 檔案的 `user_default_llm` 欄位設定 LLM factory並在 `API_KEY` 欄填入和你選擇的大模型相對應的 API key。
> 詳見 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)。 > 詳見 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)。
>
_好戲開始接著奏樂接著舞 _ _好戲開始接著奏樂接著舞 _
@ -256,7 +249,7 @@
> [./docker/README](./docker/README.md) 解釋了 [service_conf.yaml.template](./docker/service_conf.yaml.template) 用到的環境變數設定和服務配置。 > [./docker/README](./docker/README.md) 解釋了 [service_conf.yaml.template](./docker/service_conf.yaml.template) 用到的環境變數設定和服務配置。
如需更新預設的 HTTP 服務連接埠(80), 可以在[docker-compose.yml](./docker/docker-compose.yml) 檔案中將配置 `80:80` 改為 `<YOUR_SERVING_PORT>:80` 。 如需更新預設的 HTTP 服務連接埠(80), 可以在[docker-compose.yml](./docker/docker-compose.yml) 檔案中將配置`80:80` 改為`<YOUR_SERVING_PORT>:80` 。
> 所有系統配置都需要透過系統重新啟動生效: > 所有系統配置都需要透過系統重新啟動生效:
> >
@ -273,9 +266,10 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
```bash ```bash
$ docker compose -f docker/docker-compose.yml down -v $ docker compose -f docker/docker-compose.yml down -v
``` ```
Note: `-v` 將會刪除 docker 容器的 volumes已有的資料會被清空。 Note: `-v` 將會刪除 docker 容器的 volumes已有的資料會被清空。
2. 設定 **docker/.env** 目錄中的 `DOC_ENGINE` 為 `infinity`. 2. 設定 **docker/.env** 目錄中的 `DOC_ENGINE` 為 `infinity`.
3. 啟動容器: 3. 啟動容器:
```bash ```bash
@ -292,7 +286,17 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly . docker build --platform linux/amd64 --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 原始碼編譯 Docker 映像(包含 embedding 模型)
本 Docker 大小約 9 GB 左右。由於已包含 embedding 模型,所以只需依賴外部的大模型服務即可。
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
``` ```
## 🔨 以原始碼啟動服務 ## 🔨 以原始碼啟動服務
@ -303,15 +307,17 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
pipx install uv pre-commit pipx install uv pre-commit
export UV_INDEX=https://mirrors.aliyun.com/pypi/simple export UV_INDEX=https://mirrors.aliyun.com/pypi/simple
``` ```
2. 下載原始碼並安裝 Python 依賴: 2. 下載原始碼並安裝 Python 依賴:
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # install RAGFlow dependent python modules uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```
3. 透過 Docker Compose 啟動依賴的服務MinIO, Elasticsearch, Redis, and MySQL 3. 透過 Docker Compose 啟動依賴的服務MinIO, Elasticsearch, Redis, and MySQL
```bash ```bash
@ -323,11 +329,13 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
``` ```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
``` ```
4. 如果無法存取 HuggingFace可以把環境變數 `HF_ENDPOINT` 設為對應的鏡像網站: 4. 如果無法存取 HuggingFace可以把環境變數 `HF_ENDPOINT` 設為對應的鏡像網站:
```bash ```bash
export HF_ENDPOINT=https://hf-mirror.com export HF_ENDPOINT=https://hf-mirror.com
``` ```
5. 如果你的操作系统没有 jemalloc请按照如下方式安装 5. 如果你的操作系统没有 jemalloc请按照如下方式安装
```bash ```bash
@ -338,6 +346,7 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
# mac # mac
sudo brew install jemalloc sudo brew install jemalloc
``` ```
6. 啟動後端服務: 6. 啟動後端服務:
```bash ```bash
@ -345,12 +354,14 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd) export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh bash docker/launch_backend_service.sh
``` ```
7. 安裝前端依賴: 7. 安裝前端依賴:
```bash ```bash
cd web cd web
npm install npm install
``` ```
8. 啟動前端服務: 8. 啟動前端服務:
```bash ```bash
@ -360,16 +371,15 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
以下界面說明系統已成功啟動_ 以下界面說明系統已成功啟動_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187) ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
``` ```
```
9. 開發完成後停止 RAGFlow 前端和後端服務: 9. 開發完成後停止 RAGFlow 前端和後端服務:
```bash ```bash
pkill -f "ragflow_server.py|task_executor.py" pkill -f "ragflow_server.py|task_executor.py"
``` ```
## 📚 技術文檔 ## 📚 技術文檔
- [Quickstart](https://ragflow.io/docs/dev/) - [Quickstart](https://ragflow.io/docs/dev/)


@ -1,6 +1,6 @@
<div align="center"> <div align="center">
<a href="https://demo.ragflow.io/"> <a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo"> <img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
</a> </a>
</div> </div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -43,9 +43,7 @@
<a href="https://demo.ragflow.io">Demo</a> <a href="https://demo.ragflow.io">Demo</a>
</h4> </h4>
<div align="center" style="margin-top:20px;margin-bottom:20px;"> #
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center"> <div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> <a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@ -85,9 +83,8 @@
## 🔥 近期更新 ## 🔥 近期更新
- 2025-10-23 支持 MinerU 和 Docling 作为文档解析方法。 - 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型.
- 2025-10-15 支持可编排的数据管道。 - 2025-08-04 新增对 Kimi K2 和 Grok 4 等模型的支持.
- 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型。
- 2025-08-01 支持 agentic workflow 和 MCP。 - 2025-08-01 支持 agentic workflow 和 MCP。
- 2025-05-23 Agent 新增 Python/JS 代码执行器组件。 - 2025-05-23 Agent 新增 Python/JS 代码执行器组件。
- 2025-05-05 支持跨语言查询。 - 2025-05-05 支持跨语言查询。
@ -135,7 +132,7 @@
## 🔎 系统架构 ## 🔎 系统架构
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/> <img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div> </div>
## 🎬 快速开始 ## 🎬 快速开始
@ -186,7 +183,7 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。 > 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。 > 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.21.1`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.21.1` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。 > 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.20.5-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.20.5-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` 来下载 RAGFlow 镜像的 `v0.20.5` 完整发行版。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -194,17 +191,15 @@
$ docker compose -f docker-compose.yml up -d $ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks: # To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env # docker compose -f docker-compose-gpu.yml up -d
# docker compose -f docker-compose.yml up -d
``` ```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | ✔️ | Stable release | | v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release | | v0.20.5-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
> 注意:从 `v0.22.0` 开始,我们只发布 slim 版本,并且不再在镜像标签后附加 **-slim** 后缀。
> [!TIP] > [!TIP]
> 如果你遇到 Docker 镜像拉不下来的问题,可以在 **docker/.env** 文件内根据变量 `RAGFLOW_IMAGE` 的注释提示选择华为云或者阿里云的相应镜像。 > 如果你遇到 Docker 镜像拉不下来的问题,可以在 **docker/.env** 文件内根据变量 `RAGFLOW_IMAGE` 的注释提示选择华为云或者阿里云的相应镜像。
@ -215,7 +210,7 @@
4. 服务器启动成功后再次确认服务器状态: 4. 服务器启动成功后再次确认服务器状态:
```bash ```bash
$ docker logs -f docker-ragflow-cpu-1 $ docker logs -f ragflow-server
``` ```
_出现以下界面提示说明服务器启动成功_ _出现以下界面提示说明服务器启动成功_
@ -291,7 +286,17 @@ RAGFlow 默认使用 Elasticsearch 存储文本和向量数据. 如果要切换
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly . docker build --platform linux/amd64 --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 源码编译 Docker 镜像(包含 embedding 模型)
本 Docker 大小约 9 GB 左右。由于已包含 embedding 模型,所以只需依赖外部的大模型服务即可。
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
``` ```
## 🔨 以源代码启动服务 ## 🔨 以源代码启动服务
@ -308,7 +313,7 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
uv sync --python 3.10 # install RAGFlow dependent python modules uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv run download_deps.py uv run download_deps.py
pre-commit install pre-commit install
``` ```


@ -15,55 +15,22 @@ It consists of a server-side Service and a command-line client (CLI), both imple
- **Admin Service**: A backend service that interfaces with the RAGFlow system to execute administrative operations and monitor its status.
- **Admin CLI**: A command-line interface that allows users to connect to the Admin Service and issue commands for system management.
### Starting the Admin Service
#### Launching from source code 1. Before starting the Admin Service, make sure the RAGFlow system is already running.
1. Before starting the Admin Service, make sure the RAGFlow system is already running.
2. Launch from source code:
```bash
python admin/server/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
#### Using the Docker image
1. Before startup, configure the `docker_compose.yml` file to enable the admin server:
```yaml
command:
- --enable-adminserver
```
2. Start the containers; the service will start and listen for incoming connections from the CLI on the configured port.
2. Run the service script:
```bash
python admin/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
### Using the Admin CLI
1. Ensure the Admin Service is running.
2. Install ragflow-cli. 2. Launch the CLI client:
```bash
pip install ragflow-cli==0.21.1 python admin/admin_client.py -h 0.0.0.0 -p 9381
```
3. Launch the CLI client:
```bash
ragflow-cli -h 127.0.0.1 -p 9381
```
You will be prompted to enter the superuser's password to log in.
The default password is admin.
**Parameters:**
- `-h`: RAGFlow admin server host address
- `-p`: RAGFlow admin server port
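For reference, here is a minimal sketch of the authentication handshake the CLI performs when it connects, based on `verify_admin` in the `admin/admin_client.py` added by this PR; the host, port, and password below are placeholder values:

```python
# Sketch only: mirrors verify_admin() in admin/admin_client.py.
import base64

import requests
from requests.auth import HTTPBasicAuth

HOST, PORT = "127.0.0.1", 9381                 # assumed local deployment
ACCOUNT = "admin@ragflow.io"                   # default superuser account
password_b64 = base64.b64encode(b"admin").decode("utf-8")  # password travels base64-encoded

resp = requests.get(
    f"http://{HOST}:{PORT}/api/v1/admin/auth",
    auth=HTTPBasicAuth(ACCOUNT, password_b64),
)
body = resp.json()
if resp.status_code == 200 and body.get("code", -1) == 0:
    print("Authentication successful.")
else:
    print(f"Authentication failed: {body.get('message', 'Unknown error')}")
```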
## Supported Commands
@ -75,7 +42,12 @@ Commands are case-insensitive and must be terminated with a semicolon (`;`).
- Lists all available services within the RAGFlow system.
- `SHOW SERVICE <id>;`
- Shows detailed status information for the service identified by `<id>`.
- `STARTUP SERVICE <id>;`
- Attempts to start the service identified by `<id>`.
- `SHUTDOWN SERVICE <id>;`
- Attempts to gracefully shut down the service identified by `<id>`.
- `RESTART SERVICE <id>;`
- Attempts to restart the service identified by `<id>`.
### User Management Commands
@ -83,17 +55,10 @@ Commands are case-insensitive and must be terminated with a semicolon (`;`).
- Lists all users known to the system.
- `SHOW USER '<username>';`
- Shows details and permissions for the specified user. The username must be enclosed in single or double quotes.
- `CREATE USER <username> <password>;`
- Creates a user with the given username and password. The username and password must be enclosed in single or double quotes.
- `DROP USER '<username>';`
- Removes the specified user from the system. Use with caution.
- `ALTER USER PASSWORD '<username>' '<new_password>';`
- Changes the password for the specified user.
- `ALTER USER ACTIVE <username> <on/off>;`
- Sets the specified user to active or inactive.
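As a rough illustration, each of these statements maps onto a plain HTTP endpoint in the handlers of `admin/admin_client.py`; the sketch below shows the calls behind `LIST USERS;`, `SHOW USER;`, and `DROP USER;` (the username `alice` is hypothetical, and the credentials are the ones established at login):

```python
# Sketch only: endpoints taken from the handlers in admin/admin_client.py.
import requests
from requests.auth import HTTPBasicAuth

auth = HTTPBasicAuth("admin@ragflow.io", "<base64-encoded password>")  # placeholder
base = "http://127.0.0.1:9381/api/v1/admin"

print(requests.get(f"{base}/users", auth=auth).json()["data"])              # LIST USERS;
print(requests.get(f"{base}/users/alice", auth=auth).json()["data"])        # SHOW USER 'alice';
print(requests.delete(f"{base}/users/alice", auth=auth).json()["message"])  # DROP USER 'alice';
```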
### Data and Agent Commands

admin/admin_client.py Normal file

@ -0,0 +1,574 @@
import argparse
import base64
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from typing import Dict, List, Any
from lark import Lark, Transformer, Tree
import requests
from requests.auth import HTTPBasicAuth
from api.common.base64 import encode_to_base64
GRAMMAR = r"""
start: command
command: sql_command | meta_command
sql_command: list_services
| show_service
| startup_service
| shutdown_service
| restart_service
| list_users
| show_user
| drop_user
| alter_user
| create_user
| activate_user
| list_datasets
| list_agents
// meta command definition
meta_command: "\\" meta_command_name [meta_args]
meta_command_name: /[a-zA-Z?]+/
meta_args: (meta_arg)+
meta_arg: /[^\s"']+/ | quoted_string
// command definition
LIST: "LIST"i
SERVICES: "SERVICES"i
SHOW: "SHOW"i
CREATE: "CREATE"i
SERVICE: "SERVICE"i
SHUTDOWN: "SHUTDOWN"i
STARTUP: "STARTUP"i
RESTART: "RESTART"i
USERS: "USERS"i
DROP: "DROP"i
USER: "USER"i
ALTER: "ALTER"i
ACTIVE: "ACTIVE"i
PASSWORD: "PASSWORD"i
DATASETS: "DATASETS"i
OF: "OF"i
AGENTS: "AGENTS"i
list_services: LIST SERVICES ";"
show_service: SHOW SERVICE NUMBER ";"
startup_service: STARTUP SERVICE NUMBER ";"
shutdown_service: SHUTDOWN SERVICE NUMBER ";"
restart_service: RESTART SERVICE NUMBER ";"
list_users: LIST USERS ";"
drop_user: DROP USER quoted_string ";"
alter_user: ALTER USER PASSWORD quoted_string quoted_string ";"
show_user: SHOW USER quoted_string ";"
create_user: CREATE USER quoted_string quoted_string ";"
activate_user: ALTER USER ACTIVE quoted_string status ";"
list_datasets: LIST DATASETS OF quoted_string ";"
list_agents: LIST AGENTS OF quoted_string ";"
identifier: WORD
quoted_string: QUOTED_STRING
status: WORD
QUOTED_STRING: /'[^']+'/ | /"[^"]+"/
WORD: /[a-zA-Z0-9_\-\.]+/
NUMBER: /[0-9]+/
%import common.WS
%ignore WS
"""
class AdminTransformer(Transformer):
def start(self, items):
return items[0]
def command(self, items):
return items[0]
def list_services(self, items):
result = {'type': 'list_services'}
return result
def show_service(self, items):
service_id = int(items[2])
return {"type": "show_service", "number": service_id}
def startup_service(self, items):
service_id = int(items[2])
return {"type": "startup_service", "number": service_id}
def shutdown_service(self, items):
service_id = int(items[2])
return {"type": "shutdown_service", "number": service_id}
def restart_service(self, items):
service_id = int(items[2])
return {"type": "restart_service", "number": service_id}
def list_users(self, items):
return {"type": "list_users"}
def show_user(self, items):
user_name = items[2]
return {"type": "show_user", "username": user_name}
def drop_user(self, items):
user_name = items[2]
return {"type": "drop_user", "username": user_name}
def alter_user(self, items):
user_name = items[3]
new_password = items[4]
return {"type": "alter_user", "username": user_name, "password": new_password}
def create_user(self, items):
user_name = items[2]
password = items[3]
return {"type": "create_user", "username": user_name, "password": password, "role": "user"}
def activate_user(self, items):
user_name = items[3]
activate_status = items[4]
return {"type": "activate_user", "activate_status": activate_status, "username": user_name}
def list_datasets(self, items):
user_name = items[3]
return {"type": "list_datasets", "username": user_name}
def list_agents(self, items):
user_name = items[3]
return {"type": "list_agents", "username": user_name}
def meta_command(self, items):
command_name = str(items[0]).lower()
args = items[1:] if len(items) > 1 else []
# handle quoted parameter
parsed_args = []
for arg in args:
if hasattr(arg, 'value'):
parsed_args.append(arg.value)
else:
parsed_args.append(str(arg))
return {'type': 'meta', 'command': command_name, 'args': parsed_args}
def meta_command_name(self, items):
return items[0]
def meta_args(self, items):
return items
def encrypt(input_string):
pub = '-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArq9XTUSeYr2+N1h3Afl/z8Dse/2yD0ZGrKwx+EEEcdsBLca9Ynmx3nIB5obmLlSfmskLpBo0UACBmB5rEjBp2Q2f3AG3Hjd4B+gNCG6BDaawuDlgANIhGnaTLrIqWrrcm4EMzJOnAOI1fgzJRsOOUEfaS318Eq9OVO3apEyCCt0lOQK6PuksduOjVxtltDav+guVAA068NrPYmRNabVKRNLJpL8w4D44sfth5RvZ3q9t+6RTArpEtc5sh5ChzvqPOzKGMXW83C95TxmXqpbK6olN4RevSfVjEAgCydH6HN6OhtOQEcnrU97r9H0iZOWwbw3pVrZiUkuRD1R56Wzs2wIDAQAB\n-----END PUBLIC KEY-----'
pub_key = RSA.importKey(pub)
cipher = Cipher_pkcs1_v1_5.new(pub_key)
cipher_text = cipher.encrypt(base64.b64encode(input_string.encode('utf-8')))
return base64.b64encode(cipher_text).decode("utf-8")
class AdminCommandParser:
def __init__(self):
self.parser = Lark(GRAMMAR, start='start', parser='lalr', transformer=AdminTransformer())
self.command_history = []
def parse_command(self, command_str: str) -> Dict[str, Any]:
if not command_str.strip():
return {'type': 'empty'}
self.command_history.append(command_str)
try:
result = self.parser.parse(command_str)
return result
except Exception as e:
return {'type': 'error', 'message': f'Parse error: {str(e)}'}
class AdminCLI:
def __init__(self):
self.parser = AdminCommandParser()
self.is_interactive = False
self.admin_account = "admin@ragflow.io"
self.admin_password: str = "admin"
self.host: str = ""
self.port: int = 0
def verify_admin(self, args):
conn_info = self._parse_connection_args(args)
if 'error' in conn_info:
print(f"Error: {conn_info['error']}")
return
self.host = conn_info['host']
self.port = conn_info['port']
print(f"Attempt to access ip: {self.host}, port: {self.port}")
url = f'http://{self.host}:{self.port}/api/v1/admin/auth'
try_count = 0
while True:
try_count += 1
if try_count > 3:
return False
admin_passwd = input(f"password for {self.admin_account}: ").strip()
try:
self.admin_password = encode_to_base64(admin_passwd)
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
if response.status_code == 200:
res_json = response.json()
error_code = res_json.get('code', -1)
if error_code == 0:
print("Authentication successful.")
return True
else:
error_message = res_json.get('message', 'Unknown error')
print(f"Authentication failed: {error_message}, try again")
continue
else:
print(f"Bad responsestatus: {response.status_code}, try again")
except Exception:
print(f"Can't access {self.host}, port: {self.port}")
def _print_table_simple(self, data):
if not data:
print("No data to print")
return
if isinstance(data, dict):
# handle single row data
data = [data]
columns = list(data[0].keys())
col_widths = {}
for col in columns:
max_width = len(str(col))
for item in data:
value_len = len(str(item.get(col, '')))
if value_len > max_width:
max_width = value_len
col_widths[col] = max(2, max_width)
# Generate delimiter
separator = "+" + "+".join(["-" * (col_widths[col] + 2) for col in columns]) + "+"
# Print header
print(separator)
header = "|" + "|".join([f" {col:<{col_widths[col]}} " for col in columns]) + "|"
print(header)
print(separator)
# Print data
for item in data:
row = "|"
for col in columns:
value = str(item.get(col, ''))
if len(value) > col_widths[col]:
value = value[:col_widths[col] - 3] + "..."
row += f" {value:<{col_widths[col]}} |"
print(row)
print(separator)
def run_interactive(self):
self.is_interactive = True
print("RAGFlow Admin command line interface - Type '\\?' for help, '\\q' to quit")
while True:
try:
command = input("admin> ").strip()
if not command:
continue
print(f"command: {command}")
result = self.parser.parse_command(command)
self.execute_command(result)
if isinstance(result, Tree):
continue
if result.get('type') == 'meta' and result.get('command') in ['q', 'quit', 'exit']:
break
except KeyboardInterrupt:
print("\nUse '\\q' to quit")
except EOFError:
print("\nGoodbye!")
break
def run_single_command(self, args):
conn_info = self._parse_connection_args(args)
if 'error' in conn_info:
print(f"Error: {conn_info['error']}")
return
def _parse_connection_args(self, args: List[str]) -> Dict[str, Any]:
parser = argparse.ArgumentParser(description='Admin CLI Client', add_help=False)
parser.add_argument('-h', '--host', default='localhost', help='Admin service host')
parser.add_argument('-p', '--port', type=int, default=8080, help='Admin service port')
try:
parsed_args, remaining_args = parser.parse_known_args(args)
return {
'host': parsed_args.host,
'port': parsed_args.port,
}
except SystemExit:
return {'error': 'Invalid connection arguments'}
def execute_command(self, parsed_command: Dict[str, Any]):
command_dict: dict
if isinstance(parsed_command, Tree):
command_dict = parsed_command.children[0]
else:
if parsed_command['type'] == 'error':
print(f"Error: {parsed_command['message']}")
return
else:
command_dict = parsed_command
# print(f"Parsed command: {command_dict}")
command_type = command_dict['type']
match command_type:
case 'list_services':
self._handle_list_services(command_dict)
case 'show_service':
self._handle_show_service(command_dict)
case 'restart_service':
self._handle_restart_service(command_dict)
case 'shutdown_service':
self._handle_shutdown_service(command_dict)
case 'startup_service':
self._handle_startup_service(command_dict)
case 'list_users':
self._handle_list_users(command_dict)
case 'show_user':
self._handle_show_user(command_dict)
case 'drop_user':
self._handle_drop_user(command_dict)
case 'alter_user':
self._handle_alter_user(command_dict)
case 'create_user':
self._handle_create_user(command_dict)
case 'activate_user':
self._handle_activate_user(command_dict)
case 'list_datasets':
self._handle_list_datasets(command_dict)
case 'list_agents':
self._handle_list_agents(command_dict)
case 'meta':
self._handle_meta_command(command_dict)
case _:
print(f"Command '{command_type}' would be executed with API")
def _handle_list_services(self, command):
print("Listing all services")
url = f'http://{self.host}:{self.port}/api/v1/admin/services'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_service(self, command):
service_id: int = command['number']
print(f"Showing service: {service_id}")
def _handle_restart_service(self, command):
service_id: int = command['number']
print(f"Restart service {service_id}")
def _handle_shutdown_service(self, command):
service_id: int = command['number']
print(f"Shutdown service {service_id}")
def _handle_startup_service(self, command):
service_id: int = command['number']
print(f"Startup service {service_id}")
def _handle_list_users(self, command):
print("Listing all users")
url = f'http://{self.host}:{self.port}/api/v1/admin/users'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Showing user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get user {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_drop_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Drop user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}'
response = requests.delete(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to drop user, code: {res_json['code']}, message: {res_json['message']}")
def _handle_alter_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
password_tree: Tree = command['password']
password: str = password_tree.children[0].strip("'\"")
print(f"Alter user: {username}, password: {password}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/password'
response = requests.put(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password),
json={'new_password': encrypt(password)})
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to alter password, code: {res_json['code']}, message: {res_json['message']}")
def _handle_create_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
password_tree: Tree = command['password']
password: str = password_tree.children[0].strip("'\"")
role: str = command['role']
print(f"Create user: {username}, password: {password}, role: {role}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users'
response = requests.post(
url,
auth=HTTPBasicAuth(self.admin_account, self.admin_password),
json={'username': username, 'password': encrypt(password), 'role': role}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to create user {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_activate_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
activate_tree: Tree = command['activate_status']
activate_status: str = activate_tree.children[0].strip("'\"")
if activate_status.lower() in ['on', 'off']:
print(f"Alter user {username} activate status, turn {activate_status.lower()}.")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/activate'
response = requests.put(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password),
json={'activate_status': activate_status})
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to alter activate status, code: {res_json['code']}, message: {res_json['message']}")
else:
print(f"Unknown activate status: {activate_status}.")
def _handle_list_datasets(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Listing all datasets of user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/datasets'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all datasets of {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_list_agents(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Listing all agents of user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/agents'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all agents of {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_meta_command(self, command):
meta_command = command['command']
args = command.get('args', [])
if meta_command in ['?', 'h', 'help']:
self.show_help()
elif meta_command in ['q', 'quit', 'exit']:
print("Goodbye!")
else:
print(f"Meta command '{meta_command}' with args {args}")
def show_help(self):
"""Help info"""
help_text = """
Commands:
LIST SERVICES
SHOW SERVICE <service>
STARTUP SERVICE <service>
SHUTDOWN SERVICE <service>
RESTART SERVICE <service>
LIST USERS
SHOW USER <user>
DROP USER <user>
CREATE USER <user> <password>
ALTER USER PASSWORD <user> <new_password>
ALTER USER ACTIVE <user> <on/off>
LIST DATASETS OF <user>
LIST AGENTS OF <user>
Meta Commands:
\\?, \\h, \\help Show this help
\\q, \\quit, \\exit Quit the CLI
"""
print(help_text)
def main():
import sys
cli = AdminCLI()
if len(sys.argv) == 1 or (len(sys.argv) > 1 and sys.argv[1] == '-'):
print(r"""
____ ___ ______________ ___ __ _
/ __ \/ | / ____/ ____/ /___ _ __ / | ____/ /___ ___ (_)___
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / /| |/ __ / __ `__ \/ / __ \
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / ___ / /_/ / / / / / / / / / /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
if cli.verify_admin(sys.argv):
cli.run_interactive()
else:
if cli.verify_admin(sys.argv):
cli.run_interactive()
# cli.run_single_command(sys.argv[1:])
if __name__ == '__main__':
main()
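The client above rests on a Lark LALR grammar whose `Transformer` rewrites each parsed statement into a plain dict before dispatch in `execute_command`. A minimal, self-contained sketch of that pattern, using a toy one-command grammar rather than the full `GRAMMAR`:

```python
# Toy illustration of the Lark parse-to-dict pattern used by AdminCLI.
from lark import Lark, Transformer

DEMO_GRAMMAR = r"""
start: list_users
LIST: "LIST"i
USERS: "USERS"i
list_users: LIST USERS ";"
%import common.WS
%ignore WS
"""

class DemoTransformer(Transformer):
    # Each rule method replaces its parse-tree node with a command dict.
    def list_users(self, items):
        return {"type": "list_users"}

    def start(self, items):
        return items[0]

parser = Lark(DEMO_GRAMMAR, start="start", parser="lalr", transformer=DemoTransformer())
print(parser.parse("list users;"))  # -> {'type': 'list_users'}
```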


@ -1,18 +1,3 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import signal
@ -23,14 +8,10 @@ import traceback
from werkzeug.serving import run_simple
from flask import Flask
from routes import admin_bp
from common.log_utils import init_root_logger from api.utils.log_utils import init_root_logger
from common.constants import SERVICE_CONF from api.constants import SERVICE_CONF
from common.config_utils import show_configs
from api import settings
from config import load_configurations, SERVICE_CONFIGS
from auth import init_default_admin, setup_auth
from flask_session import Session
from flask_login import LoginManager
stop_event = threading.Event()
@ -46,18 +27,7 @@ if __name__ == '__main__':
app = Flask(__name__)
app.register_blueprint(admin_bp)
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
app.config["MAX_CONTENT_LENGTH"] = int(
os.environ.get("MAX_CONTENT_LENGTH", 1024 * 1024 * 1024)
)
Session(app)
show_configs()
login_manager = LoginManager()
login_manager.init_app(app)
settings.init_settings()
setup_auth(login_manager)
init_default_admin()
SERVICE_CONFIGS.configs = load_configurations(SERVICE_CONF)
try:
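Pieced together from the imports and calls visible in this diff, here is a rough, hypothetical sketch of the resulting server bootstrap; the bind address and port are placeholders (9381 is the CLI's default port from the README), not values taken from this file:

```python
# Hypothetical reconstruction of the bootstrap sequence shown above.
import threading

from flask import Flask
from werkzeug.serving import run_simple

from routes import admin_bp                      # blueprint registered above
from api import settings
from config import load_configurations, SERVICE_CONFIGS
from common.constants import SERVICE_CONF

stop_event = threading.Event()

app = Flask(__name__)
app.register_blueprint(admin_bp)
settings.init_settings()
SERVICE_CONFIGS.configs = load_configurations(SERVICE_CONF)

run_simple("0.0.0.0", 9381, app)                 # serve until interrupted
```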

admin/auth.py Normal file

@ -0,0 +1,57 @@
import logging
import uuid
from functools import wraps
from flask import request, jsonify
from exceptions import AdminException
from api.db.init_data import encode_to_base64
from api.db.services import UserService
def check_admin(username: str, password: str):
users = UserService.query(email=username)
if not users:
logging.info(f"Username: {username} is not registered!")
user_info = {
"id": uuid.uuid1().hex,
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
"creator": "system",
"status": "1",
}
if not UserService.save(**user_info):
raise AdminException("Can't init admin.", 500)
user = UserService.query_user(username, password)
if user:
return True
else:
return False
def login_verify(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if not auth or 'username' not in auth.parameters or 'password' not in auth.parameters:
return jsonify({
"code": 401,
"message": "Authentication required",
"data": None
}), 200
username = auth.parameters['username']
password = auth.parameters['password']
# TODO: to check the username and password from DB
if check_admin(username, password) is False:
return jsonify({
"code": 403,
"message": "Access denied",
"data": None
}), 200
return f(*args, **kwargs)
return decorated
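For context, a hedged sketch of how a decorator like `login_verify` is typically attached to a route on the `admin_bp` blueprint registered in `admin_server.py`; the `/ping` endpoint below is made up for illustration:

```python
# Illustrative only: wiring login_verify onto a Flask route.
from flask import Blueprint, jsonify

from auth import login_verify  # the decorator defined above

admin_bp = Blueprint("admin", __name__, url_prefix="/api/v1/admin")

@admin_bp.route("/ping", methods=["GET"])
@login_verify  # rejects the request unless the Basic-auth credentials pass check_admin()
def ping():
    return jsonify({"code": 0, "message": "pong", "data": None})
```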


@ -1,47 +0,0 @@
#!/bin/bash
set -e
echo "🚀 Start building..."
echo "================================"
PROJECT_NAME="ragflow-cli"
RELEASE_DIR="release"
BUILD_DIR="dist"
SOURCE_DIR="src"
PACKAGE_DIR="ragflow_cli"
echo "🧹 Clean old build folder..."
rm -rf release/
echo "📁 Prepare source code..."
mkdir release/$PROJECT_NAME/$SOURCE_DIR -p
cp pyproject.toml release/$PROJECT_NAME/pyproject.toml
cp README.md release/$PROJECT_NAME/README.md
mkdir release/$PROJECT_NAME/$SOURCE_DIR/$PACKAGE_DIR -p
cp admin_client.py release/$PROJECT_NAME/$SOURCE_DIR/$PACKAGE_DIR/admin_client.py
if [ -d "release/$PROJECT_NAME/$SOURCE_DIR" ]; then
echo "✅ source dir: release/$PROJECT_NAME/$SOURCE_DIR"
else
echo "❌ source dir not exist: release/$PROJECT_NAME/$SOURCE_DIR"
exit 1
fi
echo "🔨 Make build file..."
cd release/$PROJECT_NAME
export PYTHONPATH=$(pwd)
python -m build
echo "✅ check build result..."
if [ -d "$BUILD_DIR" ]; then
echo "📦 Package generated:"
ls -la $BUILD_DIR/
else
echo "❌ Build Failed: $BUILD_DIR not exist."
exit 1
fi
echo "🎉 Build finished successfully!"


@ -1,931 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import argparse
import base64
from cmd import Cmd
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from typing import Dict, List, Any
from lark import Lark, Transformer, Tree
import requests
GRAMMAR = r"""
start: command
command: sql_command | meta_command
sql_command: list_services
| show_service
| startup_service
| shutdown_service
| restart_service
| list_users
| show_user
| drop_user
| alter_user
| create_user
| activate_user
| list_datasets
| list_agents
| create_role
| drop_role
| alter_role
| list_roles
| show_role
| grant_permission
| revoke_permission
| alter_user_role
| show_user_permission
// meta command definition
meta_command: "\\" meta_command_name [meta_args]
meta_command_name: /[a-zA-Z?]+/
meta_args: (meta_arg)+
meta_arg: /[^\s"']+/ | quoted_string
// command definition
LIST: "LIST"i
SERVICES: "SERVICES"i
SHOW: "SHOW"i
CREATE: "CREATE"i
SERVICE: "SERVICE"i
SHUTDOWN: "SHUTDOWN"i
STARTUP: "STARTUP"i
RESTART: "RESTART"i
USERS: "USERS"i
DROP: "DROP"i
USER: "USER"i
ALTER: "ALTER"i
ACTIVE: "ACTIVE"i
PASSWORD: "PASSWORD"i
DATASETS: "DATASETS"i
OF: "OF"i
AGENTS: "AGENTS"i
ROLE: "ROLE"i
ROLES: "ROLES"i
DESCRIPTION: "DESCRIPTION"i
GRANT: "GRANT"i
REVOKE: "REVOKE"i
ALL: "ALL"i
PERMISSION: "PERMISSION"i
TO: "TO"i
FROM: "FROM"i
FOR: "FOR"i
RESOURCES: "RESOURCES"i
ON: "ON"i
SET: "SET"i
list_services: LIST SERVICES ";"
show_service: SHOW SERVICE NUMBER ";"
startup_service: STARTUP SERVICE NUMBER ";"
shutdown_service: SHUTDOWN SERVICE NUMBER ";"
restart_service: RESTART SERVICE NUMBER ";"
list_users: LIST USERS ";"
drop_user: DROP USER quoted_string ";"
alter_user: ALTER USER PASSWORD quoted_string quoted_string ";"
show_user: SHOW USER quoted_string ";"
create_user: CREATE USER quoted_string quoted_string ";"
activate_user: ALTER USER ACTIVE quoted_string status ";"
list_datasets: LIST DATASETS OF quoted_string ";"
list_agents: LIST AGENTS OF quoted_string ";"
create_role: CREATE ROLE identifier [DESCRIPTION quoted_string] ";"
drop_role: DROP ROLE identifier ";"
alter_role: ALTER ROLE identifier SET DESCRIPTION quoted_string ";"
list_roles: LIST ROLES ";"
show_role: SHOW ROLE identifier ";"
grant_permission: GRANT action_list ON identifier TO ROLE identifier ";"
revoke_permission: REVOKE action_list ON identifier FROM ROLE identifier ";"
alter_user_role: ALTER USER quoted_string SET ROLE identifier ";"
show_user_permission: SHOW USER PERMISSION quoted_string ";"
action_list: identifier ("," identifier)*
identifier: WORD
quoted_string: QUOTED_STRING
status: WORD
QUOTED_STRING: /'[^']+'/ | /"[^"]+"/
WORD: /[a-zA-Z0-9_\-\.]+/
NUMBER: /[0-9]+/
%import common.WS
%ignore WS
"""
class AdminTransformer(Transformer):
def start(self, items):
return items[0]
def command(self, items):
return items[0]
def list_services(self, items):
result = {'type': 'list_services'}
return result
def show_service(self, items):
service_id = int(items[2])
return {"type": "show_service", "number": service_id}
def startup_service(self, items):
service_id = int(items[2])
return {"type": "startup_service", "number": service_id}
def shutdown_service(self, items):
service_id = int(items[2])
return {"type": "shutdown_service", "number": service_id}
def restart_service(self, items):
service_id = int(items[2])
return {"type": "restart_service", "number": service_id}
def list_users(self, items):
return {"type": "list_users"}
def show_user(self, items):
user_name = items[2]
return {"type": "show_user", "user_name": user_name}
def drop_user(self, items):
user_name = items[2]
return {"type": "drop_user", "user_name": user_name}
def alter_user(self, items):
user_name = items[3]
new_password = items[4]
return {"type": "alter_user", "user_name": user_name, "password": new_password}
def create_user(self, items):
user_name = items[2]
password = items[3]
return {"type": "create_user", "user_name": user_name, "password": password, "role": "user"}
def activate_user(self, items):
user_name = items[3]
activate_status = items[4]
return {"type": "activate_user", "activate_status": activate_status, "user_name": user_name}
def list_datasets(self, items):
user_name = items[3]
return {"type": "list_datasets", "user_name": user_name}
def list_agents(self, items):
user_name = items[3]
return {"type": "list_agents", "user_name": user_name}
def create_role(self, items):
role_name = items[2]
if len(items) > 4:
description = items[4]
return {"type": "create_role", "role_name": role_name, "description": description}
else:
return {"type": "create_role", "role_name": role_name}
def drop_role(self, items):
role_name = items[2]
return {"type": "drop_role", "role_name": role_name}
def alter_role(self, items):
role_name = items[2]
description = items[5]
return {"type": "alter_role", "role_name": role_name, "description": description}
def list_roles(self, items):
return {"type": "list_roles"}
def show_role(self, items):
role_name = items[2]
return {"type": "show_role", "role_name": role_name}
def grant_permission(self, items):
action_list = items[1]
resource = items[3]
role_name = items[6]
return {"type": "grant_permission", "role_name": role_name, "resource": resource, "actions": action_list}
def revoke_permission(self, items):
action_list = items[1]
resource = items[3]
role_name = items[6]
return {
"type": "revoke_permission",
"role_name": role_name,
"resource": resource, "actions": action_list
}
def alter_user_role(self, items):
user_name = items[2]
role_name = items[5]
return {"type": "alter_user_role", "user_name": user_name, "role_name": role_name}
def show_user_permission(self, items):
user_name = items[3]
return {"type": "show_user_permission", "user_name": user_name}
def action_list(self, items):
return items
def meta_command(self, items):
command_name = str(items[0]).lower()
args = items[1:] if len(items) > 1 else []
# handle quoted parameter
parsed_args = []
for arg in args:
if hasattr(arg, 'value'):
parsed_args.append(arg.value)
else:
parsed_args.append(str(arg))
return {'type': 'meta', 'command': command_name, 'args': parsed_args}
def meta_command_name(self, items):
return items[0]
def meta_args(self, items):
return items
def encrypt(input_string):
pub = '-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArq9XTUSeYr2+N1h3Afl/z8Dse/2yD0ZGrKwx+EEEcdsBLca9Ynmx3nIB5obmLlSfmskLpBo0UACBmB5rEjBp2Q2f3AG3Hjd4B+gNCG6BDaawuDlgANIhGnaTLrIqWrrcm4EMzJOnAOI1fgzJRsOOUEfaS318Eq9OVO3apEyCCt0lOQK6PuksduOjVxtltDav+guVAA068NrPYmRNabVKRNLJpL8w4D44sfth5RvZ3q9t+6RTArpEtc5sh5ChzvqPOzKGMXW83C95TxmXqpbK6olN4RevSfVjEAgCydH6HN6OhtOQEcnrU97r9H0iZOWwbw3pVrZiUkuRD1R56Wzs2wIDAQAB\n-----END PUBLIC KEY-----'
pub_key = RSA.importKey(pub)
cipher = Cipher_pkcs1_v1_5.new(pub_key)
cipher_text = cipher.encrypt(base64.b64encode(input_string.encode('utf-8')))
return base64.b64encode(cipher_text).decode("utf-8")
def encode_to_base64(input_string):
base64_encoded = base64.b64encode(input_string.encode('utf-8'))
return base64_encoded.decode('utf-8')
class AdminCLI(Cmd):
def __init__(self):
super().__init__()
self.parser = Lark(GRAMMAR, start='start', parser='lalr', transformer=AdminTransformer())
self.command_history = []
self.is_interactive = False
self.admin_account = "admin@ragflow.io"
self.admin_password: str = "admin"
self.session = requests.Session()
self.access_token: str = ""
self.host: str = ""
self.port: int = 0
intro = r"""Type "\h" for help."""
prompt = "admin> "
def onecmd(self, command: str) -> bool:
try:
result = self.parse_command(command)
if isinstance(result, dict):
if 'type' in result and result.get('type') == 'empty':
return False
self.execute_command(result)
if isinstance(result, Tree):
return False
if result.get('type') == 'meta' and result.get('command') in ['q', 'quit', 'exit']:
return True
except KeyboardInterrupt:
print("\nUse '\\q' to quit")
except EOFError:
print("\nGoodbye!")
return True
return False
def emptyline(self) -> bool:
return False
def default(self, line: str) -> bool:
return self.onecmd(line)
def parse_command(self, command_str: str) -> dict[str, str]:
if not command_str.strip():
return {'type': 'empty'}
self.command_history.append(command_str)
try:
result = self.parser.parse(command_str)
return result
except Exception as e:
return {'type': 'error', 'message': f'Parse error: {str(e)}'}
def verify_admin(self, arguments: dict, single_command: bool):
self.host = arguments['host']
self.port = arguments['port']
print(f"Attempt to access ip: {self.host}, port: {self.port}")
url = f"http://{self.host}:{self.port}/api/v1/admin/login"
attempt_count = 3
if single_command:
attempt_count = 1
try_count = 0
while True:
try_count += 1
if try_count > attempt_count:
return False
if single_command:
admin_passwd = arguments['password']
else:
admin_passwd = input(f"password for {self.admin_account}: ").strip()
try:
self.admin_password = encrypt(admin_passwd)
response = self.session.post(url, json={'email': self.admin_account, 'password': self.admin_password})
if response.status_code == 200:
res_json = response.json()
error_code = res_json.get('code', -1)
if error_code == 0:
self.session.headers.update({
'Content-Type': 'application/json',
'Authorization': response.headers['Authorization'],
'User-Agent': 'RAGFlow-CLI/0.21.1'
})
print("Authentication successful.")
return True
else:
error_message = res_json.get('message', 'Unknown error')
print(f"Authentication failed: {error_message}, try again")
continue
else:
print(f"Bad responsestatus: {response.status_code}, password is wrong")
except Exception as e:
print(str(e))
print(f"Can't access {self.host}, port: {self.port}")
def _print_table_simple(self, data):
if not data:
print("No data to print")
return
if isinstance(data, dict):
# handle single row data
data = [data]
columns = list(data[0].keys())
col_widths = {}
def get_string_width(text):
half_width_chars = (
" !\"#$%&'()*+,-./0123456789:;<=>?@"
"ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`"
"abcdefghijklmnopqrstuvwxyz{|}~"
"\t\n\r"
)
width = 0
for char in text:
if char in half_width_chars:
width += 1
else:
width += 2
return width
for col in columns:
max_width = get_string_width(str(col))
for item in data:
value_len = get_string_width(str(item.get(col, '')))
if value_len > max_width:
max_width = value_len
col_widths[col] = max(2, max_width)
# Generate delimiter
separator = "+" + "+".join(["-" * (col_widths[col] + 2) for col in columns]) + "+"
# Print header
print(separator)
header = "|" + "|".join([f" {col:<{col_widths[col]}} " for col in columns]) + "|"
print(header)
print(separator)
# Print data
for item in data:
row = "|"
for col in columns:
value = str(item.get(col, ''))
if get_string_width(value) > col_widths[col]:
value = value[:col_widths[col] - 3] + "..."
row += f" {value:<{col_widths[col] - (get_string_width(value) - len(value))}} |"
print(row)
print(separator)
def run_interactive(self):
self.is_interactive = True
print("RAGFlow Admin command line interface - Type '\\?' for help, '\\q' to quit")
while True:
try:
command = input("admin> ").strip()
if not command:
continue
print(f"command: {command}")
result = self.parse_command(command)
self.execute_command(result)
if isinstance(result, Tree):
continue
if result.get('type') == 'meta' and result.get('command') in ['q', 'quit', 'exit']:
break
except KeyboardInterrupt:
print("\nUse '\\q' to quit")
except EOFError:
print("\nGoodbye!")
break
def run_single_command(self, command: str):
result = self.parse_command(command)
self.execute_command(result)
def parse_connection_args(self, args: List[str]) -> Dict[str, Any]:
parser = argparse.ArgumentParser(description='Admin CLI Client', add_help=False)
parser.add_argument('-h', '--host', default='localhost', help='Admin service host')
parser.add_argument('-p', '--port', type=int, default=9381, help='Admin service port')
parser.add_argument('-w', '--password', default='admin', type=str, help='Superuser password')
parser.add_argument('command', nargs='?', help='Single command')
try:
parsed_args, remaining_args = parser.parse_known_args(args)
if remaining_args:
command = remaining_args[0]
return {
'host': parsed_args.host,
'port': parsed_args.port,
'password': parsed_args.password,
'command': command
}
else:
return {
'host': parsed_args.host,
'port': parsed_args.port,
}
except SystemExit:
return {'error': 'Invalid connection arguments'}
def execute_command(self, parsed_command: Dict[str, Any]):
command_dict: dict
if isinstance(parsed_command, Tree):
command_dict = parsed_command.children[0]
else:
if parsed_command['type'] == 'error':
print(f"Error: {parsed_command['message']}")
return
else:
command_dict = parsed_command
# print(f"Parsed command: {command_dict}")
command_type = command_dict['type']
match command_type:
case 'list_services':
self._handle_list_services(command_dict)
case 'show_service':
self._handle_show_service(command_dict)
case 'restart_service':
self._handle_restart_service(command_dict)
case 'shutdown_service':
self._handle_shutdown_service(command_dict)
case 'startup_service':
self._handle_startup_service(command_dict)
case 'list_users':
self._handle_list_users(command_dict)
case 'show_user':
self._handle_show_user(command_dict)
case 'drop_user':
self._handle_drop_user(command_dict)
case 'alter_user':
self._handle_alter_user(command_dict)
case 'create_user':
self._handle_create_user(command_dict)
case 'activate_user':
self._handle_activate_user(command_dict)
case 'list_datasets':
self._handle_list_datasets(command_dict)
case 'list_agents':
self._handle_list_agents(command_dict)
case 'create_role':
self._create_role(command_dict)
case 'drop_role':
self._drop_role(command_dict)
case 'alter_role':
self._alter_role(command_dict)
case 'list_roles':
self._list_roles(command_dict)
case 'show_role':
self._show_role(command_dict)
case 'grant_permission':
self._grant_permission(command_dict)
case 'revoke_permission':
self._revoke_permission(command_dict)
case 'alter_user_role':
self._alter_user_role(command_dict)
case 'show_user_permission':
self._show_user_permission(command_dict)
case 'meta':
self._handle_meta_command(command_dict)
case _:
print(f"Command '{command_type}' would be executed with API")
def _handle_list_services(self, command):
print("Listing all services")
url = f'http://{self.host}:{self.port}/api/v1/admin/services'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all services, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_service(self, command):
service_id: int = command['number']
print(f"Showing service: {service_id}")
url = f'http://{self.host}:{self.port}/api/v1/admin/services/{service_id}'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
res_data = res_json['data']
if 'status' in res_data and res_data['status'] == 'alive':
print(f"Service {res_data['service_name']} is alive, ")
if isinstance(res_data['message'], str):
print(res_data['message'])
else:
self._print_table_simple(res_data['message'])
else:
print(f"Service {res_data['service_name']} is down, {res_data['message']}")
else:
print(f"Fail to show service, code: {res_json['code']}, message: {res_json['message']}")
def _handle_restart_service(self, command):
service_id: int = command['number']
print(f"Restart service {service_id}")
def _handle_shutdown_service(self, command):
service_id: int = command['number']
print(f"Shutdown service {service_id}")
def _handle_startup_service(self, command):
service_id: int = command['number']
print(f"Startup service {service_id}")
def _handle_list_users(self, command):
print("Listing all users")
url = f'http://{self.host}:{self.port}/api/v1/admin/users'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_user(self, command):
username_tree: Tree = command['user_name']
user_name: str = username_tree.children[0].strip("'\"")
print(f"Showing user: {user_name}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name}'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get user {user_name}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_drop_user(self, command):
username_tree: Tree = command['user_name']
user_name: str = username_tree.children[0].strip("'\"")
print(f"Drop user: {user_name}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name}'
response = self.session.delete(url)
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to drop user, code: {res_json['code']}, message: {res_json['message']}")
def _handle_alter_user(self, command):
user_name_tree: Tree = command['user_name']
user_name: str = user_name_tree.children[0].strip("'\"")
password_tree: Tree = command['password']
password: str = password_tree.children[0].strip("'\"")
print(f"Alter user: {user_name}, password: {password}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/password'
response = self.session.put(url, json={'new_password': encrypt(password)})
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to alter password, code: {res_json['code']}, message: {res_json['message']}")
def _handle_create_user(self, command):
user_name_tree: Tree = command['user_name']
user_name: str = user_name_tree.children[0].strip("'\"")
password_tree: Tree = command['password']
password: str = password_tree.children[0].strip("'\"")
role: str = command['role']
print(f"Create user: {user_name}, password: {password}, role: {role}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users'
response = self.session.post(
url,
json={'user_name': user_name, 'password': encrypt(password), 'role': role}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to create user {user_name}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_activate_user(self, command):
user_name_tree: Tree = command['user_name']
user_name: str = user_name_tree.children[0].strip("'\"")
activate_tree: Tree = command['activate_status']
activate_status: str = activate_tree.children[0].strip("'\"")
if activate_status.lower() in ['on', 'off']:
print(f"Alter user {user_name} activate status, turn {activate_status.lower()}.")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/activate'
response = self.session.put(url, json={'activate_status': activate_status})
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to alter activate status, code: {res_json['code']}, message: {res_json['message']}")
else:
print(f"Unknown activate status: {activate_status}.")
def _handle_list_datasets(self, command):
username_tree: Tree = command['user_name']
user_name: str = username_tree.children[0].strip("'\"")
print(f"Listing all datasets of user: {user_name}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/datasets'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all datasets of {user_name}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_list_agents(self, command):
username_tree: Tree = command['user_name']
user_name: str = username_tree.children[0].strip("'\"")
print(f"Listing all agents of user: {user_name}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/agents'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all agents of {user_name}, code: {res_json['code']}, message: {res_json['message']}")
def _create_role(self, command):
role_name_tree: Tree = command['role_name']
role_name: str = role_name_tree.children[0].strip("'\"")
desc_str: str = ''
if 'description' in command:
desc_tree: Tree = command['description']
desc_str = desc_tree.children[0].strip("'\"")
print(f"create role name: {role_name}, description: {desc_str}")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles'
response = self.session.post(
url,
json={'role_name': role_name, 'description': desc_str}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to create role {role_name}, code: {res_json['code']}, message: {res_json['message']}")
def _drop_role(self, command):
role_name_tree: Tree = command['role_name']
role_name: str = role_name_tree.children[0].strip("'\"")
print(f"drop role name: {role_name}")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles/{role_name}'
response = self.session.delete(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to drop role {role_name}, code: {res_json['code']}, message: {res_json['message']}")
def _alter_role(self, command):
role_name_tree: Tree = command['role_name']
role_name: str = role_name_tree.children[0].strip("'\"")
desc_tree: Tree = command['description']
desc_str: str = desc_tree.children[0].strip("'\"")
print(f"alter role name: {role_name}, description: {desc_str}")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles/{role_name}'
response = self.session.put(
url,
json={'description': desc_str}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(
f"Fail to update role {role_name} with description: {desc_str}, code: {res_json['code']}, message: {res_json['message']}")
def _list_roles(self, command):
print("Listing all roles")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to list roles, code: {res_json['code']}, message: {res_json['message']}")
def _show_role(self, command):
role_name_tree: Tree = command['role_name']
role_name: str = role_name_tree.children[0].strip("'\"")
print(f"show role: {role_name}")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles/{role_name}/permission'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to list roles, code: {res_json['code']}, message: {res_json['message']}")
def _grant_permission(self, command):
role_name_tree: Tree = command['role_name']
role_name_str: str = role_name_tree.children[0].strip("'\"")
resource_tree: Tree = command['resource']
resource_str: str = resource_tree.children[0].strip("'\"")
action_tree_list: list = command['actions']
actions: list = []
for action_tree in action_tree_list:
action_str: str = action_tree.children[0].strip("'\"")
actions.append(action_str)
print(f"grant role_name: {role_name_str}, resource: {resource_str}, actions: {actions}")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles/{role_name_str}/permission'
response = self.session.post(
url,
json={'actions': actions, 'resource': resource_str}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(
f"Fail to grant role {role_name_str} with {actions} on {resource_str}, code: {res_json['code']}, message: {res_json['message']}")
def _revoke_permission(self, command):
role_name_tree: Tree = command['role_name']
role_name_str: str = role_name_tree.children[0].strip("'\"")
resource_tree: Tree = command['resource']
resource_str: str = resource_tree.children[0].strip("'\"")
action_tree_list: list = command['actions']
actions: list = []
for action_tree in action_tree_list:
action_str: str = action_tree.children[0].strip("'\"")
actions.append(action_str)
print(f"revoke role_name: {role_name_str}, resource: {resource_str}, actions: {actions}")
url = f'http://{self.host}:{self.port}/api/v1/admin/roles/{role_name_str}/permission'
response = self.session.delete(
url,
json={'actions': actions, 'resource': resource_str}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(
f"Fail to revoke role {role_name_str} with {actions} on {resource_str}, code: {res_json['code']}, message: {res_json['message']}")
def _alter_user_role(self, command):
role_name_tree: Tree = command['role_name']
role_name_str: str = role_name_tree.children[0].strip("'\"")
user_name_tree: Tree = command['user_name']
user_name_str: str = user_name_tree.children[0].strip("'\"")
print(f"alter_user_role user_name: {user_name_str}, role_name: {role_name_str}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name_str}/role'
response = self.session.put(
url,
json={'role_name': role_name_str}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(
f"Fail to alter user: {user_name_str} to role {role_name_str}, code: {res_json['code']}, message: {res_json['message']}")
def _show_user_permission(self, command):
user_name_tree: Tree = command['user_name']
user_name_str: str = user_name_tree.children[0].strip("'\"")
print(f"show_user_permission user_name: {user_name_str}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{user_name_str}/permission'
response = self.session.get(url)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(
f"Fail to show user: {user_name_str} permission, code: {res_json['code']}, message: {res_json['message']}")
def _handle_meta_command(self, command):
meta_command = command['command']
args = command.get('args', [])
if meta_command in ['?', 'h', 'help']:
self.show_help()
elif meta_command in ['q', 'quit', 'exit']:
print("Goodbye!")
else:
print(f"Meta command '{meta_command}' with args {args}")
def show_help(self):
"""Help info"""
help_text = """
Commands:
LIST SERVICES
SHOW SERVICE <service>
STARTUP SERVICE <service>
SHUTDOWN SERVICE <service>
RESTART SERVICE <service>
LIST USERS
SHOW USER <user>
DROP USER <user>
CREATE USER <user> <password>
ALTER USER PASSWORD <user> <new_password>
ALTER USER ACTIVE <user> <on/off>
LIST DATASETS OF <user>
LIST AGENTS OF <user>
Meta Commands:
\\?, \\h, \\help Show this help
\\q, \\quit, \\exit Quit the CLI
"""
print(help_text)
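
# Example interactive exchange (illustrative; the exact prompt string is hypothetical):
#   admin> LIST USERS;
#   Listing all users
#   admin> SHOW USER 'user@example.com';
#   Showing user: user@example.com
#   admin> \q
#   Goodbye!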
def main():
import sys
cli = AdminCLI()
args = cli.parse_connection_args(sys.argv)
if 'error' in args:
print(f"Error: {args['error']}")
return
if 'command' in args:
if 'password' not in args:
print("Error: password is missing")
return
if cli.verify_admin(args, single_command=True):
command: str = args['command']
print(f"Run single command: {command}")
cli.run_single_command(command)
else:
if cli.verify_admin(args, single_command=False):
print(r"""
____ ___ ______________ ___ __ _
/ __ \/ | / ____/ ____/ /___ _ __ / | ____/ /___ ___ (_)___
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / /| |/ __ / __ `__ \/ / __ \
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / ___ / /_/ / / / / / / / / / /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
cli.cmdloop()
if __name__ == '__main__':
main()
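
For reference, a minimal programmatic run of this entry point might look like the following sketch (host, port, and password values are illustrative; main() reads sys.argv directly, and the ragflow-cli console script in the pyproject.toml below points at admin_client:main):

import sys
import admin_client

# one-shot mode: connection flags plus a single quoted command
sys.argv = ['ragflow-cli', '-h', 'localhost', '-p', '9381', '-w', 'admin', 'LIST SERVICES;']
admin_client.main()  # verifies the superuser password, then runs the command once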


@@ -1,24 +0,0 @@
[project]
name = "ragflow-cli"
version = "0.21.1"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"requests>=2.30.0,<3.0.0",
"beartype>=0.18.5,<0.19.0",
"pycryptodomex>=3.10.0",
"lark>=1.1.0",
]
[dependency-groups]
test = [
"pytest>=8.3.5",
"requests>=2.32.3",
"requests-toolbelt>=1.0.0",
]
[project.scripts]
ragflow-cli = "admin_client:main"


@@ -1,33 +1,14 @@
-#
-# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
 import logging
 import threading
 from enum import Enum
 from pydantic import BaseModel
 from typing import Any
-from common.config_utils import read_config
+from api.utils.configs import read_config
 from urllib.parse import urlparse

 class ServiceConfigs:
-    configs = dict
     def __init__(self):
         self.configs = []
         self.lock = threading.Lock()
@@ -51,11 +32,9 @@ class BaseConfig(BaseModel):
     host: str
     port: int
     service_type: str
-    detail_func_name: str

     def to_dict(self) -> dict[str, Any]:
-        return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port,
-                'service_type': self.service_type}
+        return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port, 'service_type': self.service_type}

 class MetaConfig(BaseConfig):
@@ -230,9 +209,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
             name: str = f'ragflow_{ragflow_count}'
             host: str = v['host']
             http_port: int = v['http_port']
-            config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port,
-                                         service_type="ragflow_server",
-                                         detail_func_name="check_ragflow_server_alive")
+            config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port, service_type="ragflow_server")
             configurations.append(config)
             id_count += 1
         case "es":
@@ -245,8 +222,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
             password: str = v.get('password')
             config = ElasticsearchConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
                                          retrieval_type="elasticsearch",
-                                         username=username, password=password,
-                                         detail_func_name="get_es_cluster_stats")
+                                         username=username, password=password)
             configurations.append(config)
             id_count += 1
@@ -257,9 +233,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
             host = parts[0]
             port = int(parts[1])
             database: str = v.get('db_name', 'default_db')
-            config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
-                                    retrieval_type="infinity",
-                                    db_name=database, detail_func_name="get_infinity_status")
+            config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval", retrieval_type="infinity",
+                                    db_name=database)
             configurations.append(config)
             id_count += 1
         case "minio":
@@ -270,9 +245,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
             port = int(parts[1])
             user = v.get('user')
             password = v.get('password')
-            config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password,
-                                 service_type="file_store",
-                                 store_type="minio", detail_func_name="check_minio_alive")
+            config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password, service_type="file_store",
+                                 store_type="minio")
             configurations.append(config)
             id_count += 1
         case "redis":
@@ -284,7 +258,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
             password = v.get('password')
             db: int = v.get('db')
             config = RedisConfig(id=id_count, name=name, host=host, port=port, password=password, database=db,
-                                 service_type="message_queue", mq_type="redis", detail_func_name="get_redis_info")
+                                 service_type="message_queue", mq_type="redis")
             configurations.append(config)
             id_count += 1
         case "mysql":
@@ -294,7 +268,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
             username = v.get('user')
             password = v.get('password')
             config = MySQLConfig(id=id_count, name=name, host=host, port=port, username=username, password=password,
-                                 service_type="meta_data", meta_type="mysql", detail_func_name="get_mysql_status")
+                                 service_type="meta_data", meta_type="mysql")
             configurations.append(config)
             id_count += 1
         case "admin":

admin/models.py (new file, 0 lines)

admin/responses.py (new file, 15 lines)

@@ -0,0 +1,15 @@
from flask import jsonify
def success_response(data=None, message="Success", code = 0):
return jsonify({
"code": code,
"message": message,
"data": data
}), 200
def error_response(message="Error", code=-1, data=None):
return jsonify({
"code": code,
"message": message,
"data": data
}), 400
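
For context, every admin endpoint below wraps its result in this envelope; the two shapes look like the following (illustrative values):

# success_response({"alive": True}) -> HTTP 200 with body
#   {"code": 0, "message": "Success", "data": {"alive": true}}
# error_response("bad request")     -> HTTP 400 with body
#   {"code": -1, "message": "bad request", "data": null}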

admin/routes.py (new file, 190 lines)

@@ -0,0 +1,190 @@
from flask import Blueprint, request
from auth import login_verify
from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr
from exceptions import AdminException
admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
@admin_bp.route('/auth', methods=['GET'])
@login_verify
def auth_admin():
try:
return success_response(None, "Admin is authorized", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users', methods=['GET'])
@login_verify
def list_users():
try:
users = UserMgr.get_all_users()
return success_response(users, "Get all users", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users', methods=['POST'])
@login_verify
def create_user():
try:
data = request.get_json()
if not data or 'username' not in data or 'password' not in data:
return error_response("Username and password are required", 400)
username = data['username']
password = data['password']
role = data.get('role', 'user')
res = UserMgr.create_user(username, password, role)
if res["success"]:
user_info = res["user_info"]
user_info.pop("password") # do not return password
return success_response(user_info, "User created successfully")
else:
return error_response("create user failed")
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e))
@admin_bp.route('/users/<username>', methods=['DELETE'])
@login_verify
def delete_user(username):
try:
res = UserMgr.delete_user(username)
if res["success"]:
return success_response(None, res["message"])
else:
return error_response(res["message"])
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/password', methods=['PUT'])
@login_verify
def change_password(username):
try:
data = request.get_json()
if not data or 'new_password' not in data:
return error_response("New password is required", 400)
new_password = data['new_password']
msg = UserMgr.update_user_password(username, new_password)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/activate', methods=['PUT'])
@login_verify
def alter_user_activate_status(username):
try:
data = request.get_json()
if not data or 'activate_status' not in data:
return error_response("Activation status is required", 400)
activate_status = data['activate_status']
msg = UserMgr.update_user_activate_status(username, activate_status)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>', methods=['GET'])
@login_verify
def get_user_details(username):
try:
user_details = UserMgr.get_user_details(username)
return success_response(user_details)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/datasets', methods=['GET'])
@login_verify
def get_user_datasets(username):
try:
datasets_list = UserServiceMgr.get_user_datasets(username)
return success_response(datasets_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/agents', methods=['GET'])
@login_verify
def get_user_agents(username):
try:
agents_list = UserServiceMgr.get_user_agents(username)
return success_response(agents_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services', methods=['GET'])
@login_verify
def get_services():
try:
services = ServiceMgr.get_all_services()
return success_response(services, "Get all services", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/service_types/<service_type>', methods=['GET'])
@login_verify
def get_services_by_type(service_type):  # parameter name must match the <service_type> placeholder above
    try:
        services = ServiceMgr.get_services_by_type(service_type)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['GET'])
@login_verify
def get_service(service_id):
try:
services = ServiceMgr.get_service_details(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['DELETE'])
@login_verify
def shutdown_service(service_id):
try:
services = ServiceMgr.shutdown_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['PUT'])
@login_verify
def restart_service(service_id):
try:
services = ServiceMgr.restart_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
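
A hedged sketch of exercising one of these endpoints with HTTP Basic credentials, which is what the @login_verify decorator inspects (host, port, and credential values are illustrative; the admin CLI sends the password through encrypt() before transmitting):

import requests

resp = requests.get(
    "http://localhost:9381/api/v1/admin/services",
    auth=("admin@ragflow.io", "<encrypted-password>"),  # login_verify reads request.authorization
)
print(resp.json())  # e.g. {"code": 0, "message": "Get all services", "data": [...]}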


@@ -1,187 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import uuid
from functools import wraps
from datetime import datetime
from flask import request, jsonify
from flask_login import current_user, login_user
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api import settings
from api.common.exceptions import AdminException, UserNotFoundError
from api.db.init_data import encode_to_base64
from api.db.services import UserService
from common.constants import ActiveEnum, StatusEnum
from api.utils.crypt import decrypt
from common.misc_utils import get_uuid
from common.time_utils import current_timestamp, datetime_format, get_format_time
from common.connection_utils import construct_response
def setup_auth(login_manager):
@login_manager.request_loader
def load_user(web_request):
jwt = Serializer(secret_key=settings.SECRET_KEY)
authorization = web_request.headers.get("Authorization")
if authorization:
try:
access_token = str(jwt.loads(authorization))
if not access_token or not access_token.strip():
logging.warning("Authentication attempt with empty access token")
return None
# Access tokens should be UUIDs (32 hex characters)
if len(access_token.strip()) < 32:
logging.warning(f"Authentication attempt with invalid token format: {len(access_token)} chars")
return None
user = UserService.query(
access_token=access_token, status=StatusEnum.VALID.value
)
if user:
if not user[0].access_token or not user[0].access_token.strip():
logging.warning(f"User {user[0].email} has empty access_token in database")
return None
return user[0]
else:
return None
except Exception as e:
logging.warning(f"load_user got exception {e}")
return None
else:
return None
def init_default_admin():
# Verify that at least one active admin user exists. If not, create a default one.
users = UserService.query(is_superuser=True)
if not users:
default_admin = {
"id": uuid.uuid1().hex,
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
"creator": "system",
"status": "1",
}
if not UserService.save(**default_admin):
raise AdminException("Can't init admin.", 500)
elif not any([u.is_active == ActiveEnum.ACTIVE.value for u in users]):
raise AdminException("No active admin. Please update 'is_active' in db manually.", 500)
def check_admin_auth(func):
@wraps(func)
def wrapper(*args, **kwargs):
user = UserService.filter_by_id(current_user.id)
if not user:
raise UserNotFoundError(current_user.email)
if not user.is_superuser:
raise AdminException("Not admin", 403)
if user.is_active == ActiveEnum.INACTIVE.value:
raise AdminException(f"User {current_user.email} inactive", 403)
return func(*args, **kwargs)
return wrapper
def login_admin(email: str, password: str):
"""
:param email: admin email
:param password: string before decrypt
"""
users = UserService.query(email=email)
if not users:
raise UserNotFoundError(email)
psw = decrypt(password)
user = UserService.query_user(email, psw)
if not user:
raise AdminException("Email and password do not match!")
if not user.is_superuser:
raise AdminException("Not admin", 403)
if user.is_active == ActiveEnum.INACTIVE.value:
raise AdminException(f"User {email} inactive", 403)
resp = user.to_json()
user.access_token = get_uuid()
login_user(user)
user.update_time = (current_timestamp(),)
user.update_date = (datetime_format(datetime.now()),)
user.last_login_time = get_format_time()
user.save()
msg = "Welcome back!"
return construct_response(data=resp, auth=user.get_id(), message=msg)
def check_admin(username: str, password: str):
users = UserService.query(email=username)
if not users:
logging.info(f"Username: {username} is not registered!")
user_info = {
"id": uuid.uuid1().hex,
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
"creator": "system",
"status": "1",
}
if not UserService.save(**user_info):
raise AdminException("Can't init admin.", 500)
user = UserService.query_user(username, password)
if user:
return True
else:
return False
def login_verify(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if not auth or 'username' not in auth.parameters or 'password' not in auth.parameters:
return jsonify({
"code": 401,
"message": "Authentication required",
"data": None
}), 200
username = auth.parameters['username']
password = auth.parameters['password']
try:
if check_admin(username, password) is False:
return jsonify({
"code": 500,
"message": "Access denied",
"data": None
}), 200
except Exception as e:
error_msg = str(e)
return jsonify({
"code": 500,
"message": error_msg
}), 200
return f(*args, **kwargs)
return decorated


@@ -1,15 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


@@ -1,34 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import jsonify
def success_response(data=None, message="Success", code=0):
return jsonify({
"code": code,
"message": message,
"data": data
}), 200
def error_response(message="Error", code=-1, data=None):
return jsonify({
"code": code,
"message": message,
"data": data
}), 400


@@ -1,76 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from typing import Dict, Any
from api.common.exceptions import AdminException
class RoleMgr:
@staticmethod
def create_role(role_name: str, description: str):
error_msg = f"not implement: create role: {role_name}, description: {description}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def update_role_description(role_name: str, description: str) -> Dict[str, Any]:
error_msg = f"not implement: update role: {role_name} with description: {description}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def delete_role(role_name: str) -> Dict[str, Any]:
error_msg = f"not implement: drop role: {role_name}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def list_roles() -> Dict[str, Any]:
error_msg = "not implement: list roles"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def get_role_permission(role_name: str) -> Dict[str, Any]:
error_msg = f"not implement: show role {role_name}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def grant_role_permission(role_name: str, actions: list, resource: str) -> Dict[str, Any]:
error_msg = f"not implement: grant role {role_name} actions: {actions} on {resource}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def revoke_role_permission(role_name: str, actions: list, resource: str) -> Dict[str, Any]:
error_msg = f"not implement: revoke role {role_name} actions: {actions} on {resource}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def update_user_role(user_name: str, role_name: str) -> Dict[str, Any]:
error_msg = f"not implement: update user role: {user_name} to role {role_name}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def get_user_permission(user_name: str) -> Dict[str, Any]:
error_msg = f"not implement: get user permission: {user_name}"
logging.error(error_msg)
raise AdminException(error_msg)


@@ -1,371 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import secrets
from flask import Blueprint, request
from flask_login import current_user, logout_user, login_required
from auth import login_verify, login_admin, check_admin_auth
from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr
from roles import RoleMgr
from api.common.exceptions import AdminException
admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
@admin_bp.route('/login', methods=['POST'])
def login():
if not request.json:
        return error_response('Authorize admin failed.', 400)
try:
email = request.json.get("email", "")
password = request.json.get("password", "")
return login_admin(email, password)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/logout', methods=['GET'])
@login_required
def logout():
try:
current_user.access_token = f"INVALID_{secrets.token_hex(16)}"
current_user.save()
logout_user()
return success_response(True)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/auth', methods=['GET'])
@login_verify
def auth_admin():
try:
return success_response(None, "Admin is authorized", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users', methods=['GET'])
@login_required
@check_admin_auth
def list_users():
try:
users = UserMgr.get_all_users()
return success_response(users, "Get all users", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users', methods=['POST'])
@login_required
@check_admin_auth
def create_user():
try:
data = request.get_json()
if not data or 'username' not in data or 'password' not in data:
return error_response("Username and password are required", 400)
username = data['username']
password = data['password']
role = data.get('role', 'user')
res = UserMgr.create_user(username, password, role)
if res["success"]:
user_info = res["user_info"]
user_info.pop("password") # do not return password
return success_response(user_info, "User created successfully")
else:
return error_response("create user failed")
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e))
@admin_bp.route('/users/<username>', methods=['DELETE'])
@login_required
@check_admin_auth
def delete_user(username):
try:
res = UserMgr.delete_user(username)
if res["success"]:
return success_response(None, res["message"])
else:
return error_response(res["message"])
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/password', methods=['PUT'])
@login_required
@check_admin_auth
def change_password(username):
try:
data = request.get_json()
if not data or 'new_password' not in data:
return error_response("New password is required", 400)
new_password = data['new_password']
msg = UserMgr.update_user_password(username, new_password)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/activate', methods=['PUT'])
@login_required
@check_admin_auth
def alter_user_activate_status(username):
try:
data = request.get_json()
if not data or 'activate_status' not in data:
return error_response("Activation status is required", 400)
activate_status = data['activate_status']
msg = UserMgr.update_user_activate_status(username, activate_status)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>', methods=['GET'])
@login_required
@check_admin_auth
def get_user_details(username):
try:
user_details = UserMgr.get_user_details(username)
return success_response(user_details)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/datasets', methods=['GET'])
@login_required
@check_admin_auth
def get_user_datasets(username):
try:
datasets_list = UserServiceMgr.get_user_datasets(username)
return success_response(datasets_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/agents', methods=['GET'])
@login_required
@check_admin_auth
def get_user_agents(username):
try:
agents_list = UserServiceMgr.get_user_agents(username)
return success_response(agents_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services', methods=['GET'])
@login_required
@check_admin_auth
def get_services():
try:
services = ServiceMgr.get_all_services()
return success_response(services, "Get all services", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/service_types/<service_type>', methods=['GET'])
@login_required
@check_admin_auth
def get_services_by_type(service_type_str):
try:
services = ServiceMgr.get_services_by_type(service_type_str)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['GET'])
@login_required
@check_admin_auth
def get_service(service_id):
try:
services = ServiceMgr.get_service_details(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['DELETE'])
@login_required
@check_admin_auth
def shutdown_service(service_id):
try:
services = ServiceMgr.shutdown_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['PUT'])
@login_required
@check_admin_auth
def restart_service(service_id):
try:
services = ServiceMgr.restart_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles', methods=['POST'])
@login_required
@check_admin_auth
def create_role():
try:
data = request.get_json()
if not data or 'role_name' not in data:
return error_response("Role name is required", 400)
role_name: str = data['role_name']
description: str = data['description']
res = RoleMgr.create_role(role_name, description)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles/<role_name>', methods=['PUT'])
@login_required
@check_admin_auth
def update_role(role_name: str):
try:
data = request.get_json()
if not data or 'description' not in data:
return error_response("Role description is required", 400)
description: str = data['description']
res = RoleMgr.update_role_description(role_name, description)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles/<role_name>', methods=['DELETE'])
@login_required
@check_admin_auth
def delete_role(role_name: str):
try:
res = RoleMgr.delete_role(role_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles', methods=['GET'])
@login_required
@check_admin_auth
def list_roles():
try:
res = RoleMgr.list_roles()
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles/<role_name>/permission', methods=['GET'])
@login_required
@check_admin_auth
def get_role_permission(role_name: str):
try:
res = RoleMgr.get_role_permission(role_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles/<role_name>/permission', methods=['POST'])
@login_required
@check_admin_auth
def grant_role_permission(role_name: str):
try:
data = request.get_json()
if not data or 'actions' not in data or 'resource' not in data:
return error_response("Permission is required", 400)
actions: list = data['actions']
resource: str = data['resource']
res = RoleMgr.grant_role_permission(role_name, actions, resource)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/roles/<role_name>/permission', methods=['DELETE'])
@login_required
@check_admin_auth
def revoke_role_permission(role_name: str):
try:
data = request.get_json()
if not data or 'actions' not in data or 'resource' not in data:
return error_response("Permission is required", 400)
actions: list = data['actions']
resource: str = data['resource']
res = RoleMgr.revoke_role_permission(role_name, actions, resource)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<user_name>/role', methods=['PUT'])
@login_required
@check_admin_auth
def update_user_role(user_name: str):
try:
data = request.get_json()
if not data or 'role_name' not in data:
return error_response("Role name is required", 400)
role_name: str = data['role_name']
res = RoleMgr.update_user_role(user_name, role_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<user_name>/permission', methods=['GET'])
@login_required
@check_admin_auth
def get_user_permission(user_name: str):
try:
res = RoleMgr.get_user_permission(user_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)


@@ -1,48 +1,22 @@
-#
-# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
 import re
 from werkzeug.security import check_password_hash
-from common.constants import ActiveEnum
+from api.db import ActiveEnum
 from api.db.services import UserService
 from api.db.joint_services.user_account_service import create_new_user, delete_user_data
 from api.db.services.canvas_service import UserCanvasService
 from api.db.services.user_service import TenantService
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.utils.crypt import decrypt
-from api.utils import health_utils
-from api.common.exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
+from exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
 from config import SERVICE_CONFIGS

 class UserMgr:
     @staticmethod
     def get_all_users():
         users = UserService.get_all_users()
         result = []
         for user in users:
-            result.append({
-                'email': user.email,
-                'nickname': user.nickname,
-                'create_date': user.create_date,
-                'is_active': user.is_active,
-                'is_superuser': user.is_superuser,
-            })
+            result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date, 'is_active': user.is_active})
         return result

     @staticmethod
@@ -55,6 +29,7 @@ class UserMgr:
             'email': user.email,
             'language': user.language,
             'last_login_time': user.last_login_time,
+            'is_authenticated': user.is_authenticated,
             'is_active': user.is_active,
             'is_anonymous': user.is_anonymous,
             'login_channel': user.login_channel,
@@ -135,7 +110,6 @@ class UserMgr:
         UserService.update_user(usr.id, {"is_active": target_status})
         return f"Turn {_activate_status} user activate status successfully!"
-

 class UserServiceMgr:
     @staticmethod
@@ -170,27 +144,18 @@ class UserServiceMgr:
         return [{
             'title': r['title'],
             'permission': r['permission'],
-            'canvas_category': r['canvas_category'].split('_')[0]
+            'canvas_type': r['canvas_type'],
+            'canvas_category': r['canvas_category']
         } for r in res]

 class ServiceMgr:
     @staticmethod
     def get_all_services():
         result = []
         configs = SERVICE_CONFIGS.configs
-        for service_id, config in enumerate(configs):
-            config_dict = config.to_dict()
-            try:
-                service_detail = ServiceMgr.get_service_details(service_id)
-                if "status" in service_detail:
-                    config_dict['status'] = service_detail['status']
-                else:
-                    config_dict['status'] = 'timeout'
-            except Exception:
-                config_dict['status'] = 'timeout'
-            result.append(config_dict)
+        for config in configs:
+            result.append(config.to_dict())
         return result

     @staticmethod
@@ -199,22 +164,7 @@ class ServiceMgr:
     @staticmethod
     def get_service_details(service_id: int):
-        service_id = int(service_id)
-        configs = SERVICE_CONFIGS.configs
-        service_config_mapping = {
-            c.id: {
-                'name': c.name,
-                'detail_func_name': c.detail_func_name
-            } for c in configs
-        }
-        service_info = service_config_mapping.get(service_id, {})
-        if not service_info:
-            raise AdminException(f"invalid service_id: {service_id}")
-        detail_func = getattr(health_utils, service_info.get('detail_func_name'))
-        res = detail_func()
-        res.update({'service_name': service_info.get('name')})
-        return res
+        raise AdminException("get_service_details: not implemented")

     @staticmethod
     def shutdown_service(service_id: int):

@@ -26,7 +26,7 @@ from typing import Any, Union, Tuple
 from agent.component import component_class
 from agent.component.base import ComponentBase
 from api.db.services.file_service import FileService
-from common.misc_utils import get_uuid, hash_str2int
+from api.utils import get_uuid, hash_str2int
 from rag.prompts.generator import chunks_format
 from rag.utils.redis_conn import REDIS_CONN
@@ -161,32 +161,7 @@ class Graph:
         cpn = self.get_component(cpn_id)
         if not cpn:
             raise Exception(f"Can't find variable: '{cpn_id}@{var_nm}'")
-        parts = var_nm.split(".", 1)
-        root_key = parts[0]
-        rest = parts[1] if len(parts) > 1 else ""
-        root_val = cpn["obj"].output(root_key)
-        if not rest:
-            return root_val
-        return self.get_variable_param_value(root_val,rest)
-
-    def get_variable_param_value(self, obj: Any, path: str) -> Any:
-        cur = obj
-        if not path:
-            return cur
-        for key in path.split('.'):
-            if cur is None:
-                return None
-            if isinstance(cur, str):
-                try:
-                    cur = json.loads(cur)
-                except Exception:
-                    return None
-            if isinstance(cur, dict):
-                cur = cur.get(key)
-            else:
-                cur = getattr(cur, key, None)
-        return cur
+        return cpn["obj"].output(var_nm)

 class Canvas(Graph):
@@ -228,6 +203,7 @@ class Canvas(Graph):
         self.history = []
         self.retrieval = []
         self.memory = []
+
         for k in self.globals.keys():
             if isinstance(self.globals[k], str):
                 self.globals[k] = ""
@@ -281,21 +257,12 @@ class Canvas(Graph):
         def _run_batch(f, t):
             with ThreadPoolExecutor(max_workers=5) as executor:
                 thr = []
-                i = f
-                while i < t:
+                for i in range(f, t):
                     cpn = self.get_component_obj(self.path[i])
                     if cpn.component_name.lower() in ["begin", "userfillup"]:
                         thr.append(executor.submit(cpn.invoke, inputs=kwargs.get("inputs", {})))
-                        i += 1
                     else:
-                        for _, ele in cpn.get_input_elements().items():
-                            if isinstance(ele, dict) and ele.get("_cpn_id") and ele.get("_cpn_id") not in self.path[:i]:
-                                self.path.pop(i)
-                                t -= 1
-                                break
-                        else:
-                            thr.append(executor.submit(cpn.invoke, **cpn.get_input()))
-                            i += 1
+                        thr.append(executor.submit(cpn.invoke, **cpn.get_input()))
                 for t in thr:
                     t.result()
@@ -325,7 +292,7 @@ class Canvas(Graph):
                     "thoughts": self.get_component_thoughts(self.path[i])
                 })
             _run_batch(idx, to)
-            to = len(self.path)
+
             # post processing of components invocation
             for i in range(idx, to):
                 cpn = self.get_component(self.path[i])
@@ -426,6 +393,7 @@ class Canvas(Graph):
                 self.path = path
                 yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips})
                 return
+
             self.path = self.path[:idx]
             if not self.error:
                 yield decorate("workflow_finished",


@@ -27,7 +27,7 @@ from agent.tools.base import LLMToolPluginCallSession, ToolParamBase, ToolBase,
 from api.db.services.llm_service import LLMBundle
 from api.db.services.tenant_llm_service import TenantLLMService
 from api.db.services.mcp_server_service import MCPServerService
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 from rag.prompts.generator import next_step, COMPLETE_TASK, analyze_task, \
     citation_prompt, reflect, rank_memories, kb_prompt, citation_plus, full_question, message_fit_in
 from rag.utils.mcp_tool_call_conn import MCPToolCallSession, mcp_tool_metadata_to_openai_tool
@@ -158,12 +158,7 @@ class Agent(LLM, ToolBase):
         downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
         ex = self.exception_handler()
-        output_structure=None
-        try:
-            output_structure=self._param.outputs['structured']
-        except Exception:
-            pass
-        if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not output_structure and not (ex and ex["goto"]):
+        if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
             self.set_output("content", partial(self.stream_output_with_tools, prompt, msg, user_defined_prompt))
             return
@@ -351,11 +346,3 @@ Respond immediately with your final comprehensive answer.
         return "Error occurred."

-    def reset(self, temp=False):
-        """
-        Reset all tools if they have a reset method. This avoids errors for tools like MCPToolCallSession.
-        """
-        for k, cpn in self.tools.items():
-            if hasattr(cpn, "reset") and callable(cpn.reset):
-                cpn.reset()


@@ -25,7 +25,7 @@ from typing import Any, List, Union
 import pandas as pd
 import trio
 from agent import settings
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout

 _FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params"


@@ -18,10 +18,10 @@ import os
 import re
 from abc import ABC
-from common.constants import LLMType
+from api.db import LLMType
 from api.db.services.llm_service import LLMBundle
 from agent.component.llm import LLMParam, LLM
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 from rag.llm.chat_model import ERROR_PREFIX


@@ -1,201 +0,0 @@
from abc import ABC
import ast
import os

from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout


class DataOperationsParam(ComponentParamBase):
    """
    Define the Data Operations component parameters.
    """

    def __init__(self):
        super().__init__()
        self.inputs = []
        self.operations = "literal_eval"
        self.select_keys = []
        self.filter_values = []
        self.updates = []
        self.remove_keys = []
        self.rename_keys = []
        self.outputs = {
            "result": {
                "value": [],
                "type": "Array of Object"
            }
        }

    def check(self):
        self.check_valid_value(self.operations, "Support operations",
                               ["select_keys", "literal_eval", "combine", "filter_values", "append_or_update", "remove_keys", "rename_keys"])


class DataOperations(ComponentBase, ABC):
    component_name = "DataOperations"

    def get_input_form(self) -> dict[str, dict]:
        return {
            k: {"name": o.get("name", ""), "type": "line"}
            for input_item in (self._param.inputs or [])
            for k, o in self.get_input_elements_from_text(input_item).items()
        }

    @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10 * 60)))
    def _invoke(self, **kwargs):
        self.input_objects = []
        inputs = getattr(self._param, "inputs", None)
        if not isinstance(inputs, (list, tuple)):
            inputs = [inputs]
        for input_ref in inputs:
            input_object = self._canvas.get_variable_value(input_ref)
            if input_object is None:
                continue
            if isinstance(input_object, dict):
                self.input_objects.append(input_object)
            elif isinstance(input_object, list):
                self.input_objects.extend(x for x in input_object if isinstance(x, dict))
            else:
                continue
        if self._param.operations == "select_keys":
            self._select_keys()
        elif self._param.operations == "literal_eval":
            self._literal_eval()
        elif self._param.operations == "combine":
            self._combine()
        elif self._param.operations == "filter_values":
            self._filter_values()
        elif self._param.operations == "append_or_update":
            self._append_or_update()
        elif self._param.operations == "remove_keys":
            self._remove_keys()
        else:
            self._rename_keys()

    def _select_keys(self):
        filter_criteria: list[str] = self._param.select_keys
        results = [{key: value for key, value in data_dict.items() if key in filter_criteria} for data_dict in self.input_objects]
        self.set_output("result", results)

    def _recursive_eval(self, data):
        if isinstance(data, dict):
            return {k: self._recursive_eval(v) for k, v in data.items()}
        if isinstance(data, list):
            return [self._recursive_eval(item) for item in data]
        if isinstance(data, str):
            try:
                if (
                    data.strip().startswith(("{", "[", "(", "'", '"'))
                    or data.strip().lower() in ("true", "false", "none")
                    or data.strip().replace(".", "").isdigit()
                ):
                    return ast.literal_eval(data)
            except (ValueError, SyntaxError, TypeError, MemoryError):
                return data
            else:
                return data
        return data

    def _literal_eval(self):
        self.set_output("result", self._recursive_eval(self.input_objects))

    def _combine(self):
        result = {}
        for obj in self.input_objects:
            for key, value in obj.items():
                if key not in result:
                    result[key] = value
                elif isinstance(result[key], list):
                    if isinstance(value, list):
                        result[key].extend(value)
                    else:
                        result[key].append(value)
                else:
                    result[key] = (
                        [result[key], value] if not isinstance(value, list) else [result[key], *value]
                    )
        self.set_output("result", result)

    def norm(self, v):
        s = "" if v is None else str(v)
        return s

    def match_rule(self, obj, rule):
        key = rule.get("key")
        op = (rule.get("operator") or "equals").lower()
        target = self.norm(rule.get("value"))
        if key not in obj:
            return False
        val = obj.get(key, None)
        v = self.norm(val)
        if op in ("=", "equals"):
            return v == target
        if op in ("≠", "!="):
            return v != target
        if op == "contains":
            return target in v
        if op == "start with":
            return v.startswith(target)
        if op == "end with":
            return v.endswith(target)
        return False

    def _filter_values(self):
        results = []
        rules = (getattr(self._param, "filter_values", None) or [])
        for obj in self.input_objects:
            if not rules:
                results.append(obj)
                continue
            if all(self.match_rule(obj, r) for r in rules):
                results.append(obj)
        self.set_output("result", results)

    def _append_or_update(self):
        results = []
        updates = getattr(self._param, "updates", []) or []
        for obj in self.input_objects:
            new_obj = dict(obj)
            for item in updates:
                if not isinstance(item, dict):
                    continue
                k = (item.get("key") or "").strip()
                if not k:
                    continue
                new_obj[k] = item.get("value")
            results.append(new_obj)
        self.set_output("result", results)

    def _remove_keys(self):
        results = []
        remove_keys = getattr(self._param, "remove_keys", []) or []
        for obj in (self.input_objects or []):
            new_obj = dict(obj)
            for k in remove_keys:
                if not isinstance(k, str):
                    continue
                new_obj.pop(k, None)
            results.append(new_obj)
        self.set_output("result", results)

    def _rename_keys(self):
        results = []
        rename_pairs = getattr(self._param, "rename_keys", []) or []
        for obj in (self.input_objects or []):
            new_obj = dict(obj)
            for pair in rename_pairs:
                if not isinstance(pair, dict):
                    continue
                old = (pair.get("old_key") or "").strip()
                new = (pair.get("new_key") or "").strip()
                if not old or not new or old == new:
                    continue
                if old in new_obj:
                    new_obj[new] = new_obj.pop(old)
            results.append(new_obj)
        self.set_output("result", results)

    def thoughts(self) -> str:
        return "DataOperation in progress"

View File

@ -19,12 +19,11 @@ import os
import re
import time
from abc import ABC

import requests
-from agent.component.base import ComponentBase, ComponentParamBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
from deepdoc.parser import HtmlParser
+from agent.component.base import ComponentBase, ComponentParamBase


class InvokeParam(ComponentParamBase):
@ -44,11 +43,11 @@ class InvokeParam(ComponentParamBase):
self.datatype = "json" # New parameter to determine data posting type self.datatype = "json" # New parameter to determine data posting type
def check(self): def check(self):
self.check_valid_value(self.method.lower(), "Type of content from the crawler", ["get", "post", "put"]) self.check_valid_value(self.method.lower(), "Type of content from the crawler", ['get', 'post', 'put'])
self.check_empty(self.url, "End point URL") self.check_empty(self.url, "End point URL")
self.check_positive_integer(self.timeout, "Timeout time in second") self.check_positive_integer(self.timeout, "Timeout time in second")
self.check_boolean(self.clean_html, "Clean HTML") self.check_boolean(self.clean_html, "Clean HTML")
self.check_valid_value(self.datatype.lower(), "Data post type", ["json", "formdata"]) # Check for valid datapost value self.check_valid_value(self.datatype.lower(), "Data post type", ['json', 'formdata']) # Check for valid datapost value
class Invoke(ComponentBase, ABC): class Invoke(ComponentBase, ABC):
@ -64,18 +63,6 @@ class Invoke(ComponentBase, ABC):
args[para["key"]] = self._canvas.get_variable_value(para["ref"]) args[para["key"]] = self._canvas.get_variable_value(para["ref"])
url = self._param.url.strip() url = self._param.url.strip()
def replace_variable(match):
var_name = match.group(1)
try:
value = self._canvas.get_variable_value(var_name)
return str(value or "")
except Exception:
return ""
# {base_url} or {component_id@variable_name}
url = re.sub(r"\{([a-zA-Z_][a-zA-Z0-9_.@-]*)\}", replace_variable, url)
if url.find("http") != 0: if url.find("http") != 0:
url = "http://" + url url = "http://" + url
@ -88,32 +75,52 @@ class Invoke(ComponentBase, ABC):
proxies = {"http": self._param.proxy, "https": self._param.proxy} proxies = {"http": self._param.proxy, "https": self._param.proxy}
last_e = "" last_e = ""
for _ in range(self._param.max_retries + 1): for _ in range(self._param.max_retries+1):
try: try:
if method == "get": if method == 'get':
response = requests.get(url=url, params=args, headers=headers, proxies=proxies, timeout=self._param.timeout) response = requests.get(url=url,
params=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
sections = HtmlParser()(None, response.content) sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections)) self.set_output("result", "\n".join(sections))
else: else:
self.set_output("result", response.text) self.set_output("result", response.text)
if method == "put": if method == 'put':
if self._param.datatype.lower() == "json": if self._param.datatype.lower() == 'json':
response = requests.put(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout) response = requests.put(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
else: else:
response = requests.put(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout) response = requests.put(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
sections = HtmlParser()(None, response.content) sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections)) self.set_output("result", "\n".join(sections))
else: else:
self.set_output("result", response.text) self.set_output("result", response.text)
if method == "post": if method == 'post':
if self._param.datatype.lower() == "json": if self._param.datatype.lower() == 'json':
response = requests.post(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout) response = requests.post(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
else: else:
response = requests.post(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout) response = requests.post(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
self.set_output("result", "\n".join(sections)) self.set_output("result", "\n".join(sections))
else: else:
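Aside: the pattern on both sides of this hunk is the same, a bounded retry loop around a `requests` call with an explicit per-request timeout. A compact sketch of that pattern (hypothetical URL; `max_retries` and `timeout` are plain ints here):

import requests

max_retries, timeout = 2, 10
last_e = ""
for _ in range(max_retries + 1):
    try:
        response = requests.get("https://example.com/api", timeout=timeout)
        response.raise_for_status()
        last_e = ""
        break  # success: stop retrying
    except Exception as e:
        last_e = str(e)  # remember the failure and retry
if last_e:
    raise RuntimeError(f"all retries failed: {last_e}")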

View File

@ -16,13 +16,6 @@
from abc import ABC

from agent.component.base import ComponentBase, ComponentParamBase

-"""
-class VariableModel(BaseModel):
-    data_type: Annotated[Literal["string", "number", "Object", "Boolean", "Array<string>", "Array<number>", "Array<object>", "Array<boolean>"], Field(default="Array<string>")]
-    input_mode: Annotated[Literal["constant", "variable"], Field(default="constant")]
-    value: Annotated[Any, Field(default=None)]
-
-    model_config = ConfigDict(extra="forbid")
-"""


class IterationParam(ComponentParamBase):
    """

View File

@ -21,12 +21,12 @@ from copy import deepcopy
from typing import Any, Generator

import json_repair
from functools import partial
-from common.constants import LLMType
+from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from agent.component.base import ComponentBase, ComponentParamBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
-from rag.prompts.generator import tool_call_summary, message_fit_in, citation_prompt, structured_output_prompt
+from rag.prompts.generator import tool_call_summary, message_fit_in, citation_prompt


class LLMParam(ComponentParamBase):
@ -214,14 +214,10 @@ class LLM(ComponentBase):
prompt, msg, _ = self._prepare_prompt_variables()
error: str = ""
-output_structure = None
-try:
-    output_structure = None  # self._param.outputs['structured']
-except Exception:
-    pass
-if output_structure:
-    schema = json.dumps(output_structure, ensure_ascii=False, indent=2)
-    prompt += structured_output_prompt(schema)
+if self._param.output_structure:
+    prompt += "\nThe output MUST follow this JSON format:\n" + json.dumps(self._param.output_structure, ensure_ascii=False, indent=2)
+    prompt += "\nRedundant information is FORBIDDEN."

for _ in range(self._param.max_retries+1):
    _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
    error = ""
@ -232,7 +228,7 @@ class LLM(ComponentBase):
        error = ans
        continue
    try:
-        self.set_output("structured", json_repair.loads(clean_formated_answer(ans)))
+        self.set_output("structured_content", json_repair.loads(clean_formated_answer(ans)))
        return
    except Exception:
        msg.append({"role": "user", "content": "The answer can't not be parsed as JSON"})
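Aside: both versions share the same recovery loop, ask the model for JSON, let `json_repair` salvage near-miss output, and re-prompt on failure. A minimal sketch of that loop (assuming the `json_repair` package; `call_llm` is a hypothetical stand-in for the chat-model call):

import json_repair

def call_llm(messages):
    # Hypothetical stand-in for the chat-model call.
    return '{"title": "Example", "tags": ["a", "b"],}'  # note the trailing comma

messages = [{"role": "user", "content": "Return JSON with title and tags."}]
structured = None
for _ in range(3):
    ans = call_llm(messages)
    try:
        structured = json_repair.loads(ans)  # tolerates trailing commas, single quotes, etc.
        break
    except Exception:
        messages.append({"role": "user", "content": "The answer can't be parsed as JSON"})
print(structured)  # {'title': 'Example', 'tags': ['a', 'b']}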
@ -243,7 +239,7 @@ class LLM(ComponentBase):
downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
ex = self.exception_handler()
-if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not output_structure and not (ex and ex["goto"]):
+if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
    self.set_output("content", partial(self._stream_output, prompt, msg))
    return

View File

@ -23,7 +23,7 @@ from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase
from jinja2 import Template as Jinja2Template
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class MessageParam(ComponentParamBase):
@ -49,9 +49,6 @@ class MessageParam(ComponentParamBase):
class Message(ComponentBase):
    component_name = "Message"

-    def get_input_elements(self) -> dict[str, Any]:
-        return self.get_input_elements_from_text("".join(self._param.content))

    def get_kwargs(self, script:str, kwargs:dict = {}, delimiter:str=None) -> tuple[str, dict[str, str | list | Any]]:
        for k,v in self.get_input_elements_from_text(script).items():
            if k in kwargs:

View File

@ -16,11 +16,9 @@
import os
import re
from abc import ABC
-from typing import Any

from jinja2 import Template as Jinja2Template

from agent.component.base import ComponentParamBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
from .message import Message
@ -45,9 +43,6 @@ class StringTransformParam(ComponentParamBase):
class StringTransform(Message, ABC):
    component_name = "StringTransform"

-    def get_input_elements(self) -> dict[str, Any]:
-        return self.get_input_elements_from_text(self._param.script)

    def get_input_form(self) -> dict[str, dict]:
        if self._param.method == "split":
            return {

View File

@ -19,7 +19,7 @@ from abc import ABC
from typing import Any

from agent.component.base import ComponentBase, ComponentParamBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class SwitchParam(ComponentParamBase):

File diff suppressed because one or more lines are too long (11 files)

View File

@ -2,11 +2,9 @@
"id": 8, "id": 8,
"title": { "title": {
"en": "Generate SEO Blog", "en": "Generate SEO Blog",
"de": "SEO Blog generieren",
"zh": "生成SEO博客"}, "zh": "生成SEO博客"},
"description": { "description": {
"en": "This is a multi-agent version of the SEO blog generation workflow. It simulates a small team of AI “writers”, where each agent plays a specialized role — just like a real editorial team.", "en": "This is a multi-agent version of the SEO blog generation workflow. It simulates a small team of AI “writers”, where each agent plays a specialized role — just like a real editorial team.",
"de": "Dies ist eine Multi-Agenten-Version des Workflows zur Erstellung von SEO-Blogs. Sie simuliert ein kleines Team von KI-„Autoren“, in dem jeder Agent eine spezielle Rolle übernimmt genau wie in einem echten Redaktionsteam.",
"zh": "多智能体架构可根据简单的用户输入自动生成完整的SEO博客文章。模拟小型“作家”团队其中每个智能体扮演一个专业角色——就像真正的编辑团队。"}, "zh": "多智能体架构可根据简单的用户输入自动生成完整的SEO博客文章。模拟小型“作家”团队其中每个智能体扮演一个专业角色——就像真正的编辑团队。"},
"canvas_type": "Agent", "canvas_type": "Agent",
"dsl": { "dsl": {

File diff suppressed because one or more lines are too long

View File

@ -2,11 +2,9 @@
"id": 20, "id": 20,
"title": { "title": {
"en": "Report Agent Using Knowledge Base", "en": "Report Agent Using Knowledge Base",
"de": "Berichtsagent mit Wissensdatenbank",
"zh": "知识库检索智能体"}, "zh": "知识库检索智能体"},
"description": { "description": {
"en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A", "en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
"de": "Ein Berichtsgenerierungsassistent, der eine lokale Wissensdatenbank nutzt, mit erweiterten Fähigkeiten in Aufgabenplanung, Schlussfolgerung und reflektierender Analyse. Empfohlen für akademische Forschungspapier-Fragen und -Antworten.",
"zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"}, "zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"},
"canvas_type": "Agent", "canvas_type": "Agent",
"dsl": { "dsl": {

View File

@ -1,12 +1,10 @@
{
    "id": 21,
    "title": {
        "en": "Report Agent Using Knowledge Base",
-        "de": "Berichtsagent mit Wissensdatenbank",
        "zh": "知识库检索智能体"},
    "description": {
        "en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
-        "de": "Ein Berichtsgenerierungsassistent, der eine lokale Wissensdatenbank nutzt, mit erweiterten Fähigkeiten in Aufgabenplanung, Schlussfolgerung und reflektierender Analyse. Empfohlen für akademische Forschungspapier-Fragen und -Antworten.",
        "zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"},
    "canvas_type": "Recommended",
    "dsl": {

View File

@ -2,11 +2,9 @@
"id": 12, "id": 12,
"title": { "title": {
"en": "Generate SEO Blog", "en": "Generate SEO Blog",
"de": "SEO Blog generieren",
"zh": "生成SEO博客"}, "zh": "生成SEO博客"},
"description": { "description": {
"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don't need any writing experience. Just provide a topic or short request — the system will handle the rest.", "en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"de": "Dieser Workflow generiert automatisch einen vollständigen SEO-optimierten Blogartikel basierend auf einer einfachen Benutzereingabe. Sie benötigen keine Schreiberfahrung. Geben Sie einfach ein Thema oder eine kurze Anfrage ein das System übernimmt den Rest.",
"zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"}, "zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"},
"canvas_type": "Marketing", "canvas_type": "Marketing",
"dsl": { "dsl": {
@ -918,4 +916,4 @@
"retrieval": [] "retrieval": []
}, },
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z" "avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuK
YPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR93xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
} }

View File

@ -2,11 +2,9 @@
"id": 4, "id": 4,
"title": { "title": {
"en": "Generate SEO Blog", "en": "Generate SEO Blog",
"de": "SEO Blog generieren",
"zh": "生成SEO博客"}, "zh": "生成SEO博客"},
"description": { "description": {
"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don't need any writing experience. Just provide a topic or short request — the system will handle the rest.", "en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"de": "Dieser Workflow generiert automatisch einen vollständigen SEO-optimierten Blogartikel basierend auf einer einfachen Benutzereingabe. Sie benötigen keine Schreiberfahrung. Geben Sie einfach ein Thema oder eine kurze Anfrage ein das System übernimmt den Rest.",
"zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"}, "zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"},
"canvas_type": "Recommended", "canvas_type": "Recommended",
"dsl": { "dsl": {
@ -918,4 +916,4 @@
"retrieval": [] "retrieval": []
}, },
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z" "avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuK
YPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR93xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
} }

View File

@ -2,12 +2,10 @@
"id": 17, "id": 17,
"title": { "title": {
"en": "SQL Assistant", "en": "SQL Assistant",
"de": "SQL Assistent",
"zh": "SQL助理"}, "zh": "SQL助理"},
"description": { "description": {
"en": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., 'Show me last quarter's top 10 products by revenue') and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ", "en": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., Show me last quarters top 10 products by revenue) and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
"de": "SQL-Assistent ist ein KI-gestütztes Tool, mit dem Geschäftsanwender einfache englische Fragen in vollständige SQL-Abfragen umwandeln können. Geben Sie einfach Ihre Frage ein (z.B. 'Zeige mir die Top 10 Produkte des letzten Quartals nach Umsatz') und der SQL-Assistent generiert das exakte SQL, führt es gegen Ihre Datenbank aus und liefert die Ergebnisse in Sekunden.", "zh": "用户能够将简单文本问题转化为完整的SQL查询并输出结果。只需输入您的问题例如“展示上个季度前十名按收入排序的产品”SQL助理就会生成精确的SQL语句对其运行您的数据库并几秒钟内返回结果。"},
"zh": "用户能够将简单文本问题转化为完整的SQL查询并输出结果。只需输入您的问题例如展示上个季度前十名按收入排序的产品SQL助理就会生成精确的SQL语句对其运行您的数据库并几秒钟内返回结果。"},
"canvas_type": "Marketing", "canvas_type": "Marketing",
"dsl": { "dsl": {
"components": { "components": {
@ -715,4 +713,4 @@
"retrieval": [] "retrieval": []
}, },
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAcFBQYFBAcGBQYIBwcIChELCgkJChUPEAwRGBUaGRgVGBcbHichGx0lHRcYIi4iJSgpKywrGiAvMy8qMicqKyr/2wBDAQcICAoJChQLCxQqHBgcKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKir/wAARCAAwADADAREAAhEBAxEB/8QAGgAAAwEBAQEAAAAAAAAAAAAABQYHBAMAAf/EADIQAAEDAwMCBAMHBQAAAAAAAAECAwQFESEABjESEyJBUYEUYXEHFSNSkaGxMjNictH/xAAZAQADAQEBAAAAAAAAAAAAAAACAwQBAAX/xAAlEQACAgICAgEEAwAAAAAAAAABAgARAyESMQRBEyIycYFCkbH/2gAMAwEAAhEDEQA/AKHt2DGpNHXDLrZdWtSrIub39tZ5GbGwPA+pmDFkX7x7idvra85xqQaFNkxUTVIVJQzf8QpBFjbgEenNs681MnA9WJ6fEOKJoxVpSpFLTCo6KEZlTlLcQBIJS20hAv1D1ve+qPk52b0IsYuIGtyt7ZkVVNP+H3A5GdlN2u7GQUBSfmkk8cXH10tmLD6Yl0CG5qmTXBMZiQEMuvupUoKdc6UeEi4FsqOeBxrsKnv1AY+hJ2l5yfu6qQ6/UZtPDRHZ+Eldpsqz1hSrXJGLXwRxqxUQizFs7galPYUFDKT+h15oMuImspQpFiL+2i1A3A1bgxmixUgwlT8ZfgJ/y8P8HXdRuPZoxaqtfkQKbKqF03jtEoDeFKV1lNgfK4H764XfccVUgipvdiwKpFaXMLklFg4juuqV0m3Izg/MaEZCDYMScYqiJOd6xmqfUVfBJcWwtHV1Elfi87k51ViyhrsxL4ivQj1KrFZjTGjTJ8aShdyph5SUqFhwPzX9jpC0dXUqZK3ViHNq7oNaVJjz2Vw5LCrdKknpULZyfMf801MfI1e5NmpAGHUL12EZNFWWlhXSUuWHKgk3xomwEDuDhzLysySU9EndEVyIz3GmxJR+KpBIdCLlRHn/AFEjjIF9AMJlZ8gLZ/qUiJSg1Tu0HO4plFj4FC1h9NYfHIU7kwzgnqCJlKLiCO2s6hKytWiPJoFdfnLW7HS0or6bqXbjg2AI99XjAa3NPlL6jFTduOR5sd1+oyfjQMONqI7QOMA4V7/pqjHjC9SLNn56I1HiqrqTUKM0hbq2lpst5CQSST54xjSPJbICOHUhawISiRQ02T2Uq6AAkqFj/GquJQks1iEr/INLU82bploKSFXusG9xfjHofXQuQUNRoQqQT0ZwVEST5687iZWGgpDsebNbaTDfKVL/ALnbQU/UkKNhjXpFt0BJBVXe/wAGGG6YMlvvNkjlBGmKeJimHIVc0TY89akCKspT28C5BKgDyR7fvrCFI+q/1DQsvVfudYcVyKw49KU6tZyQbmwHFhrOKr9s0uz0CAIpbr3RKo1Rbh02C4HJISp2ZIz0pJ8IQk5Nr/QXznSX6NSnGAwHI/gD/TM+3vtAj1arJpcpgtPdPSH0kFt5wDxAWOOLgamIAFwijCfD927N2tGXuNxlK2W0occUhJWpR+QzzrPjc+pvyqT3Ftf2zbObf7YYecb6CrrDAGfy20wYMkA5Vjbtev7b3nEcXRela27d1ogoWi/rnQsjrqZzHdwzKoKUsqWz3mOnJUlZJt8uokD621w+RdzgynUkUpoUafPZXMnSHlrKluyX1Eug8XF7GwxbgWxrubMO5WmNRsCKtLfcY3rAU0nIltkBP+w0X8Jjdz//2Q==" "avatar": 
"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAcFBQYFBAcGBQYIBwcIChELCgkJChUPEAwRGBUaGRgVGBcbHichGx0lHRcYIi4iJSgpKywrGiAvMy8qMicqKyr/2wBDAQcICAoJChQLCxQqHBgcKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKir/wAARCAAwADADAREAAhEBAxEB/8QAGgAAAwEBAQEAAAAAAAAAAAAABQYHBAMAAf/EADIQAAEDAwMCBAMHBQAAAAAAAAECAwQFESEABjESEyJBUYEUYXEHFSNSkaGxMjNictH/xAAZAQADAQEBAAAAAAAAAAAAAAACAwQBAAX/xAAlEQACAgICAgEEAwAAAAAAAAABAgARAyESMQRBEyIycYFCkbH/2gAMAwEAAhEDEQA/AKHt2DGpNHXDLrZdWtSrIub39tZ5GbGwPA+pmDFkX7x7idvra85xqQaFNkxUTVIVJQzf8QpBFjbgEenNs681MnA9WJ6fEOKJoxVpSpFLTCo6KEZlTlLcQBIJS20hAv1D1ve+qPk52b0IsYuIGtyt7ZkVVNP+H3A5GdlN2u7GQUBSfmkk8cXH10tmLD6Yl0CG5qmTXBMZiQEMuvupUoKdc6UeEi4FsqOeBxrsKnv1AY+hJ2l5yfu6qQ6/UZtPDRHZ+Eldpsqz1hSrXJGLXwRxqxUQizFs7galPYUFDKT+h15oMuImspQpFiL+2i1A3A1bgxmixUgwlT8ZfgJ/y8P8HXdRuPZoxaqtfkQKbKqF03jtEoDeFKV1lNgfK4H764XfccVUgipvdiwKpFaXMLklFg4juuqV0m3Izg/MaEZCDYMScYqiJOd6xmqfUVfBJcWwtHV1Elfi87k51ViyhrsxL4ivQj1KrFZjTGjTJ8aShdyph5SUqFhwPzX9jpC0dXUqZK3ViHNq7oNaVJjz2Vw5LCrdKknpULZyfMf801MfI1e5NmpAGHUL12EZNFWWlhXSUuWHKgk3xomwEDuDhzLysySU9EndEVyIz3GmxJR+KpBIdCLlRHn/AFEjjIF9AMJlZ8gLZ/qUiJSg1Tu0HO4plFj4FC1h9NYfHIU7kwzgnqCJlKLiCO2s6hKytWiPJoFdfnLW7HS0or6bqXbjg2AI99XjAa3NPlL6jFTduOR5sd1+oyfjQMONqI7QOMA4V7/pqjHjC9SLNn56I1HiqrqTUKM0hbq2lpst5CQSST54xjSPJbICOHUhawISiRQ02T2Uq6AAkqFj/GquJQks1iEr/INLU82bploKSFXusG9xfjHofXQuQUNRoQqQT0ZwVEST5687iZWGgpDsebNbaTDfKVL/ALnbQU/UkKNhjXpFt0BJBVXe/wAGGG6YMlvvNkjlBGmKeJimHIVc0TY89akCKspT28C5BKgDyR7fvrCFI+q/1DQsvVfudYcVyKw49KU6tZyQbmwHFhrOKr9s0uz0CAIpbr3RKo1Rbh02C4HJISp2ZIz0pJ8IQk5Nr/QXznSX6NSnGAwHI/gD/TM+3vtAj1arJpcpgtPdPSH0kFt5wDxAWOOLgamIAFwijCfD927N2tGXuNxlK2W0occUhJWpR+QzzrPjc+pvyqT3Ftf2zbObf7YYecb6CrrDAGfy20wYMkA5Vjbtev7b3nEcXRela27d1ogoWi/rnQsjrqZzHdwzKoKUsqWz3mOnJUlZJt8uokD621w+RdzgynUkUpoUafPZXMnSHlrKluyX1Eug8XF7GwxbgWxrubMO5WmNRsCKtLfcY3rAU0nIltkBP+w0X8Jjdz//2Q=="
} }

File diff suppressed because one or more lines are too long (5 files)

View File

@ -19,7 +19,7 @@ import time
from abc import ABC

import arxiv

from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class ArXivParam(ToolParamBase):

View File

@ -20,7 +20,7 @@ from copy import deepcopy
from functools import partial
from typing import TypedDict, List, Any

from agent.component.base import ComponentParamBase, ComponentBase
-from common.misc_utils import hash_str2int
+from api.utils import hash_str2int
from rag.llm.chat_model import ToolCallSession
from rag.prompts.generator import kb_prompt
from rag.utils.mcp_tool_call_conn import MCPToolCallSession

View File

@ -22,7 +22,7 @@ from typing import Optional
from pydantic import BaseModel, Field, field_validator

from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api import settings
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class Language(StrEnum):
@ -156,8 +156,8 @@ class CodeExec(ToolBase, ABC):
self.set_output("_ERROR", "construct code request error: " + str(e)) self.set_output("_ERROR", "construct code request error: " + str(e))
try: try:
resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))) resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run, code_req: {code_req}, resp.status_code {resp.status_code}:") logging.info(f"http://{settings.SANDBOX_HOST}:9385/run", code_req, resp.status_code)
if resp.status_code != 200: if resp.status_code != 200:
resp.raise_for_status() resp.raise_for_status()
body = resp.json() body = resp.json()
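Worth noting: `requests` expects `timeout` to be a number, and environment values always arrive as strings, so the `int(...)` on the old side of this hunk matters. A two-line illustration:

import os

os.environ["COMPONENT_EXEC_TIMEOUT"] = "600"
timeout = os.environ.get("COMPONENT_EXEC_TIMEOUT", 10 * 60)       # "600" (str); a non-numeric timeout is rejected by requests/urllib3
timeout = int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10 * 60))  # 600 (int); safe to pass to requests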

View File

@ -19,7 +19,7 @@ import time
from abc import ABC

from duckduckgo_search import DDGS

from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class DuckDuckGoParam(ToolParamBase):

View File

@ -25,7 +25,7 @@ from email.header import Header
from email.utils import formataddr

from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class EmailParam(ToolParamBase):

View File

@ -22,7 +22,7 @@ import pymysql
import psycopg2
import pyodbc

from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class ExeSQLParam(ToolParamBase):
@ -53,13 +53,12 @@ class ExeSQLParam(ToolParamBase):
    self.max_records = 1024

def check(self):
-    self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgres', 'mariadb', 'mssql', 'IBM DB2', 'trino'])
+    self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgres', 'mariadb', 'mssql', 'IBM DB2'])
    self.check_empty(self.database, "Database name")
    self.check_empty(self.username, "database username")
    self.check_empty(self.host, "IP Address")
    self.check_positive_integer(self.port, "IP Port")
-    if self.db_type != "trino":
-        self.check_empty(self.password, "Database password")
+    self.check_empty(self.password, "Database password")
    self.check_positive_integer(self.max_records, "Maximum number of records")
    if self.database == "rag_flow":
        if self.host == "ragflow-mysql":
@ -124,45 +123,6 @@ class ExeSQL(ToolBase, ABC):
        r'PWD=' + self._param.password
    )
    db = pyodbc.connect(conn_str)
-elif self._param.db_type == 'trino':
-    try:
-        import trino
-        from trino.auth import BasicAuthentication
-    except Exception:
-        raise Exception("Missing dependency 'trino'. Please install: pip install trino")
-
-    def _parse_catalog_schema(db: str):
-        if not db:
-            return None, None
-        if "." in db:
-            c, s = db.split(".", 1)
-        elif "/" in db:
-            c, s = db.split("/", 1)
-        else:
-            c, s = db, "default"
-        return c, s
-
-    catalog, schema = _parse_catalog_schema(self._param.database)
-    if not catalog:
-        raise Exception("For Trino, `database` must be 'catalog.schema' or at least 'catalog'.")
-
-    http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
-    auth = None
-    if http_scheme == "https" and self._param.password:
-        auth = BasicAuthentication(self._param.username, self._param.password)
-
-    try:
-        db = trino.dbapi.connect(
-            host=self._param.host,
-            port=int(self._param.port or 8080),
-            user=self._param.username or "ragflow",
-            catalog=catalog,
-            schema=schema or "default",
-            http_scheme=http_scheme,
-            auth=auth
-        )
-    except Exception as e:
-        raise Exception("Database Connection Failed! \n" + str(e))
elif self._param.db_type == 'IBM DB2':
    import ibm_db
    conn_str = (
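For context, the removed branch builds a standard DB-API connection, so running a statement against it follows the usual cursor protocol. A hedged sketch of how such a connection would be used (assuming the `trino` package; host and credentials are placeholders):

import trino

conn = trino.dbapi.connect(
    host="trino.example.com",  # placeholder host
    port=8080,
    user="ragflow",
    catalog="hive",
    schema="default",
    http_scheme="http",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchall())  # e.g. [[1]]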

View File

@ -19,7 +19,7 @@ import time
from abc import ABC

import requests

from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class GitHubParam(ToolParamBase):

View File

@ -19,7 +19,7 @@ import time
from abc import ABC

from serpapi import GoogleSearch

from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class GoogleParam(ToolParamBase):

View File

@ -19,7 +19,7 @@ import time
from abc import ABC

from scholarly import scholarly

from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class GoogleScholarParam(ToolParamBase):

View File

@ -21,7 +21,7 @@ from Bio import Entrez
import re
import xml.etree.ElementTree as ET

from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout


class PubMedParam(ToolParamBase):
@ -85,7 +85,13 @@ class PubMed(ToolBase, ABC):
self._retrieve_chunks(pubmedcnt.findall("PubmedArticle"),
                      get_title=lambda child: child.find("MedlineCitation").find("Article").find("ArticleTitle").text,
                      get_url=lambda child: "https://pubmed.ncbi.nlm.nih.gov/" + child.find("MedlineCitation").find("PMID").text,
-                      get_content=lambda child: self._format_pubmed_content(child),)
+                      get_content=lambda child: child.find("MedlineCitation") \
+                          .find("Article") \
+                          .find("Abstract") \
+                          .find("AbstractText").text \
+                          if child.find("MedlineCitation")\
+                              .find("Article").find("Abstract") \
+                          else "No abstract available")
return self.output("formalized_content")
except Exception as e:
    last_e = e
@ -98,50 +104,5 @@ class PubMed(ToolBase, ABC):
assert False, self.output()

-def _format_pubmed_content(self, child):
-    """Extract structured reference info from PubMed XML"""
-    def safe_find(path):
-        node = child
-        for p in path.split("/"):
-            if node is None:
-                return None
-            node = node.find(p)
-        return node.text if node is not None and node.text else None
-
-    title = safe_find("MedlineCitation/Article/ArticleTitle") or "No title"
-    abstract = safe_find("MedlineCitation/Article/Abstract/AbstractText") or "No abstract available"
-    journal = safe_find("MedlineCitation/Article/Journal/Title") or "Unknown Journal"
-    volume = safe_find("MedlineCitation/Article/Journal/JournalIssue/Volume") or "-"
-    issue = safe_find("MedlineCitation/Article/Journal/JournalIssue/Issue") or "-"
-    pages = safe_find("MedlineCitation/Article/Pagination/MedlinePgn") or "-"
-
-    # Authors (read from each Author node, not from the article root)
-    authors = []
-    for author in child.findall(".//AuthorList/Author"):
-        lastname = author.findtext("LastName") or ""
-        forename = author.findtext("ForeName") or ""
-        fullname = f"{forename} {lastname}".strip()
-        if fullname:
-            authors.append(fullname)
-    authors_str = ", ".join(authors) if authors else "Unknown Authors"
-
-    # DOI
-    doi = None
-    for eid in child.findall(".//ArticleId"):
-        if eid.attrib.get("IdType") == "doi":
-            doi = eid.text
-            break
-
-    return (
-        f"Title: {title}\n"
-        f"Authors: {authors_str}\n"
-        f"Journal: {journal}\n"
-        f"Volume: {volume}\n"
-        f"Issue: {issue}\n"
-        f"Pages: {pages}\n"
-        f"DOI: {doi or '-'}\n"
-        f"Abstract: {abstract.strip()}"
-    )
def thoughts(self) -> str:
    return "Looking for scholarly papers on `{}`, prioritising reputable sources.".format(self.get_input().get("query", "-_-!"))

View File

@ -13,22 +13,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-from functools import partial
-import json
import os
import re
from abc import ABC

from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from common.constants import LLMType
+from api.db import LLMType
-from api.db.services.document_service import DocumentService
-from api.db.services.dialog_service import meta_filter
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
-from common import globals
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
from rag.app.tag import label_question
-from rag.prompts.generator import cross_languages, kb_prompt, gen_meta_filter
+from rag.prompts.generator import cross_languages, kb_prompt


class RetrievalParam(ToolParamBase):
@ -62,8 +57,6 @@ class RetrievalParam(ToolParamBase):
self.empty_response = "" self.empty_response = ""
self.use_kg = False self.use_kg = False
self.cross_languages = [] self.cross_languages = []
self.toc_enhance = False
self.meta_data_filter={}
def check(self): def check(self):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold") self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
@ -123,55 +116,12 @@ class Retrieval(ToolBase, ABC):
vars = self.get_input_elements_from_text(kwargs["query"])
vars = {k: o["value"] for k, o in vars.items()}
query = self.string_format(kwargs["query"], vars)
-doc_ids = []
-if self._param.meta_data_filter != {}:
-    metas = DocumentService.get_meta_by_kbs(kb_ids)
-    if self._param.meta_data_filter.get("method") == "auto":
-        chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT)
-        filters = gen_meta_filter(chat_mdl, metas, query)
-        doc_ids.extend(meta_filter(metas, filters))
-        if not doc_ids:
-            doc_ids = None
-    elif self._param.meta_data_filter.get("method") == "manual":
-        filters = self._param.meta_data_filter["manual"]
-        for flt in filters:
-            pat = re.compile(r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z:0-9_.-]+|sys\.[a-z_]+)\} *\}*")
-            s = flt["value"]
-            out_parts = []
-            last = 0
-            for m in pat.finditer(s):
-                out_parts.append(s[last:m.start()])
-                key = m.group(1)
-                v = self._canvas.get_variable_value(key)
-                if v is None:
-                    rep = ""
-                elif isinstance(v, partial):
-                    buf = []
-                    for chunk in v():
-                        buf.append(chunk)
-                    rep = "".join(buf)
-                elif isinstance(v, str):
-                    rep = v
-                else:
-                    rep = json.dumps(v, ensure_ascii=False)
-                out_parts.append(rep)
-                last = m.end()
-            out_parts.append(s[last:])
-            flt["value"] = "".join(out_parts)
-        doc_ids.extend(meta_filter(metas, filters))
-        if not doc_ids:
-            doc_ids = None
if self._param.cross_languages:
    query = cross_languages(kbs[0].tenant_id, None, query, self._param.cross_languages)

if kbs:
    query = re.sub(r"^user[:\s]*", "", query, flags=re.IGNORECASE)
-    kbinfos = globals.retriever.retrieval(
+    kbinfos = settings.retrievaler.retrieval(
        query,
        embd_mdl,
        [kb.tenant_id for kb in kbs],
@ -180,18 +130,12 @@ class Retrieval(ToolBase, ABC):
        self._param.top_n,
        self._param.similarity_threshold,
        1 - self._param.keywords_similarity_weight,
-        doc_ids=doc_ids,
        aggs=False,
        rerank_mdl=rerank_mdl,
        rank_feature=label_question(query, kbs),
    )
-    if self._param.toc_enhance:
-        chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT)
-        cks = globals.retriever.retrieval_by_toc(query, kbinfos["chunks"], [kb.tenant_id for kb in kbs], chat_mdl, self._param.top_n)
-        if cks:
-            kbinfos["chunks"] = cks
    if self._param.use_kg:
-        ck = settings.kg_retriever.retrieval(query,
+        ck = settings.kg_retrievaler.retrieval(query,
            [kb.tenant_id for kb in kbs],
            kb_ids,
            embd_mdl,
@ -202,7 +146,7 @@ class Retrieval(ToolBase, ABC):
kbinfos = {"chunks": [], "doc_aggs": []} kbinfos = {"chunks": [], "doc_aggs": []}
if self._param.use_kg and kbs: if self._param.use_kg and kbs:
ck = settings.kg_retriever.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT)) ck = settings.kg_retrievaler.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
if ck["content_with_weight"]: if ck["content_with_weight"]:
ck["content"] = ck["content_with_weight"] ck["content"] = ck["content_with_weight"]
del ck["content_with_weight"] del ck["content_with_weight"]


@@ -19,7 +19,7 @@ import time
 from abc import ABC
 import requests
 from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 class SearXNGParam(ToolParamBase):
@@ -85,7 +85,7 @@ class SearXNG(ToolBase, ABC):
             self.set_output("formalized_content", "")
             return ""
-        searxng_url = (getattr(self._param, "searxng_url", "") or kwargs.get("searxng_url") or "").strip()
+        searxng_url = (kwargs.get("searxng_url") or getattr(self._param, "searxng_url", "") or "").strip()
         # In try-run, if no URL configured, just return empty instead of raising
         if not searxng_url:
             self.set_output("formalized_content", "")


@@ -19,7 +19,7 @@ import time
 from abc import ABC
 from tavily import TavilyClient
 from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 class TavilySearchParam(ToolParamBase):


@@ -21,7 +21,7 @@ import pandas as pd
 import pywencai
 from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 class WenCaiParam(ToolParamBase):


@@ -19,7 +19,7 @@ import time
 from abc import ABC
 import wikipedia
 from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 class WikipediaParam(ToolParamBase):


@@ -20,7 +20,7 @@ from abc import ABC
 import pandas as pd
 import yfinance as yf
 from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 class YahooFinanceParam(ToolParamBase):


@@ -24,10 +24,10 @@ from flask_cors import CORS
 from flasgger import Swagger
 from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
-from common.constants import StatusEnum
+from api.db import StatusEnum
 from api.db.db_models import close_connection
 from api.db.services import UserService
-from api.utils.json_encode import CustomJSONEncoder
+from api.utils.json import CustomJSONEncoder
 from api.utils import commands
 from flask_mail import Mail


@@ -21,7 +21,7 @@ from flask import request, Response
 from api.db.services.llm_service import LLMBundle
 from flask_login import login_required, current_user
-from api.db import VALID_FILE_TYPES, FileType
+from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileType, LLMType, ParserType, FileSource
 from api.db.db_models import APIToken, Task, File
 from api.db.services import duplicate_name
 from api.db.services.api_service import APITokenService, API4ConversationService
@@ -32,8 +32,8 @@ from api.db.services.file_service import FileService
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.db.services.task_service import queue_tasks, TaskService
 from api.db.services.user_service import UserTenantService
-from common.misc_utils import get_uuid
+from api import settings
-from common.constants import RetCode, VALID_TASK_STATUS, LLMType, ParserType, FileSource
+from api.utils import get_uuid, current_timestamp, datetime_format
 from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request, \
     generate_confirmation_token
@@ -41,13 +41,11 @@ from api.utils.file_utils import filename_type, thumbnail
 from rag.app.tag import label_question
 from rag.prompts.generator import keyword_extraction
 from rag.utils.storage_factory import STORAGE_IMPL
-from common.time_utils import current_timestamp, datetime_format
 from api.db.services.canvas_service import UserCanvasService
 from agent.canvas import Canvas
 from functools import partial
 from pathlib import Path
-from common import globals
 @manager.route('/new_token', methods=['POST']) # noqa: F821
@@ -60,7 +58,7 @@ def new_token():
         return get_data_error_result(message="Tenant not found!")
     tenant_id = tenants[0].tenant_id
-    obj = {"tenant_id": tenant_id, "token": generate_confirmation_token(),
+    obj = {"tenant_id": tenant_id, "token": generate_confirmation_token(tenant_id),
            "create_time": current_timestamp(),
           "create_date": datetime_format(datetime.now()),
           "update_time": None,
@@ -146,7 +144,7 @@ def set_conversation():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
         if objs[0].source == "agent":
             e, cvs = UserCanvasService.get_by_id(objs[0].dialog_id)
@@ -187,7 +185,7 @@ def completion():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
     e, conv = API4ConversationService.get_by_id(req["conversation_id"])
     if not e:
@@ -353,7 +351,7 @@ def get_conversation(conversation_id):
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
         e, conv = API4ConversationService.get_by_id(conversation_id)
@@ -363,7 +361,7 @@ def get_conversation(conversation_id):
     conv = conv.to_dict()
     if token != APIToken.query(dialog_id=conv['dialog_id'])[0].token:
         return get_json_result(data=False, message='Authentication error: API key is invalid for this conversation_id!"',
-                               code=RetCode.AUTHENTICATION_ERROR)
+                               code=settings.RetCode.AUTHENTICATION_ERROR)
     for referenct_i in conv['reference']:
         if referenct_i is None or len(referenct_i) == 0:
@@ -384,7 +382,7 @@ def upload():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     kb_name = request.form.get("kb_name").strip()
     tenant_id = objs[0].tenant_id
@@ -400,12 +398,12 @@ def upload():
     if 'file' not in request.files:
         return get_json_result(
-            data=False, message='No file part!', code=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
     file = request.files['file']
     if file.filename == '':
         return get_json_result(
-            data=False, message='No file selected!', code=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
     root_folder = FileService.get_root_folder(tenant_id)
     pf_id = root_folder["id"]
@@ -497,17 +495,17 @@ def upload_parse():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     if 'file' not in request.files:
         return get_json_result(
-            data=False, message='No file part!', code=RetCode.ARGUMENT_ERROR)
+            data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
     file_objs = request.files.getlist('file')
     for file_obj in file_objs:
         if file_obj.filename == '':
             return get_json_result(
-                data=False, message='No file selected!', code=RetCode.ARGUMENT_ERROR)
+                data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
     doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, objs[0].tenant_id)
     return get_json_result(data=doc_ids)
@@ -520,7 +518,7 @@ def list_chunks():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
@@ -538,7 +536,7 @@ def list_chunks():
         )
     kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
-    res = globals.retriever.chunk_list(doc_id, tenant_id, kb_ids)
+    res = settings.retrievaler.chunk_list(doc_id, tenant_id, kb_ids)
     res = [
         {
             "content": res_item["content_with_weight"],
@@ -560,11 +558,11 @@ def get_chunk(chunk_id):
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
         tenant_id = objs[0].tenant_id
         kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
-        chunk = globals.docStoreConn.get(chunk_id, search.index_name(tenant_id), kb_ids)
+        chunk = settings.docStoreConn.get(chunk_id, search.index_name(tenant_id), kb_ids)
         if chunk is None:
             return server_error_response(Exception("Chunk not found"))
         k = []
@@ -585,7 +583,7 @@ def list_kb_docs():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
     tenant_id = objs[0].tenant_id
@@ -638,7 +636,7 @@ def docinfos():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
     doc_ids = req["doc_ids"]
     docs = DocumentService.get_by_ids(doc_ids)
@@ -652,7 +650,7 @@ def document_rm():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     tenant_id = objs[0].tenant_id
     req = request.json
@@ -704,7 +702,7 @@ def document_rm():
             errors += str(e)
     if errors:
-        return get_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
+        return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
     return get_json_result(data=True)
@@ -719,7 +717,7 @@ def completion_faq():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     e, conv = API4ConversationService.get_by_id(req["conversation_id"])
     if not e:
@@ -858,7 +856,7 @@ def retrieval():
     objs = APIToken.query(token=token)
     if not objs:
         return get_json_result(
-            data=False, message='Authentication error: API key is invalid!"', code=RetCode.AUTHENTICATION_ERROR)
+            data=False, message='Authentication error: API key is invalid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
     req = request.json
     kb_ids = req.get("kb_id", [])
@@ -869,7 +867,7 @@ def retrieval():
     similarity_threshold = float(req.get("similarity_threshold", 0.2))
     vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
     top = int(req.get("top_k", 1024))
     highlight = bool(req.get("highlight", False))
     try:
         kbs = KnowledgebaseService.get_by_ids(kb_ids)
@@ -877,7 +875,7 @@ def retrieval():
         if len(embd_nms) != 1:
             return get_json_result(
                 data=False, message='Knowledge bases use different embedding models or does not exist."',
-                code=RetCode.AUTHENTICATION_ERROR)
+                code=settings.RetCode.AUTHENTICATION_ERROR)
         embd_mdl = LLMBundle(kbs[0].tenant_id, LLMType.EMBEDDING, llm_name=kbs[0].embd_id)
         rerank_mdl = None
@@ -886,7 +884,7 @@ def retrieval():
         if req.get("keyword", False):
             chat_mdl = LLMBundle(kbs[0].tenant_id, LLMType.CHAT)
             question += keyword_extraction(chat_mdl, question)
-        ranks = globals.retriever.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
+        ranks = settings.retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
                                             similarity_threshold, vector_similarity_weight, top,
                                             doc_ids, rerank_mdl=rerank_mdl, highlight= highlight,
                                             rank_feature=label_question(question, kbs))
@@ -896,5 +894,5 @@ def retrieval():
     except Exception as e:
         if str(e).find("not_found") > 0:
             return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
-                                   code=RetCode.DATA_ERROR)
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
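Nearly every handler in this file repeats the same preamble: read the token, look it up with APIToken.query, and bail out with AUTHENTICATION_ERROR if it is unknown. A hedged sketch of how that boilerplate could be hoisted into a decorator; token_required is not a helper in this codebase, while APIToken, get_json_result, and settings are the names used in the diff above:

    from functools import wraps

    from flask import request

    def token_required(view):
        # Hypothetical consolidation of the repeated APIToken check.
        @wraps(view)
        def wrapper(*args, **kwargs):
            parts = request.headers.get("Authorization", "").split()
            token = parts[-1] if parts else ""
            objs = APIToken.query(token=token)
            if not objs:
                return get_json_result(data=False,
                                       message='Authentication error: API key is invalid!',
                                       code=settings.RetCode.AUTHENTICATION_ERROR)
            return view(objs[0], *args, **kwargs)  # pass the resolved token row to the view
        return wrapper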


@@ -25,6 +25,7 @@ from flask import request, Response
 from flask_login import login_required, current_user
 from agent.component import LLM
+from api import settings
 from api.db import CanvasCategory, FileType
 from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService, API4ConversationService
 from api.db.services.document_service import DocumentService
@@ -33,8 +34,8 @@ from api.db.services.pipeline_operation_log_service import PipelineOperationLogS
 from api.db.services.task_service import queue_dataflow, CANVAS_DEBUG_DOC_ID, TaskService
 from api.db.services.user_service import TenantService
 from api.db.services.user_canvas_version import UserCanvasVersionService
-from common.constants import RetCode
+from api.settings import RetCode
-from common.misc_utils import get_uuid
+from api.utils import get_uuid
 from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
 from agent.canvas import Canvas
 from peewee import MySQLDatabase, PostgresqlDatabase
@@ -45,13 +46,12 @@ from api.utils.file_utils import filename_type, read_potential_broken_pdf
 from rag.flow.pipeline import Pipeline
 from rag.nlp import search
 from rag.utils.redis_conn import REDIS_CONN
-from common import globals
 @manager.route('/templates', methods=['GET']) # noqa: F821
 @login_required
 def templates():
-    return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.get_all()])
+    return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.query(canvas_category=CanvasCategory.Agent)])
 @manager.route('/rm', methods=['POST']) # noqa: F821
@@ -192,8 +192,8 @@ def rerun():
     if 0 < doc["progress"] < 1:
         return get_data_error_result(message=f"`{doc['name']}` is processing...")
-    if globals.docStoreConn.indexExist(search.index_name(current_user.id), doc["kb_id"]):
-        globals.docStoreConn.delete({"doc_id": doc["id"]}, search.index_name(current_user.id), doc["kb_id"])
+    if settings.docStoreConn.indexExist(search.index_name(current_user.id), doc["kb_id"]):
+        settings.docStoreConn.delete({"doc_id": doc["id"]}, search.index_name(current_user.id), doc["kb_id"])
     doc["progress_msg"] = ""
     doc["chunk_num"] = 0
     doc["token_num"] = 0
@@ -409,49 +409,6 @@ def test_db_connect():
             ibm_db.fetch_assoc(stmt)
             ibm_db.close(conn)
             return get_json_result(data="Database Connection Successful!")
-        elif req["db_type"] == 'trino':
-            def _parse_catalog_schema(db: str):
-                if not db:
-                    return None, None
-                if "." in db:
-                    c, s = db.split(".", 1)
-                elif "/" in db:
-                    c, s = db.split("/", 1)
-                else:
-                    c, s = db, "default"
-                return c, s
-            try:
-                import trino
-                import os
-                from trino.auth import BasicAuthentication
-            except Exception:
-                return server_error_response("Missing dependency 'trino'. Please install: pip install trino")
-            catalog, schema = _parse_catalog_schema(req["database"])
-            if not catalog:
-                return server_error_response("For Trino, 'database' must be 'catalog.schema' or at least 'catalog'.")
-            http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
-            auth = None
-            if http_scheme == "https" and req.get("password"):
-                auth = BasicAuthentication(req.get("username") or "ragflow", req["password"])
-            conn = trino.dbapi.connect(
-                host=req["host"],
-                port=int(req["port"] or 8080),
-                user=req["username"] or "ragflow",
-                catalog=catalog,
-                schema=schema or "default",
-                http_scheme=http_scheme,
-                auth=auth
-            )
-            cur = conn.cursor()
-            cur.execute("SELECT 1")
-            cur.fetchall()
-            cur.close()
-            conn.close()
-            return get_json_result(data="Database Connection Successful!")
         else:
             return server_error_response("Unsupported database type.")
     if req["db_type"] != 'mssql':


@@ -22,6 +22,7 @@ from flask import request
 from flask_login import current_user, login_required
 from api import settings
+from api.db import LLMType, ParserType
 from api.db.services.dialog_service import meta_filter
 from api.db.services.document_service import DocumentService
 from api.db.services.knowledgebase_service import KnowledgebaseService
@@ -34,9 +35,7 @@ from rag.app.tag import label_question
 from rag.nlp import rag_tokenizer, search
 from rag.prompts.generator import gen_meta_filter, cross_languages, keyword_extraction
 from rag.settings import PAGERANK_FLD
-from common.string_utils import remove_redundant_spaces
+from rag.utils import rmSpace
-from common.constants import RetCode, LLMType, ParserType
-from common import globals
 @manager.route('/list', methods=['POST']) # noqa: F821
@@ -61,12 +60,12 @@ def list_chunk():
         }
         if "available_int" in req:
             query["available_int"] = int(req["available_int"])
-        sres = globals.retriever.search(query, search.index_name(tenant_id), kb_ids, highlight=["content_ltks"])
+        sres = settings.retrievaler.search(query, search.index_name(tenant_id), kb_ids, highlight=True)
         res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
         for id in sres.ids:
             d = {
                 "chunk_id": id,
-                "content_with_weight": remove_redundant_spaces(sres.highlight[id]) if question and id in sres.highlight else sres.field[
+                "content_with_weight": rmSpace(sres.highlight[id]) if question and id in sres.highlight else sres.field[
                     id].get(
                     "content_with_weight", ""),
                 "doc_id": sres.field[id]["doc_id"],
@@ -84,7 +83,7 @@ def list_chunk():
     except Exception as e:
         if str(e).find("not_found") > 0:
             return get_json_result(data=False, message='No chunk found!',
-                                   code=RetCode.DATA_ERROR)
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
@@ -99,7 +98,7 @@ def get():
             return get_data_error_result(message="Tenant not found!")
         for tenant in tenants:
             kb_ids = KnowledgebaseService.get_kb_ids(tenant.tenant_id)
-            chunk = globals.docStoreConn.get(chunk_id, search.index_name(tenant.tenant_id), kb_ids)
+            chunk = settings.docStoreConn.get(chunk_id, search.index_name(tenant.tenant_id), kb_ids)
             if chunk:
                 break
         if chunk is None:
@@ -116,7 +115,7 @@ def get():
     except Exception as e:
         if str(e).find("NotFoundError") >= 0:
             return get_json_result(data=False, message='Chunk not found!',
-                                   code=RetCode.DATA_ERROR)
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
@@ -171,7 +170,7 @@ def set():
         v, c = embd_mdl.encode([doc.name, req["content_with_weight"] if not d.get("question_kwd") else "\n".join(d["question_kwd"])])
         v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
         d["q_%d_vec" % len(v)] = v.tolist()
-        globals.docStoreConn.update({"id": req["chunk_id"]}, d, search.index_name(tenant_id), doc.kb_id)
+        settings.docStoreConn.update({"id": req["chunk_id"]}, d, search.index_name(tenant_id), doc.kb_id)
         return get_json_result(data=True)
     except Exception as e:
         return server_error_response(e)
@@ -187,7 +186,7 @@ def switch():
         if not e:
             return get_data_error_result(message="Document not found!")
         for cid in req["chunk_ids"]:
-            if not globals.docStoreConn.update({"id": cid},
+            if not settings.docStoreConn.update({"id": cid},
                                                {"available_int": int(req["available_int"])},
                                                search.index_name(DocumentService.get_tenant_id(req["doc_id"])),
                                                doc.kb_id):
@@ -207,7 +206,7 @@ def rm():
         e, doc = DocumentService.get_by_id(req["doc_id"])
         if not e:
             return get_data_error_result(message="Document not found!")
-        if not globals.docStoreConn.delete({"id": req["chunk_ids"]},
+        if not settings.docStoreConn.delete({"id": req["chunk_ids"]},
                                            search.index_name(DocumentService.get_tenant_id(req["doc_id"])),
                                            doc.kb_id):
             return get_data_error_result(message="Chunk deleting failure")
@@ -271,7 +270,7 @@ def create():
         v, c = embd_mdl.encode([doc.name, req["content_with_weight"] if not d["question_kwd"] else "\n".join(d["question_kwd"])])
         v = 0.1 * v[0] + 0.9 * v[1]
         d["q_%d_vec" % len(v)] = v.tolist()
-        globals.docStoreConn.insert([d], search.index_name(tenant_id), doc.kb_id)
+        settings.docStoreConn.insert([d], search.index_name(tenant_id), doc.kb_id)
         DocumentService.increment_chunk_num(
             doc.id, doc.kb_id, c, 1, 0)
@@ -293,7 +292,7 @@ def retrieval_test():
         kb_ids = [kb_ids]
     if not kb_ids:
         return get_json_result(data=False, message='Please specify dataset firstly.',
-                               code=RetCode.DATA_ERROR)
+                               code=settings.RetCode.DATA_ERROR)
     doc_ids = req.get("doc_ids", [])
     use_kg = req.get("use_kg", False)
@@ -327,7 +326,7 @@ def retrieval_test():
         else:
             return get_json_result(
                 data=False, message='Only owner of knowledgebase authorized for this operation.',
-                code=RetCode.OPERATING_ERROR)
+                code=settings.RetCode.OPERATING_ERROR)
         e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
         if not e:
@@ -347,16 +346,15 @@ def retrieval_test():
             question += keyword_extraction(chat_mdl, question)
         labels = label_question(question, [kb])
-        ranks = globals.retriever.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
+        ranks = settings.retrievaler.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
                                             float(req.get("similarity_threshold", 0.0)),
                                             float(req.get("vector_similarity_weight", 0.3)),
                                             top,
-                                            doc_ids, rerank_mdl=rerank_mdl,
-                                            highlight=req.get("highlight", False),
+                                            doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"),
                                             rank_feature=labels
                                             )
         if use_kg:
-            ck = settings.kg_retriever.retrieval(question,
+            ck = settings.kg_retrievaler.retrieval(question,
                                                  tenant_ids,
                                                  kb_ids,
                                                  embd_mdl,
@@ -372,7 +370,7 @@ def retrieval_test():
     except Exception as e:
         if str(e).find("not_found") > 0:
             return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
-                                   code=RetCode.DATA_ERROR)
+                                   code=settings.RetCode.DATA_ERROR)
         return server_error_response(e)
@@ -386,7 +384,7 @@ def knowledge_graph():
         "doc_ids": [doc_id],
         "knowledge_graph_kwd": ["graph", "mind_map"]
     }
-    sres = globals.retriever.search(req, search.index_name(tenant_id), kb_ids)
+    sres = settings.retrievaler.search(req, search.index_name(tenant_id), kb_ids)
     obj = {"graph": {}, "mind_map": {}}
     for id in sres.ids[:2]:
         ty = sres.field[id]["knowledge_graph_kwd"]
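Both set() and create() above store an edited chunk's vector as a 90/10 blend of the content embedding and the document-title embedding (QA-parsed documents keep the pure content vector), and the stored field name encodes the dimension. A toy numpy illustration of that arithmetic, with made-up two-dimensional vectors standing in for embd_mdl.encode output:

    import numpy as np

    title_vec = np.array([1.0, 0.0])    # stand-in for the doc.name embedding
    content_vec = np.array([0.0, 1.0])  # stand-in for the chunk-content embedding
    v = 0.1 * title_vec + 0.9 * content_vec
    field = "q_%d_vec" % len(v)         # -> "q_2_vec"; real embeddings yield e.g. "q_1024_vec"
    print(field, v)                     # q_2_vec [0.1 0.9]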


@@ -1,106 +0,0 @@
-#
-#  Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
-#
-#  Licensed under the Apache License, Version 2.0 (the "License");
-#  you may not use this file except in compliance with the License.
-#  You may obtain a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-#  limitations under the License.
-#
-import time
-from flask import request
-from flask_login import login_required, current_user
-from api.db import InputType
-from api.db.services.connector_service import ConnectorService, Connector2KbService, SyncLogsService
-from api.utils.api_utils import get_json_result, validate_request, get_data_error_result
-from common.misc_utils import get_uuid
-from common.constants import RetCode, TaskStatus
-@manager.route("/set", methods=["POST"]) # noqa: F821
-@login_required
-def set_connector():
-    req = request.json
-    if req.get("id"):
-        conn = {fld: req[fld] for fld in ["prune_freq", "refresh_freq", "config", "timeout_secs"] if fld in req}
-        ConnectorService.update_by_id(req["id"], conn)
-    else:
-        req["id"] = get_uuid()
-        conn = {
-            "id": req["id"],
-            "tenant_id": current_user.id,
-            "name": req["name"],
-            "source": req["source"],
-            "input_type": InputType.POLL,
-            "config": req["config"],
-            "refresh_freq": int(req.get("refresh_freq", 30)),
-            "prune_freq": int(req.get("prune_freq", 720)),
-            "timeout_secs": int(req.get("timeout_secs", 60*29)),
-            "status": TaskStatus.SCHEDULE
-        }
-        conn["status"] = TaskStatus.SCHEDULE
-        ConnectorService.save(**conn)
-    time.sleep(1)
-    e, conn = ConnectorService.get_by_id(req["id"])
-    return get_json_result(data=conn.to_dict())
-@manager.route("/list", methods=["GET"]) # noqa: F821
-@login_required
-def list_connector():
-    return get_json_result(data=ConnectorService.list(current_user.id))
-@manager.route("/<connector_id>", methods=["GET"]) # noqa: F821
-@login_required
-def get_connector(connector_id):
-    e, conn = ConnectorService.get_by_id(connector_id)
-    if not e:
-        return get_data_error_result(message="Can't find this Connector!")
-    return get_json_result(data=conn.to_dict())
-@manager.route("/<connector_id>/logs", methods=["GET"]) # noqa: F821
-@login_required
-def list_logs(connector_id):
-    req = request.args.to_dict(flat=True)
-    return get_json_result(data=SyncLogsService.list_sync_tasks(connector_id, int(req.get("page", 1)), int(req.get("page_size", 15))))
-@manager.route("/<connector_id>/resume", methods=["PUT"]) # noqa: F821
-@login_required
-def resume(connector_id):
-    req = request.json
-    if req.get("resume"):
-        ConnectorService.resume(connector_id, TaskStatus.SCHEDULE)
-    else:
-        ConnectorService.resume(connector_id, TaskStatus.CANCEL)
-    return get_json_result(data=True)
-@manager.route("/<connector_id>/link", methods=["POST"]) # noqa: F821
-@validate_request("kb_ids")
-@login_required
-def link_kb(connector_id):
-    req = request.json
-    errors = Connector2KbService.link_kb(connector_id, req["kb_ids"], current_user.id)
-    if errors:
-        return get_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
-    return get_json_result(data=True)
-@manager.route("/<connector_id>/rm", methods=["POST"]) # noqa: F821
-@login_required
-def rm_connector(connector_id):
-    ConnectorService.resume(connector_id, TaskStatus.CANCEL)
-    ConnectorService.delete_by_id(connector_id)
-    return get_json_result(data=True)
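The file removed above exposed a small CRUD surface for poll-type connectors. A hypothetical client-side walkthrough of those routes; the mount point and the authenticated session are placeholders, and the payload fields follow set_connector above:

    import requests

    base = "http://localhost:9380/v1/connector"  # hypothetical mount point for this blueprint
    s = requests.Session()                       # assume login has already populated auth cookies

    conn = s.post(f"{base}/set", json={
        "name": "s3-docs",
        "source": "s3",
        "config": {"bucket": "docs"},
        "refresh_freq": 30,      # minutes between incremental syncs
        "prune_freq": 720,       # minutes between full prunes
    }).json()["data"]
    s.post(f"{base}/{conn['id']}/link", json={"kb_ids": ["<kb_id>"]})  # attach to a knowledge base
    s.put(f"{base}/{conn['id']}/resume", json={"resume": False})       # pause scheduling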


@@ -19,6 +19,8 @@ import logging
 from copy import deepcopy
 from flask import Response, request
 from flask_login import current_user, login_required
+from api import settings
+from api.db import LLMType
 from api.db.db_models import APIToken
 from api.db.services.conversation_service import ConversationService, structure_answer
 from api.db.services.dialog_service import DialogService, ask, chat, gen_mindmap
@@ -29,7 +31,6 @@ from api.db.services.user_service import TenantService, UserTenantService
 from api.utils.api_utils import get_data_error_result, get_json_result, server_error_response, validate_request
 from rag.prompts.template import load_prompt
 from rag.prompts.generator import chunks_format
-from common.constants import RetCode, LLMType
 @manager.route("/set", methods=["POST"]) # noqa: F821
@@ -92,7 +93,7 @@ def get():
                 avatar = dialog[0].icon
                 break
         else:
-            return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=RetCode.OPERATING_ERROR)
+            return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
         for ref in conv.reference:
             if isinstance(ref, list):
@@ -141,7 +142,7 @@ def rm():
             if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
                 break
         else:
-            return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=RetCode.OPERATING_ERROR)
+            return get_json_result(data=False, message="Only owner of conversation authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
         ConversationService.delete_by_id(cid)
         return get_json_result(data=True)
     except Exception as e:
@@ -154,7 +155,7 @@ def list_conversation():
     dialog_id = request.args["dialog_id"]
     try:
         if not DialogService.query(tenant_id=current_user.id, id=dialog_id):
-            return get_json_result(data=False, message="Only owner of dialog authorized for this operation.", code=RetCode.OPERATING_ERROR)
+            return get_json_result(data=False, message="Only owner of dialog authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
         convs = ConversationService.query(dialog_id=dialog_id, order_by=ConversationService.model.create_time, reverse=True)
         convs = [d.to_dict() for d in convs]


@@ -18,13 +18,13 @@ from flask import request
 from flask_login import login_required, current_user
 from api.db.services import duplicate_name
 from api.db.services.dialog_service import DialogService
-from common.constants import StatusEnum
+from api.db import StatusEnum
 from api.db.services.tenant_llm_service import TenantLLMService
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.db.services.user_service import TenantService, UserTenantService
+from api import settings
 from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
-from common.misc_utils import get_uuid
+from api.utils import get_uuid
-from common.constants import RetCode
 from api.utils.api_utils import get_json_result
@@ -219,7 +219,7 @@ def rm():
         else:
             return get_json_result(
                 data=False, message='Only owner of dialog authorized for this operation.',
-                code=RetCode.OPERATING_ERROR)
+                code=settings.RetCode.OPERATING_ERROR)
         dialog_list.append({"id": id,"status":StatusEnum.INVALID.value})
     DialogService.update_many_by_id(dialog_list)
     return get_json_result(data=True)


@@ -23,32 +23,29 @@ import flask
 from flask import request
 from flask_login import current_user, login_required
-from api.common.check_team_permission import check_kb_team_permission
+from api import settings
 from api.constants import FILE_NAME_LEN_LIMIT, IMG_BASE64_PREFIX
-from api.db import VALID_FILE_TYPES, FileType
+from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileSource, FileType, ParserType, TaskStatus
-from api.db.db_models import Task
+from api.db.db_models import File, Task
 from api.db.services import duplicate_name
 from api.db.services.document_service import DocumentService, doc_upload_and_parse
 from api.db.services.file2document_service import File2DocumentService
 from api.db.services.file_service import FileService
 from api.db.services.knowledgebase_service import KnowledgebaseService
-from api.db.services.task_service import TaskService, cancel_all_task_of
+from api.db.services.task_service import TaskService, cancel_all_task_of, queue_tasks, queue_dataflow
 from api.db.services.user_service import UserTenantService
-from common.misc_utils import get_uuid
+from api.utils import get_uuid
 from api.utils.api_utils import (
     get_data_error_result,
     get_json_result,
     server_error_response,
     validate_request,
 )
-from api.utils.file_utils import filename_type, thumbnail
+from api.utils.file_utils import filename_type, get_project_base_directory, thumbnail
-from common.file_utils import get_project_base_directory
-from common.constants import RetCode, VALID_TASK_STATUS, ParserType, TaskStatus
 from api.utils.web_utils import CONTENT_TYPE_MAP, html2pdf, is_valid_url
 from deepdoc.parser.html_parser import RAGFlowHtmlParser
-from rag.nlp import search, rag_tokenizer
+from rag.nlp import search
 from rag.utils.storage_factory import STORAGE_IMPL
-from common import globals
 @manager.route("/upload", methods=["POST"]) # noqa: F821
@@ -57,29 +54,27 @@ from common import globals
 def upload():
     kb_id = request.form.get("kb_id")
     if not kb_id:
-        return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     if "file" not in request.files:
-        return get_json_result(data=False, message="No file part!", code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
     file_objs = request.files.getlist("file")
     for file_obj in file_objs:
         if file_obj.filename == "":
-            return get_json_result(data=False, message="No file selected!", code=RetCode.ARGUMENT_ERROR)
+            return get_json_result(data=False, message="No file selected!", code=settings.RetCode.ARGUMENT_ERROR)
         if len(file_obj.filename.encode("utf-8")) > FILE_NAME_LEN_LIMIT:
-            return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
+            return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
     e, kb = KnowledgebaseService.get_by_id(kb_id)
     if not e:
         raise LookupError("Can't find this knowledgebase!")
-    if not check_kb_team_permission(kb, current_user.id):
-        return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
     err, files = FileService.upload_document(kb, file_objs, current_user.id)
     if err:
-        return get_json_result(data=files, message="\n".join(err), code=RetCode.SERVER_ERROR)
+        return get_json_result(data=files, message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
     if not files:
-        return get_json_result(data=files, message="There seems to be an issue with your file format. Please verify it is correct and not corrupted.", code=RetCode.DATA_ERROR)
+        return get_json_result(data=files, message="There seems to be an issue with your file format. Please verify it is correct and not corrupted.", code=settings.RetCode.DATA_ERROR)
     files = [f[0] for f in files]  # remove the blob
     return get_json_result(data=files)
@@ -91,16 +86,14 @@ def upload():
 def web_crawl():
     kb_id = request.form.get("kb_id")
     if not kb_id:
-        return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     name = request.form.get("name")
     url = request.form.get("url")
     if not is_valid_url(url):
-        return get_json_result(data=False, message="The URL format is invalid", code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message="The URL format is invalid", code=settings.RetCode.ARGUMENT_ERROR)
     e, kb = KnowledgebaseService.get_by_id(kb_id)
     if not e:
         raise LookupError("Can't find this knowledgebase!")
-    if not check_kb_team_permission(kb, current_user.id):
-        return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
     blob = html2pdf(url)
     if not blob:
@@ -157,12 +150,12 @@ def create():
     req = request.json
     kb_id = req["kb_id"]
     if not kb_id:
-        return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     if len(req["name"].encode("utf-8")) > FILE_NAME_LEN_LIMIT:
-        return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
     if req["name"].strip() == "":
-        return get_json_result(data=False, message="File name can't be empty.", code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message="File name can't be empty.", code=settings.RetCode.ARGUMENT_ERROR)
     req["name"] = req["name"].strip()
     try:
@@ -212,13 +205,13 @@ def create():
 def list_docs():
     kb_id = request.args.get("kb_id")
     if not kb_id:
-        return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     tenants = UserTenantService.query(user_id=current_user.id)
     for tenant in tenants:
         if KnowledgebaseService.query(tenant_id=tenant.tenant_id, id=kb_id):
             break
     else:
-        return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=RetCode.OPERATING_ERROR)
+        return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
     keywords = request.args.get("keywords", "")
     page_number = int(request.args.get("page", 0))
@@ -274,13 +267,13 @@ def get_filter():
     kb_id = req.get("kb_id")
     if not kb_id:
-        return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
     tenants = UserTenantService.query(user_id=current_user.id)
     for tenant in tenants:
         if KnowledgebaseService.query(tenant_id=tenant.tenant_id, id=kb_id):
             break
     else:
-        return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=RetCode.OPERATING_ERROR)
+        return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
     keywords = req.get("keywords", "")
@@ -312,7 +305,7 @@ def docinfos():
     doc_ids = req["doc_ids"]
     for doc_id in doc_ids:
         if not DocumentService.accessible(doc_id, current_user.id):
-            return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
+            return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
     docs = DocumentService.get_by_ids(doc_ids)
     return get_json_result(data=list(docs.dicts()))
@@ -322,7 +315,7 @@ def docinfos():
 def thumbnails():
     doc_ids = request.args.getlist("doc_ids")
     if not doc_ids:
-        return get_json_result(data=False, message='Lack of "Document ID"', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='Lack of "Document ID"', code=settings.RetCode.ARGUMENT_ERROR)
     try:
         docs = DocumentService.get_thumbnails(doc_ids)
@@ -345,7 +338,7 @@ def change_status():
     status = str(req.get("status", ""))
     if status not in ["0", "1"]:
-        return get_json_result(data=False, message='"Status" must be either 0 or 1!', code=RetCode.ARGUMENT_ERROR)
+        return get_json_result(data=False, message='"Status" must be either 0 or 1!', code=settings.RetCode.ARGUMENT_ERROR)
     result = {}
     for doc_id in doc_ids:
@@ -367,7 +360,7 @@ def change_status():
                 continue
             status_int = int(status)
-            if not globals.docStoreConn.update({"doc_id": doc_id}, {"available_int": status_int}, search.index_name(kb.tenant_id), doc.kb_id):
+            if not settings.docStoreConn.update({"doc_id": doc_id}, {"available_int": status_int}, search.index_name(kb.tenant_id), doc.kb_id):
                 result[doc_id] = {"error": "Database error (docStore update)!"}
             result[doc_id] = {"status": status}
         except Exception as e:
@@ -387,12 +380,50 @@ def rm():
     for doc_id in doc_ids:
         if not DocumentService.accessible4deletion(doc_id, current_user.id):
-            return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
+            return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
-    errors = FileService.delete_docs(doc_ids, current_user.id)
+    root_folder = FileService.get_root_folder(current_user.id)
+    pf_id = root_folder["id"]
+    FileService.init_knowledgebase_docs(pf_id, current_user.id)
+    errors = ""
+    kb_table_num_map = {}
+    for doc_id in doc_ids:
+        try:
+            e, doc = DocumentService.get_by_id(doc_id)
+            if not e:
+                return get_data_error_result(message="Document not found!")
+            tenant_id = DocumentService.get_tenant_id(doc_id)
+            if not tenant_id:
+                return get_data_error_result(message="Tenant not found!")
+            b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
+            TaskService.filter_delete([Task.doc_id == doc_id])
+            if not DocumentService.remove_document(doc, tenant_id):
+                return get_data_error_result(message="Database error (Document removal)!")
+            f2d = File2DocumentService.get_by_document_id(doc_id)
+            deleted_file_count = 0
+            if f2d:
+                deleted_file_count = FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
+            File2DocumentService.delete_by_document_id(doc_id)
+            if deleted_file_count > 0:
+                STORAGE_IMPL.rm(b, n)
+            doc_parser = doc.parser_id
+            if doc_parser == ParserType.TABLE:
+                kb_id = doc.kb_id
+                if kb_id not in kb_table_num_map:
+                    counts = DocumentService.count_by_kb_id(kb_id=kb_id, keywords="", run_status=[TaskStatus.DONE], types=[])
+                    kb_table_num_map[kb_id] = counts
+                kb_table_num_map[kb_id] -= 1
+                if kb_table_num_map[kb_id] <= 0:
+                    KnowledgebaseService.delete_field_map(kb_id)
+        except Exception as e:
+            errors += str(e)
     if errors:
-        return get_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
+        return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
     return get_json_result(data=True)
@@ -404,7 +435,7 @@ def run():
     req = request.json
     for doc_id in req["doc_ids"]:
         if not DocumentService.accessible(doc_id, current_user.id):
-            return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
+            return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
         kb_table_num_map = {}
         for id in req["doc_ids"]:
@@ -432,12 +463,28 @@ def run():
             DocumentService.update_by_id(id, info)
             if req.get("delete", False):
                 TaskService.filter_delete([Task.doc_id == id])
-                if globals.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
-                    globals.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), doc.kb_id)
+                if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
+                    settings.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), doc.kb_id)
             if str(req["run"]) == TaskStatus.RUNNING.value:
                 doc = doc.to_dict()
-                DocumentService.run(tenant_id, doc, kb_table_num_map)
+                doc["tenant_id"] = tenant_id
+                doc_parser = doc.get("parser_id", ParserType.NAIVE)
+                if doc_parser == ParserType.TABLE:
+                    kb_id = doc.get("kb_id")
+                    if not kb_id:
+                        continue
+                    if kb_id not in kb_table_num_map:
+                        count = DocumentService.count_by_kb_id(kb_id=kb_id, keywords="", run_status=[TaskStatus.DONE], types=[])
+                        kb_table_num_map[kb_id] = count
+                    if kb_table_num_map[kb_id] <= 0:
+                        KnowledgebaseService.delete_field_map(kb_id)
+                if doc.get("pipeline_id", ""):
+                    queue_dataflow(tenant_id, flow_id=doc["pipeline_id"], task_id=get_uuid(), doc_id=id)
+                else:
+                    bucket, name = File2DocumentService.get_storage_address(doc_id=doc["id"])
+                    queue_tasks(doc, bucket, name, 0)
         return get_json_result(data=True)
     except Exception as e:
@@ -450,15 +497,15 @@ def run():
 def rename():
     req = request.json
     if not DocumentService.accessible(req["doc_id"], current_user.id):
-        return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
+        return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
     try:
        e, doc = DocumentService.get_by_id(req["doc_id"])
        if not e:
            return get_data_error_result(message="Document not found!")
        if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(doc.name.lower()).suffix:
-            return get_json_result(data=False, message="The extension of file can't be changed", code=RetCode.ARGUMENT_ERROR)
+            return get_json_result(data=False, message="The extension of file can't be changed", code=settings.RetCode.ARGUMENT_ERROR)
        if len(req["name"].encode("utf-8")) > FILE_NAME_LEN_LIMIT:
-            return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
+            return get_json_result(data=False, message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id): for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
if d.name == req["name"]: if d.name == req["name"]:
@ -472,21 +519,6 @@ def rename():
e, file = FileService.get_by_id(informs[0].file_id) e, file = FileService.get_by_id(informs[0].file_id)
FileService.update_by_id(file.id, {"name": req["name"]}) FileService.update_by_id(file.id, {"name": req["name"]})
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
title_tks = rag_tokenizer.tokenize(req["name"])
es_body = {
"docnm_kwd": req["name"],
"title_tks": title_tks,
"title_sm_tks": rag_tokenizer.fine_grained_tokenize(title_tks),
}
if globals.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
globals.docStoreConn.update(
{"doc_id": req["doc_id"]},
es_body,
search.index_name(tenant_id),
doc.kb_id,
)
return get_json_result(data=True) return get_json_result(data=True)
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)
@ -520,10 +552,10 @@ def get(doc_id):
@login_required @login_required
@validate_request("doc_id") @validate_request("doc_id")
def change_parser(): def change_parser():
req = request.json req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id): if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
e, doc = DocumentService.get_by_id(req["doc_id"]) e, doc = DocumentService.get_by_id(req["doc_id"])
if not e: if not e:
@ -531,7 +563,7 @@ def change_parser():
def reset_doc(): def reset_doc():
nonlocal doc nonlocal doc
e = DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"], "parser_id": req["parser_id"], "progress": 0, "progress_msg": "", "run": TaskStatus.UNSTART.value}) e = DocumentService.update_by_id(doc.id, {"parser_id": req["parser_id"], "progress": 0, "progress_msg": "", "run": TaskStatus.UNSTART.value})
if not e: if not e:
return get_data_error_result(message="Document not found!") return get_data_error_result(message="Document not found!")
if doc.token_num > 0: if doc.token_num > 0:
@ -541,11 +573,11 @@ def change_parser():
tenant_id = DocumentService.get_tenant_id(req["doc_id"]) tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id: if not tenant_id:
return get_data_error_result(message="Tenant not found!") return get_data_error_result(message="Tenant not found!")
if globals.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id): if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
globals.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id) settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)
try: try:
if "pipeline_id" in req and req["pipeline_id"] != "": if "pipeline_id" in req:
if doc.pipeline_id == req["pipeline_id"]: if doc.pipeline_id == req["pipeline_id"]:
return get_json_result(data=True) return get_json_result(data=True)
DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"]}) DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"]})
@ -589,12 +621,12 @@ def get_image(image_id):
@validate_request("conversation_id") @validate_request("conversation_id")
def upload_and_parse(): def upload_and_parse():
if "file" not in request.files: if "file" not in request.files:
return get_json_result(data=False, message="No file part!", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist("file") file_objs = request.files.getlist("file")
for file_obj in file_objs: for file_obj in file_objs:
if file_obj.filename == "": if file_obj.filename == "":
return get_json_result(data=False, message="No file selected!", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message="No file selected!", code=settings.RetCode.ARGUMENT_ERROR)
doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, current_user.id) doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, current_user.id)
@ -607,7 +639,7 @@ def parse():
url = request.json.get("url") if request.json else "" url = request.json.get("url") if request.json else ""
if url: if url:
if not is_valid_url(url): if not is_valid_url(url):
return get_json_result(data=False, message="The URL format is invalid", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message="The URL format is invalid", code=settings.RetCode.ARGUMENT_ERROR)
download_path = os.path.join(get_project_base_directory(), "logs/downloads") download_path = os.path.join(get_project_base_directory(), "logs/downloads")
os.makedirs(download_path, exist_ok=True) os.makedirs(download_path, exist_ok=True)
from seleniumwire.webdriver import Chrome, ChromeOptions from seleniumwire.webdriver import Chrome, ChromeOptions
@ -640,13 +672,13 @@ def parse():
r = re.search(r"filename=\"([^\"]+)\"", str(res_headers)) r = re.search(r"filename=\"([^\"]+)\"", str(res_headers))
if not r or not r.group(1): if not r or not r.group(1):
return get_json_result(data=False, message="Can't not identify downloaded file", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message="Can't not identify downloaded file", code=settings.RetCode.ARGUMENT_ERROR)
f = File(r.group(1), os.path.join(download_path, r.group(1))) f = File(r.group(1), os.path.join(download_path, r.group(1)))
txt = FileService.parse_docs([f], current_user.id) txt = FileService.parse_docs([f], current_user.id)
return get_json_result(data=txt) return get_json_result(data=txt)
if "file" not in request.files: if "file" not in request.files:
return get_json_result(data=False, message="No file part!", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist("file") file_objs = request.files.getlist("file")
txt = FileService.parse_docs(file_objs, current_user.id) txt = FileService.parse_docs(file_objs, current_user.id)
@ -660,18 +692,18 @@ def parse():
def set_meta(): def set_meta():
req = request.json req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id): if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR) return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
try: try:
meta = json.loads(req["meta"]) meta = json.loads(req["meta"])
if not isinstance(meta, dict): if not isinstance(meta, dict):
return get_json_result(data=False, message="Only dictionary type supported.", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message="Only dictionary type supported.", code=settings.RetCode.ARGUMENT_ERROR)
for k, v in meta.items(): for k, v in meta.items():
if not isinstance(v, str) and not isinstance(v, int) and not isinstance(v, float): if not isinstance(v, str) and not isinstance(v, int) and not isinstance(v, float):
return get_json_result(data=False, message=f"The type is not supported: {v}", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message=f"The type is not supported: {v}", code=settings.RetCode.ARGUMENT_ERROR)
except Exception as e: except Exception as e:
return get_json_result(data=False, message=f"Json syntax error: {e}", code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message=f"Json syntax error: {e}", code=settings.RetCode.ARGUMENT_ERROR)
if not isinstance(meta, dict): if not isinstance(meta, dict):
return get_json_result(data=False, message='Meta data should be in Json map format, like {"key": "value"}', code=RetCode.ARGUMENT_ERROR) return get_json_result(data=False, message='Meta data should be in Json map format, like {"key": "value"}', code=settings.RetCode.ARGUMENT_ERROR)
try: try:
e, doc = DocumentService.get_by_id(req["doc_id"]) e, doc = DocumentService.get_by_id(req["doc_id"])

View File

@@ -23,10 +23,10 @@ from flask import request
 from flask_login import login_required, current_user
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
-from common.misc_utils import get_uuid
-from common.constants import RetCode
+from api.utils import get_uuid
 from api.db import FileType
 from api.db.services.document_service import DocumentService
+from api import settings
 from api.utils.api_utils import get_json_result
@@ -108,7 +108,7 @@ def rm():
 file_ids = req["file_ids"]
 if not file_ids:
 return get_json_result(
-data=False, message='Lack of "Files ID"', code=RetCode.ARGUMENT_ERROR)
+data=False, message='Lack of "Files ID"', code=settings.RetCode.ARGUMENT_ERROR)
 try:
 for file_id in file_ids:
 informs = File2DocumentService.get_by_file_id(file_id)

View File

@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License
 #
-import logging
 import os
 import pathlib
 import re
@@ -22,15 +21,14 @@ import flask
 from flask import request
 from flask_login import login_required, current_user
-from api.common.check_team_permission import check_file_team_permission
 from api.db.services.document_service import DocumentService
 from api.db.services.file2document_service import File2DocumentService
 from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
-from common.misc_utils import get_uuid
-from common.constants import RetCode, FileSource
-from api.db import FileType
+from api.utils import get_uuid
+from api.db import FileType, FileSource
 from api.db.services import duplicate_name
 from api.db.services.file_service import FileService
+from api import settings
 from api.utils.api_utils import get_json_result
 from api.utils.file_utils import filename_type
 from api.utils.web_utils import CONTENT_TYPE_MAP
@@ -49,21 +47,21 @@ def upload():
 if 'file' not in request.files:
 return get_json_result(
-data=False, message='No file part!', code=RetCode.ARGUMENT_ERROR)
+data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
 file_objs = request.files.getlist('file')
 for file_obj in file_objs:
 if file_obj.filename == '':
 return get_json_result(
-data=False, message='No file selected!', code=RetCode.ARGUMENT_ERROR)
+data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
 file_res = []
 try:
 e, pf_folder = FileService.get_by_id(pf_id)
 if not e:
 return get_data_error_result( message="Can't find this folder!")
 for file_obj in file_objs:
-MAX_FILE_NUM_PER_USER: int = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
-if 0 < MAX_FILE_NUM_PER_USER <= DocumentService.get_doc_count(current_user.id):
+MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
+if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(current_user.id) >= MAX_FILE_NUM_PER_USER:
 return get_data_error_result( message="Exceed the maximum file number of a free user!")
 # split file name path
@@ -134,7 +132,7 @@ def create():
 try:
 if not FileService.is_parent_folder_exist(pf_id):
 return get_json_result(
-data=False, message="Parent Folder Doesn't Exist!", code=RetCode.OPERATING_ERROR)
+data=False, message="Parent Folder Doesn't Exist!", code=settings.RetCode.OPERATING_ERROR)
 if FileService.query(name=req["name"], parent_id=pf_id):
 return get_data_error_result(
 message="Duplicated folder name in the same folder.")
@@ -235,63 +233,54 @@ def get_all_parent_folders():
 return server_error_response(e)
-@manager.route("/rm", methods=["POST"]) # noqa: F821
+@manager.route('/rm', methods=['POST']) # noqa: F821
 @login_required
 @validate_request("file_ids")
 def rm():
 req = request.json
 file_ids = req["file_ids"]
-def _delete_single_file(file):
-try:
-if file.location:
-STORAGE_IMPL.rm(file.parent_id, file.location)
-except Exception:
-logging.exception(f"Fail to remove object: {file.parent_id}/{file.location}")
-informs = File2DocumentService.get_by_file_id(file.id)
-for inform in informs:
-doc_id = inform.document_id
-e, doc = DocumentService.get_by_id(doc_id)
-if e and doc:
-tenant_id = DocumentService.get_tenant_id(doc_id)
-if tenant_id:
-DocumentService.remove_document(doc, tenant_id)
-File2DocumentService.delete_by_file_id(file.id)
-FileService.delete(file)
-def _delete_folder_recursive(folder, tenant_id):
-sub_files = FileService.list_all_files_by_parent_id(folder.id)
-for sub_file in sub_files:
-if sub_file.type == FileType.FOLDER.value:
-_delete_folder_recursive(sub_file, tenant_id)
-else:
-_delete_single_file(sub_file)
-FileService.delete(folder)
 try:
 for file_id in file_ids:
 e, file = FileService.get_by_id(file_id)
-if not e or not file:
+if not e:
 return get_data_error_result(message="File or Folder not found!")
 if not file.tenant_id:
 return get_data_error_result(message="Tenant not found!")
-if not check_file_team_permission(file, current_user.id):
-return get_json_result(data=False, message="No authorization.", code=RetCode.AUTHENTICATION_ERROR)
+if file.tenant_id != current_user.id:
+return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
 if file.source_type == FileSource.KNOWLEDGEBASE:
 continue
 if file.type == FileType.FOLDER.value:
-_delete_folder_recursive(file, current_user.id)
-continue
-_delete_single_file(file)
+file_id_list = FileService.get_all_innermost_file_ids(file_id, [])
+for inner_file_id in file_id_list:
+e, file = FileService.get_by_id(inner_file_id)
+if not e:
+return get_data_error_result(message="File not found!")
+STORAGE_IMPL.rm(file.parent_id, file.location)
+FileService.delete_folder_by_pf_id(current_user.id, file_id)
+else:
+STORAGE_IMPL.rm(file.parent_id, file.location)
+if not FileService.delete(file):
+return get_data_error_result(
+message="Database error (File removal)!")
+# delete file2document
+informs = File2DocumentService.get_by_file_id(file_id)
+for inform in informs:
+doc_id = inform.document_id
+e, doc = DocumentService.get_by_id(doc_id)
+if not e:
+return get_data_error_result(message="Document not found!")
+tenant_id = DocumentService.get_tenant_id(doc_id)
+if not tenant_id:
+return get_data_error_result(message="Tenant not found!")
+if not DocumentService.remove_document(doc, tenant_id):
+return get_data_error_result(
+message="Database error (Document removal)!")
+File2DocumentService.delete_by_file_id(file_id)
 return get_json_result(data=True)
 except Exception as e:
 return server_error_response(e)
@@ -305,15 +294,15 @@ def rename():
 e, file = FileService.get_by_id(req["file_id"])
 if not e:
 return get_data_error_result(message="File not found!")
-if not check_file_team_permission(file, current_user.id):
-return get_json_result(data=False, message='No authorization.', code=RetCode.AUTHENTICATION_ERROR)
+if file.tenant_id != current_user.id:
+return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
 if file.type != FileType.FOLDER.value \
 and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
 file.name.lower()).suffix:
 return get_json_result(
 data=False,
 message="The extension of file can't be changed",
-code=RetCode.ARGUMENT_ERROR)
+code=settings.RetCode.ARGUMENT_ERROR)
 for file in FileService.query(name=req["name"], pf_id=file.parent_id):
 if file.name == req["name"]:
 return get_data_error_result(
@@ -343,8 +332,8 @@ def get(file_id):
 e, file = FileService.get_by_id(file_id)
 if not e:
 return get_data_error_result(message="Document not found!")
-if not check_file_team_permission(file, current_user.id):
-return get_json_result(data=False, message='No authorization.', code=RetCode.AUTHENTICATION_ERROR)
+if file.tenant_id != current_user.id:
+return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
 blob = STORAGE_IMPL.get(file.parent_id, file.location)
 if not blob:
@@ -365,89 +354,31 @@ def get(file_id):
 return server_error_response(e)
-@manager.route("/mv", methods=["POST"]) # noqa: F821
+@manager.route('/mv', methods=['POST']) # noqa: F821
 @login_required
 @validate_request("src_file_ids", "dest_file_id")
 def move():
 req = request.json
 try:
 file_ids = req["src_file_ids"]
-dest_parent_id = req["dest_file_id"]
-ok, dest_folder = FileService.get_by_id(dest_parent_id)
-if not ok or not dest_folder:
-return get_data_error_result(message="Parent folder not found!")
+parent_id = req["dest_file_id"]
 files = FileService.get_by_ids(file_ids)
-if not files:
-return get_data_error_result(message="Source files not found!")
-files_dict = {f.id: f for f in files}
+files_dict = {}
+for file in files:
+files_dict[file.id] = file
 for file_id in file_ids:
-file = files_dict.get(file_id)
+file = files_dict[file_id]
 if not file:
-return get_data_error_result(message="File or folder not found!")
+return get_data_error_result(message="File or Folder not found!")
 if not file.tenant_id:
 return get_data_error_result(message="Tenant not found!")
-if not check_file_team_permission(file, current_user.id):
-return get_json_result(
-data=False,
-message="No authorization.",
-code=RetCode.AUTHENTICATION_ERROR,
-)
-def _move_entry_recursive(source_file_entry, dest_folder):
-if source_file_entry.type == FileType.FOLDER.value:
-existing_folder = FileService.query(name=source_file_entry.name, parent_id=dest_folder.id)
-if existing_folder:
-new_folder = existing_folder[0]
-else:
-new_folder = FileService.insert(
-{
-"id": get_uuid(),
-"parent_id": dest_folder.id,
-"tenant_id": source_file_entry.tenant_id,
-"created_by": current_user.id,
-"name": source_file_entry.name,
-"location": "",
-"size": 0,
-"type": FileType.FOLDER.value,
-}
-)
-sub_files = FileService.list_all_files_by_parent_id(source_file_entry.id)
-for sub_file in sub_files:
-_move_entry_recursive(sub_file, new_folder)
-FileService.delete_by_id(source_file_entry.id)
-return
-old_parent_id = source_file_entry.parent_id
-old_location = source_file_entry.location
-filename = source_file_entry.name
-new_location = filename
-while STORAGE_IMPL.obj_exist(dest_folder.id, new_location):
-new_location += "_"
-try:
-STORAGE_IMPL.move(old_parent_id, old_location, dest_folder.id, new_location)
-except Exception as storage_err:
-raise RuntimeError(f"Move file failed at storage layer: {str(storage_err)}")
-FileService.update_by_id(
-source_file_entry.id,
-{
-"parent_id": dest_folder.id,
-"location": new_location,
-},
-)
-for file in files:
-_move_entry_recursive(file, dest_folder)
+if file.tenant_id != current_user.id:
+return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
+fe, _ = FileService.get_by_id(parent_id)
+if not fe:
+return get_data_error_result(message="Parent Folder not found!")
+FileService.move_file(file_ids, parent_id)
 return get_json_result(data=True)
 except Exception as e:
 return server_error_response(e)

View File

@@ -15,15 +15,11 @@
 #
 import json
 import logging
-import random
 from flask import request
 from flask_login import login_required, current_user
-import numpy as np
-from api.db.services.connector_service import Connector2KbService
-from api.db.services.llm_service import LLMBundle
+from api.db.services import duplicate_name
 from api.db.services.document_service import DocumentService, queue_raptor_o_graphrag_tasks
 from api.db.services.file2document_service import File2DocumentService
 from api.db.services.file_service import FileService
@@ -31,35 +27,78 @@ from api.db.services.pipeline_operation_log_service import PipelineOperationLogS
 from api.db.services.task_service import TaskService, GRAPH_RAPTOR_FAKE_DOC_ID
 from api.db.services.user_service import TenantService, UserTenantService
 from api.utils.api_utils import get_error_data_result, server_error_response, get_data_error_result, validate_request, not_allowed_parameters
-from api.db import VALID_FILE_TYPES
+from api.utils import get_uuid
+from api.db import PipelineTaskType, StatusEnum, FileSource, VALID_FILE_TYPES, VALID_TASK_STATUS
 from api.db.services.knowledgebase_service import KnowledgebaseService
 from api.db.db_models import File
 from api.utils.api_utils import get_json_result
+from api import settings
 from rag.nlp import search
 from api.constants import DATASET_NAME_LIMIT
 from rag.settings import PAGERANK_FLD
-from rag.utils.redis_conn import REDIS_CONN
 from rag.utils.storage_factory import STORAGE_IMPL
-from rag.utils.doc_store_conn import OrderByExpr
-from common.constants import RetCode, PipelineTaskType, StatusEnum, VALID_TASK_STATUS, FileSource, LLMType
-from common import globals
 @manager.route('/create', methods=['post']) # noqa: F821
 @login_required
 @validate_request("name")
 def create():
 req = request.json
-req = KnowledgebaseService.create_with_name(
-name = req.pop("name", None),
-tenant_id = current_user.id,
-parser_id = req.pop("parser_id", None),
-**req
-)
+dataset_name = req["name"]
+if not isinstance(dataset_name, str):
+return get_data_error_result(message="Dataset name must be string.")
+if dataset_name.strip() == "":
+return get_data_error_result(message="Dataset name can't be empty.")
+if len(dataset_name.encode("utf-8")) > DATASET_NAME_LIMIT:
+return get_data_error_result(
+message=f"Dataset name length is {len(dataset_name)} which is larger than {DATASET_NAME_LIMIT}")
+dataset_name = dataset_name.strip()
+dataset_name = duplicate_name(
+KnowledgebaseService.query,
+name=dataset_name,
+tenant_id=current_user.id,
+status=StatusEnum.VALID.value)
 try:
+req["id"] = get_uuid()
+req["name"] = dataset_name
+req["tenant_id"] = current_user.id
+req["created_by"] = current_user.id
+if not req.get("parser_id"):
+req["parser_id"] = "naive"
+e, t = TenantService.get_by_id(current_user.id)
+if not e:
+return get_data_error_result(message="Tenant not found.")
+req["parser_config"] = {
+"layout_recognize": "DeepDOC",
+"chunk_token_num": 512,
+"delimiter": "\n",
+"auto_keywords": 0,
+"auto_questions": 0,
+"html4excel": False,
+"topn_tags": 3,
+"raptor": {
+"use_raptor": True,
+"prompt": "Please summarize the following paragraphs. Be careful with the numbers, do not make things up. Paragraphs as following:\n {cluster_content}\nThe above is the content you need to summarize.",
+"max_token": 256,
+"threshold": 0.1,
+"max_cluster": 64,
+"random_seed": 0
+},
+"graphrag": {
+"use_graphrag": True,
+"entity_types": [
+"organization",
+"person",
+"geo",
+"event",
+"category"
+],
+"method": "light"
+}
+}
 if not KnowledgebaseService.save(**req):
 return get_data_error_result()
-return get_json_result(data={"kb_id":req["id"]})
+return get_json_result(data={"kb_id": req["id"]})
 except Exception as e:
 return server_error_response(e)
@@ -83,14 +122,14 @@ def update():
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 try:
 if not KnowledgebaseService.query(
 created_by=current_user.id, id=req["kb_id"]):
 return get_json_result(
 data=False, message='Only owner of knowledgebase authorized for this operation.',
-code=RetCode.OPERATING_ERROR)
+code=settings.RetCode.OPERATING_ERROR)
 e, kb = KnowledgebaseService.get_by_id(req["kb_id"])
 if not e:
@@ -109,11 +148,11 @@ def update():
 if kb.pagerank != req.get("pagerank", 0):
 if req.get("pagerank", 0) > 0:
-globals.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]},
+settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]},
 search.index_name(kb.tenant_id), kb.id)
 else:
 # Elasticsearch requires PAGERANK_FLD be non-zero!
-globals.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD},
+settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD},
 search.index_name(kb.tenant_id), kb.id)
 e, kb = KnowledgebaseService.get_by_id(kb.id)
@@ -141,17 +180,12 @@ def detail():
 else:
 return get_json_result(
 data=False, message='Only owner of knowledgebase authorized for this operation.',
-code=RetCode.OPERATING_ERROR)
+code=settings.RetCode.OPERATING_ERROR)
 kb = KnowledgebaseService.get_detail(kb_id)
 if not kb:
 return get_data_error_result(
 message="Can't find this knowledgebase!")
 kb["size"] = DocumentService.get_total_size_by_kb_id(kb_id=kb["id"],keywords="", run_status=[], types=[])
-kb["connectors"] = Connector2KbService.list_connectors(kb_id)
-for key in ["graphrag_task_finish_at", "raptor_task_finish_at", "mindmap_task_finish_at"]:
-if finish_at := kb.get(key):
-kb[key] = finish_at.strftime("%Y-%m-%d %H:%M:%S")
 return get_json_result(data=kb)
 except Exception as e:
 return server_error_response(e)
@@ -201,7 +235,7 @@ def rm():
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 try:
 kbs = KnowledgebaseService.query(
@@ -209,7 +243,7 @@ def rm():
 if not kbs:
 return get_json_result(
 data=False, message='Only owner of knowledgebase authorized for this operation.',
-code=RetCode.OPERATING_ERROR)
+code=settings.RetCode.OPERATING_ERROR)
 for doc in DocumentService.query(kb_id=req["kb_id"]):
 if not DocumentService.remove_document(doc, kbs[0].tenant_id):
@@ -225,8 +259,8 @@ def rm():
 return get_data_error_result(
 message="Database error (Knowledgebase removal)!")
 for kb in kbs:
-globals.docStoreConn.delete({"kb_id": kb.id}, search.index_name(kb.tenant_id), kb.id)
-globals.docStoreConn.deleteIdx(search.index_name(kb.tenant_id), kb.id)
+settings.docStoreConn.delete({"kb_id": kb.id}, search.index_name(kb.tenant_id), kb.id)
+settings.docStoreConn.deleteIdx(search.index_name(kb.tenant_id), kb.id)
 if hasattr(STORAGE_IMPL, 'remove_bucket'):
 STORAGE_IMPL.remove_bucket(kb.id)
 return get_json_result(data=True)
@@ -241,13 +275,13 @@ def list_tags(kb_id):
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
 tags = []
 for tenant in tenants:
-tags += globals.retriever.all_tags(tenant["tenant_id"], [kb_id])
+tags += settings.retrievaler.all_tags(tenant["tenant_id"], [kb_id])
 return get_json_result(data=tags)
@@ -260,13 +294,13 @@ def list_tags_from_kbs():
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
 tags = []
 for tenant in tenants:
-tags += globals.retriever.all_tags(tenant["tenant_id"], kb_ids)
+tags += settings.retrievaler.all_tags(tenant["tenant_id"], kb_ids)
 return get_json_result(data=tags)
@@ -278,12 +312,12 @@ def rm_tags(kb_id):
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 e, kb = KnowledgebaseService.get_by_id(kb_id)
 for t in req["tags"]:
-globals.docStoreConn.update({"tag_kwd": t, "kb_id": [kb_id]},
+settings.docStoreConn.update({"tag_kwd": t, "kb_id": [kb_id]},
 {"remove": {"tag_kwd": t}},
 search.index_name(kb.tenant_id),
 kb_id)
@@ -298,11 +332,11 @@ def rename_tags(kb_id):
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 e, kb = KnowledgebaseService.get_by_id(kb_id)
-globals.docStoreConn.update({"tag_kwd": req["from_tag"], "kb_id": [kb_id]},
+settings.docStoreConn.update({"tag_kwd": req["from_tag"], "kb_id": [kb_id]},
 {"remove": {"tag_kwd": req["from_tag"].strip()}, "add": {"tag_kwd": req["to_tag"]}},
 search.index_name(kb.tenant_id),
 kb_id)
@@ -316,7 +350,7 @@ def knowledge_graph(kb_id):
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 _, kb = KnowledgebaseService.get_by_id(kb_id)
 req = {
@@ -325,9 +359,9 @@ def knowledge_graph(kb_id):
 }
 obj = {"graph": {}, "mind_map": {}}
-if not globals.docStoreConn.indexExist(search.index_name(kb.tenant_id), kb_id):
+if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), kb_id):
 return get_json_result(data=obj)
-sres = globals.retriever.search(req, search.index_name(kb.tenant_id), [kb_id])
+sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [kb_id])
 if not len(sres.ids):
 return get_json_result(data=obj)
@@ -356,10 +390,10 @@ def delete_knowledge_graph(kb_id):
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 _, kb = KnowledgebaseService.get_by_id(kb_id)
-globals.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
+settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
 return get_json_result(data=True)
@@ -373,7 +407,7 @@ def get_meta():
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 return get_json_result(data=DocumentService.get_meta_by_kbs(kb_ids))
@@ -386,7 +420,7 @@ def get_basic_info():
 return get_json_result(
 data=False,
 message='No authorization.',
-code=RetCode.AUTHENTICATION_ERROR
+code=settings.RetCode.AUTHENTICATION_ERROR
 )
 basic_info = DocumentService.knowledgebase_basic_info(kb_id)
@@ -399,7 +433,7 @@ def get_basic_info():
 def list_pipeline_logs():
 kb_id = request.args.get("kb_id")
 if not kb_id:
-return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
 keywords = request.args.get("keywords", "")
@@ -443,7 +477,7 @@ def list_pipeline_logs():
 def list_pipeline_dataset_logs():
 kb_id = request.args.get("kb_id")
 if not kb_id:
-return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
 page_number = int(request.args.get("page", 0))
 items_per_page = int(request.args.get("page_size", 0))
@@ -477,7 +511,7 @@ def list_pipeline_dataset_logs():
 def delete_pipeline_logs():
 kb_id = request.args.get("kb_id")
 if not kb_id:
-return get_json_result(data=False, message='Lack of "KB ID"', code=RetCode.ARGUMENT_ERROR)
+return get_json_result(data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
 req = request.get_json()
 log_ids = req.get("log_ids", [])
@@ -492,7 +526,7 @@ def delete_pipeline_logs():
 def pipeline_log_detail():
 log_id = request.args.get("log_id")
 if not log_id:
-return get_json_result(data=False, message='Lack of "Pipeline log ID"', code=RetCode.ARGUMENT_ERROR)
+return get_json_result(data=False, message='Lack of "Pipeline log ID"', code=settings.RetCode.ARGUMENT_ERROR)
 ok, log = PipelineOperationLogService.get_by_id(log_id)
 if not ok:
@@ -540,7 +574,7 @@ def run_graphrag():
 sample_document = documents[0]
 document_ids = [document["id"] for document in documents]
-task_id = queue_raptor_o_graphrag_tasks(sample_doc_id=sample_document, ty="graphrag", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
+task_id = queue_raptor_o_graphrag_tasks(doc=sample_document, ty="graphrag", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
 if not KnowledgebaseService.update_by_id(kb.id, {"graphrag_task_id": task_id}):
 logging.warning(f"Cannot save graphrag_task_id for kb {kb_id}")
@@ -609,7 +643,7 @@ def run_raptor():
 sample_document = documents[0]
 document_ids = [document["id"] for document in documents]
-task_id = queue_raptor_o_graphrag_tasks(sample_doc_id=sample_document, ty="raptor", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
+task_id = queue_raptor_o_graphrag_tasks(doc=sample_document, ty="raptor", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
 if not KnowledgebaseService.update_by_id(kb.id, {"raptor_task_id": task_id}):
 logging.warning(f"Cannot save raptor_task_id for kb {kb_id}")
@@ -678,7 +712,7 @@ def run_mindmap():
 sample_document = documents[0]
 document_ids = [document["id"] for document in documents]
-task_id = queue_raptor_o_graphrag_tasks(sample_doc_id=sample_document, ty="mindmap", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
+task_id = queue_raptor_o_graphrag_tasks(doc=sample_document, ty="mindmap", priority=0, fake_doc_id=GRAPH_RAPTOR_FAKE_DOC_ID, doc_ids=list(document_ids))
 if not KnowledgebaseService.update_by_id(kb.id, {"mindmap_task_id": task_id}):
 logging.warning(f"Cannot save mindmap_task_id for kb {kb_id}")
@@ -722,183 +756,22 @@ def delete_kb_task():
 if not pipeline_task_type or pipeline_task_type not in [PipelineTaskType.GRAPH_RAG, PipelineTaskType.RAPTOR, PipelineTaskType.MINDMAP]:
 return get_error_data_result(message="Invalid task type")
-def cancel_task(task_id):
-REDIS_CONN.set(f"{task_id}-cancel", "x")
 match pipeline_task_type:
 case PipelineTaskType.GRAPH_RAG:
-kb_task_id_field = "graphrag_task_id"
-task_id = kb.graphrag_task_id
+settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
+kb_task_id = "graphrag_task_id"
 kb_task_finish_at = "graphrag_task_finish_at"
-cancel_task(task_id)
-globals.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
 case PipelineTaskType.RAPTOR:
-kb_task_id_field = "raptor_task_id"
-task_id = kb.raptor_task_id
+kb_task_id = "raptor_task_id"
 kb_task_finish_at = "raptor_task_finish_at"
-cancel_task(task_id)
-globals.docStoreConn.delete({"raptor_kwd": ["raptor"]}, search.index_name(kb.tenant_id), kb_id)
 case PipelineTaskType.MINDMAP:
-kb_task_id_field = "mindmap_task_id"
-task_id = kb.mindmap_task_id
+kb_task_id = "mindmap_task_id"
 kb_task_finish_at = "mindmap_task_finish_at"
-cancel_task(task_id)
 case _:
 return get_error_data_result(message="Internal Error: Invalid task type")
-ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id_field: "", kb_task_finish_at: None})
+ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id: "", kb_task_finish_at: None})
 if not ok:
 return server_error_response(f"Internal error: cannot delete task {pipeline_task_type}")
 return get_json_result(data=True)
-@manager.route("/check_embedding", methods=["post"]) # noqa: F821
-@login_required
-def check_embedding():
-def _guess_vec_field(src: dict) -> str | None:
-for k in src or {}:
-if k.endswith("_vec"):
-return k
-return None
-def _as_float_vec(v):
-if v is None:
-return []
-if isinstance(v, str):
-return [float(x) for x in v.split("\t") if x != ""]
-if isinstance(v, (list, tuple, np.ndarray)):
-return [float(x) for x in v]
-return []
-def _to_1d(x):
-a = np.asarray(x, dtype=np.float32)
-return a.reshape(-1)
-def _cos_sim(a, b, eps=1e-12):
-a = _to_1d(a)
-b = _to_1d(b)
-na = np.linalg.norm(a)
-nb = np.linalg.norm(b)
-if na < eps or nb < eps:
-return 0.0
-return float(np.dot(a, b) / (na * nb))
-def sample_random_chunks_with_vectors(
-docStoreConn,
-tenant_id: str,
-kb_id: str,
-n: int = 5,
-base_fields=("docnm_kwd","doc_id","content_with_weight","page_num_int","position_int","top_int"),
-):
-index_nm = search.index_name(tenant_id)
-res0 = docStoreConn.search(
-selectFields=[], highlightFields=[],
-condition={"kb_id": kb_id, "available_int": 1},
-matchExprs=[], orderBy=OrderByExpr(),
-offset=0, limit=1,
-indexNames=index_nm, knowledgebaseIds=[kb_id]
-)
-total = docStoreConn.getTotal(res0)
-if total <= 0:
-return []
-n = min(n, total)
-offsets = sorted(random.sample(range(total), n))
-out = []
-for off in offsets:
-res1 = docStoreConn.search(
-selectFields=list(base_fields),
-highlightFields=[],
-condition={"kb_id": kb_id, "available_int": 1},
-matchExprs=[], orderBy=OrderByExpr(),
-offset=off, limit=1,
-indexNames=index_nm, knowledgebaseIds=[kb_id]
-)
-ids = docStoreConn.getChunkIds(res1)
-if not ids:
-continue
-cid = ids[0]
-full_doc = docStoreConn.get(cid, index_nm, [kb_id]) or {}
-vec_field = _guess_vec_field(full_doc)
-vec = _as_float_vec(full_doc.get(vec_field))
-out.append({
-"chunk_id": cid,
-"kb_id": kb_id,
-"doc_id": full_doc.get("doc_id"),
-"doc_name": full_doc.get("docnm_kwd"),
-"vector_field": vec_field,
-"vector_dim": len(vec),
-"vector": vec,
-"page_num_int": full_doc.get("page_num_int"),
-"position_int": full_doc.get("position_int"),
-"top_int": full_doc.get("top_int"),
-"content_with_weight": full_doc.get("content_with_weight") or "",
-})
-return out
-req = request.json
-kb_id = req.get("kb_id", "")
-embd_id = req.get("embd_id", "")
-n = int(req.get("check_num", 5))
-_, kb = KnowledgebaseService.get_by_id(kb_id)
-tenant_id = kb.tenant_id
-emb_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_id)
-samples = sample_random_chunks_with_vectors(globals.docStoreConn, tenant_id=tenant_id, kb_id=kb_id, n=n)
-results, eff_sims = [], []
-for ck in samples:
-txt = (ck.get("content_with_weight") or "").strip()
-if not txt:
-results.append({"chunk_id": ck["chunk_id"], "reason": "no_text"})
-continue
-if not ck.get("vector"):
-results.append({"chunk_id": ck["chunk_id"], "reason": "no_stored_vector"})
-continue
-try:
-qv, _ = emb_mdl.encode_queries(txt)
-sim = _cos_sim(qv, ck["vector"])
-except Exception:
-return get_error_data_result(message="embedding failure")
-eff_sims.append(sim)
-results.append({
-"chunk_id": ck["chunk_id"],
-"doc_id": ck["doc_id"],
-"doc_name": ck["doc_name"],
-"vector_field": ck["vector_field"],
-"vector_dim": ck["vector_dim"],
-"cos_sim": round(sim, 6),
-})
-summary = {
-"kb_id": kb_id,
-"model": embd_id,
-"sampled": len(samples),
-"valid": len(eff_sims),
-"avg_cos_sim": round(float(np.mean(eff_sims)) if eff_sims else 0.0, 6),
-"min_cos_sim": round(float(np.min(eff_sims)) if eff_sims else 0.0, 6),
-"max_cos_sim": round(float(np.max(eff_sims)) if eff_sims else 0.0, 6),
-}
-if summary["avg_cos_sim"] > 0.99:
-return get_json_result(data={"summary": summary, "results": results})
-return get_json_result(code=RetCode.NOT_EFFECTIVE, message="failed", data={"summary": summary, "results": results})
-@manager.route("/<kb_id>/link", methods=["POST"]) # noqa: F821
-@validate_request("connector_ids")
-@login_required
-def link_connector(kb_id):
-req = request.json
-errors = Connector2KbService.link_connectors(kb_id, req["connector_ids"], current_user.id)
-if errors:
-return get_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
-return get_json_result(data=True)

View File

@ -15,24 +15,24 @@
# #
import logging import logging
import json import json
import os
from flask import request from flask import request
from flask_login import login_required, current_user from flask_login import login_required, current_user
from api.db.services.tenant_llm_service import LLMFactoriesService, TenantLLMService from api.db.services.tenant_llm_service import LLMFactoriesService, TenantLLMService
from api.db.services.llm_service import LLMService from api.db.services.llm_service import LLMService
from api import settings
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from common.constants import StatusEnum, LLMType from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM from api.db.db_models import TenantLLM
from api.utils.api_utils import get_json_result, get_allowed_llm_factories from api.utils.api_utils import get_json_result
from rag.utils.base64_image import test_image from api.utils.base64_image import test_image
from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel
@manager.route("/factories", methods=["GET"]) # noqa: F821 @manager.route('/factories', methods=['GET']) # noqa: F821
@login_required @login_required
def factories(): def factories():
try: try:
fac = get_allowed_llm_factories() fac = LLMFactoriesService.get_all()
fac = [f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI"]] fac = [f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI"]]
llms = LLMService.get_all() llms = LLMService.get_all()
mdl_types = {} mdl_types = {}
@ -43,13 +43,14 @@ def factories():
mdl_types[m.fid] = set([]) mdl_types[m.fid] = set([])
mdl_types[m.fid].add(m.model_type) mdl_types[m.fid].add(m.model_type)
for f in fac: for f in fac:
f["model_types"] = list(mdl_types.get(f["name"], [LLMType.CHAT, LLMType.EMBEDDING, LLMType.RERANK, LLMType.IMAGE2TEXT, LLMType.SPEECH2TEXT, LLMType.TTS])) f["model_types"] = list(mdl_types.get(f["name"], [LLMType.CHAT, LLMType.EMBEDDING, LLMType.RERANK,
LLMType.IMAGE2TEXT, LLMType.SPEECH2TEXT, LLMType.TTS]))
return get_json_result(data=fac) return get_json_result(data=fac)
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)
@manager.route("/set_api_key", methods=["POST"]) # noqa: F821 @manager.route('/set_api_key', methods=['POST']) # noqa: F821
@login_required @login_required
@validate_request("llm_factory", "api_key") @validate_request("llm_factory", "api_key")
def set_api_key(): def set_api_key():
@ -62,7 +63,8 @@ def set_api_key():
for llm in LLMService.query(fid=factory): for llm in LLMService.query(fid=factory):
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value: if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet." assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet."
mdl = EmbeddingModel[factory](req["api_key"], llm.llm_name, base_url=req.get("base_url")) mdl = EmbeddingModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
try: try:
arr, tc = mdl.encode(["Test if the api key is available"]) arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0: if len(arr[0]) == 0:
@ -72,40 +74,52 @@ def set_api_key():
msg += f"\nFail to access embedding model({llm.llm_name}) using this api key." + str(e) msg += f"\nFail to access embedding model({llm.llm_name}) using this api key." + str(e)
elif not chat_passed and llm.model_type == LLMType.CHAT.value: elif not chat_passed and llm.model_type == LLMType.CHAT.value:
assert factory in ChatModel, f"Chat model from {factory} is not supported yet." assert factory in ChatModel, f"Chat model from {factory} is not supported yet."
mdl = ChatModel[factory](req["api_key"], llm.llm_name, base_url=req.get("base_url"), **extra) mdl = ChatModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"), **extra)
try: try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {"temperature": 0.9, "max_tokens": 50}) m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}],
{"temperature": 0.9, 'max_tokens': 50})
if m.find("**ERROR**") >= 0: if m.find("**ERROR**") >= 0:
raise Exception(m) raise Exception(m)
chat_passed = True chat_passed = True
except Exception as e: except Exception as e:
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(e) msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(
e)
elif not rerank_passed and llm.model_type == LLMType.RERANK: elif not rerank_passed and llm.model_type == LLMType.RERANK:
assert factory in RerankModel, f"Re-rank model from {factory} is not supported yet." assert factory in RerankModel, f"Re-rank model from {factory} is not supported yet."
mdl = RerankModel[factory](req["api_key"], llm.llm_name, base_url=req.get("base_url")) mdl = RerankModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
try: try:
arr, tc = mdl.similarity("What's the weather?", ["Is it sunny today?"]) arr, tc = mdl.similarity("What's the weather?", ["Is it sunny today?"])
if len(arr) == 0 or tc == 0: if len(arr) == 0 or tc == 0:
raise Exception("Fail") raise Exception("Fail")
rerank_passed = True rerank_passed = True
logging.debug(f"passed model rerank {llm.llm_name}") logging.debug(f'passed model rerank {llm.llm_name}')
except Exception as e: except Exception as e:
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(e) msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(
e)
if any([embd_passed, chat_passed, rerank_passed]): if any([embd_passed, chat_passed, rerank_passed]):
msg = "" msg = ''
break break
if msg: if msg:
return get_data_error_result(message=msg) return get_data_error_result(message=msg)
llm_config = {"api_key": req["api_key"], "api_base": req.get("base_url", "")} llm_config = {
"api_key": req["api_key"],
"api_base": req.get("base_url", "")
}
for n in ["model_type", "llm_name"]: for n in ["model_type", "llm_name"]:
if n in req: if n in req:
llm_config[n] = req[n] llm_config[n] = req[n]
for llm in LLMService.query(fid=factory): for llm in LLMService.query(fid=factory):
llm_config["max_tokens"] = llm.max_tokens llm_config["max_tokens"]=llm.max_tokens
if not TenantLLMService.filter_update([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory, TenantLLM.llm_name == llm.llm_name], llm_config): if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id,
TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm.llm_name],
llm_config):
TenantLLMService.save( TenantLLMService.save(
tenant_id=current_user.id, tenant_id=current_user.id,
llm_factory=factory, llm_factory=factory,
@ -113,13 +127,13 @@ def set_api_key():
model_type=llm.model_type, model_type=llm.model_type,
api_key=llm_config["api_key"], api_key=llm_config["api_key"],
api_base=llm_config["api_base"], api_base=llm_config["api_base"],
max_tokens=llm_config["max_tokens"], max_tokens=llm_config["max_tokens"]
) )
return get_json_result(data=True) return get_json_result(data=True)
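The tail of set_api_key() is the same on both sides of the hunk: an update-or-insert that tries filter_update against the (tenant, factory, model) triple and falls back to save() when no row matched. A dictionary-backed sketch of that upsert flow, with a plain dict standing in for the Peewee-backed store:

    def upsert(store, key, config):
        # filter_update analogue: overwrite in place when the row exists...
        if key in store:
            store[key].update(config)
            return True
        # ...save analogue: otherwise insert a fresh row.
        store[key] = dict(config)
        return False

    store = {}
    upsert(store, ("tenant-1", "OpenAI", "gpt-4o"), {"api_key": "sk-...", "api_base": ""})
    upsert(store, ("tenant-1", "OpenAI", "gpt-4o"), {"max_tokens": 8192})
    print(store)  # one row, both updates merged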
@manager.route("/add_llm", methods=["POST"]) # noqa: F821 @manager.route('/add_llm', methods=['POST']) # noqa: F821
@login_required @login_required
@validate_request("llm_factory") @validate_request("llm_factory")
def add_llm(): def add_llm():
@ -128,9 +142,6 @@ def add_llm():
api_key = req.get("api_key", "x") api_key = req.get("api_key", "x")
llm_name = req.get("llm_name") llm_name = req.get("llm_name")
if factory not in get_allowed_llm_factories():
return get_data_error_result(message=f"LLM factory {factory} is not allowed")
def apikey_json(keys): def apikey_json(keys):
nonlocal req nonlocal req
return json.dumps({k: req.get(k, "") for k in keys}) return json.dumps({k: req.get(k, "") for k in keys})
@ -183,9 +194,6 @@ def add_llm():
elif factory == "Azure-OpenAI": elif factory == "Azure-OpenAI":
api_key = apikey_json(["api_key", "api_version"]) api_key = apikey_json(["api_key", "api_version"])
elif factory == "OpenRouter":
api_key = apikey_json(["api_key", "provider_order"])
llm = { llm = {
"tenant_id": current_user.id, "tenant_id": current_user.id,
"llm_factory": factory, "llm_factory": factory,
@ -193,7 +201,7 @@ def add_llm():
"llm_name": llm_name, "llm_name": llm_name,
"api_base": req.get("api_base", ""), "api_base": req.get("api_base", ""),
"api_key": api_key, "api_key": api_key,
"max_tokens": req.get("max_tokens"), "max_tokens": req.get("max_tokens")
} }
msg = "" msg = ""
@ -201,7 +209,10 @@ def add_llm():
extra = {"provider": factory} extra = {"provider": factory}
if llm["model_type"] == LLMType.EMBEDDING.value: if llm["model_type"] == LLMType.EMBEDDING.value:
assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet." assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet."
mdl = EmbeddingModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"]) mdl = EmbeddingModel[factory](
key=llm['api_key'],
model_name=mdl_nm,
base_url=llm["api_base"])
try: try:
arr, tc = mdl.encode(["Test if the api key is available"]) arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0: if len(arr[0]) == 0:
@ -211,41 +222,54 @@ def add_llm():
elif llm["model_type"] == LLMType.CHAT.value: elif llm["model_type"] == LLMType.CHAT.value:
assert factory in ChatModel, f"Chat model from {factory} is not supported yet." assert factory in ChatModel, f"Chat model from {factory} is not supported yet."
mdl = ChatModel[factory]( mdl = ChatModel[factory](
key=llm["api_key"], key=llm['api_key'],
model_name=mdl_nm, model_name=mdl_nm,
base_url=llm["api_base"], base_url=llm["api_base"],
**extra, **extra,
) )
try: try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {"temperature": 0.9}) m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {
"temperature": 0.9})
if not tc and m.find("**ERROR**:") >= 0: if not tc and m.find("**ERROR**:") >= 0:
raise Exception(m) raise Exception(m)
except Exception as e: except Exception as e:
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e) msg += f"\nFail to access model({factory}/{mdl_nm})." + str(
e)
elif llm["model_type"] == LLMType.RERANK: elif llm["model_type"] == LLMType.RERANK:
assert factory in RerankModel, f"RE-rank model from {factory} is not supported yet." assert factory in RerankModel, f"RE-rank model from {factory} is not supported yet."
try: try:
mdl = RerankModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"]) mdl = RerankModel[factory](
key=llm["api_key"],
model_name=mdl_nm,
base_url=llm["api_base"]
)
arr, tc = mdl.similarity("Hello~ RAGFlower!", ["Hi, there!", "Ohh, my friend!"]) arr, tc = mdl.similarity("Hello~ RAGFlower!", ["Hi, there!", "Ohh, my friend!"])
if len(arr) == 0: if len(arr) == 0:
raise Exception("Not known.") raise Exception("Not known.")
except KeyError: except KeyError:
msg += f"{factory} dose not support this model({factory}/{mdl_nm})" msg += f"{factory} dose not support this model({factory}/{mdl_nm})"
except Exception as e: except Exception as e:
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e) msg += f"\nFail to access model({factory}/{mdl_nm})." + str(
e)
elif llm["model_type"] == LLMType.IMAGE2TEXT.value: elif llm["model_type"] == LLMType.IMAGE2TEXT.value:
assert factory in CvModel, f"Image to text model from {factory} is not supported yet." assert factory in CvModel, f"Image to text model from {factory} is not supported yet."
mdl = CvModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"]) mdl = CvModel[factory](
key=llm["api_key"],
model_name=mdl_nm,
base_url=llm["api_base"]
)
try: try:
image_data = test_image image_data = test_image
m, tc = mdl.describe(image_data) m, tc = mdl.describe(image_data)
if not tc and m.find("**ERROR**:") >= 0: if not m and not tc:
raise Exception(m) raise Exception(m)
except Exception as e: except Exception as e:
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e) msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
elif llm["model_type"] == LLMType.TTS: elif llm["model_type"] == LLMType.TTS:
assert factory in TTSModel, f"TTS model from {factory} is not supported yet." assert factory in TTSModel, f"TTS model from {factory} is not supported yet."
mdl = TTSModel[factory](key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"]) mdl = TTSModel[factory](
key=llm["api_key"], model_name=mdl_nm, base_url=llm["api_base"]
)
try: try:
for resp in mdl.tts("Hello~ RAGFlower!"): for resp in mdl.tts("Hello~ RAGFlower!"):
pass pass
@ -258,46 +282,40 @@ def add_llm():
if msg: if msg:
return get_data_error_result(message=msg) return get_data_error_result(message=msg)
if not TenantLLMService.filter_update([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory, TenantLLM.llm_name == llm["llm_name"]], llm): if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm["llm_name"]], llm):
TenantLLMService.save(**llm) TenantLLMService.save(**llm)
return get_json_result(data=True) return get_json_result(data=True)
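The apikey_json helper shown in add_llm() packs several request fields into one JSON string stored in the api_key column, which is how factories needing extra credentials (for example Azure-OpenAI's api_version) are handled on both sides. A standalone sketch, assuming a plain dict in place of the Flask request:

    import json

    req = {"api_key": "sk-...", "api_version": "2024-02-01"}  # illustrative request body

    def apikey_json(keys):
        # Missing keys default to "" so the stored JSON always has a stable shape.
        return json.dumps({k: req.get(k, "") for k in keys})

    print(apikey_json(["api_key", "api_version"]))
    # {"api_key": "sk-...", "api_version": "2024-02-01"}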
@manager.route("/delete_llm", methods=["POST"]) # noqa: F821 @manager.route('/delete_llm', methods=['POST']) # noqa: F821
@login_required @login_required
@validate_request("llm_factory", "llm_name") @validate_request("llm_factory", "llm_name")
def delete_llm(): def delete_llm():
req = request.json req = request.json
TenantLLMService.filter_delete([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"], TenantLLM.llm_name == req["llm_name"]]) TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"],
TenantLLM.llm_name == req["llm_name"]])
return get_json_result(data=True) return get_json_result(data=True)
@manager.route("/enable_llm", methods=["POST"]) # noqa: F821 @manager.route('/delete_factory', methods=['POST']) # noqa: F821
@login_required
@validate_request("llm_factory", "llm_name")
def enable_llm():
req = request.json
TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"], TenantLLM.llm_name == req["llm_name"]], {"status": str(req.get("status", "1"))}
)
return get_json_result(data=True)
@manager.route("/delete_factory", methods=["POST"]) # noqa: F821
@login_required @login_required
@validate_request("llm_factory") @validate_request("llm_factory")
def delete_factory(): def delete_factory():
req = request.json req = request.json
TenantLLMService.filter_delete([TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]]) TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
return get_json_result(data=True) return get_json_result(data=True)
@manager.route("/my_llms", methods=["GET"]) # noqa: F821 @manager.route('/my_llms', methods=['GET']) # noqa: F821
@login_required @login_required
def my_llms(): def my_llms():
try: try:
include_details = request.args.get("include_details", "false").lower() == "true" include_details = request.args.get('include_details', 'false').lower() == 'true'
if include_details: if include_details:
res = {} res = {}
@ -313,46 +331,51 @@ def my_llms():
break break
if o_dict["llm_factory"] not in res: if o_dict["llm_factory"] not in res:
res[o_dict["llm_factory"]] = {"tags": factory_tags, "llm": []} res[o_dict["llm_factory"]] = {
"tags": factory_tags,
res[o_dict["llm_factory"]]["llm"].append( "llm": []
{
"type": o_dict["model_type"],
"name": o_dict["llm_name"],
"used_token": o_dict["used_tokens"],
"api_base": o_dict["api_base"] or "",
"max_tokens": o_dict["max_tokens"] or 8192,
"status": o_dict["status"] or "1",
} }
)
res[o_dict["llm_factory"]]["llm"].append({
"type": o_dict["model_type"],
"name": o_dict["llm_name"],
"used_token": o_dict["used_tokens"],
"api_base": o_dict["api_base"] or "",
"max_tokens": o_dict["max_tokens"] or 8192
})
else: else:
res = {} res = {}
for o in TenantLLMService.get_my_llms(current_user.id): for o in TenantLLMService.get_my_llms(current_user.id):
if o["llm_factory"] not in res: if o["llm_factory"] not in res:
res[o["llm_factory"]] = {"tags": o["tags"], "llm": []} res[o["llm_factory"]] = {
res[o["llm_factory"]]["llm"].append({"type": o["model_type"], "name": o["llm_name"], "used_token": o["used_tokens"], "status": o["status"]}) "tags": o["tags"],
"llm": []
}
res[o["llm_factory"]]["llm"].append({
"type": o["model_type"],
"name": o["llm_name"],
"used_token": o["used_tokens"]
})
return get_json_result(data=res) return get_json_result(data=res)
except Exception as e: except Exception as e:
return server_error_response(e) return server_error_response(e)
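Both sides of the my_llms() hunk shape the response the same way: rows are grouped by factory into {factory: {"tags": ..., "llm": [...]}} buckets. A minimal sketch of that grouping (row fields copied from the diff; the status field that only one side emits is omitted here):

    def group_by_factory(rows):
        res = {}
        for o in rows:
            # First row for a factory creates the bucket; later rows append to it.
            res.setdefault(o["llm_factory"], {"tags": o["tags"], "llm": []})
            res[o["llm_factory"]]["llm"].append(
                {"type": o["model_type"], "name": o["llm_name"], "used_token": o["used_tokens"]}
            )
        return res

    rows = [
        {"llm_factory": "OpenAI", "tags": "LLM", "model_type": "chat", "llm_name": "gpt-4o", "used_tokens": 12},
        {"llm_factory": "OpenAI", "tags": "LLM", "model_type": "embedding", "llm_name": "text-embedding-3-small", "used_tokens": 3},
    ]
    print(group_by_factory(rows))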
@manager.route("/list", methods=["GET"]) # noqa: F821 @manager.route('/list', methods=['GET']) # noqa: F821
@login_required @login_required
def list_app(): def list_app():
self_deployed = ["FastEmbed", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"] self_deployed = ["Youdao", "FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
weighted = [] weighted = ["Youdao", "FastEmbed", "BAAI"] if settings.LIGHTEN != 0 else []
model_type = request.args.get("model_type") model_type = request.args.get("model_type")
try: try:
objs = TenantLLMService.query(tenant_id=current_user.id) objs = TenantLLMService.query(tenant_id=current_user.id)
facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key and o.status == StatusEnum.VALID.value]) facts = set([o.to_dict()["llm_factory"] for o in objs if o.api_key])
status = {(o.llm_name + "@" + o.llm_factory) for o in objs if o.status == StatusEnum.VALID.value}
llms = LLMService.get_all() llms = LLMService.get_all()
llms = [m.to_dict() for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted and (m.llm_name + "@" + m.fid) in status] llms = [m.to_dict()
for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted]
for m in llms: for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deployed m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deployed
if "tei-" in os.getenv("COMPOSE_PROFILES", "") and m["model_type"] == LLMType.EMBEDDING and m["fid"] == "Builtin" and m["llm_name"] == os.getenv("TEI_MODEL", ""):
m["available"] = True
llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms]) llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms])
for o in objs: for o in objs:
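In list_app(), a model is flagged available when its factory has a stored API key, when it is the built-in flag-embedding model, or when its factory is self-deployed. A runnable sketch of just that flag computation, with hypothetical inputs:

    def mark_availability(models, configured_factories, self_deployed):
        for m in models:
            # Mirrors the m["available"] expression in the hunk above.
            m["available"] = (
                m["fid"] in configured_factories
                or m["llm_name"].lower() == "flag-embedding"
                or m["fid"] in self_deployed
            )
        return models

    models = [{"fid": "Ollama", "llm_name": "llama3"}, {"fid": "Anthropic", "llm_name": "claude"}]
    print(mark_availability(models, {"OpenAI"}, {"Ollama", "Xinference"}))
    # Ollama model is available (self-deployed); the Anthropic one is not.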

View File

@@ -16,12 +16,13 @@
from flask import Response, request
from flask_login import current_user, login_required
+from api.db import VALID_MCP_SERVER_TYPES
from api.db.db_models import MCPServer
from api.db.services.mcp_server_service import MCPServerService
from api.db.services.user_service import TenantService
-from common.constants import RetCode, VALID_MCP_SERVER_TYPES
-from common.misc_utils import get_uuid
+from api.settings import RetCode
+from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result, get_json_result, server_error_response, validate_request, \
    get_mcp_tools
from api.utils.web_utils import get_float, safe_json_parse

View File

@@ -1,26 +1,8 @@
-#
-# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
from flask import Response
from flask_login import login_required
from api.utils.api_utils import get_json_result
from plugin import GlobalPluginManager
@manager.route('/llm_tools', methods=['GET'])  # noqa: F821
@login_required
def llm_tools() -> Response:

View File

@@ -19,13 +19,12 @@ import time
from typing import Any, cast
from api.db.services.canvas_service import UserCanvasService
from api.db.services.user_canvas_version import UserCanvasVersionService
-from common.constants import RetCode
-from common.misc_utils import get_uuid
+from api.settings import RetCode
+from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result, get_error_data_result, get_json_result, token_required
from api.utils.api_utils import get_result
from flask import request
@manager.route('/agents', methods=['GET'])  # noqa: F821
@token_required
def list_agents(tenant_id):
@@ -42,7 +41,7 @@ def list_agents(tenant_id):
        desc = False
    else:
        desc = True
-    canvas = UserCanvasService.get_list(tenant_id, page_number, items_per_page, orderby, desc, id, title)
+    canvas = UserCanvasService.get_list(tenant_id,page_number,items_per_page,orderby,desc,id,title)
    return get_result(data=canvas)
@@ -94,7 +93,7 @@ def update_agent(tenant_id: str, agent_id: str):
    req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
    req["dsl"] = json.loads(req["dsl"])
    if req.get("title") is not None:
        req["title"] = req["title"].strip()

View File

@@ -17,12 +17,13 @@ import logging
from flask import request
+from api import settings
+from api.db import StatusEnum
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.user_service import TenantService
-from common.misc_utils import get_uuid
-from common.constants import RetCode, StatusEnum
+from api.utils import get_uuid
from api.utils.api_utils import check_duplicate_ids, get_error_data_result, get_result, token_required
@@ -44,7 +45,7 @@ def create(tenant_id):
        embd_ids = [TenantLLMService.split_model_name_and_factory(kb.embd_id)[0] for kb in kbs]  # remove vendor suffix for comparison
        embd_count = list(set(embd_ids))
        if len(embd_count) > 1:
-            return get_result(message='Datasets use different embedding models."', code=RetCode.AUTHENTICATION_ERROR)
+            return get_result(message='Datasets use different embedding models."', code=settings.RetCode.AUTHENTICATION_ERROR)
        req["kb_ids"] = ids
    # llm
    llm = req.get("llm")
@@ -166,10 +167,8 @@ def update(tenant_id, chat_id):
        embd_ids = [TenantLLMService.split_model_name_and_factory(kb.embd_id)[0] for kb in kbs]  # remove vendor suffix for comparison
        embd_count = list(set(embd_ids))
        if len(embd_count) > 1:
-            return get_result(message='Datasets use different embedding models."', code=RetCode.AUTHENTICATION_ERROR)
+            return get_result(message='Datasets use different embedding models."', code=settings.RetCode.AUTHENTICATION_ERROR)
        req["kb_ids"] = ids
-    else:
-        req["kb_ids"] = []
    llm = req.get("llm")
    if llm:
        if "model_name" in llm:

View File

@@ -20,17 +20,20 @@ import os
import json
from flask import request
from peewee import OperationalError
+from api import settings
+from api.db import FileSource, StatusEnum
from api.db.db_models import File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
-from common.constants import RetCode, FileSource, StatusEnum
+from api.utils import get_uuid
from api.utils.api_utils import (
    deep_merge,
    get_error_argument_result,
    get_error_data_result,
+    get_error_operating_result,
    get_error_permission_result,
    get_parser_config,
    get_result,
@@ -48,7 +51,6 @@ from api.utils.validation_utils import (
)
from rag.nlp import search
from rag.settings import PAGERANK_FLD
-from common import globals
@manager.route("/datasets", methods=["POST"]) # noqa: F821 @manager.route("/datasets", methods=["POST"]) # noqa: F821
@ -78,28 +80,29 @@ def create(tenant_id):
properties: properties:
name: name:
type: string type: string
description: Dataset name (required). description: Name of the dataset.
avatar: avatar:
type: string type: string
description: Optional base64-encoded avatar image. description: Base64 encoding of the avatar.
description: description:
type: string type: string
description: Optional dataset description. description: Description of the dataset.
embedding_model: embedding_model:
type: string type: string
description: Optional embedding model name; if omitted, the tenant's default embedding model is used. description: Embedding model Name.
permission: permission:
type: string type: string
enum: ['me', 'team'] enum: ['me', 'team']
description: Visibility of the dataset (private to me or shared with team). description: Dataset permission.
chunk_method: chunk_method:
type: string type: string
enum: ["naive", "book", "email", "laws", "manual", "one", "paper", enum: ["naive", "book", "email", "laws", "manual", "one", "paper",
"picture", "presentation", "qa", "table", "tag"] "picture", "presentation", "qa", "table", "tag"
description: Chunking method; if omitted, defaults to "naive". ]
description: Chunking method.
parser_config: parser_config:
type: object type: object
description: Optional parser configuration; server-side defaults will be applied. description: Parser configuration.
responses: responses:
200: 200:
description: Successful operation. description: Successful operation.
@ -114,43 +117,44 @@ def create(tenant_id):
# |----------------|-------------| # |----------------|-------------|
# | embedding_model| embd_id | # | embedding_model| embd_id |
# | chunk_method | parser_id | # | chunk_method | parser_id |
req, err = validate_and_parse_json_request(request, CreateDatasetReq) req, err = validate_and_parse_json_request(request, CreateDatasetReq)
if err is not None: if err is not None:
return get_error_argument_result(err) return get_error_argument_result(err)
req = KnowledgebaseService.create_with_name(
name = req.pop("name", None),
tenant_id = tenant_id,
parser_id = req.pop("parser_id", None),
**req
)
# Insert embedding model(embd id)
ok, t = TenantService.get_by_id(tenant_id)
if not ok:
return get_error_permission_result(message="Tenant not found")
if not req.get("embd_id"):
req["embd_id"] = t.embd_id
else:
ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok:
return err
try: try:
if not KnowledgebaseService.save(**req): if KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result() return get_error_operating_result(message=f"Dataset name '{req['name']}' already exists")
ok, k = KnowledgebaseService.get_by_id(req["id"])
if not ok: req["parser_config"] = get_parser_config(req["parser_id"], req["parser_config"])
return get_error_data_result(message="Dataset created failed") req["id"] = get_uuid()
req["tenant_id"] = tenant_id
response_data = remap_dictionary_keys(k.to_dict()) req["created_by"] = tenant_id
return get_result(data=response_data)
except Exception as e: ok, t = TenantService.get_by_id(tenant_id)
if not ok:
return get_error_permission_result(message="Tenant not found")
if not req.get("embd_id"):
req["embd_id"] = t.embd_id
else:
ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok:
return err
if not KnowledgebaseService.save(**req):
return get_error_data_result(message="Create dataset error.(Database error)")
ok, k = KnowledgebaseService.get_by_id(req["id"])
if not ok:
return get_error_data_result(message="Dataset created failed")
response_data = remap_dictionary_keys(k.to_dict())
return get_result(data=response_data)
except OperationalError as e:
logging.exception(e) logging.exception(e)
return get_error_data_result(message="Database operation failed") return get_error_data_result(message="Database operation failed")
@manager.route("/datasets", methods=["DELETE"]) # noqa: F821 @manager.route("/datasets", methods=["DELETE"]) # noqa: F821
@token_required @token_required
def delete(tenant_id): def delete(tenant_id):
@ -211,8 +215,7 @@ def delete(tenant_id):
continue continue
kb_id_instance_pairs.append((kb_id, kb)) kb_id_instance_pairs.append((kb_id, kb))
if len(error_kb_ids) > 0: if len(error_kb_ids) > 0:
return get_error_permission_result( return get_error_permission_result(message=f"""User '{tenant_id}' lacks permission for datasets: '{", ".join(error_kb_ids)}'""")
message=f"""User '{tenant_id}' lacks permission for datasets: '{", ".join(error_kb_ids)}'""")
errors = [] errors = []
success_count = 0 success_count = 0
@ -229,8 +232,7 @@ def delete(tenant_id):
] ]
) )
File2DocumentService.delete_by_document_id(doc.id) File2DocumentService.delete_by_document_id(doc.id)
FileService.filter_delete( FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kb.name])
[File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kb.name])
if not KnowledgebaseService.delete_by_id(kb_id): if not KnowledgebaseService.delete_by_id(kb_id):
errors.append(f"Delete dataset error for {kb_id}") errors.append(f"Delete dataset error for {kb_id}")
continue continue
@ -327,8 +329,7 @@ def update(tenant_id, dataset_id):
try: try:
kb = KnowledgebaseService.get_or_none(id=dataset_id, tenant_id=tenant_id) kb = KnowledgebaseService.get_or_none(id=dataset_id, tenant_id=tenant_id)
if kb is None: if kb is None:
return get_error_permission_result( return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{dataset_id}'")
message=f"User '{tenant_id}' lacks permission for dataset '{dataset_id}'")
if req.get("parser_config"): if req.get("parser_config"):
req["parser_config"] = deep_merge(kb.parser_config, req["parser_config"]) req["parser_config"] = deep_merge(kb.parser_config, req["parser_config"])
@ -340,8 +341,7 @@ def update(tenant_id, dataset_id):
del req["parser_config"] del req["parser_config"]
if "name" in req and req["name"].lower() != kb.name.lower(): if "name" in req and req["name"].lower() != kb.name.lower():
exists = KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id, exists = KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)
status=StatusEnum.VALID.value)
if exists: if exists:
return get_error_data_result(message=f"Dataset name '{req['name']}' already exists") return get_error_data_result(message=f"Dataset name '{req['name']}' already exists")
@ -349,8 +349,7 @@ def update(tenant_id, dataset_id):
if not req["embd_id"]: if not req["embd_id"]:
req["embd_id"] = kb.embd_id req["embd_id"] = kb.embd_id
if kb.chunk_num != 0 and req["embd_id"] != kb.embd_id: if kb.chunk_num != 0 and req["embd_id"] != kb.embd_id:
return get_error_data_result( return get_error_data_result(message=f"When chunk_num ({kb.chunk_num}) > 0, embedding_model must remain {kb.embd_id}")
message=f"When chunk_num ({kb.chunk_num}) > 0, embedding_model must remain {kb.embd_id}")
ok, err = verify_embedding_availability(req["embd_id"], tenant_id) ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok: if not ok:
return err return err
@ -360,12 +359,10 @@ def update(tenant_id, dataset_id):
return get_error_argument_result(message="'pagerank' can only be set when doc_engine is elasticsearch") return get_error_argument_result(message="'pagerank' can only be set when doc_engine is elasticsearch")
if req["pagerank"] > 0: if req["pagerank"] > 0:
globals.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]}, settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]}, search.index_name(kb.tenant_id), kb.id)
search.index_name(kb.tenant_id), kb.id)
else: else:
# Elasticsearch requires PAGERANK_FLD be non-zero! # Elasticsearch requires PAGERANK_FLD be non-zero!
globals.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD}, settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD}, search.index_name(kb.tenant_id), kb.id)
search.index_name(kb.tenant_id), kb.id)
if not KnowledgebaseService.update_by_id(kb.id, req): if not KnowledgebaseService.update_by_id(kb.id, req):
return get_error_data_result(message="Update dataset error.(Database error)") return get_error_data_result(message="Update dataset error.(Database error)")
@@ -457,7 +454,7 @@ def list_datasets(tenant_id):
                return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{name}'")
        tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
-        kbs, total = KnowledgebaseService.get_list(
+        kbs = KnowledgebaseService.get_list(
            [m["tenant_id"] for m in tenants],
            tenant_id,
            args["page"],
@@ -471,20 +468,19 @@ def list_datasets(tenant_id):
        response_data_list = []
        for kb in kbs:
            response_data_list.append(remap_dictionary_keys(kb))
-        return get_result(data=response_data_list, total=total)
+        return get_result(data=response_data_list)
    except OperationalError as e:
        logging.exception(e)
        return get_error_data_result(message="Database operation failed")
@manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['GET'])  # noqa: F821
@token_required
-def knowledge_graph(tenant_id, dataset_id):
+def knowledge_graph(tenant_id,dataset_id):
    if not KnowledgebaseService.accessible(dataset_id, tenant_id):
        return get_result(
            data=False,
            message='No authorization.',
-            code=RetCode.AUTHENTICATION_ERROR
+            code=settings.RetCode.AUTHENTICATION_ERROR
        )
    _, kb = KnowledgebaseService.get_by_id(dataset_id)
    req = {
@@ -493,9 +489,9 @@ def knowledge_graph(tenant_id, dataset_id):
    }
    obj = {"graph": {}, "mind_map": {}}
-    if not globals.docStoreConn.indexExist(search.index_name(kb.tenant_id), dataset_id):
+    if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), dataset_id):
        return get_result(data=obj)
-    sres = globals.retriever.search(req, search.index_name(kb.tenant_id), [dataset_id])
+    sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [dataset_id])
    if not len(sres.ids):
        return get_result(data=obj)
@@ -511,24 +507,21 @@ def knowledge_graph(tenant_id, dataset_id):
    if "nodes" in obj["graph"]:
        obj["graph"]["nodes"] = sorted(obj["graph"]["nodes"], key=lambda x: x.get("pagerank", 0), reverse=True)[:256]
        if "edges" in obj["graph"]:
-            node_id_set = {o["id"] for o in obj["graph"]["nodes"]}
-            filtered_edges = [o for o in obj["graph"]["edges"] if
-                              o["source"] != o["target"] and o["source"] in node_id_set and o["target"] in node_id_set]
+            node_id_set = { o["id"] for o in obj["graph"]["nodes"] }
+            filtered_edges = [o for o in obj["graph"]["edges"] if o["source"] != o["target"] and o["source"] in node_id_set and o["target"] in node_id_set]
            obj["graph"]["edges"] = sorted(filtered_edges, key=lambda x: x.get("weight", 0), reverse=True)[:128]
    return get_result(data=obj)
@manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['DELETE'])  # noqa: F821
@token_required
-def delete_knowledge_graph(tenant_id, dataset_id):
+def delete_knowledge_graph(tenant_id,dataset_id):
    if not KnowledgebaseService.accessible(dataset_id, tenant_id):
        return get_result(
            data=False,
            message='No authorization.',
-            code=RetCode.AUTHENTICATION_ERROR
+            code=settings.RetCode.AUTHENTICATION_ERROR
        )
    _, kb = KnowledgebaseService.get_by_id(dataset_id)
-    globals.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]},
-                                search.index_name(kb.tenant_id), dataset_id)
+    settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), dataset_id)
    return get_result(data=True)
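Both sides of knowledge_graph() trim the payload identically: keep the top 256 nodes by pagerank, drop self-loops and edges touching pruned nodes, then keep the top 128 edges by weight. A self-contained sketch of that filter:

    def trim_graph(nodes, edges, max_nodes=256, max_edges=128):
        nodes = sorted(nodes, key=lambda x: x.get("pagerank", 0), reverse=True)[:max_nodes]
        ids = {n["id"] for n in nodes}
        # Drop self-loops and edges whose endpoints were pruned away.
        kept = [e for e in edges
                if e["source"] != e["target"] and e["source"] in ids and e["target"] in ids]
        return nodes, sorted(kept, key=lambda x: x.get("weight", 0), reverse=True)[:max_edges]

    nodes = [{"id": "a", "pagerank": 0.9}, {"id": "b", "pagerank": 0.1}]
    edges = [{"source": "a", "target": "b", "weight": 2}, {"source": "a", "target": "a", "weight": 9}]
    print(trim_graph(nodes, edges))  # the self-loop on "a" is dropped despite its weight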

View File

@@ -1,4 +1,4 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,6 +17,7 @@ import logging
from flask import request, jsonify
+from api.db import LLMType
from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
@@ -24,96 +25,12 @@ from api import settings
from api.utils.api_utils import validate_request, build_error_result, apikey_required
from rag.app.tag import label_question
from api.db.services.dialog_service import meta_filter, convert_conditions
-from common.constants import RetCode, LLMType
-from common import globals
@manager.route('/dify/retrieval', methods=['POST'])  # noqa: F821
@apikey_required
@validate_request("knowledge_id", "query")
def retrieval(tenant_id):
-    """
-    Dify-compatible retrieval API
-    ---
-    tags:
-      - SDK
-    security:
-      - ApiKeyAuth: []
-    parameters:
-      - in: body
-        name: body
-        required: true
-        schema:
-          type: object
-          required:
-            - knowledge_id
-            - query
-          properties:
-            knowledge_id:
-              type: string
-              description: Knowledge base ID
-            query:
-              type: string
-              description: Query text
-            use_kg:
-              type: boolean
-              description: Whether to use knowledge graph
-              default: false
-            retrieval_setting:
-              type: object
-              description: Retrieval configuration
-              properties:
-                score_threshold:
-                  type: number
-                  description: Similarity threshold
-                  default: 0.0
-                top_k:
-                  type: integer
-                  description: Number of results to return
-                  default: 1024
-            metadata_condition:
-              type: object
-              description: Metadata filter condition
-              properties:
-                conditions:
-                  type: array
-                  items:
-                    type: object
-                    properties:
-                      name:
-                        type: string
-                        description: Field name
-                      comparison_operator:
-                        type: string
-                        description: Comparison operator
-                      value:
-                        type: string
-                        description: Field value
-    responses:
-      200:
-        description: Retrieval succeeded
-        schema:
-          type: object
-          properties:
-            records:
-              type: array
-              items:
-                type: object
-                properties:
-                  content:
-                    type: string
-                    description: Content text
-                  score:
-                    type: number
-                    description: Similarity score
-                  title:
-                    type: string
-                    description: Document title
-                  metadata:
-                    type: object
-                    description: Metadata info
-      404:
-        description: Knowledge base or document not found
-    """
    req = request.json
    question = req["query"]
    kb_id = req["knowledge_id"]
@@ -121,24 +38,24 @@ def retrieval(tenant_id):
    retrieval_setting = req.get("retrieval_setting", {})
    similarity_threshold = float(retrieval_setting.get("score_threshold", 0.0))
    top = int(retrieval_setting.get("top_k", 1024))
-    metadata_condition = req.get("metadata_condition", {})
+    metadata_condition = req.get("metadata_condition",{})
    metas = DocumentService.get_meta_by_kbs([kb_id])
    doc_ids = []
    try:
        e, kb = KnowledgebaseService.get_by_id(kb_id)
        if not e:
-            return build_error_result(message="Knowledgebase not found!", code=RetCode.NOT_FOUND)
+            return build_error_result(message="Knowledgebase not found!", code=settings.RetCode.NOT_FOUND)
        embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
        print(metadata_condition)
-        # print("after", convert_conditions(metadata_condition))
+        print("after",convert_conditions(metadata_condition))
        doc_ids.extend(meta_filter(metas, convert_conditions(metadata_condition)))
-        # print("doc_ids", doc_ids)
+        print("doc_ids",doc_ids)
        if not doc_ids and metadata_condition is not None:
            doc_ids = ['-999']
-        ranks = globals.retriever.retrieval(
+        ranks = settings.retrievaler.retrieval(
            question,
            embd_mdl,
            kb.tenant_id,
@@ -153,17 +70,17 @@ def retrieval(tenant_id):
        )
        if use_kg:
-            ck = settings.kg_retriever.retrieval(question,
+            ck = settings.kg_retrievaler.retrieval(question,
                                                 [tenant_id],
                                                 [kb_id],
                                                 embd_mdl,
                                                 LLMBundle(kb.tenant_id, LLMType.CHAT))
            if ck["content_with_weight"]:
                ranks["chunks"].insert(0, ck)
        records = []
        for c in ranks["chunks"]:
-            e, doc = DocumentService.get_by_id(c["doc_id"])
+            e, doc = DocumentService.get_by_id( c["doc_id"])
            c.pop("vector", None)
            meta = getattr(doc, 'meta_fields', {})
            meta["doc_id"] = c["doc_id"]
@@ -179,7 +96,9 @@ def retrieval(tenant_id):
        if str(e).find("not_found") > 0:
            return build_error_result(
                message='No chunk found! Check the chunk status please!',
-                code=RetCode.NOT_FOUND
+                code=settings.RetCode.NOT_FOUND
            )
        logging.exception(e)
-        return build_error_result(message=str(e), code=RetCode.SERVER_ERROR)
+        return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)
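For reference, the Swagger docstring removed in this hunk documents the request body the Dify-compatible endpoint expects. A sketch of a conforming payload (all values are illustrative; the comparison_operator semantics are defined server-side and not shown here):

    import json

    payload = {
        "knowledge_id": "<dataset-id>",            # knowledge base to search
        "query": "What is RAGFlow?",
        "use_kg": False,                            # optionally mix in knowledge-graph hits
        "retrieval_setting": {
            "score_threshold": 0.2,                 # similarity floor, default 0.0
            "top_k": 8,                             # number of chunks, default 1024
        },
        "metadata_condition": {
            "conditions": [
                {"name": "author", "comparison_operator": "contains", "value": "infiniflow"}
            ]
        },
    }
    print(json.dumps(payload, indent=2))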

View File

@@ -26,7 +26,7 @@ from pydantic import BaseModel, Field, validator
from api import settings
from api.constants import FILE_NAME_LEN_LIMIT
-from api.db import FileType
+from api.db import FileSource, FileType, LLMType, ParserType, TaskStatus
from api.db.db_models import File, Task
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
@@ -41,10 +41,8 @@ from rag.app.qa import beAdoc, rmPrefix
from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts.generator import cross_languages, keyword_extraction
+from rag.utils import rmSpace
from rag.utils.storage_factory import STORAGE_IMPL
-from common.string_utils import remove_redundant_spaces
-from common.constants import RetCode, LLMType, ParserType, TaskStatus, FileSource
-from common import globals
MAXIMUM_OF_UPLOADING_FILES = 256
@@ -129,13 +127,13 @@ def upload(dataset_id, tenant_id):
              description: Processing status.
    """
    if "file" not in request.files:
-        return get_error_data_result(message="No file part!", code=RetCode.ARGUMENT_ERROR)
+        return get_error_data_result(message="No file part!", code=settings.RetCode.ARGUMENT_ERROR)
    file_objs = request.files.getlist("file")
    for file_obj in file_objs:
        if file_obj.filename == "":
-            return get_result(message="No file selected!", code=RetCode.ARGUMENT_ERROR)
+            return get_result(message="No file selected!", code=settings.RetCode.ARGUMENT_ERROR)
        if len(file_obj.filename.encode("utf-8")) > FILE_NAME_LEN_LIMIT:
-            return get_result(message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=RetCode.ARGUMENT_ERROR)
+            return get_result(message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.", code=settings.RetCode.ARGUMENT_ERROR)
    """
    # total size
    total_size = 0
@@ -147,7 +145,7 @@ def upload(dataset_id, tenant_id):
    if total_size > MAX_TOTAL_FILE_SIZE:
        return get_result(
            message=f"Total file size exceeds 10MB limit! ({total_size / (1024 * 1024):.2f} MB)",
-            code=RetCode.ARGUMENT_ERROR,
+            code=settings.RetCode.ARGUMENT_ERROR,
        )
    """
    e, kb = KnowledgebaseService.get_by_id(dataset_id)
@@ -155,7 +153,7 @@ def upload(dataset_id, tenant_id):
        raise LookupError(f"Can't find the dataset with ID {dataset_id}!")
    err, files = FileService.upload_document(kb, file_objs, tenant_id)
    if err:
-        return get_result(message="\n".join(err), code=RetCode.SERVER_ERROR)
+        return get_result(message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
    # rename key's name
    renamed_doc_list = []
    for file in files:
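Note that the filename checks in upload() measure UTF-8 byte length, not character count, so multi-byte names hit the cap sooner. A sketch of that validation (FILE_NAME_LEN_LIMIT's real value lives in api.constants; 255 here is an assumed stand-in):

    FILE_NAME_LEN_LIMIT = 255  # assumed value for illustration

    def validate_filename(filename):
        if filename == "":
            return "No file selected!"
        # Byte length, not len(filename): CJK characters cost 3 bytes each in UTF-8.
        if len(filename.encode("utf-8")) > FILE_NAME_LEN_LIMIT:
            return f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less."
        return None

    print(validate_filename("报告.pdf"))  # None: 10 bytes, well under the limit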
@@ -255,12 +253,12 @@ def update_doc(tenant_id, dataset_id, document_id):
        if len(req["name"].encode("utf-8")) > FILE_NAME_LEN_LIMIT:
            return get_result(
                message=f"File name must be {FILE_NAME_LEN_LIMIT} bytes or less.",
-                code=RetCode.ARGUMENT_ERROR,
+                code=settings.RetCode.ARGUMENT_ERROR,
            )
        if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(doc.name.lower()).suffix:
            return get_result(
                message="The extension of file can't be changed",
-                code=RetCode.ARGUMENT_ERROR,
+                code=settings.RetCode.ARGUMENT_ERROR,
            )
        for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
            if d.name == req["name"]:
@@ -308,7 +306,7 @@ def update_doc(tenant_id, dataset_id, document_id):
            )
            if not e:
                return get_error_data_result(message="Document not found!")
-            globals.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), dataset_id)
+            settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), dataset_id)
        if "enabled" in req:
            status = int(req["enabled"])
@@ -317,7 +315,7 @@ def update_doc(tenant_id, dataset_id, document_id):
            if not DocumentService.update_by_id(doc.id, {"status": str(status)}):
                return get_error_data_result(message="Database error (Document update)!")
-            globals.docStoreConn.update({"doc_id": doc.id}, {"available_int": status}, search.index_name(kb.tenant_id), doc.kb_id)
+            settings.docStoreConn.update({"doc_id": doc.id}, {"available_int": status}, search.index_name(kb.tenant_id), doc.kb_id)
        return get_result(data=True)
    except Exception as e:
        return server_error_response(e)
@@ -404,7 +402,7 @@ def download(tenant_id, dataset_id, document_id):
    doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id)  # minio address
    file_stream = STORAGE_IMPL.get(doc_id, doc_location)
    if not file_stream:
-        return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
+        return construct_json_result(message="This file is empty.", code=settings.RetCode.DATA_ERROR)
    file = BytesIO(file_stream)
    # Use send_file with a proper filename and MIME type
    return send_file(
@@ -460,7 +458,7 @@ def list_docs(dataset_id, tenant_id):
        required: false
        default: true
        description: Order in descending.
      - in: query
        name: create_time_from
        type: integer
        required: false
@@ -472,20 +470,6 @@ def list_docs(dataset_id, tenant_id):
        required: false
        default: 0
        description: Unix timestamp for filtering documents created before this time. 0 means no filter.
-      - in: query
-        name: suffix
-        type: array
-        items:
-          type: string
-        required: false
-        description: Filter by file suffix (e.g., ["pdf", "txt", "docx"]).
-      - in: query
-        name: run
-        type: array
-        items:
-          type: string
-        required: false
-        description: Filter by document run status. Supports both numeric ("0", "1", "2", "3", "4") and text formats ("UNSTART", "RUNNING", "CANCEL", "DONE", "FAIL").
      - in: header
        name: Authorization
        type: string
@@ -528,62 +512,63 @@ def list_docs(dataset_id, tenant_id):
              description: Processing status.
    """
    if not KnowledgebaseService.accessible(kb_id=dataset_id, user_id=tenant_id):
        return get_error_data_result(message=f"You don't own the dataset {dataset_id}. ")
-    q = request.args
-    document_id = q.get("id")
-    name = q.get("name")
-    if document_id and not DocumentService.query(id=document_id, kb_id=dataset_id):
-        return get_error_data_result(message=f"You don't own the document {document_id}.")
+    id = request.args.get("id")
+    name = request.args.get("name")
+    if id and not DocumentService.query(id=id, kb_id=dataset_id):
+        return get_error_data_result(message=f"You don't own the document {id}.")
    if name and not DocumentService.query(name=name, kb_id=dataset_id):
        return get_error_data_result(message=f"You don't own the document {name}.")
-    page = int(q.get("page", 1))
-    page_size = int(q.get("page_size", 30))
-    orderby = q.get("orderby", "create_time")
-    desc = str(q.get("desc", "true")).strip().lower() != "false"
-    keywords = q.get("keywords", "")
-    # filters - align with OpenAPI parameter names
-    suffix = q.getlist("suffix")
-    run_status = q.getlist("run")
-    create_time_from = int(q.get("create_time_from", 0))
-    create_time_to = int(q.get("create_time_to", 0))
-    # map run status (accept text or numeric) - align with API parameter
-    run_status_text_to_numeric = {"UNSTART": "0", "RUNNING": "1", "CANCEL": "2", "DONE": "3", "FAIL": "4"}
-    run_status_converted = [run_status_text_to_numeric.get(v, v) for v in run_status]
-    docs, total = DocumentService.get_list(
-        dataset_id, page, page_size, orderby, desc, keywords, document_id, name, suffix, run_status_converted
-    )
-    # time range filter (0 means no bound)
+    page = int(request.args.get("page", 1))
+    keywords = request.args.get("keywords", "")
+    page_size = int(request.args.get("page_size", 30))
+    orderby = request.args.get("orderby", "create_time")
+    if request.args.get("desc") == "False":
+        desc = False
+    else:
+        desc = True
+    docs, tol = DocumentService.get_list(dataset_id, page, page_size, orderby, desc, keywords, id, name)
+    create_time_from = int(request.args.get("create_time_from", 0))
+    create_time_to = int(request.args.get("create_time_to", 0))
    if create_time_from or create_time_to:
-        docs = [
-            d for d in docs
-            if (create_time_from == 0 or d.get("create_time", 0) >= create_time_from)
-            and (create_time_to == 0 or d.get("create_time", 0) <= create_time_to)
-        ]
+        filtered_docs = []
+        for doc in docs:
+            doc_create_time = doc.get("create_time", 0)
+            if (create_time_from == 0 or doc_create_time >= create_time_from) and (create_time_to == 0 or doc_create_time <= create_time_to):
+                filtered_docs.append(doc)
+        docs = filtered_docs
-    # rename keys + map run status back to text for output
+    # rename key's name
+    renamed_doc_list = []
    key_mapping = {
        "chunk_num": "chunk_count",
        "kb_id": "dataset_id",
        "token_num": "token_count",
        "parser_id": "chunk_method",
    }
-    run_status_numeric_to_text = {"0": "UNSTART", "1": "RUNNING", "2": "CANCEL", "3": "DONE", "4": "FAIL"}
-    output_docs = []
-    for d in docs:
-        renamed_doc = {key_mapping.get(k, k): v for k, v in d.items()}
-        if "run" in d:
-            renamed_doc["run"] = run_status_numeric_to_text.get(str(d["run"]), d["run"])
-        output_docs.append(renamed_doc)
-    return get_result(data={"total": total, "docs": output_docs})
+    run_mapping = {
+        "0": "UNSTART",
+        "1": "RUNNING",
+        "2": "CANCEL",
+        "3": "DONE",
+        "4": "FAIL",
+    }
+    for doc in docs:
+        renamed_doc = {}
+        for key, value in doc.items():
+            if key == "run":
+                renamed_doc["run"] = run_mapping.get(str(value))
+            new_key = key_mapping.get(key, key)
+            renamed_doc[new_key] = value
+            if key == "run":
+                renamed_doc["run"] = run_mapping.get(value)
+        renamed_doc_list.append(renamed_doc)
    return get_result(data={"total": tol, "docs": renamed_doc_list})
@manager.route("/datasets/<dataset_id>/documents", methods=["DELETE"]) # noqa: F821 @manager.route("/datasets/<dataset_id>/documents", methods=["DELETE"]) # noqa: F821
@token_required @token_required
@ -678,10 +663,10 @@ def delete(tenant_id, dataset_id):
errors += str(e) errors += str(e)
if not_found: if not_found:
return get_result(message=f"Documents not found: {not_found}", code=RetCode.DATA_ERROR) return get_result(message=f"Documents not found: {not_found}", code=settings.RetCode.DATA_ERROR)
if errors: if errors:
return get_result(message=errors, code=RetCode.SERVER_ERROR) return get_result(message=errors, code=settings.RetCode.SERVER_ERROR)
if duplicate_messages: if duplicate_messages:
if success_count > 0: if success_count > 0:
@ -756,7 +741,7 @@ def parse(tenant_id, dataset_id):
return get_error_data_result("Can't parse document that is currently being processed") return get_error_data_result("Can't parse document that is currently being processed")
info = {"run": "1", "progress": 0, "progress_msg": "", "chunk_num": 0, "token_num": 0} info = {"run": "1", "progress": 0, "progress_msg": "", "chunk_num": 0, "token_num": 0}
DocumentService.update_by_id(id, info) DocumentService.update_by_id(id, info)
globals.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), dataset_id) settings.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), dataset_id)
TaskService.filter_delete([Task.doc_id == id]) TaskService.filter_delete([Task.doc_id == id])
e, doc = DocumentService.get_by_id(id) e, doc = DocumentService.get_by_id(id)
doc = doc.to_dict() doc = doc.to_dict()
@ -765,7 +750,7 @@ def parse(tenant_id, dataset_id):
queue_tasks(doc, bucket, name, 0) queue_tasks(doc, bucket, name, 0)
success_count += 1 success_count += 1
if not_found: if not_found:
return get_result(message=f"Documents not found: {not_found}", code=RetCode.DATA_ERROR) return get_result(message=f"Documents not found: {not_found}", code=settings.RetCode.DATA_ERROR)
if duplicate_messages: if duplicate_messages:
if success_count > 0: if success_count > 0:
return get_result( return get_result(
@ -836,7 +821,7 @@ def stop_parsing(tenant_id, dataset_id):
return get_error_data_result("Can't stop parsing document with progress at 0 or 1") return get_error_data_result("Can't stop parsing document with progress at 0 or 1")
info = {"run": "2", "progress": 0, "chunk_num": 0} info = {"run": "2", "progress": 0, "chunk_num": 0}
DocumentService.update_by_id(id, info) DocumentService.update_by_id(id, info)
globals.docStoreConn.delete({"doc_id": doc[0].id}, search.index_name(tenant_id), dataset_id) settings.docStoreConn.delete({"doc_id": doc[0].id}, search.index_name(tenant_id), dataset_id)
success_count += 1 success_count += 1
if duplicate_messages: if duplicate_messages:
if success_count > 0: if success_count > 0:
@@ -969,9 +954,9 @@ def list_chunks(tenant_id, dataset_id, document_id):
     res = {"total": 0, "chunks": [], "doc": renamed_doc}
     if req.get("id"):
-        chunk = globals.docStoreConn.get(req.get("id"), search.index_name(tenant_id), [dataset_id])
+        chunk = settings.docStoreConn.get(req.get("id"), search.index_name(tenant_id), [dataset_id])
         if not chunk:
-            return get_result(message=f"Chunk not found: {dataset_id}/{req.get('id')}", code=RetCode.NOT_FOUND)
+            return get_result(message=f"Chunk not found: {dataset_id}/{req.get('id')}", code=settings.RetCode.NOT_FOUND)
         k = []
         for n in chunk.keys():
             if re.search(r"(_vec$|_sm_|_tks|_ltks)", n):
@@ -996,13 +981,13 @@ def list_chunks(tenant_id, dataset_id, document_id):
             res["chunks"].append(final_chunk)
             _ = Chunk(**final_chunk)
-    elif globals.docStoreConn.indexExist(search.index_name(tenant_id), dataset_id):
-        sres = globals.retriever.search(query, search.index_name(tenant_id), [dataset_id], emb_mdl=None, highlight=True)
+    elif settings.docStoreConn.indexExist(search.index_name(tenant_id), dataset_id):
+        sres = settings.retrievaler.search(query, search.index_name(tenant_id), [dataset_id], emb_mdl=None, highlight=True)
         res["total"] = sres.total
         for id in sres.ids:
             d = {
                 "id": id,
-                "content": (remove_redundant_spaces(sres.highlight[id]) if question and id in sres.highlight else sres.field[id].get("content_with_weight", "")),
+                "content": (rmSpace(sres.highlight[id]) if question and id in sres.highlight else sres.field[id].get("content_with_weight", "")),
                 "document_id": sres.field[id]["doc_id"],
                 "docnm_kwd": sres.field[id]["docnm_kwd"],
                 "important_keywords": sres.field[id].get("important_kwd", []),
@@ -1121,7 +1106,7 @@ def add_chunk(tenant_id, dataset_id, document_id):
     v, c = embd_mdl.encode([doc.name, req["content"] if not d["question_kwd"] else "\n".join(d["question_kwd"])])
     v = 0.1 * v[0] + 0.9 * v[1]
     d["q_%d_vec" % len(v)] = v.tolist()
-    globals.docStoreConn.insert([d], search.index_name(tenant_id), dataset_id)
+    settings.docStoreConn.insert([d], search.index_name(tenant_id), dataset_id)
     DocumentService.increment_chunk_num(doc.id, doc.kb_id, c, 1, 0)
     # rename keys
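Note: `v = 0.1 * v[0] + 0.9 * v[1]` above mixes two embeddings: `v[0]` is the document title, `v[1]` the chunk content (or its attached questions). A sketch of the arithmetic:

```python
import numpy as np

def chunk_vector(title_vec: np.ndarray, content_vec: np.ndarray) -> np.ndarray:
    # 10% title, 90% content: chunks from the same file stay near each
    # other without the title drowning out the content signal.
    return 0.1 * title_vec + 0.9 * content_vec

v = chunk_vector(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
assert np.allclose(v, [0.1, 0.9])

# The stored key bakes the dimension into the field name, which is how
# differently sized embedding models can coexist in one index.
field = "q_%d_vec" % len(v)
assert field == "q_2_vec"
```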
@@ -1202,7 +1187,7 @@ def rm_chunk(tenant_id, dataset_id, document_id):
     if "chunk_ids" in req:
         unique_chunk_ids, duplicate_messages = check_duplicate_ids(req["chunk_ids"], "chunk")
         condition["id"] = unique_chunk_ids
-    chunk_number = globals.docStoreConn.delete(condition, search.index_name(tenant_id), dataset_id)
+    chunk_number = settings.docStoreConn.delete(condition, search.index_name(tenant_id), dataset_id)
     if chunk_number != 0:
         DocumentService.decrement_chunk_num(document_id, dataset_id, 1, chunk_number, 0)
     if "chunk_ids" in req and chunk_number != len(unique_chunk_ids):
@@ -1274,7 +1259,7 @@ def update_chunk(tenant_id, dataset_id, document_id, chunk_id):
           schema:
             type: object
     """
-    chunk = globals.docStoreConn.get(chunk_id, search.index_name(tenant_id), [dataset_id])
+    chunk = settings.docStoreConn.get(chunk_id, search.index_name(tenant_id), [dataset_id])
     if chunk is None:
         return get_error_data_result(f"Can't find this chunk {chunk_id}")
     if not KnowledgebaseService.accessible(kb_id=dataset_id, user_id=tenant_id):
@@ -1303,10 +1288,6 @@ def update_chunk(tenant_id, dataset_id, document_id, chunk_id):
         d["question_tks"] = rag_tokenizer.tokenize("\n".join(req["questions"]))
     if "available" in req:
         d["available_int"] = int(req["available"])
-    if "positions" in req:
-        if not isinstance(req["positions"], list):
-            return get_error_data_result("`positions` should be a list")
-        d["position_int"] = req["positions"]
     embd_id = DocumentService.get_embd_id(document_id)
     embd_mdl = TenantLLMService.model_instance(tenant_id, LLMType.EMBEDDING.value, embd_id)
     if doc.parser_id == ParserType.QA:
@@ -1319,7 +1300,7 @@ def update_chunk(tenant_id, dataset_id, document_id, chunk_id):
     v, c = embd_mdl.encode([doc.name, d["content_with_weight"] if not d.get("question_kwd") else "\n".join(d["question_kwd"])])
     v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
     d["q_%d_vec" % len(v)] = v.tolist()
-    globals.docStoreConn.update({"id": chunk_id}, d, search.index_name(tenant_id), dataset_id)
+    settings.docStoreConn.update({"id": chunk_id}, d, search.index_name(tenant_id), dataset_id)
     return get_result()
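Note: the weighting in `update_chunk` is parser-dependent: QA-parsed chunks store only the content/answer embedding, everything else keeps the 10/90 title/content mix. A sketch, with the `"qa"` literal standing in for `ParserType.QA` and the rationale comment being an inference, not something the diff states:

```python
import numpy as np

def updated_vector(title_vec, content_vec, parser_id, qa_parser="qa"):
    # Assumption: QA pairs already carry their own question text, so the
    # file title adds little signal and only the content vector is kept.
    if parser_id == qa_parser:
        return content_vec
    return 0.1 * title_vec + 0.9 * content_vec

assert np.allclose(updated_vector(np.ones(2), np.zeros(2), "qa"), [0.0, 0.0])
assert np.allclose(updated_vector(np.ones(2), np.zeros(2), "naive"), [0.1, 0.1])
```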
@@ -1420,7 +1401,7 @@ def retrieval_test(tenant_id):
     if len(embd_nms) != 1:
         return get_result(
             message='Datasets use different embedding models."',
-            code=RetCode.DATA_ERROR,
+            code=settings.RetCode.DATA_ERROR,
         )
     if "question" not in req:
         return get_error_data_result("`question` is required.")
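Note: the guard above exists because vectors produced by different embedding models live in incompatible spaces, so cross-dataset retrieval is only meaningful when every selected dataset shares one model. A sketch of the check with illustrative ids:

```python
kbs = [{"embd_id": "bge-m3"}, {"embd_id": "bge-m3"}]  # hypothetical datasets
embd_nms = list({kb["embd_id"] for kb in kbs})
if len(embd_nms) != 1:
    raise ValueError("Datasets use different embedding models.")
```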
@@ -1465,7 +1446,7 @@ def retrieval_test(tenant_id):
         chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
         question += keyword_extraction(chat_mdl, question)
-    ranks = globals.retriever.retrieval(
+    ranks = settings.retrievaler.retrieval(
         question,
         embd_mdl,
         tenant_ids,
@@ -1481,7 +1462,7 @@ def retrieval_test(tenant_id):
         rank_feature=label_question(question, kbs),
     )
     if use_kg:
-        ck = settings.kg_retriever.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
+        ck = settings.kg_retrievaler.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
         if ck["content_with_weight"]:
             ranks["chunks"].insert(0, ck)
@@ -1511,6 +1492,6 @@ def retrieval_test(tenant_id):
         if str(e).find("not_found") > 0:
             return get_result(
                 message="No chunk found! Check the chunk status please!",
-                code=RetCode.DATA_ERROR,
+                code=settings.RetCode.DATA_ERROR,
             )
         return server_error_response(e)

Some files were not shown because too many files have changed in this diff.